id: 2303.14249
title: An Alternative Approach for Nonparametric Analysis of Random Utility Models
abstract: We readdress the problem of nonparametric statistical testing of random utility models proposed in Kitamura and Stoye (2018). Although their test is elegant, it is subject to computational constraints which leave execution of the test infeasible in many applications. We note that much of the computational burden in Kitamura and Stoye's test is due to their test defining a polyhedral cone through its vertices rather than its faces. We propose an alternative but equivalent hypothesis test for random utility models. This test relies on a series of equality and inequality constraints which define the faces of the corresponding polyhedral cone. Building on our testing procedure, we develop a novel axiomatization of the random utility model. Our new axiom can be interpreted as a condition on surplus allocation in cooperative games.
authors: Christopher Turansick
published_date: 2023-03-24T19:28:05Z
link: http://arxiv.org/abs/2303.14249v4

# On Graphical Methods in Stochastic Choice+
###### Abstract
In recent years there has been an influx of papers which use graph theoretic tools to study stochastic choice. Fiorini (2004) serves as a base for this literature by providing a graphical representation of choice probabilities and showing that every interior node of this graph must satisfy inflow equals outflow. We show that this inflow equals outflow property is almost characteristic of choice probabilities. In doing so, we characterize choice probabilities through graph theoretic tools. As an application of this result, we provide a novel characterization of stochastic rationality on an incomplete domain.
## 1 Introduction
The stochastic choice paradigm is oft used to study the repeated choices of a single consumer as well as the aggregate choices of a population of consumers. The literature on stochastic choice posits that the apparent randomness of choice arises due to multiple potential reasons including unobserved heterogeneity across agents and time, random attention, and a preference for variation. In recent years, there has been an influx of papers which use graph theoretic tools to study stochastic choice and specifically the random utility model of Block and Marschak (1959). To our knowledge, Fiorini (2004) is the first to bring graph
theoretic tools to stochastic choice. Using these tools, Fiorini (2004) offers a short proof of the characterization of random utility from Falmagne (1978). One of the key insights of Fiorini (2004) is that when choice probabilities are represented with a specific graph, this graph satisfies inflow equals outflow at each one of its interior nodes.
Our main result shows that this preservation of flow property is almost characteristic of choice probabilities. We show that a function satisfies inflow equals outflow at each interior node of the flow diagram, the graph from Fiorini (2004), if and only if it has a constant sum in each choice set, \(\sum_{x\in A}f(x,A)=\sum_{y\in B}f(y,B)\). We are able to characterize choice probabilities as a corollary of this result with two more axioms. The first axiom asks that there is a total of one flow leaving the initial node of the flow diagram. This along with inflow equals outflow gives us that our function sums to one at every choice set. The second axiom asks that, at each node, the total flow associated with the choice \(x\) from each weakly higher node must be non-negative. This axiom ensures that choice probabilities are non-negative.
In a recent paper, Kono et al. (2023) offers a characterization of random utility when every choice set is observed but the choice probabilities of some alternatives may be unobserved. This characterization requires one additional axiom beyond what Falmagne (1978) uses to characterize random utility. This new axiom is a statement about the (augmented) value of a cut of their graphical representation. A cut is a bipartition of the set of nodes of a graph. Call these two sets of nodes \(S\) and \(T\). The augmented value of a cut is the total flow from \(S\) to \(T\) minus the total flow from \(T\) to \(S\). Kono et al. (2023) asks that every cut have a non-negative augmented value. This result motivates the second part of our main result. We show that a function has a constant sum in each choice set if and only if every cut of its flow diagram has the same augmented value.
As an application of our main result, we offer a new characterization of random utility for incomplete data sets. The classic characterization of random utility for incomplete data sets is from McFadden and Richter (1990). Their method asks that there is some probability distribution over preferences which induces the observed choice probabilities. One way to get their axiom is to look at the alternative linear program of this existence question. One problem with this method is that it produces an axiom that references the underlying representation. An alternative way of posing the stochastic rationality question is by asking if there is an extension of the observed choice probabilities to a complete domain which satisfies the conditions of Falmagne (1978). Initially, this method may appear to give messy conditions as asking for an extension of choice probabilities is a statement about probabilities and asking that they satisfy the conditions of Falmagne (1978) is a statement about
the Mobius inverse of probabilities. Our main insight is that the extension problem can be written using only the Mobius inverse of choice probabilities via the flow conditions from our main result. Similar to McFadden and Richter (1990), we then obtain our axiom by looking at the alternative linear program of this restated extension problem.
The rest of this paper is organized as follows. In Section 2, we introduce our mathematical and graphical preliminaries. In Section 3, we present our main result. In Section 4, we apply our main result to the random utility model. In Section 5, we conclude with a discussion and a review of the related literature.
## 2 Preliminaries
Let \(X\) be a finite set of alternatives with typical elements \(x\). Let \(\mathcal{L}(X)\) be the set of linear orders of \(X\) with typical element \(\succ\). Let \(\Delta(\mathcal{L}(X))\) be the set of probability distributions over \(\mathcal{L}(X)\) with typical element \(\nu\). Let \(\mathcal{X}\) be a collection of nonempty subsets of \(X\) with typical element \(A\). Note that \(\mathcal{X}\) need not be \(2^{X}\setminus\{\emptyset\}\), the collection of _all_ nonempty subsets of \(X\). Let \(M(\succ,A)\) denote the element \(x\in A\) such that \(x\succ A\setminus\{x\}\). Let \(N(x,A)=\{\succ\ |x\succ A\setminus\{x\}\}\) be the set of linear orders which are maximized by \(x\) in \(A\).
We are interested in functions \(f:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) and the behavior of \(f(x,A)\) when \(x\in A\). Specifically, we are interested in characterizing set constant functions.
**Definition 2.1**.: A function \(f:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) is **set constant** if for each \(A,B\in 2^{X}\setminus\{\emptyset\}\), \(\sum_{x\in A}f(x,A)=\sum_{y\in B}f(y,B)\).
Within the class of set constant functions, our goal is to develop a better understanding of random choice rules.
**Definition 2.2**.: A function \(p:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) is called a **signed random choice rule** if \(\sum_{x\in A}p(x,A)=1\) for each nonempty \(A\subseteq X\). It is called a **random choice rule** if it is a signed random choice rule and \(p(x,A)\geq 0\) for all \(x\in A\).
Random choice rules capture the choice probabilities of a given alternative \(x\) in choice set \(A\). In order to study random choice rules, we utilize their Mobius inverse.
**Definition 2.3**.: The **Mobius inverse** of a function \(f:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) is given by the function \(g:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) which is recursively defined as follows.1
\[f(x,A)=\sum_{A\subseteq B}g(x,B)\]
Footnote 1: In the case of set inclusion, as we face here, the Möbius inverse can be defined explicitly as \(g(x,A)=\sum_{A\subseteq B}(-1)^{|B\setminus A|}f(x,B)\).
Rota (1964) studies Mobius inversion and shows that the Mobius inverse is always well-defined, each function \(f\) has a unique Mobius inverse \(g\), and each Mobius inverse \(g\) has a unique generating function \(f\).
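To make the inversion concrete, the following is a minimal sketch (ours, not the paper's) that computes the Möbius inverse on a small ground set via the closed form in footnote 1. The helper names and the uniform choice probabilities used as the example \(f\) are illustrative assumptions.

```python
from itertools import combinations

X = {"a", "b", "c"}

def subsets(s, nonempty=True):
    """Yield the (nonempty) subsets of s as frozensets."""
    s = list(s)
    start = 1 if nonempty else 0
    for k in range(start, len(s) + 1):
        for combo in combinations(s, k):
            yield frozenset(combo)

def f(x, A):
    """Illustrative choice probabilities: uniform choice, p(x, A) = 1/|A|."""
    return 1.0 / len(A) if x in A else 0.0

def mobius_inverse(func, ground):
    """g(x, A) = sum over supersets B of A of (-1)^{|B \\ A|} func(x, B)."""
    g = {}
    for A in subsets(ground):
        for x in A:
            g[(x, A)] = sum((-1) ** len(B - A) * func(x, B)
                            for B in subsets(ground) if A <= B)
    return g

g = mobius_inverse(f, frozenset(X))
# Sanity check of the defining identity f(x, A) = sum_{A subseteq B} g(x, B).
A = frozenset({"a", "b"})
assert abs(sum(g[("a", B)] for B in subsets(X) if A <= B) - f("a", A)) < 1e-9
```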
### Graphical Construction and Concepts
In this section we introduce our main graphical construction, first introduced by Fiorini (2004) to study random utility, as well as the concepts we use to analyze this construction. Given a finite set of alternatives \(X\) and a function \(f:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\), we construct the **flow diagram** associated with \(f\) as follows. The nodes of the flow diagram are indexed by the elements of \(2^{X}\), the power set of \(X\). We use the set indexing a node to refer to that node. There exists an edge between two nodes \(A\) and \(B\) if one of the following is true.
1. \(A\subseteq B\) and \(|B\setminus A|=1\)
2. \(B\subseteq A\) and \(|A\setminus B|=1\)
In other words, the edge set of this graph is formed by applying the covering relation of \(\subseteq\) to \(2^{X}\). For the edge connecting \(A\) and \(A\setminus\{x\}\), we assign \(g(x,A)\), the Mobius inverse of \(f\), as its edge weight. Figure 1 presents the flow diagram given a set \(X=\{a,b,c\}\) and a function \(f\) with Mobius inverse \(g\).
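A sketch of the flow-diagram construction just described, reusing the helpers and the Möbius inverse computed in the previous sketch; plain dictionaries stand in for a graph object, which is our modelling choice rather than anything prescribed by the paper.

```python
def flow_diagram(g, ground):
    """Edges keyed by (upper node A, lower node A \\ {x}), weighted by g(x, A)."""
    edges = {}
    for A in subsets(ground):       # every nonempty subset is an upper endpoint
        for x in A:
            edges[(A, A - {x})] = g[(x, A)]
    return edges

edges = flow_diagram(g, frozenset(X))
# The edge from {a, b, c} down to {b, c} carries weight g(a, X).
print(edges[(frozenset({"a", "b", "c"}), frozenset({"b", "c"}))])
```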
Given our flow diagram, we can consider paths on this flow diagram.
**Definition 2.4**.: Given a flow diagram for \(X\) and \(f\), a **path** is a collection of nodes \(\{A_{1},\ldots,A_{n}\}\) such that \(A_{1}=X\), \(A_{n}=\emptyset\), \(n=|X|+1\), and \(i>j\implies A_{i}\subsetneq A_{j}\). We use the shorthand \(\pi\) to refer to an arbitrary path whose nodes we do not specify.
We use \(\Pi_{X}\) to denote the set of paths for the flow diagram of \(X\) and \(f\). Further, we let \(E_{X}\), with typical element \(e\), denote the set of edges of the flow diagram for \(X\) and \(f\). We say that a path \(\pi\) passes through an edge \(e\) if \(e\) connects nodes \(A\) and \(A\setminus\{x\}\) and \(A,A\setminus\{x\}\in\pi\).
**Definition 2.5**.: Given the flow diagram of a set \(X\) and a function \(f\), a **flow assignment** is a function \(r:\Pi_{X}\rightarrow\mathbb{R}\) such that for each edge \(e\) we have \(\sum_{\pi\in\Pi_{X}}r(\pi)\mathbf{1}\{\pi\text{ passes through }e\}\leq g(x,A)\) where \(g(x,A)\) is the edge weight of \(e\).
We say that the value \(u\) of a flow assignment \(r\) is given by \(u(r)=\sum_{\pi\in\Pi_{X}}r(\pi)\). The value of a flow assignment captures the total flow assigned to all paths. There is a connection between the feasible flow assignments of a graph and cuts of a graph.
**Definition 2.6**.: Given a flow diagram, a **cut**\(C\) of the flow diagram is a bipartition of the set of nodes of the flow diagram with each set in the bipartition being nonempty. A **sink-source cut** is a cut such that \(X\) and \(\emptyset\) are in different sets of the bipartition.
Figure 1: The flow diagram for the set \(X=\{a,b,c\}\) and function \(f\) with Möbius inverse \(g\).

From here on out, whenever we use cut we refer specifically to sink-source cuts. Figure 2 gives an example of a cut of a flow diagram where \(X=\{a,b,c\}\). We let \(\mathcal{C}_{X}\) denote the collection of cuts of the flow diagram associated with \(X\) and \(f\). Given a cut \(C\), we use \(S\) to refer to the set in the cut bipartition which contains \(X\) and \(T\) to refer to the set in the cut bipartition which contains \(\emptyset\). The value \(v\) of a cut \(C\) is the sum of edge weights across edges connecting nodes lying in different sets of the cut bipartition.
\[v(C)=\sum_{A\in 2^{X}\setminus\{\emptyset\}}\sum_{x\in A}g(x,A)\mathbf{1}\{(A \in S\wedge A\setminus\{x\}\in T)\vee(A\in T\wedge A\setminus\{x\}\in S)\}\]
To calculate the value of the cut in Figure 2, we would just sum the edge weights of the edges which intersect with the dashed red line. Given a graph with non-negative edge weights, Ford and Fulkerson (1956) shows that the minimum value of a cut of a graph is equal to the maximum value of a flow assignment to that graph.2 We need a slightly non-standard definition of the value of a cut as we work with potentially negative edge weights.
Footnote 2: Theorem 3 of Dogan and Yildiz (2022) provides a similar result which allows edge weights to be negative. However, this result is not sufficient for our analysis.
**Definition 2.7**.: Given a cut \(C\), the **augmented value**\(w\) of the cut is given by the following.
\[w(C)=\sum_{A\in 2^{X}\setminus\{\emptyset\}}\sum_{x\in A}g(x,A)(\mathbf{1} \{A\in S\wedge A\setminus\{x\}\in T\}-\mathbf{1}\{A\in T\wedge A\setminus\{x \}\in S\}) \tag{1}\]
While the value of a cut rewards an edge for going from \(S\) to \(T\) and for going from \(T\) to \(S\), the augmented value of a cut rewards an edge for going from \(S\) to \(T\) but punishes an edge for going from \(T\) to \(S\). Consider a cut and a path. This path could in theory pass between \(S\) and \(T\) multiple times and thus be counted multiple times in the value of the cut. By considering the augmented value of a cut, we are able to deal with this double counting of paths.

Figure 2: Above, the red dashed line represents the cut of the flow diagram. In this case, \(\{\{a,b,c\},\{a,b\},\{a,c\}\}\) and \(\{\{b,c\},\{a\},\{b\},\{c\},\emptyset\}\) are the two sets which partition \(2^{X}\), the set of nodes of the flow diagram.
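Both the value and the augmented value of a cut can be computed directly from the edge weights. The sketch below reuses the edges dictionary from the construction above; the particular bipartition is the one described in the Figure 2 caption, and the function name is ours.

```python
def cut_values(edges, S, T):
    """Return (value v(C), augmented value w(C)) of the sink-source cut (S, T)."""
    v = w = 0.0
    for (A, B), weight in edges.items():      # edge from A down to B = A \ {x}
        if A in S and B in T:                 # crosses from S to T
            v += weight
            w += weight
        elif A in T and B in S:               # crosses from T to S
            v += weight
            w -= weight
    return v, w

# The cut of Figure 2: S holds X, {a, b} and {a, c}; T holds everything else.
S = {frozenset({"a", "b", "c"}), frozenset({"a", "b"}), frozenset({"a", "c"})}
T = {frozenset({"b", "c"}), frozenset({"a"}), frozenset({"b"}),
     frozenset({"c"}), frozenset()}
print(cut_values(edges, S, T))
```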
## 3 Main Result
In this section we present our main result, a characterization of set constant functions through their Mobius inverse. We then provide further conditions on the Mobius inverse to ensure that a function is a signed random choice rule and then a random choice rule. Our first axiom is called inflow equals outflow.
**Axiom 3.1**.: _A function \(f:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) with Mobius inverse \(g\) satisfies **inflow equals outflow** if the following holds for all \(A\in 2^{X}\setminus\{X,\emptyset\}\)._
\[\sum_{x\in A}g(x,A)=\sum_{y\not\in A}g(y,A\cup\{y\}) \tag{2}\]
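Axiom 3.1 is a finite system of linear identities in the Möbius inverse and can be checked mechanically. The sketch below reuses the helpers from Section 2; the function name and tolerance are ours.

```python
def satisfies_inflow_outflow(g, ground, tol=1e-9):
    """Check Equation (2) at every node other than the ground set and the empty set."""
    for A in subsets(ground):
        if A == ground:
            continue
        outflow = sum(g[(x, A)] for x in A)
        inflow = sum(g[(y, A | {y})] for y in ground - A)
        if abs(inflow - outflow) > tol:
            return False
    return True

# Uniform choice probabilities are set constant, so the axiom holds.
print(satisfies_inflow_outflow(g, frozenset(X)))
```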
As a necessary step in characterizing random utility via the Mobius inverse of choice probabilities, Falmagne (1978) shows that every random choice rule satisfies inflow equals outflow. While presenting a graphical proof of Falmagne's result, Fiorini (2004) shows that inflow equals outflow can be interpreted as inflow equaling outflow at every node of the flow diagram which is neither \(X\) nor \(\emptyset\). Part of our contribution is showing that inflow equals outflow almost characterizes random choice rules. Our second axiom leverages cuts and their augmented values.
**Axiom 3.2**.: _A function \(f:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) satisfies **constant cuts** if the flow diagram of \(X\) and \(f\) satisfies \(\max_{C\in\mathcal{C}_{X}}w(C)=\min_{C\in\mathcal{C}_{X}}w(C)\)._
In the previous section, we noted that our focus on augmented weights is in order to avoid double counting of paths. The constant cuts axiom is exactly asking that no matter how you count paths via a cut and sum across their corresponding edge weights, as long as there is no double counting, the total sum of edge weights will be constant.
**Theorem 3.1**.: _Consider a function \(f:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\). The following are equivalent._
1. \(f\) _is set constant._
2. \(f\) _satisfies inflow equals outflow._
3. \(f\) _satisfies constant cuts._
While we relegate the proof to the appendix, we present a sketch of the proof here. The equivalence of \(f\) being set constant and \(f\) satisfying inflow equals outflow comes from the following equation.
\[\sum_{x\in A}g(x,A)-\sum_{y\in X\setminus A}g(y,A\cup\{y\})=\sum_{x\in A}f(x,A )-\sum_{y\in X}f(y,X) \tag{3}\]
We begin by showing that this is the case when \(A=X\setminus\{y\}\) which relies on the observation that the Mobius inverse of \(f\) is equal to \(f\) when evaluated at \(X\). In other words, we have \(g(x,X)=f(x,X)\). If we recall the definition of the Mobius inverse, it is immediate that the left hand side of Equation 3 reduces to the right hand side. In this case, it immediately follows that inflow equals outflow is equivalent to \(f\) being set constant. The rest of proving this equivalence relies on inductively showing that Equation 3 holds for other choices of \(A\) when either inflow equals outflow or set constant holds.
In the next part of the proof we show that inflow equals outflow is equivalent to constant cuts. To show that inflow equals outflow implies constant cuts, we show that, when inflow equals outflow holds, you can always completely decompose the flow diagram into a flow assignment. By this, we mean that you can always find a flow assignment which assigns flow at an edge equal to the edge weight of that edge. We call such a flow assignment a flow decomposition. This means that we can calculate the edge weights of any edge by calculating the flow through that edge from a flow decomposition. We then consider any cut of the flow diagram. Since the augmented value of a cut counts every path at least once and avoids double counting of paths and using the fact that we can calculate an edge weight from flows through that edge, the augmented value of any cut is equal to the value of a flow decomposition. Since the total flow of a flow decomposition is constant, this gives us constant cuts.
To show that constant cuts implies inflow equals outflow, we consider two specific cuts. The first cut we consider is given by \(S=\{A\subseteq X|n\leq|A|\}\) and the second cut we consider is \(S^{\prime}=\{A\subseteq X|n\leq|A|\}\setminus\{B\}\) where \(|B|=n\). When calculating the augmented values of these two cuts, each edge leaving \(B\) is counted for \(S\) but not for \(S^{\prime}\) and each edge going into \(B\) is counted for \(S^{\prime}\) but not for \(S\). This means that the difference between the augmented value of these two cuts is given by \(\sum_{x\in B}g(x,B)-\sum_{y\in X\setminus B}g(y,B\cup\{y\})\). Since \(f\) satisfies constant cuts, this difference is zero and thus \(f\) satisfies inflow equals outflow.
As is clear from our proof of Theorem 3.1, we do not need to consider every cut in order to show that constant cuts implies inflow equals outflow. We can ease the computational burden of checking constant cuts by considering a subclass of cuts.
**Definition 3.1**.: A cut \(C=(S,T)\) is **single-crossing** if for every path \(\{A_{i}\}_{i=1}^{n}\), \(i>j\) and \(A_{j}\in T\) imply \(A_{i}\in T\).
Single-crossing cuts are exactly the cuts which guarantee there is no double counting of paths. As an immediate corollary of our proof, it is sufficient to check every single-crossing cut in order to show that constant cuts holds.
**Corollary 3.1**.: _Let \(\mathcal{C}_{X}^{SC}\) be the set of single-crossing cuts of the flow diagram of \(X\) and \(f\). \(f\) satisfies constant cuts if and only if \(\max_{C\in\mathcal{C}_{X}^{SC}}w(C)=\min_{C\in\mathcal{C}_{X}^{SC}}w(C)\). Further, this is equivalent to \(\max_{C\in\mathcal{C}_{X}^{SC}}v(C)=\min_{C\in\mathcal{C}_{X}^{SC}}v(C)\)._
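Corollary 3.1 suggests a smaller family of cuts to check. The sketch below enumerates single-crossing cuts by brute force, under the assumption (which follows from Definitions 2.4 and 3.1) that a sink-source cut is single-crossing exactly when its \(T\) side is closed under taking subsets, and verifies that the augmented value is constant across them for the set constant example above.

```python
def single_crossing_cuts(ground):
    """Sink-source cuts (S, T) whose T side is closed under subsets."""
    nodes = list(subsets(ground, nonempty=False))
    for mask in range(1, 2 ** len(nodes)):          # brute force for small X
        T = {nodes[i] for i in range(len(nodes)) if (mask >> i) & 1}
        if frozenset() not in T or ground in T:
            continue
        if all(B in T for A in T for B in subsets(A, nonempty=False)):
            yield set(nodes) - T, T

augmented = {round(cut_values(edges, S, T)[1], 9)
             for S, T in single_crossing_cuts(frozenset(X))}
print(augmented)    # a single value, consistent with constant cuts
```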
The second part of Corollary 3.1 makes the observation that if no path travels between \(S\) and \(T\) multiple times, then it is not necessary to use the augmented weight to deal with double counting. By characterizing set constant functions, we have done most of the work involved with characterizing random choice rules in terms of their Mobius inverse. All that is left to do is to ensure that our function \(f\) sums to one in each set and is non-negative everywhere. The following corollaries of Theorem 3.1 capture these conditions.
**Corollary 3.2**.: _Let \(f\) be a set constant function with Mobius inverse \(g\). \(f\) is a signed random choice rule if and only if \(\sum_{x\in X}g(x,X)=1\)._
Corollary 3.2 says that if we have a set constant function, all we need to do to get a signed random choice rule is to guarantee that our function sums to one on some set, in this case \(X\).
**Corollary 3.3**.: _Let \(f\) be a signed random choice rule with Mobius inverse \(g\). \(f\) is a random choice rule if and only if for each nonempty \(A\subseteq X\) and \(x\in A\), we have \(\sum_{A\subseteq B}g(x,B)\geq 0\)._
The condition in Corollary 3.3 is a direct translation of the non-negativity condition of probabilities into their Mobius inverse. Taken together, Theorem 3.1 and Corollaries 3.2 and 3.3 fully characterize random choice rules. Our key axiom in this characterization is
inflow equals outflow. This axiom captures the preservation of flow at each node in the flow diagram which is neither \(X\) nor \(\emptyset\). This property is equivalent to the preservation of choice probabilities at each choice set. The equivalence between preservation of flow and the preservation of choice probabilities is exactly why the flow diagram is a natural representation of choice probabilities.
## 4 An Application to Random Utility
In this section we use Theorem 3.1 to provide a new characterization of stochastic rationality. We consider random choice rules on a potentially limited domain.
**Definition 4.1**.: A random choice rule \(p:X\times\mathcal{X}\to\mathbb{R}\) is **stochastically rational** if there exists some probability distribution over linear orders, \(\nu\in\Delta(\mathcal{L}(X))\), such that for each \(A\in\mathcal{X}\) and \(x\in A\) we have the following.
\[p(x,A)=\sum_{\succ\in\mathcal{L}(X)}\nu(\succ)\mathbf{1}\{x\succ A\setminus\{x \}\}\]
We use \(q\) to denote the Mobius inverse of a random choice rule \(p\). Falmagne (1978) was the first to characterize stochastically rational random choice rules and did so through the use of the Block-Marschak polynomials (Block and Marschak, 1959). The Block-Marschak polynomials are exactly the Mobius inverse of a random choice rule.
**Theorem 4.1** (Falmagne (1978)).: _A random choice rule \(p:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) with Mobius inverse \(q\) is stochastically rational if and only if \(q(x,A)\geq 0\) for every nonempty \(A\subseteq X\) and \(x\in A\)._
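On a full domain, Theorem 4.1 is a directly computable test: form the Block-Marschak polynomials and check non-negativity. A sketch reusing mobius_inverse from the Section 2 sketch; the tolerance and function name are ours.

```python
def falmagne_test(p, ground, tol=1e-9):
    """Full-domain test of stochastic rationality: q(x, A) >= 0 for all x in A."""
    q = mobius_inverse(p, ground)
    return all(value >= -tol for value in q.values())

# Uniform choice probabilities arise from the uniform distribution over
# linear orders, so the test should pass.
print(falmagne_test(f, frozenset(X)))
```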
Theorem 4.1 is appealing in many ways, one of which is that it characterizes stochastic rationality with a finite set of linear inequalities. However, one of the shortcomings of the characterization is that it relies on the observation of choice probabilities on _every_ nonempty subset of \(X\). This is generally an overly restrictive assumption on data. McFadden and Richter (1990) offer an alternative approach to characterizing stochastic rationality which weakens the full domain assumption.
**Theorem 4.2** (McFadden and Richter (1990)).: _A random choice rule \(p:X\times\mathcal{X}\to\mathbb{R}\) is stochastically rational if and only if for any finite sequence \(\{(x_{i},A_{i})\}_{i=1}^{n}\) with \(x_{i}\in A_{i}\in\mathcal{X}\)_
the following holds._
\[\sum_{i=1}^{n}p(x_{i},A_{i})\leq\max_{\succ\in\mathcal{L}(X)}\sum_{i=1}^{n} \mathbf{1}\{x_{i}\succ A_{i}\setminus\{x_{i}\}\}\]
Theorem 4.2 solves the complete domain problem but requires an infinite number of linear inequalities and its statement requires a reference to the underlying representation. Our upcoming characterization combines the characterization of Falmagne (1978) with the insight from our Theorem 3.1 in order to pose the stochastic rationality question for arbitrary domains without referencing the underlying representation. Our characterization relies on two supplemental functions.
**Definition 4.2**.: A function \(c:2^{X}\rightarrow\mathbb{R}\) is a **capacity** if \(c(\emptyset)=0\).
**Definition 4.3**.: A function \(a:X\times\mathcal{X}\rightarrow\mathbb{R}\) is an **assignment**.
In order to best interpret the role of capacities and assignments, we introduce the following definition.
**Definition 4.4**.: An assignment and capacity pair \((a,c)\) is **feasible** if the following inequality holds.
\[\sum_{A\in\mathcal{X}}\sum_{x\in A}p(x,A)a(x,A)\leq c(X) \tag{4}\]
The role of an assignment \(a\) is to assign some weight to the event that \(x\) is chosen from \(A\). When combined with the probability that \(x\) is chosen from \(A\), \(p(x,A)\), this weight is given by \(a(x,A)p(x,A)\). Each set \(A\) has a capacity \(c(A)\) which must contain the total weight of the events where \(x\) is chosen from \(B\) for each \(B\subseteq A\). Feasibility simply asks that the total weight put on choosing some element from some set is no more than the capacity of \(X\), the total capacity of our environment. Our characterization relies on a second type of feasibility.
**Definition 4.5**.: An assignment and capacity pair \((a,c)\) is **locally feasible** if for each \((x,A)\) with \(x\in A\in 2^{X}\setminus\{\emptyset\}\) the following inequality holds.
\[\sum_{x\in B\in\mathcal{X},B\subseteq A}a(x,B)\leq c(A)-c(A\setminus\{x\}) \tag{5}\]
Local feasibility is local in two senses. It is local in that it captures the change in capacity between two sets \(A\) and \(A\setminus\{x\}\) and in that it captures feasibility along one path of the flow diagram. In order to better understand this connection, we must first interpret the paths
of the flow diagram in terms of linear orders. A path \(\{X,X\setminus\{x_{1}\},\ldots,\{x_{n}\},\emptyset\}\) on the flow diagram corresponds to the linear order \(x_{1}\succ\cdots\succ x_{n}\). Figure 3 gives an example of a path and its corresponding linear order. To better understand this bijective relationship between paths and linear orders, we turn to a result of Falmagne (1978).
**Theorem 4.3** (Falmagne (1978)).: _A distribution over linear orders \(\nu\) is a random utility representation of a random choice rule \(p:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) if and only if the following holds for all nonempty \(A\subseteq X\) and \(x\in A.\)_
\[q(x,A)=\sum_{\succ\in\mathcal{L}(X)}\nu(\succ)\mathbf{1}\{X\setminus A \succ x\succ A\setminus\{x\}\}\]
Theorem 4.3 says that a random utility representation of a random choice rule must put probability weight on linear orders which choose \(x\) from \(A\) but not from any \(A\cup\{y\}\) equal to \(q(x,A)\), the Mobius inverse of \(p(x,\cdot)\) evaluated at \(A\). Recall that our flow diagram assigns \(q(x,A)\) as the edge weight for the edge connecting nodes \(A\) and \(A\setminus\{x\}\). This means that the edge weight of the edge connecting \(A\) and \(A\setminus\{x\}\) corresponds to the event in which a preference which chooses \(x\) from \(A\) but not from any \(A\cup\{y\}\) is drawn. If we take the intersection of these events along a full path, then the unique preference in the intersection of these events is the linear order associated with that path.
We now return to the interpretation of local feasibility. If we first start at the empty set and then follow some path up to \(X\), local feasibility gives us a sequence of inequalities.
\[a(x_{n},\{x_{n}\})\leq c(\{x_{n}\})-0\] \[a(x_{n-1},\{x_{n},x_{n-1}\})+a(x_{n-1},\{x_{n-1}\})\leq c(\{x_{n},x_{n-1}\})-c(\{x_{n}\}) \tag{6}\] \[\ldots\]
If we sum across this entire sequence, we are left with the following inequality.
\[\sum_{x_{i}\in X}\sum_{x_{i}\in A,\forall j<i,x_{j}\not\in A}a(x_{i},A)\leq c(X) \tag{7}\]
Equations 4 and 7 are connected when we consider the case of classically rational choices. To see this observe the following.
**Observation 4.1**.: A random choice rule \(p:X\times 2^{X}\setminus\{\emptyset\}\rightarrow\mathbb{R}\) can be written as \(p(x,A)=\mathbf{1}\{x\succ A\setminus x\}\) for some \(\succ\in\mathcal{L}(X)\) if and only if for each \(x\in X\), \(p(x,A)=\mathbf{1}\{A\subseteq B_{x}\}\) for some \(B_{x}\in 2^{X}\setminus\{\emptyset\}\).
Observation 4.1 just restates the classic result that the independence of irrelevant alternatives condition (see Chernoff (1954), Arrow (1959), and Sen (1971)) is equivalent to classic rationality. The thing to note is that when independence of irrelevant alternatives is translated into choice probabilities, it states that each element's choice probabilities should be a step function. By applying Observation 4.1 to Equation 4, thus imposing classic rationality on our random choice rule, Equation 4 exactly reduces to Equation 7. In this sense, local feasibility reduces to feasibility for every classically rational choice function.
Local feasibility is a local constraint in a second way. In Equation 6, we chose to sum across every inequality in the list. If instead we sum across only the first \(k\) inequalities in the list and let \(A=\{x_{n-k+1},\ldots,x_{n}\}\) be the union of the elements they involve, we are left with the following inequality.
\[\sum_{x_{i}\in A}\sum_{x_{i}\in B,\forall j<i,x_{j}\not\in B}a(x_{i},B)\leq c(A) \tag{8}\]
Equation 8 is exactly Equation 7 restricted to the subdomain of \(A\). Now suppose that we want to verify that this inequality holds for \(A\cup\{x_{n-k}\}\). Further, suppose that we do not know whether Equation 8 holds with equality. Local feasibility is exactly the necessary and sufficient condition needed to verify that Equation 8 holds for \(A\cup\{x_{n-k}\}\). Thus our local feasibility condition is a statement about local conditions that imply feasibility when choices are classically rational.
From our prior discussion, it should be clear that there is some connection between local feasibility and feasibility for stochastically rational random choice rules. This connection follows straight from the classically rational case. A stochastically rational random choice rule can be written as a convex combination of classically rational random choice rules. Given a preference \(\succ\), we can consider the path corresponding to \(\succ\) and its induced Equation 7. This preference has some probability weight on it in the random utility representation \(\nu\). If we sum across each variant of Equation 7, assigning each of these different variants weight equal to \(\nu(\succ)\), the probability weight of the linear order that generated this variant, then the inequality we are left with is exactly the feasibility condition of Equation 4. This leads us to our characterization.
**Theorem 4.4**.: _A random choice rule \(p:X\times\mathcal{X}\to\mathbb{R}\) is stochastically rational if and only if every locally feasible assignment and capacity pair \((a,c)\) is also feasible._
Our main innovation over Theorem 4.2 is that we use Theorem 3.1 to rewrite the stochastic rationality linear program without reference to any preferences. One way to interpret the condition in Theorem 4.2 is to first write down a linear program \(rM=p\) where \(M\) is a matrix that encodes the choices of each linear order, \(p\) encodes the observed choice probabilities, and \(r\) encodes a potential probability distribution over linear orders. The condition in Theorem 4.2 is then related to the alternative linear program through Farkas's Lemma (see Border (2007) and Border (2013)). We rewrite the stochastic rationality linear program as follows.
\[Dq=P,\qquad Eq=\mathbf{0},\qquad Fq=1,\qquad q\geq 0 \tag{9}\]
In Equation 9, we use \(q\) to represent the Mobius inverse of a potential full domain random choice rule. \(Dq=P\) encodes that this potential full domain random choice rule must agree with our observed random choice rule on observed choice sets. We use \(Eq=0\) to encode the inflow equals outflow axiom and use \(Fq=1\) to encode that \(\sum_{x\in X}q(x,X)=1\). From
Theorem 4.1, we know that a full domain random choice rule is stochastically rational if and only if the Mobius inverse of choice probabilities is non-negative. This condition implies \(\sum_{A\subseteq B}q(x,B)\geq 0\), the condition from Corollary 3.3. This means that the condition \(q\geq 0\) encodes both stochastic rationality as well as the last condition necessary to ensure that the potential \(q\) is in fact the Mobius inverse of some full domain random choice rule. We can now obtain the alternative linear program through Farkas's Lemma and some minor manipulation leaves us with our condition in Theorem 4.4.
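The feasibility program just described can be assembled mechanically for small examples. The sketch below is our own translation into a generic linear programming solver (here scipy.optimize.linprog), not the paper's implementation: it builds the \(D\), \(E\), and \(F\) blocks row by row and reports whether a non-negative full-domain \(q\) exists.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def stochastically_rational(p_obs, ground):
    """Feasibility of Dq = P, Eq = 0, Fq = 1, q >= 0 for observed data p_obs,
    a dict mapping (x, A), with A a frozenset in the observed domain, to p(x, A)."""
    full = [frozenset(c) for k in range(1, len(ground) + 1)
            for c in combinations(sorted(ground), k)]
    cols = [(x, A) for A in full for x in sorted(A)]
    col = {pair: i for i, pair in enumerate(cols)}
    A_eq, b_eq = [], []
    # D rows: p(x, A) = sum over supersets B of A of q(x, B).
    for (x, A), prob in p_obs.items():
        row = np.zeros(len(cols))
        for B in full:
            if A <= B:
                row[col[(x, B)]] = 1.0
        A_eq.append(row); b_eq.append(prob)
    # E rows: inflow equals outflow at every node other than X and the empty set.
    for A in full:
        if A == ground:
            continue
        row = np.zeros(len(cols))
        for x in A:
            row[col[(x, A)]] -= 1.0
        for y in ground - A:
            row[col[(y, A | {y})]] += 1.0
        A_eq.append(row); b_eq.append(0.0)
    # F row: the entries q(., X) sum to one.
    row = np.zeros(len(cols))
    for x in ground:
        row[col[(x, ground)]] = 1.0
    A_eq.append(row); b_eq.append(1.0)
    res = linprog(c=np.zeros(len(cols)), A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=(0, None))            # bounds impose q >= 0
    return res.status == 0                     # 0 means a feasible q was found

# Binary-comparison data on {a, b, c}: a beats b, b beats c, a beats c.
ground = frozenset({"a", "b", "c"})
obs = {("a", frozenset({"a", "b"})): 1.0, ("b", frozenset({"a", "b"})): 0.0,
       ("b", frozenset({"b", "c"})): 1.0, ("c", frozenset({"b", "c"})): 0.0,
       ("a", frozenset({"a", "c"})): 1.0, ("c", frozenset({"a", "c"})): 0.0}
print(stochastically_rational(obs, ground))    # True: rationalized by a > b > c
```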
## 5 Discussion
In this paper, we argue that the Mobius inverse of choice probabilities as well as graph theoretic tools are well suited for studying stochastic choice. In doing so, we show that random choice rules are characterized by three properties of their Mobius inverse, with inflow equals outflow being the defining axiom. While we do not claim that these tools are the best tools for every stochastic choice problem, they provide an alternative perspective and a means by which to study the stochastic choice paradigm. As an example, consider the classic model of Luce (1959).
**Definition 5.1**.: A random choice rule \(p:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) is **consistent** with the Luce model if there exists a function \(h:X\to\mathbb{R}^{++}\) such that \(p(x,A)=\frac{h(x)}{\sum_{y\in A}h(y)}\).
It is well known that the Luce model is characterized by positive choice probabilities and the stochastic independence of irrelevant alternatives condition. We can reinterpret these conditions with our set of tools to get the following result.
**Theorem 5.1**.: _Consider a random choice rule \(p:X\times 2^{X}\setminus\{\emptyset\}\to\mathbb{R}\) with Mobius inverse \(q\). The following are equivalent._
1. \(p\) _is consistent with the Luce model._
2. \(p(x,A)>0\) _for all_ \(x\in A\subseteq X\) _and_ \(\frac{p(x,A)}{p(y,A)}=\frac{p(x,B)}{p(y,B)}\) _for all_ \(x,y\in A\cap B\)_._
3. \(q(x,A)>0\) _for all_ \(x\in A\subseteq X\) _and_ \(\frac{q(x,A)}{q(y,A)}=\frac{q(x,B)}{q(y,B)}\) _for all_ \(x,y\in A\cap B\)_._
The equivalence of the first two conditions is the result of Luce (1959). The equivalence with the third condition is novel and tells us that choice probabilities having a constant ratio
across sets is equivalent to a constant proportional assignment of inflows to outflows at each node in our flow diagram.
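The third condition of Theorem 5.1 is easy to verify numerically for a given Luce model. The sketch below reuses the Section 2 helpers; the particular weights are illustrative, and the check simply confirms positivity and the constant ratio of the Möbius inverse across sets.

```python
weights = {"a": 1.0, "b": 2.0, "c": 4.0}      # illustrative Luce weights h(x)

def p_luce(x, A):
    return weights[x] / sum(weights[y] for y in A)

q = mobius_inverse(p_luce, frozenset(weights))
assert all(value > 0 for value in q.values())              # q(x, A) > 0 everywhere
for A in subsets(frozenset(weights)):
    if {"a", "b"} <= A:
        print(sorted(A), q[("a", A)] / q[("b", A)])        # constant ratio 0.5
```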
We also apply our characterization of random choice rules to offer a new characterization of stochastic rationality. In a recent paper, Gonczarowski et al. (2023) shows that Theorem 4.2 can be extended to allow for infinite \(X\). While our Theorem 4.4 is focused on finite \(X\), it can be extended using the same tools as Gonczarowski et al. (2023). The condition that Gonczarowski et al. (2023) asks for is that the condition in Theorem 4.2 must hold for every finite sequence of \((x_{i},A_{i})\). This is equivalent to the condition from Theorem 4.2 holding on every finite subdomain of their potentially infinite collection of choice sets. Since Theorem 4.2 and our Theorem 4.4 both characterize stochastic rationality for finite \(X\), it follows that this is equivalent to the condition from our Theorem 4.4 holding for every finite subdomain.
Finally, we note that the graphical methods we develop in this paper may offer computational improvements over current methods. Kitamura and Stoye (2018) develops a hypothesis test for stochastic rationality and Smeulders et al. (2021) shows that this test is NP-hard. A large part of this computational complexity is due to the fact that this test involves calculating the matrix \(M\) which encodes the choice of every linear order at every (observed) choice set. When choices are observed at every choice set, this \(M\) matrix has one row for each path of the flow diagram. Alternatively, a matrix which encodes our local feasibility condition need only have as many rows as there are edges in the flow diagram, which is strictly less than the number of paths in the flow diagram when \(|X|\geq 5\).3 As such, there may be ways to leverage our Theorem 4.4 in order to reduce the computational burden of testing for stochastic rationality.
Footnote 3: To see this, note that the number of paths in the flow diagram is equal to the number of linear orders of \(X\), which is given by \(|X|!\) where \(!\) denotes factorial. On the other hand, the number of edges in the flow diagram is equal to the number of edges leaving a node summed over each node. This is given by \(\sum_{i=1}^{|X|}i\binom{|X|}{i}\) where \(\binom{|X|}{i}\) represents the binomial coefficient.
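The comparison in footnote 3 is a quick arithmetic check; the short sketch below simply tabulates both counts.

```python
from math import comb, factorial

for n in range(2, 9):
    paths = factorial(n)                                         # one path per linear order
    edge_count = sum(i * comb(n, i) for i in range(1, n + 1))    # equals n * 2^(n-1)
    print(n, paths, edge_count, edge_count < paths)              # True from n = 5 onward
```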
### Related Literature
Our paper is related to two strands of literature. The first strand applies graphical methods to study questions in the stochastic choice paradigm. To our knowledge, Fiorini (2004) is the first to bring graph theoretic tools to the study of stochastic choice. Fiorini (2004) studies the characterization of random utility presented in Falmagne (1978) and provides a novel proof which leverages the observation that choice probabilities satisfy inflow equals
outflow and that this can be naturally represented on the flow diagram. More recently, Davis-Stober et al. (2018) studies the flow polytopes of other random utility style models including models of random weak orders, interval orders, and semiorders. Doignon and Saito (2022) studies the adjacency of vertices in the linear order polytope and its associated flow polytope, our flow diagram. Chang et al. (2022) uses the adjacency of linear orders in order to study when random-coefficient models can approximate random utility models. Turansick (2022) uses the flow diagram to study the uniqueness properties of the random utility model. Dogan and Yildiz (2022) provides an extension of the result of Ford and Fulkerson (1956) in order to show that every random choice rule can be represented as a linear combination of linear orders. Saito (2017) and Chambers et al. (2023) provide alternate proofs of this result with the latter using a flow decomposition argument similar to the one used in the proof of our Theorem 3.1. Further, Chambers et al. (2023) extends the flow diagram to allow for choice with multiple dimensions in order to study which random joint choice rules have well-defined marginal choice probabilities. Sprumont (2022) uses flows to study which binary choice probabilities admit an extension to the full domain while maintaining monotonicity of choice probabilities. Finally, as mentioned previously, Kono et al. (2023) uses a graphical construction as well as flows and cuts in order to characterize random utility when the choice probabilities of some alternatives are unobserved.
The second strand of literature that our paper contributes to is the one which offers characterizations of the random utility model. Falmagne (1978) is the first to characterize the random utility model and does so by asking that the Mobius inverse of choice probabilities be non-negative. Monderer (1992) provides an alternate proof of this result using methods from cooperative game theory. Cohen (1980) considers an extension of the result of Falmagne (1978) to an infinite domain. Nandeibam (2009) provides a different characterization of random utility using positive linear functionals. McFadden and Richter (1990) offers a characterization of random utility when the choice domain is incomplete. Stoye (2019) offers a short proof of this result using tools from convex analysis. McFadden (2005) offers an extension of this result to an infinite domain under some regularity conditions. Recently, Gonczarowski et al. (2023) extends this result to an infinite domain without any regularity conditions. Clark (1996) offers an alternative characterization of random utility in the case of an incomplete domain using de Finetti's coherency axiom.
## Proofs
### Proof of Theorem 3.1
We begin by showing the equivalence of \(f\) satisfying inflow equals outflow and \(f\) being set constant, starting with the necessity of inflow equals outflow. Consider a function \(f\) with Mobius inverse \(g\) such that \(f\) is set constant. We proceed via induction on the size of the complement of \(A\). For the base case, let \(A=X\setminus\{x\}\). Observe that \(f(x,X)=g(x,X)\). We have the following.
\[\begin{aligned}\sum_{z\in A}g(z,A)&=\sum_{z\in A}f(z,A)-\sum_{z\in A}g(z,X)\\ &=\sum_{z\in X}f(z,X)-\sum_{z\in A}f(z,X)\\ &=f(x,X)=g(x,X)\end{aligned}\]
Above, the first equality holds by the definition of Mobius inverse. The second equality holds from \(f\) being set constant. The third equality follows after collecting like terms. This shows that the base case of inflow equals outflow holds. Now assume that inflow equals outflow holds for all \(B\) with \(|X\setminus B|<n\). Let \(A\) be such that \(|X\setminus A|=n\).
\[\begin{aligned}\sum_{x\in A}g(x,A)&=\sum_{x\in A}f(x,A)-\sum_{x\in A}\sum_{A\subsetneq A^{\prime}}g(x,A^{\prime})\\ &=\sum_{x\in A}f(x,A)-\sum_{A\subsetneq A^{\prime}}\Big[\sum_{x\in A^{\prime}}g(x,A^{\prime})-\sum_{x\in A^{\prime}\setminus A}g(x,A^{\prime})\Big]\\ &=\sum_{A\subsetneq A^{\prime}}\sum_{x\in A^{\prime}\setminus A}g(x,A^{\prime})-\sum_{A\subsetneq A^{\prime}\subsetneq X}\sum_{x\in A^{\prime}}g(x,A^{\prime})\\ &=\sum_{A\subsetneq A^{\prime}}\sum_{x\in A^{\prime}\setminus A}g(x,A^{\prime})-\sum_{A\subsetneq A^{\prime}\subsetneq X}\sum_{y\in X\setminus A^{\prime}}g(y,A^{\prime}\cup\{y\})\\ &=\sum_{z\in X\setminus A}g(z,A\cup\{z\})\end{aligned}\]
Above, the first equality holds by the definition of Mobius inverse. The second equality
just adds zero. The third equality holds as \(g(x,X)=f(x,X)\) and because \(f\) is set constant. The fourth equality holds by the induction hypothesis. The fifth equality follows from combining like terms. Thus the above string of equalities shows that inflow equals outflow is necessary. We now show sufficiency. Suppose \(f\) satisfies inflow equals outflow. Consider some \(A\subsetneq X\).
\[\begin{aligned}\sum_{x\in A}g(x,A)&=\sum_{x\in A}\Big[f(x,A)-\sum_{A\subsetneq A^{\prime}}g(x,A^{\prime})\Big]\\ &=\sum_{x\in A}f(x,A)-\sum_{A\subsetneq A^{\prime}}\Big[\sum_{x\in A^{\prime}}g(x,A^{\prime})-\sum_{x\in A^{\prime}\setminus A}g(x,A^{\prime})\Big]\\ &=\Big[\sum_{x\in A}f(x,A)-\sum_{x\in X}f(x,X)\Big]+\sum_{A\subsetneq A^{\prime}}\sum_{x\in A^{\prime}\setminus A}g(x,A^{\prime})-\sum_{A\subsetneq A^{\prime}\subsetneq X}\sum_{x\in A^{\prime}}g(x,A^{\prime})\\ &=\Big[\sum_{x\in A}f(x,A)-\sum_{x\in X}f(x,X)\Big]+\sum_{A\subsetneq A^{\prime}}\sum_{x\in A^{\prime}\setminus A}g(x,A^{\prime})-\sum_{A\subsetneq A^{\prime}\subsetneq X}\sum_{y\in X\setminus A^{\prime}}g(y,A^{\prime}\cup\{y\})\\ &=\Big[\sum_{x\in A}f(x,A)-\sum_{x\in X}f(x,X)\Big]+\sum_{z\in X\setminus A}g(z,A\cup\{z\})\end{aligned}\]
The first equality above holds due to the definition of Mobius inverse. The second equality just adds zero. The third equality holds as \(f(x,X)=g(x,X)\). The fourth equality holds by inflow equals outflow. The fifth equality follows from combining like terms. By inflow equals outflow, we know that \(\sum_{x\in A}g(x,A)=\sum_{z\in X\setminus A}g(z,A\cup\{z\})\). This means that the above string of equalities gives us that \(\sum_{x\in A}f(x,A)=\sum_{x\in X}f(x,X)\). Since \(A\) is arbitrary, this tells us that \(f\) is set constant.
We now show the equivalence between \(f\) satisfying inflow equals outflow and \(f\) satisfying constant cuts. Consider a function \(f\) which satisfies inflow equals outflow and has Mobius inverse \(g\). We now show that it satisfies constant cuts. Consider the flow diagram of \(f\). We now construct a flow decomposition. Let \(s\) be a function from the set of paths of the flow diagram to \(\mathbb{R}\).
1. Initialize at \(i=0\). For each path \(\rho\), set \(s(\rho)=0\).
2. Choose an edge \(e_{i}\) whose edge weight is minimal among all edges with non-zero edge weight. If the edge weight of \(e_{i}\) is negative, proceed to step 3. If the edge weight of \(e_{i}\) is positive, proceed to step 4. If there is no non-zero edge weight, terminate the algorithm.
3. \(e_{i}\) is a part of some path \(\rho_{i}\). Set \(s(\rho_{i})=s(\rho_{i})+w(e_{i})\). For each edge \(e^{\prime}\) on path \(\rho_{i}\), set
the edge weight of \(e^{\prime}\) equal to \(w(e^{\prime})-w(e_{i})\). Return to step 2.
4. The minimal edge weight among non-zero edge weights is positive. As the flow diagram satisfies inflow equals outflow at every stage of the algorithm (see below), this means that \(e_{i}\) is an edge of some path \(\rho_{i}\) such that every edge of \(\rho_{i}\) has a strictly positive edge weight. Set \(s(\rho_{i})=s(\rho_{i})+w(e_{i})\). For each edge \(e^{\prime}\) on path \(\rho_{i}\), set the edge weight of \(e^{\prime}\) equal to \(w(e^{\prime})-w(e_{i})\). Return to step 2.
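For concreteness, the steps above can be transcribed into code. This is a sketch using our own data structures (the edges dictionary and helpers from the earlier sketches); as in the proof, step 4 relies on inflow equals outflow being preserved so that a path of strictly positive edges through the chosen edge always exists.

```python
from itertools import permutations

def paths_of(ground):
    """All maximal chains from the ground set down to the empty set, one per linear order."""
    chains = []
    for order in permutations(sorted(ground)):
        current, nodes = frozenset(ground), [frozenset(ground)]
        for x in order:
            current = current - {x}
            nodes.append(current)
        chains.append(tuple(nodes))
    return chains

def path_edges(chain):
    return list(zip(chain, chain[1:]))

def flow_decomposition(edges, ground, tol=1e-9):
    """Push the minimal non-zero edge weight along full paths until every weight is zero."""
    remaining = dict(edges)
    s = {chain: 0.0 for chain in paths_of(ground)}
    while True:
        nonzero = [e for e, wt in remaining.items() if abs(wt) > tol]
        if not nonzero:
            return s
        e = min(nonzero, key=lambda edge: remaining[edge])
        if remaining[e] < 0:                                   # step 3
            chain = next(c for c in s if e in path_edges(c))
        else:                                                  # step 4
            chain = next(c for c in s if e in path_edges(c)
                         and all(remaining[d] > tol for d in path_edges(c)))
        push = remaining[e]
        s[chain] += push
        for d in path_edges(chain):
            remaining[d] -= push

flows = flow_decomposition(edges, frozenset(X))
print(sum(flows.values()))   # the total flow, equal to the augmented value of any cut
```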
We now argue that this algorithm terminates and terminates with each edge weight being equal to zero. This algorithm only adds or subtracts edge weight along a full path. This means that inflow equals outflow is preserved at each step of the algorithm. Further, in step 4, since \(w(e_{i})\) is minimal among non-zero edge weights, the edge weights of all edges along \(\rho_{i}\) are non-negative with \(w(e_{i})\) being equal to zero at the end of step 4. So, repeated iterations of step 3 take every edge weight and make it non-negative. Then, repeated iterations of step 4 take non-negative edge weights and make them zero. Consider the augmented value of any cut \(C=(S,T)\). Every path starts at \(X\) and ends at \(\emptyset\). This means that every path starts in \(S\) and ends in \(T\). Further, if a path leaves \(S\) and goes to \(T\), it must return to \(S\) from \(T\) before it can leave \(S\) and go to \(T\) again. This means that a path will always leave \(S\) one more time than it leaves \(T\). Let \(P\) be the set of paths of the flow diagram.
\[\begin{aligned}w(C)&=\sum_{A\in 2^{X}\setminus\{\emptyset\}}\sum_{x\in A}g(x,A)(\mathbf{1}\{A\in S\wedge A\setminus\{x\}\in T\}-\mathbf{1}\{A\in T\wedge A\setminus\{x\}\in S\})\\ &=\sum_{\rho\in P}s(\rho)\end{aligned}\]
Above, the first equality follows from the definition of augmented value of a cut. The second equality holds due to the logic of the prior paragraph and because the algorithm which generated \(s\) leaves zero flow everywhere on the flow diagram. In simpler terms, the total flow assigned to any edge is equal to the edge weight of that edge. Since \(C\) was an arbitrary cut, constant cuts holds.
Now suppose that constant cuts holds. Consider two cuts: \(C\) is defined by \(S=\{A\subseteq X|n\leq|A|\}\) and \(C^{\prime}\) is defined by \(S^{\prime}=\{A\subseteq X|n\leq|A|\}\setminus\{B\}\) where \(|B|=n\) and \(0<n<|X|\). By construction, the only edges which \(C\) counts that \(C^{\prime}\) does not count are the edges leaving \(B\). Similarly, the only edges which \(C^{\prime}\) counts but \(C\) does not count are the edges which enter into \(B\). This gives us the following.
\[\begin{aligned}w(C)-w(C^{\prime})&=\sum_{x\in B}g(x,B)-\sum_{y\not\in B}g(y,B\cup\{y\})\\ &=0\end{aligned}\]
Above, the first equality follows from collecting like terms. The second equality follows from constant cuts. Thus constant cuts implies inflow equals outflow and we are done.
### Proof of Corollary 3.1
As inflow equals outflow implies constant cuts, inflow equals outflow also implies constant single-crossing cuts. Now observe that the two cuts used to show constant cuts implies inflow equals outflow are single-crossing cuts. This means that constant single-crossing cuts implies inflow equals outflow. Now observe that in single-crossing cuts, no path travels from \(T\) to \(S\). This means that the value and augmented value of single-crossing cuts coincide.
### Proof of Corollary 3.2
Let \(f\) with Mobius inverse \(g\) be set constant. As \(\sum_{x\in X}g(x,X)=\sum_{x\in X}f(x,X)\) by the definition of the Mobius inverse, it immediately follows that a set constant \(f\) is a signed random choice rule if and only if \(\sum_{x\in X}g(x,X)=1\).
### Proof of Corollary 3.3
Let \(f\) with Mobius inverse \(g\) be a signed random choice rule. Recall the definition of the Mobius inverse.
\[f(x,A)=\sum_{A\subseteq B}g(x,B)\]
It then immediately follows that \(f\) is a random choice rule if and only if \(\sum_{A\subseteq B}g(x,B)\geq 0\) for all nonempty \(A\subseteq X\) and \(x\in A\).
### Proof of Theorem 4.4
Let \(N(x,A)=\{\succ\in\mathcal{L}(X)|x\succ A\setminus\{x\}\}\). A random choice rule on \(X\) is stochastically rational if and only if there exists \(\nu\in\Delta(\mathcal{L}(X))\) such that \(p(x,A)=\sum_{\succ\in N(x,A)}\nu(\succ)\) for all \(x\in A\in\mathcal{X}\). This is equivalent to the existence of such a \(\nu\) and the existence of choice probabilities \(p(y,B)\) for each \(B\in(2^{X}\setminus\{\emptyset\})\setminus\mathcal{X}\) such that \(p(x,A)=\sum_{\succ\in N(x,A)}\nu(\succ)\) for all \(x\in A\in 2^{X}\setminus\{\emptyset\}\). By Theorem 4.1, this is equivalent to the existence of choice probabilities \(p(y,B)\) for each \(B\in(2^{X}\setminus\{\emptyset\})\setminus\mathcal{X}\) such that \(q(x,A)\geq 0\) for each \(x\in A\in 2^{X}\setminus\{\emptyset\}\). By Theorem 3.1, this is equivalent to the existence of \(q(x,A)\geq 0\) for each \(x\in A\in 2^{X}\setminus\{\emptyset\}\) such that \(q\) satisfies inflow equals outflow, the conditions of Corollary 3.2 and Corollary 3.3, and \(\sum_{A\subseteq C}q(x,C)=p(x,A)\) for all \(x\in A\in\mathcal{X}\). Note that \(q(x,A)\geq 0\) implies the condition of Corollary 3.3, \(\sum_{A\subseteq B}q(x,B)=p(x,A)\) implies that the choice probabilities induced by \(q\) agree with our observed choice probabilities, and \(q(x,A)\geq 0\) implies that the induced choice probabilities are rationalizable by random utility.
We now construct the matrix form of this linear program. Consider a matrix \(D\) whose columns are indexed by \((x,A)\) for each \(A\in 2^{X}\setminus\{\emptyset\}\) and each \(x\in A\) and whose rows are indexed by \((y,B)\) for each \(B\in\mathcal{X}\) and \(y\in B\). The entry \(d_{(y,B),(x,A)}=1\) if \(B\subseteq A\) and \(x=y\) and is equal to zero otherwise. \(D\) encodes that our unobserved \(q\) function must induce our observed choice probabilities. Let \(P\) be a column vector indexed by \((y,B)\) for each \(B\in\mathcal{X}\) and \(y\in B\). Entry \(p_{(y,B)}\) is equal to \(p(y,B)\). Consider a matrix \(E\) whose columns are indexed by \((x,A)\) for each \(A\in 2^{X}\setminus\{\emptyset\}\) and each \(x\in A\) and whose rows are indexed by \(B\in 2^{X}\setminus\{X,\emptyset\}\). The entry \(e_{B,(x,A)}\) is given as follows.
\[e_{B,(x,A)}=\begin{cases}1&\text{ if }x\not\in B,A=B\cup\{x\}\\ -1&\text{ if }x\in B,A=B\\ 0&\text{ otherwise}\end{cases}\]
\(E\) encodes that our unobserved \(q\) satisfies inflow equals outflow. Consider a row vector \(F\) whose elements are indexed by \((x,A)\) for each \(A\in 2^{X}\setminus\{\emptyset\}\) and each \(x\in A\). The element \(f_{(x,A)}\) is equal to one if \(A=X\) and equal to zero otherwise. \(F\) encodes that our unobserved \(q\) satisfies \(\sum_{x\in X}q(x,X)=1\). To be stochastically rational, we must impose \(q\geq 0\) which implies the
condition of Corollary 3.3. Thus the linear program we have constructed looks as follows.
\[\left[\begin{array}{c}D\\ \hline E\\ \hline F\end{array}\right]q=\left[\begin{array}{c}P\\ \hline\mathbf{0}\\ \hline 1\end{array}\right] \tag{10}\] \[q\geq 0 \tag{11}\]
By Farkas's Lemma (see Theorem 34 in Border (2013) for a reference), there exists a solution to this linear program if and only if there does not exist a solution \(r\in\mathbb{R}^{M}\) to the following linear program.
\[r^{T}\left[\begin{array}{c}D\\ \hline E\\ \hline F\end{array}\right]\leq 0 \tag{12}\]
\[r^{T}\left[\begin{array}{c}P\\ \hline\mathbf{0}\\ \hline 1\end{array}\right]>0 \tag{13}\]
### Proof of Theorem 5.1
The equivalence between the first two conditions was shown by Luce (1959). We now show the equivalence between the third condition and the other two conditions. Suppose that \(p\) is consistent with the Luce model. Then there exist weights \(h(\cdot)\) such that \(p(x,A)=\frac{h(x)}{\sum_{y\in A}h(y)}\). We will use the notation \(h(A)\) to denote \(\sum_{y\in A}h(y)\). Consider the following.
\[q(x,A) =\sum_{A\subseteq B}(-1)^{|B\setminus A|}p(x,B)\] \[=\sum_{A\subseteq B}(-1)^{|B\setminus A|}\frac{h(x)}{h(B)}\] \[=h(x)\sum_{A\subseteq B}(-1)^{|B\setminus A|}\frac{1}{h(B)}\]
Let \(q(A,A)=\sum_{x\in A}q(x,A)\). Observe the following.
\[q(A,A) =\sum_{x\in A}q(x,A)\] \[=\sum_{x\in A}h(x)\sum_{A\subseteq B}(-1)^{|B\setminus A|}\frac{1 }{h(B)}\] \[=h(A)\sum_{A\subseteq B}(-1)^{|B\setminus A|}\frac{1}{h(B)}\]
Theorem 3.1 tells us that \(q(A,A)=\sum_{y\in X\setminus A}q(y,A\cup\{y\})\). Observe the following.
\[q(x,A)=\frac{h(x)}{h(A)}q(A,A)\]
This gives us the following.
\[q(x,A) =\frac{h(x)}{h(A)}q(A,A)\] \[=\frac{h(x)}{h(A)}\sum_{y\in X\setminus A}q(y,A\cup\{y\})\] \[=\frac{h(x)}{h(A)}\sum_{y\in X\setminus A}\frac{h(y)}{h(A\cup\{y \})}q(A\cup\{y\},A\cup\{y\})\]
The second equality above follows from inflow equals outflow. We can repeatedly apply the substitution shown on the second line above to the third line above. Since \(q(X,X)=1\) and
\(X\) is finite, repeated applications of this substitution will leave us with an equation that is a sum and product of weights, \(h(\cdot)\). Since each \(h(x)\) is strictly positive, so too is any sum and product of these weights. This gives us \(q(x,A)>0\) for all \(x\in A\subseteq X\). Observe the following.
\[\begin{aligned}\frac{q(x,A)}{q(y,A)}&=\frac{\sum_{A\subseteq B}(-1)^{|B\setminus A|}p(x,B)}{\sum_{A\subseteq B}(-1)^{|B\setminus A|}p(y,B)}\\ &=\frac{\sum_{A\subseteq B}(-1)^{|B\setminus A|}p(x,B)}{(p(y,X)/p(x,X))\sum_{A\subseteq B}(-1)^{|B\setminus A|}p(x,B)}\\ &=\frac{p(x,X)}{p(y,X)}\\ &=\frac{q(x,X)}{q(y,X)}\end{aligned}\]
Above, the first equality follows from the definition of \(q\). The second equality holds from the second condition of Theorem 5.1. The third equality follows from canceling like terms. The fourth equality holds due to the definition of \(q\). This shows that the third condition of Theorem 5.1 is necessary. We now show sufficiency. Assume the third condition of Theorem 5.1 holds. Recall that \(p(x,A)=\sum_{A\subseteq B}q(x,B)\). Thus if \(q(x,B)>0\) for all \((x,B)\), it then follows that \(p(x,A)>0\) for all \((x,A)\). Observe the following.
\[\begin{aligned}\frac{p(x,A)}{p(y,A)}&=\frac{\sum_{A\subseteq B}q(x,B)}{\sum_{A\subseteq B}q(y,B)}\\ &=\frac{\sum_{A\subseteq B}q(x,B)}{(q(y,X)/q(x,X))\sum_{A\subseteq B}q(x,B)}\\ &=\frac{q(x,X)}{q(y,X)}\\ &=\frac{p(x,X)}{p(y,X)}\end{aligned}\]
Above, the first equality holds by the definition of \(q\). The second equality holds due to the third condition of Theorem 5.1. The third equality follows from collecting like terms. The last equality holds from the definition of \(q\). This shows that \(p\) is consistent with the Luce model, so we are done.

id: 2309.01179
title: Cognition-Mode Aware Variational Representation Learning Framework for Knowledge Tracing
abstract: The Knowledge Tracing (KT) task plays a crucial role in personalized learning, and its purpose is to predict student responses based on their historical practice behavior sequence. However, the KT task suffers from data sparsity, which makes it challenging to learn robust representations for students with few practice records and increases the risk of model overfitting. Therefore, in this paper, we propose a Cognition-Mode Aware Variational Representation Learning Framework (CMVF) that can be directly applied to existing KT methods. Our framework uses a probabilistic model to generate a distribution for each student, accounting for uncertainty in those with limited practice records, and estimates the student's distribution via variational inference (VI). In addition, we also introduce a cognition-mode aware multinomial distribution as prior knowledge that constrains the learning of posterior student distributions, so as to ensure that students with similar cognition modes have similar distributions, avoiding overwhelming personalization for students with few practice records. Finally, extensive experimental results confirm that CMVF can effectively aid existing KT methods in learning more robust student representations. Our code is available at https://github.com/zmy-9/CMVF.
authors: Moyu Zhang, Xinning Zhu, Chunhong Zhang, Feng Pan, Wenchen Qian, Hui Zhao
published_date: 2023-09-03T13:51:06Z
link: http://arxiv.org/abs/2309.01179v1

# Cognition-Mode Aware Variational Representation Learning Framework for Knowledge Tracing
###### Abstract
The Knowledge Tracing (KT) task plays a crucial role in personalized learning, and its purpose is to predict student responses based on their historical practice behavior sequence. However, the KT task suffers from data sparsity, which makes it challenging to learn robust representations for students with few practice records and increases the risk of model overfitting. Therefore, in this paper, we propose a Cognition-Mode Aware Variational Representation Learning Framework (CMVF) that can be directly applied to existing KT methods. Our framework uses a probabilistic model to generate a distribution for each student, accounting for uncertainty in those with limited practice records, and estimates the student's distribution via variational inference (VI). In addition, we also introduce a cognition-mode aware multinomial distribution as prior knowledge that constrains the learning of posterior student distributions, so as to ensure that students with similar cognition modes have similar distributions, avoiding overwhelming personalization for students with few practice records. Finally, extensive experimental results confirm that CMVF can effectively aid existing KT methods in learning more robust student representations. Our code is available at [https://github.com/zmy-9/CMVF](https://github.com/zmy-9/CMVF).
Representation Learning, Knowledge Tracing, Variational Inference, Educational Data Mining
In recent decades, numerous computer-assisted learning platforms have emerged to assist students in acquiring knowledge. Recommending suitable problems to students is a crucial function of online platforms, as it prevents students from wasting time practicing questions they already mastered [10, 36]. As a result, the Knowledge Tracing (KT) [4, 16] task has received significant attention in recent years, aiming to predict the probability of students answering the target question correctly based on their historical practice sequences.
As a popular data mining application field, KT has produced many excellent models that enhance prediction accuracy by improving model structures and introducing rich features. The current state-of-the-art KT models basically adopt an _Input_\(\rightarrow\)_Embedding_\(\rightarrow\)_Neural Network_\(\rightarrow\)_Prediction_ paradigm, in which the _Embedding_ module maps input factors to dense representation vectors and plays a critical role in this paradigm. Thus, exploring how to learn reasonable representation vectors of input factors, including questions and students, is an active research direction in the KT field. For example, PEBG [17] and MF-DAKT [42] construct question graphs with question difficulty and relationship information to pre-train question representations, while RKT [22] and HGKT [33] introduce question texts to enrich question representations. However, the aforementioned methods focus primarily on question representations, with few works focusing on student representation learning, especially for _Infrequent Students_ (students with few practice records). Infrequent students have limited practice records, causing them to have large uncertainty, which leads to low confidence in the model's scores and makes existing KT models prone to overfitting.
To address student representation overfitting due to data sparsity, KTM [35] previously proposed introducing rich student attribute features. However, these attribute features are often shared by frequent and infrequent students. During training, because of data imbalance (frequent students contribute far more records than infrequent students), attribute features mainly reflect the characteristics of frequent students, so the individuality of infrequent students is overwhelmed. Later, CL4KT [13] proposed applying data augmentation methods, such as clipping or reordering historical practice behaviors, to improve the robustness of student representations. However, since infrequent students have limited historical practice, clipping or reordering yields little additional diversity, resulting in limited information gain or even introducing noise. Furthermore, both of these methods use point estimation to learn student representations, that is, they try to learn a single reliable point for each student in the embedding space and fail to consider student uncertainty, as shown in Figure 1(a). This results in the model having low confidence in its prediction scores (i.e., overfitting). Therefore, to effectively model the uncertainty of infrequent students, we propose a general framework, the Cognition-Mode Aware Variational Representation Learning Framework (CMVF), to generate robust representations for students.

Fig. 1: Illustration of point estimation and distribution estimation, where (a) shows that the representation vectors of students A and B can be projected as two points in the embedding space, and the model mainly learns the positions of these two points. On the contrary, (b) shows that students A and B have different distributions, each distribution contains an infinite number of points, representing the uncertainty of the students, and the model needs to learn the parameters of these two distributions.
To improve the modeling of student uncertainty, we propose to replace the point estimates in KT models with distribution estimates, as shown in Figure 1(b). Specifically, we introduce a probabilistic model that learns a distribution for each student to model their uncertainty and calculates the expected value of the distribution as the final prediction score, thereby reducing uncertainty more efficiently. To avoid the computational difficulty of distribution estimation, we propose to use a variational inference (VI) based approach to build a probabilistic latent variable model [12], which enables us to use the reparameterization trick to generate a distinct distribution for each student, thereby facilitating integration with existing deep KT models.
Meanwhile, VI commonly sets a uniform prior distribution for all student distributions, limiting the model's ability to personalize infrequent students. Generally speaking, students with similar characteristics should have similar prior distributions to enable the model to learn similar posterior distributions for them, facilitating sharing of global characteristic information while retaining the individuality of infrequent students. Previous KT research shows that each student has unique learning characteristics reflected in their practice sequences [19, 30]; for example, different students may acquire knowledge at different speeds during practice. We refer to this characteristic as the _cognition mode_. As a result, students with similar cognition modes should have similar representation distributions. To extract students' cognition modes, we propose to utilize the dynamic routing algorithm [39] to model their practice sequences, with different _capsules_ representing different cognition modes. Based on the extracted cognition modes, we design a cognition-mode aware multinomial distribution as a prior during Bayesian inference, constraining the model's learning of posterior student distributions by ensuring students with similar cognition modes have similar prior distributions. Additionally, to further avoid overfitting, we also use the standard normal distribution as a kind of prior knowledge to constrain model training.
At last, extensive experimental results confirm that CMVF can effectively aid existing KT methods in learning more robust student representations, especially for infrequent students. The contributions of our paper can be summarized as follows:
* To the best of our knowledge, we are the first KT study to learn robust student representations by building a probabilistic model that generates a distribution for each student.
* We design a novel cognition-mode aware prior multinomial distribution that constrains the model to generate similar posterior distributions for students with similar cognition modes, which are extracted from their historical practice sequences using a dynamic routing algorithm.
* CMVF is a general framework that can be incorporated into existing KT methods. Our experimental results demonstrate the superiority and compatibility of the CMVF framework.
## I Related Work
**Knowledge Tracing**. Numerous attempts have been made in the KT field over the decades, including probabilistic models [27], logistic models [2, 18], and deep models [6, 23, 26, 41]. Among the existing methods, most revolve around improving the model architecture for modeling students' practice sequences. For example, Bayesian Knowledge Tracing (BKT) [27] uses the Hidden Markov Model (HMM) to trace students' knowledge states. The Additive Factor Model (AFM) [2] and Performance Factor Analysis (PFA) [24] represent students' practices by constructing a practice factor. Inspired by the success of deep learning, Deep Knowledge Tracing (DKT) [23] introduces the long short-term memory network (LSTM) [9] into KT to represent students' knowledge states with its hidden units. Later, the Dynamic Key-Value Memory model (DKVMN) [41] extends DKT by utilizing a key-value memory network to update students' knowledge states. To better capture long-term practices, SAKT [26] introduces the self-attention mechanism into the KT field. Convolutional Knowledge Tracing (CKT) [31] applies Convolutional Neural Networks (CNN) [14] to model students' learning rates. Moreover, some works explore students' sequences in a more granular way. DKT+ [20] extends DKT to consider forgetting by incorporating multiple types of information related to forgetting. AKT [6] extends both IRT [18] and SAKT [26] to enhance interpretability and simulate students' forgetting behaviors. IEKT [19] estimates students' cognition and assesses knowledge acquisition sensitivity for each record. LPKT [32] monitors students' knowledge states by directly modeling students' learning gains and forgetting. LFBKT [3] introduces the difficulty factor and models students' learning and forgetting behavior according to various influencing factors. DIMKT [30] measures the question difficulty effect and improves KT performance by establishing the relationship between student knowledge states and question difficulty levels. However, most of the above methods require enough samples to train their parameters to achieve excellent performance. In reality, the data-sparsity problem is serious, since most students practice few times. Although KTM and CL4KT [13] respectively introduce attribute features and data augmentation methods into the KT field, they still cannot model the uncertainty of infrequent students well. Therefore, in this paper, we propose a general framework, CMVF, for the KT task to learn robust student representations.
**Variational Inference**. The Variational Auto-Encoder (VAE) [12] efficiently performs inference and learning by optimizing a latent variable model with standard gradient-based estimators. This enables very efficient approximate posterior inference through simple ancestral sampling, allowing for efficient learning of the model parameters without requiring expensive iterative inference schemes (such as MCMC) per datapoint. The learned approximate posterior inference model can be applied in various research fields, including Computer Vision (CV), Natural Language Processing (NLP), and recommendation systems. Despite its success in other fields, VAE has not been utilized in KT to model the uncertainty of infrequent students.
**Dynamic Routing Algorithm**. It is proposed in the Capsule Network [29], in which a _capsule_ denotes a group of neurons assembled to output a vector. The dynamic routing algorithm is used to learn the weights on the connections between capsules, where the parameters are optimized by an Expectation-Maximization algorithm to overcome several deficiencies and achieve better accuracy. The _capsule_ and _dynamic routing mechanism_ enable the capsule network to achieve better performance than conventional deep neural networks, and it achieves advanced performance in multiple research fields [15, 21, 38, 40].
## II Preliminaries
This section provides a brief introduction to the formulation of the KT task and the fundamentals of Variational Inference.
### _Knowledge Tracing Task_
The KT task is a binary classification problem utilizing multi-field factors that may impact a student's learning [23, 41]. Given a student's historical practice sequence before time-step \(t\), KT aims to predict the probability of the student correctly answering a target question at time \(t\). Suppose an online platform collects a dataset \(\mathcal{D}\), where each sample \((\mathbf{x},y)\in\mathcal{D}\) denotes a student's practice record. \(\mathbf{x}\) contains the factors {_student, question, concept, historical practices_}. \(y\in\{0,1\}\) denotes the student's response, where 1 means the answer is correct, and 0 means the answer is wrong. Let \(u\) denote the student ID, \(q\) denote the question ID, \(c_{q}\) represent the set of concept IDs related to \(q\), and \(\chi_{u}\) denote the practice sequence. As a result, \(\mathbf{x}\) can be expressed as \([u,q,c_{q},\chi_{u}]\).
The widespread use of deep learning in KT has led to the adoption of the _Input_\(\rightarrow\)_Embedding_\(\rightarrow\)_Neural Network_\(\rightarrow\)_Prediction_ paradigm in current methods. The _Embedding_ layer is a crucial module in KT models, as it compresses high-dimensional ID features into low-dimensional dense representations that are easier to process in subsequent neural network layers, which can be expressed mathematically as follows:
\[\mathbf{z}=g_{\phi}(\mathbf{x}) \tag{1}\]
where \(\mathbf{z}\) denotes the latent embedding space. \(g_{\phi}(\cdot)\) represents the mapping function of the embedding layer, and \(\phi\) represents the parameters of the factor embedding. The resulting embeddings are then concatenated to make the final prediction, which is estimated according to the log-likelihood, as follows:
\[\hat{y}=f_{\theta}(\mathbf{z}) \tag{2}\] \[\mathcal{L}(\phi,\theta)=-ylog(\hat{y})-(1-y)log(1-\hat{y}) \tag{3}\]
where \(\theta\) denotes the parameters of _Neural Network_. \(\mathcal{L}(\phi,\theta)\) is the optimization objective function in KT [6, 23].
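For concreteness, a minimal sketch of this paradigm in PyTorch is given below; the layer sizes, the concatenation of student and question embeddings, and the MLP predictor are illustrative assumptions rather than a specific KT model.

```python
import torch
import torch.nn as nn

class TinyKT(nn.Module):
    """Input -> Embedding -> Neural Network -> Prediction, as in Eqs. 1-3."""

    def __init__(self, num_students, num_questions, dim=64):
        super().__init__()
        self.student = nn.Embedding(num_students, dim)    # g_phi: ID -> z
        self.question = nn.Embedding(num_questions, dim)
        self.predictor = nn.Sequential(                   # f_theta: z -> y_hat
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, u, q):
        z = torch.cat([self.student(u), self.question(q)], dim=-1)
        return torch.sigmoid(self.predictor(z)).squeeze(-1)

model = TinyKT(num_students=5000, num_questions=2000)
y_hat = model(torch.tensor([0, 1]), torch.tensor([10, 20]))
loss = nn.functional.binary_cross_entropy(y_hat, torch.tensor([1.0, 0.0]))  # Eq. 3
```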
### _Variational Inference_
Bayesian approaches are known for their robustness in handling sparse data. However, they are computationally expensive and may struggle to learn model parameters efficiently with costly iterative inference methods. The Variational Auto-Encoder (VAE) [12] is designed to re-parameterize the variational lower bound, which can be easily optimized using standard gradient descent techniques. To facilitate comprehension of this paper, we present the fundamental assumptions behind VAE. Specifically, we assume that the data can be generated by a latent variable \(\mathbf{z}\) that conforms to a prior distribution \(p(\mathbf{z})\). According to Bayes theorem, the integral of the marginal likelihood can be expressed as \(p(\mathbf{x})=\int p(\mathbf{z})p_{\theta}(\mathbf{x}|\mathbf{z})d\mathbf{z}\), which is intractable in general. Therefore, VAE approximates \(p_{\theta}(\mathbf{z}|\mathbf{x})\) with a variational approximation \(q_{\phi}(\mathbf{z}|\mathbf{x})\) based on the evidence lower bound (ELBO) as follows:
\[\begin{split} log\,p(\mathbf{x})&\geq\mathcal{L}(\phi, \theta;\mathbf{x})\\ &=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[logp_{\theta}(\mathbf{x}|\mathbf{ z})]-KL(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z}))\end{split} \tag{4}\]
where \(KL\) denotes the Kullback-Leibler divergence. \(q_{\phi}(\mathbf{z}|\mathbf{x})\) and \(p_{\theta}(\mathbf{x}|\mathbf{z})\) are parameterized functions with parameters \(\phi\) and \(\theta\), respectively. In this way, variational inference can be easily applied to existing deep KT methods.
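To make the regularizer in Eq. 4 concrete, the following sketch (with illustrative values for \(\mu\) and \(\log\sigma^{2}\)) compares the closed-form KL divergence between a diagonal Gaussian and the standard normal prior with a Monte Carlo estimate obtained through the re-parameterization trick; the helper name is hypothetical.

```python
import torch

# Closed-form KL( N(mu, sigma^2) || N(0, I) ), the regularizer in Eq. 4.
def kl_to_standard_normal(mu, logvar):
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

mu, logvar = torch.tensor([0.5, -1.0]), torch.tensor([0.2, -0.3])
analytic = kl_to_standard_normal(mu, logvar)

# Monte Carlo check: KL = E_q[log q(z) - log p(z)] with z ~ q via re-parameterization.
std = torch.exp(0.5 * logvar)
z = mu + std * torch.randn(100000, 2)
log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(dim=1)
log_p = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=1)
print(analytic.item(), (log_q - log_p).mean().item())  # the two should roughly agree
```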
## III Method
To help KT methods model the uncertainty of infrequent students, we build a probabilistic representation learning framework that generates a distribution for each student. Simultaneously, we use cognition modes extracted from student practice sequences as a prior distribution \(p(\mathbf{z})\) for the latent space \(\mathbf{z}\) in the model, which constrains the distribution estimation in our model. This section provides a detailed description of CMVF, with its structure depicted in Figure 2.
### _Variational Representation Framework_
Current KT methods utilize the student's representation vector and their historical practice sequences to forecast their responses [35]. However, infrequent students with sparse data may exhibit greater uncertainty, resulting in overfitting of the student representation vectors learned by models utilizing only a single point. In order to model the uncertainty of students, estimation of the posterior student distribution over the latent space \(\mathbf{z}\), denoted as \(p(\mathbf{z}|\mathbf{x})\), is necessary, where the variational inference is applied to reformulate the target of the KT task as \(p_{\phi,\theta}(y|\mathbf{x},\mathbf{z})\). Due to the intractability of the true posterior distribution, a recognition model \(q_{\phi}(\mathbf{z}|\mathbf{x})\) can be utilized to approximate the true distribution, where \(q_{\phi}(\mathbf{z}|\mathbf{x})\) can be interpreted as a probabilistic encoder generating a distribution (such as a Gaussian distribution) for each student. Based on Bayes theorem, the posterior of \(\mathbf{z}\) can be represented as \(p(\mathbf{z}|\mathbf{x})=\frac{p(\mathbf{z},\mathbf{x})}{p(\mathbf{x})}\). Thus, based on Jensen's inequality, the evidence lower bound (ELBO) of the marginal likelihood \(p(\mathbf{x})\) can be obtained through the following formula:
\[\begin{split} log\,p(\mathbf{x})&=log\int p(\mathbf{x}, \mathbf{z})d\mathbf{z}=log\int p(\mathbf{x},\mathbf{z})\frac{q_{\phi}(\mathbf{z}|\mathbf{x})}{q_{\phi}( \mathbf{z}|\mathbf{x})}d\mathbf{z}\\ &=log(\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\frac{p(\mathbf{x},\mathbf{z} )}{q_{\phi}(\mathbf{z}|\mathbf{x})}])\\ &\geq\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[logp(\mathbf{x},\mathbf{z})- logq_{\phi}(\mathbf{z}|\mathbf{x})]\\ &=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[logp_{\theta}(\mathbf{x}|\mathbf{z })]+\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[log\frac{p(\mathbf{z})}{q_{\phi}(\mathbf{z}| \mathbf{x})}]\\ &=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[logp_{\theta}(\mathbf{x}|\mathbf{z })]-KL(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z}))\end{split} \tag{5}\]
where the first term can be regarded as an expected negative reconstruction error (denoted as \(\mathcal{L}_{re}\)), while the second term serves as a regularizer that constrains the approximate posterior distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\) via the prior distribution \(p(\mathbf{z})\). The objective function \(\mathcal{L}(\phi,\theta)\) is naturally equivalent to the ELBO, and \(p(\mathbf{x})\) is maximized by maximizing this lower bound. \(\phi\) and \(\theta\) correspond to the parameters of the latent space and prediction network, respectively.
With the help of the Mean-field Theory [1], we assume the factors in \(\mathbf{x}\) are mutually independent, so that each factor is governed by a distinct factor in the variational density. Considering that questions may also suffer from sparsity issues, we also utilize the distribution estimate method to learn question representations. Consequently, the objective function of our model can be reformulated as follows:
\[\begin{split}\mathcal{L}(\phi,\theta)=\mathcal{L}_{re}& -KL(q_{\phi_{u}}(\mathbf{z}_{u}|u)||p(\mathbf{z}_{u}))\\ &-KL(q_{\phi_{q}}(\mathbf{z}_{q}|q)||p(\mathbf{z}_{q}))\end{split} \tag{6}\]
where \(p(\mathbf{z}_{u})\) and \(p(\mathbf{z}_{q})\) usually denote the standard normal Gaussian distribution. However, we suspect that such a fixed prior distribution limits the generalization ability of the model due to the significant variation between different students and different questions. As a solution, we propose parameterizing the question prior \(p(\mathbf{z}_{q})\) as \(p_{\phi_{c}}(\mathbf{z}_{q}|c)\), where \(c\) denotes the set of concepts related to the question \(q\). In this way, questions with similar concepts will have similar prior-distribution constraints.
Moreover, we propose linking students' cognition modes with their representations to better share global information among students. The student's practice sequence serves not only as a reflection of the knowledge state evolution but also as a supplement to the student's user portrait. We believe that students display commonalities while practicing questions. For instance, some students may rapidly grasp knowledge concepts and answer questions correctly, which can be interpreted as a high-efficiency cognition mode. To realize this, we suggest extracting cognition modes from students' historical practice sequences and substituting the fixed prior distribution \(p(\mathbf{z}_{u})\) with a cognition-mode aware multinomial distribution \(p_{\phi_{m}}(\mathbf{z}_{u}|m)\), where \(m\) denotes the cognition mode. This will constrain students with similar cognition modes to have similar prior distributions via KL divergence regularization.
```
Input: students' historical practice sequence \(\chi_{u}\), iteration times \(r\), the number of capsules \(K\).
encode the student's practice sequence: \(\mathbf{h}=f_{e}(\chi_{u})\)
for all capsules: initialize \(\mathbf{b}\gets\mathbf{0}\)
for \(r\) iterations do
    for the capsule \(j\): \(w_{j}=softmax(\mathbf{b})[j]\) (Eq. 8)
    for the capsule \(j\): \(\mathbf{s}_{j}=w_{j}\mathbf{S}_{j}\mathbf{h}\) (Eq. 9)
    for the capsule \(j\): \(\mathbf{m}_{j}=squash(\mathbf{s}_{j})\) (Eq. 10)
    for the capsule \(j\): \(b_{j}\gets b_{j}+\mathbf{m}_{j}^{T}\mathbf{S}_{j}\mathbf{h}\) (Eq. 11)
end for
for the capsule \(j\): \(p(\mathbf{m}_{j}|\chi_{u})=\frac{||\mathbf{m}_{j}||}{\sum_{k=1}^{K}||\mathbf{m}_{k}||}\)
return \(\{\mathbf{m}_{1},...,\mathbf{m}_{K}\}\), \(\{p(\mathbf{m}_{1}|\chi_{u}),...,p(\mathbf{m}_{K}|\chi_{u})\}\)
```
**Algorithm 1**: Dynamic Routing Algorithm.
**Extract Cognition Mode.** Considering the complexity of student learning, we doubt that a single cognition mode would be adequate to accurately characterize a student. To represent students' learning characteristics from multiple perspectives, we utilize the dynamic routing algorithm [15] to extract cognition modes. Each _capsule_ can be interpreted as a cognition mode, as shown in the right half of Figure 2. Specifically, we first encode the sequence as a multidimensional vector:
\[\mathbf{h}=f_{e}(\chi_{u}),\quad\mathbf{h}\in\mathbb{R}^{d} \tag{7}\]
where \(f_{e}(\cdot)\) can be any sequence modeling structure, such as the LSTM structure in DKT [23] and the self-attention structure in AKT [6]. \(d\) is the vector dimension. Then, we initialize a probability vector \(\mathbf{b}=\mathbf{0}\in\mathbb{R}^{K}\), where \(K\) is the number of capsules. Each element in \(\mathbf{b}\) denotes the probability of \(\mathbf{h}\) belonging to the corresponding capsule.
Fig. 2: The overall architecture of the Cognition-Mode Aware Variational Representation Learning Framework (CMVF).
The elements in \(\mathbf{b}\) will be updated by computing the information transferred between the representations of students' sequences and each capsule in an iterative way. In each iteration, the updating process of the capsule \(j\) is computed by:
\[w_{j}=\frac{exp(b_{j})}{\sum_{k=1}^{K}exp(b_{k})} \tag{8}\] \[\mathbf{s}_{j}=w_{j}\mathbf{S}_{j}\mathbf{h} \tag{9}\]
where \(w_{j}\) denotes the probability that \(\mathbf{h}\) is divided into the capsule \(j\). \(\mathbf{S}_{j}\in\mathbb{R}^{d\times d}\) denotes the bilinear mapping matrix to be learned. With routing logits calculated, the vector of capsule \(j\) can be obtained as \(\mathbf{s}_{j}\). The vector-based capsule is expected to be able to represent different properties of an entity, in which the orientation of a capsule represents one property and the length of the capsule is used to represent the probability that the property exists. Hence, a non-linear _squash_ function is applied to \(\mathbf{s}_{j}\) as follows:
\[\mathbf{m}_{j}=\frac{||\mathbf{s}_{j}||^{2}}{1+||\mathbf{s}_{j}||^{2}}\frac{\mathbf{s}_{j}}{|| \mathbf{s}_{j}||} \tag{10}\]
where the length of \(\mathbf{m}_{j}\) represents the probability of the current input \(\mathbf{h}\) belonging to the capsule \(j\). The logit \(b_{j}\) is iteratively refined as follows:
\[b_{j}=b_{j}+\mathbf{m}_{j}^{T}\mathbf{S}_{j}\mathbf{h} \tag{11}\]
After \(r\) iterations, the final output vector \(\mathbf{m}_{j}\) represents the output of the student's sequence \(\mathbf{h}\) for the cognition mode (i.e., capsule) \(j\). To ensure that the probabilities of the student belonging to the \(K\) capsules sum to 1, we normalize the capsule lengths over all capsules as follows:
\[p(\mathbf{m}_{j}|\chi_{u})=\frac{||\mathbf{m}_{j}||}{\sum_{k=1}^{K}||\mathbf{m}_{k}||} \tag{12}\]
The procedure of dynamic routing is summarized in Algorithm 1, where \(p(\mathbf{m}_{j}|\chi_{u})\) denotes the probability that the student belongs to the \(j\)-th cognition mode.
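A minimal PyTorch-style sketch of Algorithm 1 for a single student is shown below; the tensor shapes, the initialization of the bilinear maps \(\mathbf{S}_{j}\), and the number of capsules are assumptions for illustration, not the released implementation.

```python
import torch

def dynamic_routing(h, S, num_iters=3):
    """Sketch of Algorithm 1 for a single student.

    h: encoded practice sequence, shape (d,)
    S: bilinear maps for the K capsules, shape (K, d, d)
    Returns capsule vectors m (K, d) and membership probabilities p (K,).
    """
    K, d, _ = S.shape
    b = torch.zeros(K)                       # routing logits
    Sh = torch.einsum('kij,j->ki', S, h)     # precompute S_j h for every capsule
    for _ in range(num_iters):
        w = torch.softmax(b, dim=0)          # Eq. 8
        s = w.unsqueeze(1) * Sh              # Eq. 9
        norm = s.norm(dim=1, keepdim=True)
        m = (norm ** 2 / (1 + norm ** 2)) * (s / (norm + 1e-8))  # squash, Eq. 10
        b = b + (m * Sh).sum(dim=1)          # Eq. 11
    p = m.norm(dim=1)
    p = p / p.sum()                          # Eq. 12
    return m, p

# Example usage with random tensors standing in for a real sequence encoder output.
h = torch.randn(64)
S = torch.randn(30, 64, 64) * 0.05
m, p = dynamic_routing(h, S)
print(m.shape, p.sum())
```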
_Re-parameterize Trick._ Next, we apply a re-parameterize trick to generate the posterior distributions as below:
\[q_{\phi_{u}}(\mathbf{z}_{u}|u) =\mathcal{N}(\mathbf{\mu}_{u},\mathbf{\sigma}_{u}^{2}) \tag{13}\] \[q_{\phi_{q}}(\mathbf{z}_{q}|q) =\mathcal{N}(\mathbf{\mu}_{q},\mathbf{\sigma}_{q}^{2}) \tag{14}\]
where \(\mu_{u}\) or \(\mu_{q}\), and \(\sigma_{u}\) or \(\sigma_{q}\), are obtained from the student ID or question ID with DNNs, as shown in Figure 2. Similarly, we can obtain the parameterized prior distributions as follows:
\[p_{\phi_{m}}(\mathbf{z}_{u}|m) =\mathcal{N}(\mathbf{\mu}_{m},\mathbf{\sigma}_{m}^{2}) \tag{15}\] \[p_{\phi_{c}}(\mathbf{z}_{q}|c) =\mathcal{N}(\mathbf{\mu}_{c},\mathbf{\sigma}_{c}^{2}) \tag{16}\]
where \(\mu_{c}\) and \(\sigma_{c}\) are obtained from the concept IDs related to the question with DNNs. \(\mu_{m}\) and \(\sigma_{m}^{2}\) can be calculated as:
\[\mathbf{\mu}_{m} =\sum_{1\leq i\leq K}p(\mathbf{m}_{i}|\chi_{u})\mathbf{\mu}_{m_{i}} \tag{17}\] \[\mathbf{\sigma}_{m}^{2} =\sum_{1\leq i\leq K}p(\mathbf{m}_{i}|\chi_{u})\mathbf{\sigma}_{m_{i}}^{2} \tag{18}\]
where \(\mathbf{\mu}_{m_{i}}\) and \(\mathbf{\sigma}_{m_{i}}^{2}\) are obtained from the student's \(i\)-th cognition mode representation with DNNs. Based on the above estimated distributions, we can generate embedding vectors for students and questions by random sampling, as follows:
\[\mathbf{e}_{u} =\mathbf{\mu}_{u}+\mathbf{\sigma}_{u}\odot\mathbf{\epsilon}_{u},\quad\mathbf{\epsilon}_{u}\sim\mathcal{N}(0,\mathbf{I}) \tag{19}\] \[\mathbf{e}_{q} =\mathbf{\mu}_{q}+\mathbf{\sigma}_{q}\odot\mathbf{\epsilon}_{q},\quad\mathbf{\epsilon}_{q}\sim\mathcal{N}(0,\mathbf{I}) \tag{20}\]
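A small sketch of the re-parameterization step in Eqs. 13-20 follows; the `VariationalEmbedding` module and its layer sizes are assumed names for illustration rather than the actual implementation.

```python
import torch
import torch.nn as nn

class VariationalEmbedding(nn.Module):
    """Maps an ID to a diagonal Gaussian posterior and samples via re-parameterization."""

    def __init__(self, num_ids, dim):
        super().__init__()
        self.mu = nn.Embedding(num_ids, dim)
        self.logvar = nn.Embedding(num_ids, dim)   # log sigma^2 for numerical stability

    def forward(self, ids, sample=True):
        mu, logvar = self.mu(ids), self.logvar(ids)
        if not sample:                 # inference: use the mean, as in Eq. 25
            return mu, mu, logvar
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)    # epsilon ~ N(0, I)
        return mu + std * eps, mu, logvar  # Eq. 19 / Eq. 20

# Example: posterior embeddings for a mini-batch of student IDs.
student_emb = VariationalEmbedding(num_ids=5000, dim=64)
e_u, mu_u, logvar_u = student_emb(torch.tensor([0, 1, 2]))
```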
### _Training Phase_
#### III-B1 Prediction
We concatenate the variational embeddings of students and questions to conduct predictions as follows:
\[\hat{y}=f_{\theta}(\mathbf{e}_{u},\,\mathbf{e}_{q},\,\mathbf{M},\,\mathbf{e}_{c}) \tag{21}\]
where \(\hat{y}\) is the prediction probability value. \(\mathbf{M}=\sum_{1\leq i\leq K}[p(\mathbf{m}_{i}|\chi_{u})\mathbf{m}_{i}]\) denotes the pooling of the student's cognition modes. \(\mathbf{e}_{c}\) denotes the mean embedding of the concepts related to \(q\). \(f_{\theta}(\cdot)\) represents the prediction layer, which can be FM [28], DNNs, or other structures. The resulting reconstruction error can be calculated by:
\[\mathcal{L}_{re}=-\tfrac{1}{L}\sum_{1\leq i\leq L}\left[ylog(\hat{y}_{i})+(1-y)log(1-\hat{y}_{i})\right] \tag{22}\]
where \(\hat{y}_{i}\) denotes the predicted probability of sampled \(\epsilon_{u}\) and \(\epsilon_{q}\). \(L\) is the Monte Carlo sampling number for each record.
#### III-B2 Regularized Priors
Although earlier KT research concurs that student representations can embody student cognition modes, we suspect that representations of active students could be learned from a vast number of observed samples and have the potential to reflect more characteristics apart from the cognition mode. Therefore, though \(KL(q_{\phi_{u}}(\mathbf{z}_{u}|u)||p_{\phi_{m}}(\mathbf{z}_{u}|m))\) may be advantageous for modeling infrequent students, there is a risk of impairing the richness of representations for frequent students. Consequently, we propose a personalized prior weight to adaptively adjust the regularization strength of the aforementioned regularization term:
\[\beta_{u}=1-\frac{1}{1+e^{-n_{u}}} \tag{23}\]
where \(n_{u}\) is the number of the student's practice records in the training dataset. In this way, the regularization of the cognition mode gets weaker as the number of the student's practice records increases, i.e., \(\beta_{u}\) gets closer to 0. Similarly, we also introduce an individualized prior weight for each question to help questions that occur frequently in the dataset learn rich information apart from the information of concepts, i.e., \(\beta_{q}=1-\frac{1}{1+e^{-n_{q}}}\).
At the same time, if we only depend on the above regularization terms in Eq. 6, our model still tends to overfit. For example, the model may push both \(\sigma_{u}\) and \(\sigma_{q}\) close to 0 and only optimize the parameters of \(\mu_{u}\) and \(\mu_{q}\), which makes the model degrade into a normal deep neural network. Therefore, to avoid this over-fitting issue, we propose to add regularization terms on the distributions of both students and questions by forcing the parameterized distributions to be close to a standard normal Gaussian distribution. In this way, the final objective function can be reformulated as below:
\[\mathcal{L}(\phi,\theta)=\mathcal{L}_{re}-\beta_{u}KL(q_{\phi_{u}}( \boldsymbol{z}_{u}|u)||p_{\phi_{m}}(\boldsymbol{z}_{u}|m))\] \[-\beta_{q}KL(q_{\phi_{q}}(\boldsymbol{z}_{q}|q)||p_{\phi_{c}}( \boldsymbol{z}_{q}|c))\] \[-\alpha[KL(q_{\phi_{u}}(\boldsymbol{z}_{u}|u)||\mathcal{N}(0, \boldsymbol{I}))+KL(q_{\phi_{q}}(\boldsymbol{z}_{q}|q)||\mathcal{N}(0, \boldsymbol{I}))]\] \[-\alpha[KL(p_{\phi_{m}}(\boldsymbol{z}_{u}|m)||\mathcal{N}(0, \boldsymbol{I}))+KL(p_{\phi_{c}}(\boldsymbol{z}_{q}|c)||\mathcal{N}(0, \boldsymbol{I}))] \tag{24}\]
where \(\alpha\) allows the model to achieve a better balance between latent-space independence and the reconstruction error for better prediction performance, as shown in \(\beta\)-VAE [8]. The total training procedure of CMVF is summarized in Algorithm 2.
```
Initialize parameters \(\theta,\phi\)
while not converged do
    for each mini-batch in the training dataset do
        for (t = 0; t \(<\) T; t = t + 1) do
            encode the historical practice as \(\boldsymbol{h}\) (Eq. 7)
            dynamic routing to model the probability that the input belongs to each cognition mode \(p(\boldsymbol{m}_{i}|\chi_{u})\) (Algorithm 1)
            randomly sample from the posterior distributions via the re-parameterize trick (Eq. 13-20)
            compute gradients \(\nabla_{\theta}\mathcal{L}(\phi,\theta)\) and \(\nabla_{\phi}\mathcal{L}(\phi,\theta)\) (Eq. 24)
        end for
        \(\theta,\phi\leftarrow\) update parameters using gradients
    end for
end for
return \(\theta,\phi\)
```
**Algorithm 2**: Training procedure of CMVF.
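The objective in Eq. 24 can be assembled as in the sketch below, written as a quantity to minimize (i.e., the negative of the objective, with the reconstruction term expressed as the binary cross-entropy of Eq. 22); the helper names `kl_gauss` and `kl_std` are assumptions, and diagonal Gaussians are assumed throughout.

```python
import torch

def kl_gauss(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dims.
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1).sum(dim=-1)

def kl_std(mu, logvar):
    # KL( N(mu, var) || N(0, I) )
    return -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1)

def cmvf_loss(bce, mu_u, lv_u, mu_m, lv_m, mu_q, lv_q, mu_c, lv_c, n_u, n_q, alpha=0.5):
    beta_u = 1 - torch.sigmoid(n_u.float())   # Eq. 23, per-student prior weight
    beta_q = 1 - torch.sigmoid(n_q.float())   # per-question prior weight
    loss = bce
    loss = loss + beta_u * kl_gauss(mu_u, lv_u, mu_m, lv_m)   # cognition-mode prior
    loss = loss + beta_q * kl_gauss(mu_q, lv_q, mu_c, lv_c)   # concept prior
    loss = loss + alpha * (kl_std(mu_u, lv_u) + kl_std(mu_q, lv_q))
    loss = loss + alpha * (kl_std(mu_m, lv_m) + kl_std(mu_c, lv_c))
    return loss.mean()

# Example call with random statistics for a batch of 4 records.
B, D = 4, 64
rnd = lambda: torch.randn(B, D)
loss = cmvf_loss(torch.rand(B), rnd(), rnd(), rnd(), rnd(), rnd(), rnd(), rnd(), rnd(),
                 n_u=torch.tensor([3, 50, 7, 200]), n_q=torch.tensor([10, 10, 4, 90]))
```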
### _Inferring Phase_
After training CMVF on the training dataset, we obtain the distribution of each student. Thus, during the inference phase, we can regard the mean values of the estimated posterior distributions as the representation vectors of each student and each question to conduct the prediction \(\hat{y}\) as follows:
\[\boldsymbol{e}_{u}=\boldsymbol{\mu}_{u},\quad\boldsymbol{e}_{q}= \boldsymbol{\mu}_{q} \tag{25}\] \[\hat{y}=f_{\theta}(\boldsymbol{e}_{u},\,\boldsymbol{e}_{q},\, \boldsymbol{M},\,\boldsymbol{e}_{c}) \tag{26}\]
where \(\boldsymbol{e}_{u}\) and \(\boldsymbol{e}_{q}\) are obtained by replacing the random sampling vectors \(\boldsymbol{\epsilon}_{u}\) and \(\boldsymbol{\epsilon}_{q}\) in Eq. 19 and Eq. 20 with \(\boldsymbol{0}\).
## IV Experiments
In this section, we conduct experiments to evaluate the performance improvement brought by CMVF when plugged into various backbones. Several results can be highlighted: \(\bullet\) When plugged into various network backbones, our framework CMVF achieves state-of-the-art results compared to various KT baselines on three real-world datasets. \(\bullet\) CMVF performs better than both KTM and CL4KT in prediction performance for infrequent students.
\(\bullet\) The multinomial likelihood with dynamic routing compares favorably to the common Gaussian and logistic likelihoods.
### _Datasets_
We evaluate the performance of methods on three datasets: (1) ASSIST2012, (2) EdNet, and (3) NIPS2020. In this paper, we follow [25] and define infrequent and frequent students as students whose practice times are in the top 80%-100% and top 20%, respectively, according to the reverse order of practice times. The detailed characteristics of the three datasets are listed in Table I, where _infreq_ denotes the inactive students. \(\bullet\)**ASSIST2012**1: It is collected from the ASSISTments intelligent tutoring system. The average length of practice sequences for infrequent students is 4.11. Attribute features of students include _student_class_id_, _teacher_id_, _school_id_, _tutor_mode_. \(\bullet\)**EdNet**2: It comes from the Santa platform [5] and is selected from EdNet-KT1 by previous works [17] with 5,000 students' records. The average sequence length for infrequent students is 4.87. Attribute features are _user_answer, elapsed_time_. \(\bullet\)**NIPS2020**3: It stems from the NeurIPS 2020 Education Challenge provided by Eedi, with practices from September 2018 to May 2020 [37]. The average sequence length for infrequent students is longer than in the other two datasets, 44.65. The attribute features include _Gender, GroupId, SchemeOfWorkId_.
Footnote 1: [https://sites.google.com/site/assistmentsdata/home](https://sites.google.com/site/assistmentsdata/home)
Footnote 2: [https://github.com/riiid/ednet](https://github.com/riiid/ednet)
Footnote 3: [https://eedi.com/projects/neurips-education-challenge](https://eedi.com/projects/neurips-education-challenge)
Figure 3 displays the distribution of frequent and infrequent students. In ASSIST2012 and EdNet datasets, the correct answer rate for frequent students is generally more concentrated, while the distribution for infrequent students is more scattered, indicating higher uncertainty. This finding verifies the necessity of modeling uncertainty. In the NIPS2020 dataset, our definition of infrequent students resulted in an average sequence length of 44.65, as shown in Table I, leading to a similar distribution of infrequent students to frequent students.
### _Baselines_
\(\bullet\)**DKT**[23]: This method is the first to apply deep neural network into the KT field. It utilizes LSTMs to model the evolution of students' knowledge states with the hidden units. \(\bullet\)**DKVMN**[41]: This method extends DKT by using a key-value memory network to store and update students' knowledge states based on a static key matrix and a dynamic value matrix and can mine the relationship between concepts. \(\bullet\)**KTM**[35]: This method is the first to apply the FM [28] into modeling multiple factors related to students' learning and is
a quite generic framework for factor analysis methods.
\(\bullet\)**SAKT**[26]: This method applies the structure of Transformer to model students' practice sequences to enhance the ability to capture long-term dependencies between practices.
\(\bullet\)**AKT**[6]: This method proposes to update the practice representation with awareness of contexts and introduces the IRT [18] to acquire question embeddings.
\(\bullet\)**DIMKT**[30]: This method measures students' subjective feelings of question difficulty and estimates students' knowledge acquisition while answering questions of different difficulty levels to model the evolution of knowledge states.
\(\bullet\)**KTM+**[35]: KTM can freely incorporate attribute features into predicting students' responses. In this paper, we define the KTM+ as the KTM-based method with attribute features.
\(\bullet\)**CL4KT**[13]: This method introduces a contrastive learning framework that reveals semantically similar or dissimilar examples of a learning history with data augmentation methods.
To assess the effectiveness of CMVF, we select multiple classic methods including DKT (the cornerstone of deep KT methods), KTM (a general framework for factor analysis methods), and DIMKT (a state-of-the-art method) as the network backbone for the CMVF framework.
\(\bullet\)**CMVF+DKT**: We select the hidden state of LSTM in DKT, i.e. \(\mathbf{h}\), as the input of the dynamic routing algorithm to extract cognition mode \(\mathbf{M}\). To incorporate student embedding into the DKT, we apply a fully-connected layer for final prediction, i.e. \(y=\sigma(MLP(\mathbf{M}\oplus\mathbf{e}_{u}\oplus\mathbf{e}_{q}))\), where \(\mathbf{e}_{u}\) and \(\mathbf{e}_{q}\) denote the variational embeddings of students and questions. \(\sigma(\cdot)\) denotes the sigmoid function.
\(\bullet\)**CMVF+KTM**: We exploit _wins_ and _fails_ factors in KTM [35] to extract the cognition modes, and replace the original embeddings of factors with variational embeddings. Moreover, we replace the representations of _wins_ and _fails_ factors in FM with the students' cognition mode representation.
\(\bullet\)**CMVF+DIMKT**: The output of _knowledge state updating_ in DIMKT is utilized to extract cognition modes \(\mathbf{M}\). As the original DIMKT did not incorporate student representations, we fuse difficulty-enhanced question embedding and student embedding into \(\mathbf{x}=\mathbf{W}^{T}[\mathbf{e}_{u}\oplus\mathbf{e}_{q}\oplus\mathbf{e}_{c}\oplus\mathbf{Q}\mathbf{S }\oplus\mathbf{K}\mathbf{C}]+\mathbf{b}\), where \(\mathbf{Q}\mathbf{S}\) and \(\mathbf{K}\mathbf{C}\) denote the difficulty of question and concept, respectively. At last, we make the prediction as \(y=\sigma(\mathbf{M}\cdot\mathbf{x})\).
### _Experimental Settings_
#### IV-C1 Implementation Details
Unlike previous works that typically discard students with less than 10 practice behavior records [13, 30], we retain more infrequent students by only discarding those with fewer than 3 records. For the three datasets, we use the top 80% of each student's records as the training set to predict their remaining behavior, running five experiments with varying random seeds for statistical significance analysis. We set the embedding size to 64 and truncate student sequences longer than 200. Mini-batch Adam [11] optimization is used for all methods, with a mini-batch size of 2048, where the learning rate is searched from \(\{1e-5,5e-4,1e-4,...,1e-2\}\). The Xavier parameterized initialization method [7] is used to initialize parameters. All models are trained on a Linux server with two Intel(R) Xeon(R) CPU E5-2620 v4 and a NVIDIA TITAN XP GPU.
#### IV-C2 Evaluation Metrics
Accuracy (ACC) and Area Under Curve (AUC) are two common metrics for the KT task, where the threshold of ACC is set as 0.5. AUC is more robust than ACC, which aims to measure the rank performance of methods and is in line with the application requirements of KT to recommend questions to students. Therefore, AUC is the main metric in this paper. In addition, following [38], we introduce the _RealImpr_ metric to measure the relative improvement over the base method, which can be defined as follows:
\[RealImpr=(\frac{AUC(target)-0.5}{AUC(base)-0.5}-1)\times 100\% \tag{27}\]
where we adopt the classical DKT method as the base model.
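A trivial sketch of this metric follows; the AUC values in the example are placeholders, not results from this paper.

```python
def real_impr(auc_target: float, auc_base: float) -> float:
    """RealImpr (Eq. 27): relative improvement over the base model, in percent."""
    return ((auc_target - 0.5) / (auc_base - 0.5) - 1) * 100

# Example with made-up AUC values.
print(real_impr(auc_target=0.78, auc_base=0.75))  # ~12.0
```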
### _Performance on Predicting Students' Responses_
This section presents a comparison of CMVF with state-of-the-art methods using different backbones, with Table II showing the mean results of three metrics for all methods on the three datasets, with the highest score highlighted in bold.
Based on Table II, we observe that CMVF significantly improves the accuracy of KT methods in predicting infrequent student groups, demonstrating its reliability in modeling uncertainty for these students. We also find a more significant improvement in CMVF's scored AUC for infrequent student groups compared to the overall AUC improvement, supporting our hypothesis that CMVF has a greater effect on infrequent students who practice less and exhibit greater uncertainty.
Fig. 3: Kernel density estimation of frequent and infrequent students in datasets. Frequent students are those in the top-20% of practice times.
However, since the proportion of infrequent students in the entire dataset is relatively small, the overall AUC improvement is relatively lower than that for infrequent students.
From Table II, we also observe that most KT models perform poorly on infrequent student groups, particularly in the EdNet dataset. As shown in Table I, the average practice sequence length of infrequent students in EdNet is quite short, only 4.87. Meanwhile, as shown in Figure 3, the distribution of frequent and infrequent students in EdNet is significantly different from that of the other two datasets. However, due to unbalanced training sets and a large proportion of frequent student samples, model parameter learning is often influenced more by frequent students, leading to overwhelmed personalization of infrequent students and difficulty in distinguishing the difference between two significantly different distributions.
Furthermore, Table II shows that introducing attribute features (KTM+) and data augmentation methods (CL4KT) outperforms KTM and AKT (the backbone of CL4KT) in predicting infrequent students. The method of data augmentation combined with contrastive learning is found to be better than introducing attribute features in improving the model's ability to predict infrequent students, as shown in Table II. In addition, CL4KT improves the performance of AKT more significantly in NIPS2020, where the average length of student sequences is longer, allowing for more reliable enhanced samples. However, the limited benefit of CL4KT on AKT in both the EdNet and ASSIST2012 datasets, where the average sequences are shorter, highlights the limitations of data augmentation methods.
Fig. 4: The sensitivity analysis of the number of capsules \(K\) in the three datasets.
Fig. 5: The sensitivity analysis of the regularization coefficient \(\alpha\) in the three datasets.
To further compare the superiority of CMVF with the above two methods, we apply these methods uniformly to the DIMKT backbone for experiments, as shown in Table III. The results demonstrate that CMVF significantly outperforms the other two methods on all three datasets.
### _Sensitivity Analysis of Hyper-Parameters_
In this paper, CMVF includes two manually-tuned hyper-parameters: the number of capsules \(K\) and the coefficient \(\alpha\) of the regularization term. Therefore, in this section, we conducted experiments with different values of these parameters to investigate their impact on model performance.
#### IV-E1 Sensitivity Analysis of Capsule Number
We evaluate CMVF's performance using five different capsule numbers with three backbone models (DKT, KTM, and DIMKT), namely \(K=5,10,30,50\), and \(100\), as shown in Figure 4. Small values of K (e.g., 5) are found to lead to low prediction performance because the number of capsules determines the angle of mining students' cognition modes. The more angles, the richer the information that can be captured in the students' practice sequences. However, if \(K\) is too large, the model may overfit due to a lack of data for optimizing parameters. For convenience, we should choose a value between 30 and 50 for the number of capsules that achieves good performance and is common to most datasets, as illustrated in Figure 4.
#### IV-E2 Sensitivity Analysis of Regularization Coefficient
Since the regularization term of the standard normal Gaussian distribution should be an auxiliary task and the main task is still the reconstruction error, we set the coefficient \(\alpha\) of the regularization term in the range of 0 to 1 and evaluate CMVF's performance with five different values: 0.1, 0.3, 0.5, 0.7, and 0.9. The experimental results on the three datasets are shown in Figure 5. The KT task is intended to predict student responses to practice, not to maximize the likelihood of the simulated student distribution. Therefore, the closer the value of \(\alpha\) is to 1, the more the task deviates from this original intention, leading to decreased prediction performance for responses. On the other hand, too small a value may cause our regularization to lose its effect, so that the model degenerates into an ordinary deep-neural-network KT model, leading to overfitting. Therefore, setting the value of \(\alpha\) in the range of 0.5-0.7 achieves a more universal effect.
### _Ablation Study_
To get deep insights into the contributions of components in CMVF, we conduct ablation studies by applying multiple variants to the DKT model which is one of the lightest structures. The details of four variants are as follows:
\(\bullet\)**CMVF(Uniform)**. We replace the prior distribution of CMVF with the uniform prior distribution, as VAE does [12].
\(\bullet\)**CMVF(R-Capsule)**. We remove the dynamic routing component and directly input the representations of students' practices into re-parametrize component to regularize training.
\(\bullet\)**CMVF(R-Reg)**. We remove the two mutual regularization terms in CMVF, i.e. \(KL(q_{\phi_{u}}(\mathbf{z}_{u}|u)||p_{\phi_{m}}(\mathbf{z}_{u}|m))\) and \(KL(q_{\phi_{\phi}}(\mathbf{z}_{q}|q)||p_{\phi_{e}}(\mathbf{z}_{q}|c))\), and only keep regularization terms of standard normal Gaussian distribution.
\(\bullet\)**CMVF(Point)**. We replace the distinct distribution estimate with the static representation estimate by directly adopting the mean values \(\mathbf{\mu}_{u}\) and \(\mathbf{\mu}_{q}\) as the embeddings.
Table IV reports the average results of five experiments. Firstly, we observe that CMVF(Uniform) and CMVF(R-Reg) exhibit similar performance, but are inferior to CMVF. This highlights the benefit of using a parameterized prior distribution that perceives students' cognition modes, which prevents the individuality of infrequent students from being overwhelmed. Secondly, CMVF is more effective than CMVF(R-Capsule), because the dynamic routing algorithm can extract student practice information from multiple capsules, enhancing student sequence modeling. Furthermore, CMVF(R-Capsule) outperforms both CMVF(Uniform) and CMVF(R-Reg), indicating that even if the dynamic routing algorithm is removed, performance can still improve as long as student practice information is considered, highlighting the benefit of the parameterized prior. Finally, CMVF(Point) performs the worst among the four variants, providing strong evidence that distribution estimates outperform point estimates.
## V Conclusions
This paper proposes a general representation learning framework for KT called Cognition-Mode Aware Variational Framework (CMVF) to learn robust student representations based on variational inference and the dynamic routing algorithm. Specifically, to better model the uncertainty of students with few practice records, we introduce a probabilistic model to generate a distribution for each student and use Bayesian inference for parameter estimation. Meanwhile, to better help the model estimate the distribution of infrequent students, we extract cognition modes from students' historical practice sequences through dynamic routing to set similar prior distributions for students with similar characteristics, thereby pulling distributions of students with similar cognition modes
closer. Extensive experimental results validate the superiority and compatibility of our framework CMVF.
## Acknowledgment
This work is supported by 2022 Beijing Higher Education "Undergraduate Teaching Reform and Innovation Project" and 2022 Education and Teaching Reform Project of Beijing University of Posts and Telecommunications (2022JXYJ-F01).
|
2310.02513 | A Recipe for Improved Certifiable Robustness | Recent studies have highlighted the potential of Lipschitz-based methods for
training certifiably robust neural networks against adversarial attacks. A key
challenge, supported both theoretically and empirically, is that robustness
demands greater network capacity and more data than standard training. However,
effectively adding capacity under stringent Lipschitz constraints has proven
more difficult than it may seem, evident by the fact that state-of-the-art
approach tend more towards \emph{underfitting} than overfitting. Moreover, we
posit that a lack of careful exploration of the design space for Lipshitz-based
approaches has left potential performance gains on the table. In this work, we
provide a more comprehensive evaluation to better uncover the potential of
Lipschitz-based certification methods. Using a combination of novel techniques,
design optimizations, and synthesis of prior work, we are able to significantly
improve the state-of-the-art VRA for deterministic certification on a variety
of benchmark datasets, and over a range of perturbation sizes. Of particular
note, we discover that the addition of large ``Cholesky-orthogonalized residual
dense'' layers to the end of existing state-of-the-art Lipschitz-controlled
ResNet architectures is especially effective for increasing network capacity
and performance. Combined with filtered generative data augmentation, our final
results further the state of the art deterministic VRA by up to 8.5 percentage
points\footnote{Code is available at \url{https://github.com/hukkai/liresnet}}. | Kai Hu, Klas Leino, Zifan Wang, Matt Fredrikson | 2023-10-04T01:18:59Z | http://arxiv.org/abs/2310.02513v2 | # A Recipe for Improved Certifiable Robustness: Capacity and Data
###### Abstract
Recent studies have highlighted the potential of Lipschitz-based methods for training certifiably robust neural networks against adversarial attacks. A key challenge, supported both theoretically and empirically, is that robustness demands greater network capacity and more data than standard training. However, effectively adding capacity under stringent Lipschitz constraints has proven more difficult than it may seem, evident by the fact that state-of-the-art approaches tend more towards _underfitting_ than overfitting. Moreover, we posit that a lack of careful exploration of the design space for Lipschitz-based approaches has left potential performance gains on the table. In this work, we provide a more comprehensive evaluation to better uncover the potential of Lipschitz-based certification methods. Using a combination of novel techniques, design optimizations, and synthesis of prior work, we are able to significantly improve the state-of-the-art _verified robust accuracy_ (VRA) for deterministic certification on a variety of benchmark datasets, and over a range of perturbation sizes. Of particular note, we discover that the addition of large "Cholesky-orthogonalized residual dense" layers to the end of existing state-of-the-art Lipschitz-controlled ResNet architectures is especially effective for increasing network capacity and performance. Combined with filtered generative data augmentation, our final results further the state-of-the-art deterministic VRA by up to 8.5 percentage points. Code is available at [https://github.com/hukkai/liresnet](https://github.com/hukkai/liresnet).
## 1 Introduction
Intentionally crafted perturbations (adversarial examples) have the potential to alter the predictions made by neural networks (Szegedy et al., 2014). Many methods have been proposed to improve the _robustness_ of deep networks, either empirically or provably. In safety-critical domains especially, guarantees against adversarial examples are indispensable. Commonly, provable defenses provide certificates of _local robustness_ to accompany a model's prediction; i.e., predictions should be guaranteed to be consistent within an \(\ell_{p}\)-norm-bounded \(\epsilon\)-ball around the input. The success of robustness certification techniques is measured by the verified robust accuracy (VRA)--the fraction of points with correct predictions that are proven to be \(\epsilon\)-locally robust.
To date, a look at the public robustness certification leaderboard (accessed Sept. 2023) shows that the best results are achieved by variants of _Randomized Smoothing_ (RS) (Cohen et al., 2019; Salman et al., 2019; Yang et al., 2021; Jeong et al., 2021; Carlini et al., 2022). However, there are two primary limitations associated with RS. To begin with, RS only offers a _probabilistic_ guarantee, typically configured to have a 0.1% false positive certification rate. Perhaps more importantly, the inference of RS involves substantial computational overhead--this limitation is
significant enough that these methods are typically tested on only a 1% subset of the ImageNet validation dataset due to timing constraints.
Another successful family of methods perform certification using Lipschitz bounds (Trockman and Kolter, 2021; Leino et al., 2021; Hu et al., 2023; Araujo et al., 2023; Wang and Manchester, 2023). Essentially, the Lipschitz constant of the neural network provides a bound on the maximum change in output for a given input perturbation, making it possible to certify local robustness. Compared with RS-based methods, Lipschitz-based methods can provide _deterministic_ certification, and are efficient enough to perform robustness certification at scale, e.g., on the full ImageNet (Hu et al., 2023). While Lipschitz-based methods are promising in terms of both deterministic certification and efficiency, there is a noticeable performance gap between these methods and RS-based methods. It is _not_ established, however, that this discrepancy is tied to a fundamental limitation of deterministic certification. In this work, we aim to narrow the gap between Lipschitz-based and RS-based methods.
One important avenue for improving the performance of Lipschitz-based certification is through increasing model capacity (ability to fit data). Bubeck and Sellke (2021) have shown that robust classification requires more capacity than is necessary for standard learning objectives, and Leino (2023) has shown more specifically that further capacity is required for tight Lipschitz-based certification. But while increasing model capacity for standard training is trivial--adding more blocks/layers, increasing the network width and using self-attention mechanisms are all possible approaches--in Lipschitz-based certified training, the picture is more nuanced because the network's Lipschitz constant is tightly controlled, limiting the function's expressiveness. Thus, even models with many parameters may still _underfit_ the training objective.
Figure 1: An overview of our recipe for training certifiable robust neural networks with Lipschitz-based certification methods.
In addition, we find that an apparent limitation preventing prior work from discovering the full potential of Lipschitz-based certification stems from the framing and evaluation setup. Specifically, most prior work is framed around a particular novel technique intended to supersede the state-of-the-art, necessitating evaluations centered on standardized benchmark hyperparameter design spaces, rather than exploring more general methods for improving performance (e.g., architecture choice, data pipeline, etc.). Although we introduce several of our own innovations, we present this work as more of a "master class" on optimizing Lipschitz-based robustness certification that draws from and synthesizes many techniques from prior work to achieve the best overall performance. This angle lets us explore design choices meant to be synergistic to the overall Lipschitz-based approach, rather than restricting us to choices tailored for head-to-head comparisons.
This work provides a more comprehensive evaluation to illuminate the potential of Lipschitz-based certification methods. First and foremost, we find that by delving more thoroughly into the design space of Lipschitz-based approaches, we can improve the state-of-the-art VRA for deterministic certification _significantly_ on a variety of benchmark datasets, and over a range of perturbation sizes. In the process, we propose a number of additional techniques not already used by the prior literature that contribute to these large performance improvements. That is, our results are achieved using a combination of design optimization, novel techniques, and synthesis of prior work.
After covering the relevant background in Section 2, we begin in Section 3 with a brief survey of the design space for Lipschitz-based certified training, focusing on three key components: (1) architecture choice, (2) methods for controlling the Lipschitz constant, and (3) data augmentation. First, we cover the various architecture innovations and building blocks that have been used in the prior literature. Based on an analysis of the challenges faced by existing work, and motivated by the goal of efficiently increasing network capacity, we propose additional directions to explore along the architecture axis, including two novel network building blocks. Next, we provide an overview of the existing methods used for controlling the Lipschitz constant during training, and propose one of our own that can be combined with other approaches. Third we discuss the role data augmentation plays in training high-capacity models. Specifically, we cover DDPM (Karras et al., 2022), which prior work has found helpful for certified training, and propose an alteration to the typical augmentation strategy that we find further boosts performance. Section 4 provides an in-depth evaluation that explores along the three dimensions identified in Section 3, shedding light on the most promising design choices, and demonstrating the significant performance improvements we achieve in this work. Finally, Section 5 concludes the paper.
## 2 Background
The problem of providing provable guarantees against adversarial examples is typically formalized using _local robustness_, which guards against the specific class of _small-norm adversarial examples_. A classification model, \(F(x)\!=\!\operatorname*{argmax}_{i}\!f(x)_{i}\), is \(\epsilon\)-locally robust at point \(x\) if
\[\forall x^{\prime}\.\ ||x\!-\!x^{\prime}||_{p}\!\leq\!\epsilon\! \Longrightarrow\!F(x)\!=\!F(x^{\prime}).\]
Given an upper bound, \(K\), of the Lipschitz constant of \(f\) (i.e., a bound on \(\sup_{x\neq x^{\prime}}||f(x)-f(x^{\prime})||/||x-x^{\prime}||_{p}\)), we can bound how much any logit can change under a perturbation of norm at most \(\epsilon\): if the margin between the predicted class's logit and every other logit exceeds this worst-case change, then \(F\) is certified to be \(\epsilon\)-locally robust at \(x\). In principle, this certification
procedure may be very loose, both because it may be hard to obtain a tight bound on the Lipschitz constant, and because the Lipschitz constant provides only a worst-case analysis of the model's local behavior at \(x\). The key to Lipschitz-based certification is that both of these issues are mitigated by the fact that the Lipschitz constant is tightly controlled (either by hard constraints or by regularization), and the certification procedure is incorporated into the model's loss function.
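To make the certification step concrete, the following sketch (our own illustrative code, not taken from any particular prior implementation) shows the margin-based check; it assumes `lipschitz_bound` already bounds the Lipschitz constant of every pairwise logit difference, which is the quantity the methods discussed below control during training.

```python
import numpy as np

def certified_robust(logits: np.ndarray, lipschitz_bound: float, eps: float) -> bool:
    """Sketch of margin-based Lipschitz certification for one input.

    Assumes `lipschitz_bound` upper-bounds the Lipschitz constant of each
    pairwise logit difference f_i - f_j (an assumption of this sketch).
    """
    sorted_logits = np.sort(logits)
    margin = sorted_logits[-1] - sorted_logits[-2]
    # An eps-bounded perturbation can shrink any logit gap by at most
    # lipschitz_bound * eps, so a larger margin certifies the prediction.
    return bool(margin > lipschitz_bound * eps)
```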
Lipschitz-based certification methods in the literature have a few common elements. First, all use essentially the same aforementioned certification procedure. Second, the Lipschitz constant of the entire network is bounded by the product of the layer-wise Lipschitz constants. The key variation between methods is how they perform what we will call _Lipschitz control_, which ensures that (1) the Lipschitz bound does not explode, and (2) the learned function can be (reasonably) tightly certified using the procedure above. We characterize these variations further in Section 3.2.
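The layer-wise product bound can be sketched as follows. This is illustrative code of ours rather than any specific method's implementation; the power-iteration routine only estimates each spectral norm, and real implementations take additional care to make the per-layer bounds sound.

```python
import numpy as np

def spectral_norm_estimate(weight: np.ndarray, n_iter: int = 100) -> float:
    """Estimate a dense layer's Lipschitz constant (its spectral norm) by power iteration."""
    v = np.random.randn(weight.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = weight.T @ (weight @ v)
        v /= np.linalg.norm(v)
    return float(np.linalg.norm(weight @ v))

def network_lipschitz_bound(weights) -> float:
    """Whole-network bound as the product of per-layer bounds; 1-Lipschitz
    activations such as MinMax contribute a factor of 1."""
    bound = 1.0
    for w in weights:
        bound *= spectral_norm_estimate(w)
    return bound
```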
## 3 Design Space
We now turn to a brief survey and analysis of the design space for Lipschitz-based certified training. We focus on three primary axes of the design space: (1) architecture choice, (2) methods for controlling the Lipschitz constant, and (3) data augmentation, covered in Sections 3.1-3.3. We include a discussion of what prior work has done along each axis, as well as our analysis and proposals for further exploration.
### Architectures
Lipschitz-based certification has typically made use of a small set of architecture building blocks that are compatible with the overall approach described in Section 2. This includes 1-Lipschitz activation functions, dense layers, convolutional layers, and residual layers (with a few variations). Modules such as pooling, batch normalization, and attention are not frequently used, either because they lead to loose Lipschitz bounds, or because they are not Lipschitz at all. While ReLU activations are 1-Lipschitz, in the context of Lipschitz-based certification, they have been ubiquitously replaced by MinMax activations, whose gradient norm preservation property has been shown to be invaluable for this setting by Anil et al. (2019).
Unfortunately, while the space of architectures may be important to explore for maximizing deterministic VRA, prior work has had relatively little exploration here, often using benchmark architectures first proposed half a decade ago. On the other hand, new methods for performing Lipschitz control that present results on larger architectures may come across as misleading, as it becomes unclear if the performance benefits come from the added capacity or the Lipschitz control method. In this work we resolve this by exploring these axes more independently.
We begin with the LiResNet (Hu et al., 2023) architecture as a reference point because it performs best on the CIFAR-10/100 and Tiny-ImageNet datasets. The LiResNet architecture is composed of 4 parts: (1) the stem layer, a single convolution layer, to convert images into feature maps; (2) the backbone, a stack of several residual convolutional blocks, to extract features from the feature maps; (3) the neck, 1\(\sim\)2 layers to convert the feature map into a flattened vector; and (4) the classification head for predictions. Prior to recent innovations that made residual blocks effective for Lipschitz-based certification, deep architectures were not practical. However, with the LiResNet architecture, Hu et al. were able to increase the model capacity by increasing the number of blocks \(L\) and the number of channels used by each block \(D\). Unfortunately, they report
diminishing returns beyond a dozen or so blocks (and \(D\sim 512\)), at which point the network capacity is not even enough to overfit the CIFAR-10 training set.
We posit that stacking the same block is less effective for adding capacity in Lipschitz-based training, where the network Lipschitz constant is tightly controlled. Specifically, since the Lipschitz constant is bounded by the product of all blocks' Lipschitz bounds, we hypothesize that any looseness in the layer-wise bounds compounds, causing overly-deep models to become over-regularized, effectively destroying their capacity. We therefore propose exploration of additional architecture features that can more effectively add capacity beyond the baseline LiResNet architecture.
Attention-like Mechanisms.Attention mechanisms (Vaswani et al., 2017; Dosovitskiy et al., 2020) have shown excellent ability to improve model capabilities in standard training. However, attention cannot be directly applied to Lipschitz-based training since it does not have a Lipschitz bound. One alternative could be the Spatial-MLP (Touvron et al., 2022; Yu et al., 2022). Convolution layers extract local features, while the Spatial-MLP can extract non-local features; combining the two different operations may allow richer features. Let \(\mathbf{X}\in\mathbb{R}^{C\times S\times S}\) denote the feature map with \(C\) channels and height and width \(S\), and let \(\mathbf{W}\in\mathbb{R}^{S^{2}\times S^{2}}\) denote the weights. The formulation of a Spatial-MLP block is (bias ignored):
\[\mathbf{X}[c,h,w]=\mathbf{X}[c,h,w]+\sum_{p=1}^{S}\sum_{q=1}^{S}\mathbf{W}[hS+w,pS+q]\mathbf{X}[c,p,q]. \tag{1}\]
The Lipschitz constant of this operation is \(\|\mathbf{I}+\mathbf{W}\|_{2}\). We also consider using a group of Spatial-MLPs with more parameters to increase model capacity. Suppose we use \(G\) groups (the number of channels \(C\) must be divisible by \(G\)); then we have \(\mathbf{W}_{i}\in\mathbb{R}^{S^{2}\times S^{2}}\), \(1\leq i\leq G\), as the weights. The formulation of a grouped Spatial-MLP block is (where \(k=\lfloor cG/C\rfloor\), the integer in \([cG/C,cG/C+1)\)):
\[\mathbf{X}[c,h,w]=\mathbf{X}[c,h,w]+\sum_{p=1}^{S}\sum_{q=1}^{S}\mathbf{W}_{k}[hS+w,pS+q] \mathbf{X}[c,p,q]. \tag{2}\]
The Lipschitz constant of this operation is \(\max_{i}(\|\mathbf{I}+\mathbf{W}_{i}\|_{2})\).
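The block above can be written, for instance, as the following PyTorch sketch. This is our own illustration, not a reference implementation; the class name, the zero initialization of \(\mathbf{W}_i\) (which makes the block start as the identity), and the exact bound computation are assumptions of the sketch.

```python
import torch

class GroupSpatialMLP(torch.nn.Module):
    """Sketch of the (grouped) spatial-MLP residual block of Eq. (2)."""

    def __init__(self, channels: int, size: int, groups: int = 1):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        self.size = size
        # One S^2 x S^2 spatial-mixing matrix per group, zero-initialized.
        self.weight = torch.nn.Parameter(
            torch.zeros(groups, size * size, size * size))

    def forward(self, x):  # x: (batch, C, S, S)
        b, c, s, _ = x.shape
        flat = x.reshape(b, self.groups, c // self.groups, s * s)
        # output position q gets sum_p W[q, p] * x[p], shared within a group
        mixed = torch.einsum('bgcp,gqp->bgcq', flat, self.weight)
        return x + mixed.reshape(b, c, s, s)

    def lipschitz_bound(self):
        # max_i ||I + W_i||_2, as stated in the text
        eye = torch.eye(self.size * self.size, device=self.weight.device)
        return max(torch.linalg.matrix_norm(eye + w, ord=2) for w in self.weight)
```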
Dense Layers.Another solution is to add large fully connected (i.e., dense) layers after the neck. Early deep architectures like VGG employ this practice, and recent work in Lipschitz-based training also gets mileage from many large dense layers (Araujo et al., 2023).
We also propose a variation on standard dense layers inspired by the LiResNet block of Hu et al., which adds a residual connection to a single convolutional layer by modifying the layer as \(f(x)=x+\text{conv}(x)\). Analogously, for a dense layer with weight matrix \(W\), we can add a residual connection to form what we call a _residual dense layer_: \(f(x)=(W+I)x\).
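A minimal sketch of such a residual dense layer is shown below (illustrative only; the near-zero initialization of \(W\), which makes the layer start close to the identity, is our own choice):

```python
import torch

class ResidualDense(torch.nn.Module):
    """Sketch of the proposed residual dense layer, f(x) = (W + I) x."""

    def __init__(self, dim: int):
        super().__init__()
        # W initialized near zero => the layer starts near the identity map
        self.weight = torch.nn.Parameter(torch.zeros(dim, dim))

    def forward(self, x):
        return x + torch.nn.functional.linear(x, self.weight)
```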
### Lipschitz Control
Lipschitz-based certification requires the network to have a low Lipschitz constant, since an upper bound on the Lipschitz constant is used to bound output changes under input perturbations, and if it is too large, certification becomes difficult. There are two primary categories of Lipschitz control used in the literature: (1) Lipschitz regularization and (2) Lipschitz constraints.
The prevailing Lipschitz regularization approach is GloRo training, proposed by Leino et al. (2021). In this approach, the layer-wise Lipschitz constants are computed as part of the forward pass
and used to incorporate Lipschitz-based certification into the training objective. Thus the gradient provides feedback to keep the Lipschitz constant under control and optimized for certification. GloRo regularization is used by Hu et al. (2023), who achieve the current state-of-the-art VRA.
A wide variety of Lipschitz constraint approaches exist, typically using special re-parameterizations that constrain each linear layer's weights to be orthogonal (the Lipschitz constant of an orthogonal transformation is 1). We consider several of these approaches in our design space, described below.
**Cayley transformation.**(Trockman and Kolter, 2021) For skew-symmetric matrix \(V\), \(W\!=\!(I\!+\!V)^{-1}(I\!-\!V)\) is orthogonal, thus \(f(x;\!V)\!=\!Wx\) is 1-Lipschitz.
**Matrix exponential.**(Singla and Feizi, 2021) For skew-symmetric matrix \(V\), \(W\!=\!\exp(V)\) is orthogonal, thus \(f(x;\!V)\!=\!Wx\) is 1-Lipschitz.
**Layer-wise Orthogonal Training Layer.**(Xu et al., 2022) For non-singular matrix \(V\), \((VV^{\top})^{-\frac{1}{2}}V\) is orthogonal. To obtain a differentiable inverse square root, Newton's iteration steps are performed.
**Almost Orthogonal Layer.**Prach and Lampert (2022) shows \(f(x;\!V)\!=\!V\mathrm{diag}(\sum_{j}\!|V^{\top}V|_{ij})^{-1}x\) is 1-Lipschitz.
**SDP-based Lipschitz Layer.**Araujo et al. (2023) shows
\[h(x;\!W,\!q)\!=\!x\!-\!2W\mathrm{diag}(\sum_{j}\!|W^{\top}W|_{ij}\frac{q_{j}} {q_{i}})^{-1}\sigma(Wx)\]
is 1-Lipschitz with 1-Lipschitz activation \(\sigma(\cdot)\).
**Sandwich Layer.**Wang and Manchester (2023) shows
\[h(x;\!A,\!B,\!\Psi)\!=\!\sqrt{2}A^{\top}\Psi\sigma(\Psi^{-1}Bx)\]
is 1-Lipschitz with 1-Lipschitz activation \(\sigma(\cdot)\) if \(\|2A^{\top}B\|\!\leq\!1\) holds. The condition is obtained by constructing a long orthogonal matrix using the Cayley transformation.
In addition to the above Lipschitz-constrained layers from prior work, we propose an approach to orthogonalize weights using the Cholesky decomposition. Suppose \(\Sigma\) is a symmetric positive definite matrix; then there exists a unique lower triangular matrix \(L\!=\!\mathrm{Cholesky}(\Sigma)\) such that \(LL^{\top}\!=\!\Sigma\). Then for a non-singular matrix \(V\), SolveTriangularSystem\(\big{(}\mathrm{Cholesky}(VV^{\top}),\!V\big{)}\) is orthogonal. The motivation for this Cholesky-based orthogonalization comes from the Gram-Schmidt process for obtaining an orthogonal matrix: SolveTriangularSystem\(\big{(}\mathrm{Cholesky}(VV^{\top}),\!V\big{)}\) gives the same result as applying the Gram-Schmidt process to \(V\), but is more numerically stable and efficient. Cholesky-based orthogonalization is typically twice as fast as the Cayley transformation for obtaining an orthogonal matrix. We propose the following 1-Lipschitz layer:
**Cholesky-Orthogonalized Residual Layer** Let \(V\!\in\!\mathbb{R}^{n\times n}\) be the parameter, and let \(W\) be the Cholesky-based orthogonalization of \(I\!+\!V\), where \(I\) is the identity matrix. The layer is formulated as \(f(x;\!V)\!=\!Wx\).
With this residual formulation (analogous to the residual dense layer proposed in Section 3.1), training remains effective when multiple such layers are stacked.
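A minimal PyTorch sketch of this layer is given below (our own illustration, not an official implementation; in practice one may also add a small diagonal jitter before the Cholesky factorization for numerical safety):

```python
import torch

def cholesky_orthogonalize(a: torch.Tensor) -> torch.Tensor:
    """Return W = L^{-1} a, where L L^T = a a^T; W is orthogonal for non-singular a."""
    sigma = a @ a.T  # optionally: sigma += eps * I for numerical stability
    chol = torch.linalg.cholesky(sigma)
    return torch.linalg.solve_triangular(chol, a, upper=False)

class CholeskyOrthogonalResidual(torch.nn.Module):
    """Sketch of the Cholesky-orthogonalized residual layer: f(x) = W x,
    with W the Cholesky-based orthogonalization of (I + V)."""

    def __init__(self, dim: int):
        super().__init__()
        self.v = torch.nn.Parameter(torch.zeros(dim, dim))  # W starts as identity

    def forward(self, x):
        eye = torch.eye(self.v.shape[0], device=self.v.device)
        w = cholesky_orthogonalize(eye + self.v)
        return torch.nn.functional.linear(x, w)
```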
Although some studies may find certain approaches can approximate certain functions more smoothly, there is no direct theory showing one method has general advantages over others for Lipschitz control. Thus we conduct a fair comparison of all of the above approaches to find the best method empirically. We note that it is also possible to combine various methods of Lipschitz control. Although we do not try all combinations, in our experiments, we use GloRo
regularization for convolutional layers while combining different Lipschitz control techniques for the dense layers. See Section 4.2 for details.
### Data Augmentation with Generative Models
Prior work (Hu et al., 2023) uses IDDPM (Nichol and Dhariwal, 2021), which obtains an FID of 3.27 on CIFAR-10, to generate samples. We would like to know whether certified robustness can be improved by using generated samples of higher quality. We use the elucidating diffusion model (EDM) (Karras et al., 2022) to generate new samples, which obtains an FID of 1.79 on CIFAR-10. For each dataset (CIFAR-10, CIFAR-100, and Tiny-ImageNet), we train the diffusion models on the corresponding training set using the settings recommended by EDM. Unless otherwise specified, the diffusion models are class-conditional, so the generated images have pseudo-labels.
We also train a standard (non-robust) classification model on each dataset. We use the ResNeXt101 (32x48d) model, weakly-supervised pre-trained on 940 million (\(224\!\times\!224\)) images (Mahajan et al., 2018). We freeze the backbone and only fine-tune the last classification layer on the training dataset. This model achieves 94%, 86%, and 82% test accuracy on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively. We use this classification model's prediction probability of the pseudo-label to score every generated image. The 20% of images with the lowest scores are filtered out.
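The filtering step can be sketched as follows (illustrative code of ours; the array shapes and the `drop_frac` parameter name are assumptions):

```python
import numpy as np

def filter_generated_samples(images, pseudo_labels, classifier_probs, drop_frac=0.2):
    """Keep generated images whose pseudo-label confidence, under a separately
    trained non-robust classifier, is not in the lowest `drop_frac` fraction.

    images: (N, ...) array, pseudo_labels: (N,) int array,
    classifier_probs: (N, num_classes) array of predicted probabilities.
    """
    scores = classifier_probs[np.arange(len(pseudo_labels)), pseudo_labels]
    threshold = np.quantile(scores, drop_frac)
    keep = scores >= threshold
    return images[keep], pseudo_labels[keep]
```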
## 4 Evaluation
We now present our evaluation, which includes an exploration of the axes of the design space discussed in Section 3. We begin by showcasing our final result--i.e., the best configuration discovered from our design space exploration--and compare its performance to the best VRA results reported in prior work (Section 4.1). To summarize, our ultimate configuration is based on the L12W512 LiResNet architecture proposed by Hu et al. (2023), i.e., its backbone contains 12 linear residual convolution blocks with 512 channels each. We modify this architecture by replacing the neck with Cholesky-orthogonalized dense layers (which were previously controlled using GloRo regularization), and adding 8 Cholesky-orthogonalized residual dense layers with 2048 neurons each to the end of the neck. This model is trained using our improved generated data augmentation pipeline (see Table 4 for details). We refer to this proposed configuration as "LiResNet++." We provide further details on the exact training parameters in Appendix B.
Next, in section 4.2, we provide an overview breaking down the various improvements we applied to reach our final configuration, followed by more detailed ablation studies comparing various Lipschitz control methods and data augmentation generation pipelines. Finally, in Section 4.3, we compare our method with randomized smoothing based methods, demonstrating that our work bridges the gap between deterministic and stochastic certification.
### Comparison with prior works
We compare LiResNet++ with the following works from the literature: GloRo Nets with TRADES loss, Cayley, Local-Lip Net, SOC with Householder and Certification Regularization (HH+CR), CPL, SLL, Sandwich and LiResNet, which are selected for having been shown to surpass other approaches. Table 1 presents the clean and certified accuracy at certification radii \(\epsilon=36/255\), \(72/255\), and \(108/255\). On every dataset, LiResNet++ achieves the highest clean accuracy and the highest certified
accuracy for all values of \(\epsilon\). On CIFAR-10/100, our model improves the certified accuracy at \(\epsilon\!=\!36\!/\!255\) by more than 8%. We also compare the empirical robustness of the proposed method with some recent work in Section A.
To date, Hu et al. (2023) is the only work to report results on ImageNet; however, they do not use generated data on ImageNet. We generated 2 million samples using guided diffusion to boost our model. Other settings are the same as those on CIFAR-10/100. With the improved model and generated data, we further improve the certified accuracy on ImageNet by 3.3%.
\begin{table}
\begin{tabular}{c l c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Method**} & \multicolumn{1}{c}{**Clean**} & \multicolumn{3}{c}{**VRA (\%) at \(\epsilon\)**} \\ \cline{3-6} & & **Acc. (\%)** & \(\frac{36}{255}\) & \(\frac{72}{255}\) & \(\frac{108}{255}\) \\ \hline \multirow{9}{*}{CIFAR-10} & GloRo (Leino et al., 2021) & 77.0 & 58.4 & - & - \\ & Local-Lip-B (+MaxMin) (Huang et al., 2021) & 77.4 & 60.7 & 39.0 & 20.4 \\ & Cayley Large (Trockman and Kolter, 2021) & 74.6 & 61.4 & 46.4 & 32.1 \\ & SOC 20 (Singla and Feizi, 2021) & 76.3 & 62.6 & 48.7 & 36.0 \\ & CPL XL (Meunier et al., 2022) & 78.5 & 64.4 & 48.0 & 33.0 \\ & AOL Large (Prach and Lampert, 2022) & 71.6 & 64.0 & 56.4 & 49.0 \\ & SLL X-Large (Araujo et al., 2023) & 73.3 & 65.8 & 58.4 & 51.3 \\ & LiResNet (+DDPM) (Hu et al., 2023) & 82.1 & 70.0 & - & - \\ \cline{2-6} & LiResNet++ (Ours, +DDPM) & **87.0** & **78.1** & **66.6** & **53.5** \\ \hline \multirow{8}{*}{CIFAR-100} & Cayley Large (Trockman and Kolter, 2021) & 43.3 & 29.2 & 18.8 & 11.0 \\ & SOC 20 (Singla and Feizi, 2021) & 47.8 & 34.8 & 23.7 & 15.8 \\ & CPL XL (Meunier et al., 2022) & 47.8 & 33.4 & 20.9 & 12.6 \\ & AOL Large (Prach and Lampert, 2022) & 43.7 & 33.7 & 26.3 & 20.7 \\ & SLL X-Large (Araujo et al., 2023) & 46.5 & 36.5 & 29.0 & 23.3 \\ & Sandwich (Wang and Manchester, 2023) & 46.3 & 35.3 & 26.3 & 20.3 \\ & LiResNet (+DDPM) (Hu et al., 2023) & 55.5 & 41.5 & - & - \\ \cline{2-6} & LiResNet++ (Ours, +DDPM) & **62.1** & **50.1** & **38.5** & **29.0** \\ \hline \multirow{6}{*}{TinyImageNet} & GloRo (Leino et al., 2021) & 35.5 & 22.4 & - & - \\ & Local-Lip-B (+MaxMin) (Huang et al., 2021) & 36.9 & 23.4 & 12.7 & 6.1 \\ \cline{1-1} & SLL X-Large (Araujo et al., 2023) & 32.1 & 23.2 & 16.8 & 12.0 \\ \cline{1-1} & Sandwich (Wang and Manchester, 2023) & 33.4 & 24.7 & 18.1 & 13.4 \\ \cline{1-1} & LiResNet (+DDPM) (Hu et al., 2023) & 46.7 & 33.6 & - & - \\ \cline{1-1} & LiResNet++ (Ours, +DDPM) & **48.4** & **37.0** & **26.8** & **18.6** \\ \hline \multirow{2}{*}{ImageNet} & LiResNet (Hu et al., 2023) & 45.6 & 35.0 & - & - \\ \cline{1-1} & LiResNet++ (Ours, +DDPM) & **49.0** & **38.3** & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: This table presents the clean and verified robust accuracy (VRA) of several concurrent works and our LiResNet++ networks on CIFAR-10/100, TinyImageNet and ImageNet datasets.
### Ablation Studies
**Ablation study on the dense layer choice.** Table 3 shows that the model performs better when the dense layer weights are parameterized as orthogonal matrices (Cayley, Matrix Exp, Cholesky). The Cholesky-based Orthogonal Layer and Matrix Exponential perform similarly, but Matrix Exponential is slower: it takes 32.4, 37.8, and 51.2 seconds to train one epoch with Cholesky, Cayley, and Matrix Exp, respectively, on the CIFAR-10 dataset using the same A100 machine. The Cholesky-based Orthogonal Layer is therefore the best Lipschitz control choice for the dense layers considering both performance and efficiency.
From Figure 2, we see that using a regular dense layer and applying GloRo-style Lipschitz regularization for the final layers performs comparatively poorly in the early training stages but surpasses SLL and Sandwich orthogonalization after \(\sim\)300-600 epochs. Because the dense layers are not constrained to be orthogonal in this case, the model requires more steps to learn a nearly orthogonal transformation. Note that this is particularly pronounced in the case of large dense layers, as opposed to the convolutional layers, which also rely on GloRo regularization. Leino (2022) has shown that a reliable gradient signal for orthogonalizing a linear transformation requires more power iterations as the dimension of the eigenvector increases. For large dense layers, the eigenvectors are high-dimensional, as compared to those of convolutions, which depend only on the size of the kernel (which is typically small). Thus we expect GloRo regularization to converge more slowly on dense layers than on convolutional ones.
**Ablation study on the generated data augmentation.** As shown in Table 2, a better pipeline for applying generated data augmentation can improve certified robustness significantly. Table 4 gives a detailed study of the effects of different pipelines. Switching to a better generator provides consistent improvements on both datasets, regardless of whether the sampled images are filtered. Using a stronger classification model to remove the 20% of samples with the lowest pseudo-label confidence also helps. On CIFAR-100 the improvement is more significant: CIFAR-100 has more categories, so the diffusion model generates a higher proportion of images with mismatched pseudo-labels, and training on these labels can harm robust training. In the second part of the table, we study the ratio of real to generated samples in a batch. We find that seeing more generated samples can significantly improve the model's certified robustness.
Figure 2: Certification accuracy (i.e. VRA) of LiResNet++ on CIFAR-10 using different dense layers during training.

However, if we only use generated samples to train the model (real/generated sample ratio = 0 : 1), the model suffers from overfitting and performance decreases. From this experiment, we believe the reason generated data helps is not that the generated data are of higher quality, but that **generated data are easier to classify on average** (training accuracy converges faster when training on generated data). As we mentioned before, Lipschitz-based training suffers from underfitting, and much of the model capacity is spent memorizing hard samples, including outliers and samples very close to the decision boundary. Learning from these hard samples does not improve robustness, since these samples are not naturally robust. When trained with generated samples (which are easier), the fraction of hard samples in the dataset decreases and the model can focus more on learning a robust decision boundary. By contrast, generated data do not always improve standard accuracy (Azizi et al., 2023), even when a state-of-the-art diffusion model is used: in standard training, neural networks can fit hard samples more easily, and adding too many generated samples (six times the original training set in Azizi et al. (2023)'s setting) decreases test accuracy, since most hard samples help generalization and their proportion decreases as generated data are added.
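The real/generated mixing studied in Table 4 amounts to fixing the composition of every batch; a sketch of this sampling scheme (our own illustrative code, assuming map-style datasets of (image, label) pairs) is:

```python
import torch

def mixed_batch(real_ds, gen_ds, batch_size=256, gen_per_real=3):
    """Draw one training batch with a fixed real:generated ratio
    (1:3 here, the best setting in Table 4)."""
    n_real = batch_size // (1 + gen_per_real)
    n_gen = batch_size - n_real
    real_idx = torch.randint(len(real_ds), (n_real,))
    gen_idx = torch.randint(len(gen_ds), (n_gen,))
    xs = torch.stack([real_ds[int(i)][0] for i in real_idx] +
                     [gen_ds[int(i)][0] for i in gen_idx])
    ys = torch.tensor([real_ds[int(i)][1] for i in real_idx] +
                      [gen_ds[int(i)][1] for i in gen_idx])
    return xs, ys
```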
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Sample & Filtered by & Real/Generated & CIFAR-10 & CIFAR-100 \\ Generator & classification score & sample ratio in a batch & & \\ \hline IDDPM & ✗ & & 75.6 & 46.5 \\ IDDPM & ✓ & 1 : 1 & 75.9 & 47.6 \\ EDM & ✗ & & 76.1 & 47.3 \\ EDM & ✓ & & 76.5 & 48.5 \\ \hline & & 1 : 2 & 77.1 & 49.2 \\ EDM & ✓ & 1 : 3 & 78.1 & 50.1 \\ & & 1 : 0 & 69.2 & 39.0 \\ & & 0 : 1 & 75.4 & 47.6 \\ \hline \hline \end{tabular}
\end{table}
Table 4: This table shows the effect of different generated data augmentation pipelines. VRA of LiResNet++ at radius \(\epsilon\!=\!36\!/\!255\) is reported.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Layer Choice**} & \multirow{2}{*}{**Clean**} & \multicolumn{2}{c}{**VRA (\%) at \(\epsilon\)**} \\ \cline{3-6} & & & \(\frac{36}{255}\) & \(\frac{72}{255}\) & \(\frac{108}{255}\) \\ \hline \multirow{6}{*}{CIFAR-10} & Regular Dense Layer (w/ GloRo regularization) & 86.1 & 77.0 & 65.6 & 52.4 \\ & Cayley Dense Layer & 86.3 & 77.2 & 65.5 & 52.1 \\ & Almost Orthogonal Layer (AOL) & 85.1 & 75.4 & 63.7 & 50.0 \\ & SDP-based Lipschitz Layer (SLL) & 85.5 & 75.6 & 65.6 & 52.3 \\ & Sandwich Layer & 85.4 & 76.1 & 64.0 & 51.0 \\ & Matrix Exponential & **87.0** & 78.1 & **66.7** & **53.6** \\ & Cholesky-based Orthogonal Layer & **87.0** & **78.1** & 66.6 & 53.5 \\ \hline \multirow{6}{*}{CIFAR-100} & Regular Dense Layer (w/ GloRo regularization) & 60.4 & 48.2 & 36.9 & 27.0 \\ & Cayley Dense Layer & 61.3 & 49.1 & 38.0 & 28.3 \\ \cline{1-1} & SDP-based Lipschitz Layer (SLL) & 61.5 & 48.9 & 37.3 & 27.3 \\ \cline{1-1} & Sandwich Layer & 61.3 & 48.3 & 36.6 & 27.1 \\ \cline{1-1} & Matrix Exponential & 61.7 & 49.8 & **38.6** & 28.8 \\ \cline{1-1} & Cholesky-based Orthogonal Layer & **62.1** & **50.1** & 38.5 & **29.0** \\ \hline \hline \end{tabular}
\end{table}
Table 3: This table presents the clean accuracy and VRA using different dense layers on CIFAR-10/100 datasets. All other settings are the same.
### Comparison with Randomized Smoothing
To date, the methods that achieve the best certified performance are derived from randomized smoothing (RS) (Cohen et al., 2019). As we discussed, Lipschitz-based methods have advantages over RS in terms of their efficiency and the nature of the guarantee that they provide. We provide the first comparison between these methods in Table 5. Notably, we are able to outperform several recent RS-based approaches on _all_ certification radii.
## 5 Conclusion
In this paper, our primary objective is to enhance the certified robustness of neural networks. We contend that a significant problem of existing Lipschitz-based models is their limited capacity, which hinders their ability to even overfit small datasets. To address this challenge, we have reexamined network architectures and the basic building blocks used to control the network's Lipschitz constant, and have proposed three solutions to mitigate this issue. First, we show that a combination of dense layers and convolutions can effectively expand the model's capacity. Second, we introduce the Cholesky Residual Layer, which serves as an efficient building block for achieving orthogonal weights. Third, we have explored an improved pipeline for utilizing generated data to enhance Lipschitz-based training. Through extensive experiments, we have demonstrated the effectiveness of our approach. Our final results have pushed the boundaries of deterministic certified accuracy on the CIFAR-10/100 datasets, surpassing the state of the art by up to 8.5 percentage points. Our method opens up a promising avenue to bridge the gap between probabilistic and deterministic certification methods.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{VRA (\%) measured at at \(\varepsilon\)} \\ \cline{2-5} _method_ & 0.25 & 0.5 & 0.75 & 1.0 \\ \hline RS (Cohen et al., 2019) & \({}^{(75.0)}61.0\) & \({}^{(75.0)}43.0\) & \({}^{(65.0)}32.0\) & \({}^{(66.0)}22.0\) \\ SmoothAdv (Salman et al., 2019) & \({}^{(75.6)}67.4\) & \({}^{(75.6)}57.6\) & \({}^{(74.8)}47.8\) & \({}^{(57.4)}38.3\) \\ SmoothAdv (Salman et al., 2019) & \({}^{(84.3)}74.9\) & \({}^{(80.1)}63.4\) & \({}^{(80.1)}**51.9**\) & \({}^{(62.2)}39.6\) \\ Consistency (Jeong and Shin, 2020) & \({}^{(77.8)}68.8\) & \({}^{(75.8)}58.1\) & \({}^{(72.9)}48.5\) & \({}^{(52.3)}37.8\) \\ MACER (Zhai et al., 2020) & \({}^{(81.0)}71.0\) & \({}^{(81.0)}59.0\) & \({}^{(66.0)}46.0\) & \({}^{(66.0)}38.0\) \\ DRT (Yang et al., 2021) & \({}^{(81.5)}70.4\) & \({}^{(72.6)}60.2\) & \({}^{(71.9)}50.5\) & \({}^{(56.1)}**39.8**\) \\ SmoothMix (Jeong et al., 2021) & \({}^{(77.1)}67.9\) & \({}^{(77.1)}57.9\) & \({}^{(74.2)}47.7\) & \({}^{(61.8)}37.2\) \\ Denoised (Salman et al., 2020) & \({}^{(72.0)}56.0\) & \({}^{(62.0)}41.0\) & \({}^{(62.0)}28.0\) & \({}^{(44.0)}19.0\) \\ DDS (Carlini et al., 2022) & \({}^{(91.2)}**79.3**\) & \({}^{(91.2)}**65.5**\) & \({}^{(87.3)}48.7\) & \({}^{(81.5)}35.5\) \\ \hline LiResNet++ (Ours) & \({}^{(87.0)}69.5\) & \({}^{(74.3)}52.2\) & \({}^{(70.0)}41.7\) & \({}^{(68.1)}35.1\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: This table presents the clean and certificated robust accuracy of several _probabilistic_ works and our _deterministic_ LiResNet++ on CIFAR-10 dataset. |
2303.15157 | Brownian motion of a particle with higher-derivative dynamics | The Brownian motion of a particle with higher-derivative dynamics (HDD)
coupling with a bath consisting of harmonic oscillators is investigated. The
Langevin equation and corresponding Fokker-Planck equation for the Brownian
motion of the HDD particle are derived. As a case study, we particularly
consider a stochastic Pais-Uhlenbeck oscillator. It is found that the Boltzmann
distribution is pathological while this distribution is the steady solution to
the Fokker-Planck equation. | Z. C. Tu | 2023-03-27T12:49:00Z | http://arxiv.org/abs/2303.15157v3 | # Brownian motion of a particle with higher-derivative dynamics
###### Abstract
The Brownian motion of a particle with higher-derivative dynamics (HDD) coupled to a bath consisting of harmonic oscillators is investigated. The Langevin equation and corresponding Fokker-Planck equation for the Brownian motion of the HDD particle are derived. As a case study, we particularly consider a stochastic Pais-Uhlenbeck oscillator. It is found that the Boltzmann distribution is pathological while this distribution is the steady solution to the Fokker-Planck equation. [Note: This manuscript is a translation of a Chinese paper contributed for the 100th anniversary of the Department of Physics, Beijing Normal University.]
## I Introduction
The fundamental equations of motion in classical mechanics are governed by Newton's second law (force = mass \(\times\) acceleration). Acceleration is the second derivative of coordinates with respect to time, and force in classical mechanics usually does not contain derivatives higher than velocity (the first derivative of coordinates with respect to time). Thus, the fundamental equations of motion in classical mechanics are usually second order differential equations (excluding derivatives higher than second derivatives of coordinates with respect to time). Correspondingly, the Lagrangian is only a function of generalized coordinates, generalized velocities and time, which does not contain the second or higher derivatives of generalized coordinates. Theoretically, it is natural for us to extend the equations of motion to the situations where the third or higher derivatives are included. We name equations of motion containing higher (larger than second) derivatives of coordinates with respect to time as higher-derivative dynamics (HDD).
Discussions on HDD systems may be traced back to Ostrogradsky [1]. He discussed a system where the non-degenerate Lagrangian contains second or higher derivatives of generalized coordinates with respect to time (the corresponding equations of motion contain fourth or higher derivatives). He found that the corresponding Hamiltonian has no lower bound, resulting in dynamical instability of the system. Lorentz, Abraham, and Dirac discussed the motion of electrons with radiation, and found that the force due to radiation contains third derivatives of coordinates with respect to time [2; 3]. Dirac's theory of electron motion was generalized by Bhabha to describe the motion of neutrons [4]. Pais and Uhlenbeck discussed an oscillator governed by a fourth-order differential equation [5], which became the basis for subsequent higher-derivative field theory. Chang [6] analyzed the self-acceleration behavior of Dirac's theory of electron motions and Bhabha's theory of neutron motions. He also compared the quantization equations respectively derived from the Ostrogradsky method and the Pais-Uhlenbeck method, and found that both methods lead to equivalent results [7]. As we know, the idea of HDD appears in a variety of situations such as Podolsky's generalized electrodynamics [8; 9], Polyakov's superstring theory [10], modified gravitational theory [11; 12; 13; 14], Timoshenko's beam theory [15], non-Hermitian physics [16; 17], Starobinsky's inflation theory [18; 19], and so on.
The extension of deterministic dynamics to stochastic dynamics is another fruitful direction. At the beginning of the twentieth century, Langevin proposed a stochastic equation to describe the random motion of a Brownian particle in liquid, which became known as the Langevin equation [20]. For the sake of simplicity, but without loss of generality, we will consider only one-dimensional motion in this paper. The Langevin equation can be expressed as
\[m\frac{\mathrm{d}^{2}x}{\mathrm{d}t^{2}}=-\frac{\partial V}{\partial x}-\mu \frac{\mathrm{d}x}{\mathrm{d}t}+\xi(t), \tag{1}\]
where \(m\) and \(x\) represent the mass and the position of the Brownian particle, respectively, \(t\) is the time variable, and \(V\) represents a deterministic external potential acting on the particle. Langevin wrote the above equation intuitively on the basis of Newton's second law. The force applied to the particle by the liquid molecules is phenomenologically decomposed into two terms. One is the average effect due to collisions of liquid molecules on the Brownian particle, which appears as a deterministic damping force \(-\mu\mathrm{d}x/\mathrm{d}t\) with \(\mu\) being a damping coefficient. The remaining fluctuating effect due to collisions of liquid molecules is expressed as a random force \(\xi(t)\), which is usually assumed to be Gaussian noise with zero mean and a very short correlation time. That is,
\[\langle\xi(t)\rangle=0, \tag{2}\]
and
\[\langle\xi(t)\xi(t^{\prime})\rangle=2\mu\mathrm{k_{B}}T\delta(t-t^{\prime}), \tag{3}\]
where \(\mathrm{k_{B}}\) is the Boltzmann constant, and \(T\) is the temperature of the liquid. \(\delta(t)\) represents the Dirac \(\delta\)-function. The Langevin equation has become a cornerstone of stochastic thermodynamics [21; 22; 23], a field of statistical physics for small systems that has emerged in recent years.
How to derive the Langevin equation from the microscopic level has attracted much attention from many scientists. One of the most significant contributions is Zwanzig's scheme [24; 25], which is essentially a generalized and simplified version of the work by Ford, Kac, and Mazur [26]. Zwanzig considered a particle with mass \(m\) embedded in a bath consisting of harmonic oscillators, with the particle linearly coupled to the oscillators. The states of the harmonic oscillators are assumed to initially satisfy the Boltzmann distribution at temperature \(T\). When the bath variables are integrated out, Zwanzig obtained an equation of motion for the particle, which is exactly the Langevin equation with Gaussian noise. From the perspective of theoretical extension, it is valuable to discuss the motion of a particle with HDD coupled to a bath consisting of harmonic oscillators. We will use this model as a starting point to discuss Brownian motion of HDD particles. In this work, we will derive the Langevin equation and its corresponding Fokker-Planck equation for the Brownian motion of a HDD particle. The rest of this paper is organized as follows. In Section II, we briefly introduce Ostrogradsky's construction of HDD. In Sec. III, we generalize the Langevin equation for a HDD particle using Zwanzig's scheme. In Sec. IV, we derive the corresponding Fokker-Planck equation for a HDD particle. In Sec. V, we discuss stochastic Pais-Uhlenbeck oscillators as a case study. A brief summary is given in Sec. VI.
## II Ostrogradsky's construction on HDD
In this section, we introduce Ostrogradsky's approach to HDD [1]. For convenience, we consider a system with a single degree of freedom. Let \(x_{0}\) represent the coordinate of the particle. Suppose that the Lagrangian of this system may be expressed as
\[L=L(t,x_{0},x_{1},x_{2},\cdots,x_{N}), \tag{4}\]
where \(x_{n}\equiv\mathrm{d}x_{n-1}/\mathrm{d}t\) (\(n=1,2,\cdots,N\)) represents the \(n-\)th order derivative of coordinate \(x_{0}\) with respect to time \(t\). Via the variational calculus, the Euler-Lagrange equation corresponding to Lagrangian (4) can be derived as
\[\sum_{n=0}^{N}\left(-\frac{\mathrm{d}}{\mathrm{d}t}\right)^{n}\frac{\partial L }{\partial x_{n}}=0. \tag{5}\]
Define generalized momentum
\[p_{n}=\sum_{k=n+1}^{N}\left(-\frac{\mathrm{d}}{\mathrm{d}t}\right)^{k-n-1} \frac{\partial L}{\partial x_{k}} \tag{6}\]
where \(n=0,1,2,\cdots,N-1\). In particular, when \(n=0\), the above equation leads to
\[p_{0}=\sum_{k=1}^{N}\left(-\frac{\mathrm{d}}{\mathrm{d}t}\right)^{k-1}\frac{ \partial L}{\partial x_{k}}, \tag{7}\]
from which we easily see that
\[\frac{\mathrm{d}p_{0}}{\mathrm{d}t}=-\sum_{k=1}^{N}\left(-\frac{\mathrm{d}}{ \mathrm{d}t}\right)^{k}\frac{\partial L}{\partial x_{k}}. \tag{8}\]
Thus Euler-Lagrange equation (5) can be rewritten in a new form:
\[\frac{\mathrm{d}p_{0}}{\mathrm{d}t}=\frac{\partial L}{\partial x_{0}}. \tag{9}\]
Now let us construct the Hamiltonian. Taking \(n=N-1\) in Eq. (6), we achieve
\[p_{N-1}=\frac{\partial L}{\partial x_{N}}, \tag{10}\]
which is a function of \(t,x_{0},x_{1},\cdots,x_{N}\). From the above equation, we may in principle solve for \(x_{N}\), which is then a function of \(t,x_{0},x_{1},\cdots,x_{N-1},p_{N-1}\) and may be formally expressed as
\[x_{N}=\varphi\left(t,x_{0},x_{1},\ldots,x_{N-1},p_{N-1}\right). \tag{11}\]
Then, using the Legendre transformation, we obtain the Hamiltonian:
\[H=\sum_{n=0}^{N-1}p_{n}x_{n+1}-L(t,x_{0},x_{1},x_{2},\cdots,x_{N}), \tag{12}\]
where \(x_{N}\) satisfies Eq. (11). Thus the Hamiltonian \(H\) is a function of \(t,x_{0},x_{1},\cdots,x_{N-1},p_{0},p_{1},\cdots,p_{N-1}\). Having the Hamiltonian, the canonical equations may be derived as
\[\dot{x}_{n} =\frac{\partial H}{\partial p_{n}}=x_{n+1} \tag{13}\] \[\dot{p}_{n} =-\frac{\partial H}{\partial x_{n}}=\frac{\partial L}{\partial x_ {n}}-p_{n-1}, \tag{14}\]
where \(n=0,1,2,\cdots,N-1\) and \(p_{-1}\equiv 0\). An overdot denotes the derivative with respect to time.
## III Langevin equation with HDD
In this section, we will discuss the Brownian motion of a particle with HDD and derive the Langevin equation with HDD following Zwanzig's scheme [24; 25].
Consider a large particle embedded in a medium consisting of small particles. The motion of the large particle itself is governed by HDD, while the motions of the small particles are governed by second-derivative dynamics. For simplicity, the small particles are viewed as a bath of harmonic oscillators. The Hamiltonian of the whole system is
\[\mathcal{H}=H+H_{B}, \tag{15}\]
where \(H\) represents the Hamiltonian (12) of HDD. \(H_{B}\) represents the Hamiltonian of the harmonic oscillators in
the bath and their linear couplings with the HDD particle, which can be expressed as
\[H_{B}=\frac{1}{2}\sum_{j}\left[P_{j}^{2}+\omega_{j}^{2}\left(Q_{j}-\frac{\gamma_{j }}{\omega_{j}^{2}}x_{0}\right)^{2}\right], \tag{16}\]
where \(Q_{j}\) and \(P_{j}\) are the generalized coordinate and generalized momentum of the \(j\)-th oscillator, respectively. \(\omega_{j}\) represents the frequency of the \(j\)-th harmonic oscillator. \(\gamma_{j}\) represents the coupling strength of the \(j\)-th harmonic oscillator to the HDD particle. \(x_{0}\) represents the coordinate of the HDD particle. Note that the mass of each oscillator has been set to unity.
According to the Hamilton equations, we obtain the equations of motion for the system, which read
\[\dot{Q}_{j} =P_{j}, \tag{17}\] \[\dot{P}_{j} =-\omega_{j}^{2}Q_{j}+\gamma_{j}x_{0},\] (18) \[\dot{x}_{n} =x_{n+1},\] (19) \[\dot{p}_{n} =\frac{\partial L}{\partial x_{n}}-p_{n-1}+\delta_{n0}\sum_{j} \gamma_{j}\left(Q_{j}-\frac{\gamma_{j}}{\omega_{j}^{2}}x_{0}\right), \tag{20}\]
where \(n=0,1,2,\cdots,N-1\) and \(p_{-1}\equiv 0\).
From Eqs. (17) and (18), we obtain
\[Q_{j}-\frac{\gamma_{j}}{\omega_{j}^{2}}x_{0} =\left[Q_{j}(0)-\frac{\gamma_{j}}{\omega_{j}^{2}}x_{0}(0)\right] \cos\omega_{j}t+P_{j}(0)\frac{\sin\omega_{j}t}{\omega_{j}}\] \[-\frac{\gamma_{j}}{\omega_{j}^{2}}\int_{0}^{t}\ \mathrm{d}sx_{1}(s)\cos \omega_{j}(t-s), \tag{21}\]
where \(x_{0}(0)\), \(Q_{j}(0)\) and \(P_{j}(0)\) are the initial values of \(x_{0}\), \(Q_{j}\) and \(P_{j}\) at time \(t=0\). To obtain the above result, we have used integration by parts.
Substituting Eq. (21) into Eq. (20) with \(n=0\), we arrive at
\[\dot{p}_{0}=\frac{\partial L}{\partial x_{0}}-\int_{0}^{t}\mathrm{d}sK(t-s)x_ {1}(s)+\xi(t), \tag{22}\]
where the kernel function \(K(t)\) and the "noise" term \(\xi(t)\) are respectively expressed as
\[K(t)=\sum_{j}\frac{\gamma_{j}^{2}}{\omega_{j}^{2}}\cos\omega_{j}t, \tag{23}\]
and
\[\xi(t) =\sum_{j}\gamma_{j}P_{j}(0)\frac{\sin\omega_{j}t}{\omega_{j}}\] \[+\sum_{j}\gamma_{j}\left[Q_{j}(0)-\frac{\gamma_{j}}{\omega_{j}^{2 }}x_{0}(0)\right]\cos\omega_{j}t. \tag{24}\]
Substituting expression (7) into Eq. (22), we arrive at a generalized Langevin equation:
\[\sum_{n=0}^{N}\left(-\frac{\mathrm{d}}{\mathrm{d}t}\right)^{n}\frac{\partial L }{\partial x_{n}}-\int_{0}^{t}\ \mathrm{d}sK(t-s)x_{1}(s)+\xi(t)=0. \tag{25}\]
Suppose that the bath is initially in equilibrium at temperature \(T\). The states of the oscillators then satisfy the Boltzmann distribution \(\rho_{eq}\left(Q_{j}(0),P_{j}(0)\right)\propto\mathrm{e}^{-H_{B}/\mathrm{k} _{B}T}\). Combining this distribution with Eq. (24), one arrives at Eq. (2) and a generalized fluctuation-dissipation relation:
\[\langle\xi(t)\xi(s)\rangle=\mathrm{k}_{\mathrm{B}}TK(t-s). \tag{26}\]
Assume that the frequency spectrum is continuous with density \(g(\omega)\). Then the summation \(\sum_{j}\) in Eq. (23) may be replaced by the integral \(N_{o}\int_{0}^{\infty}g(\omega)\,\mathrm{d}\omega\), where \(N_{o}\) is the number of oscillators, which is usually large. Assume further that the coupling between each oscillator and the HDD particle is constant and uniform, so that \(\gamma_{j}^{2}\) may be replaced by \(\gamma/N_{o}\). Then kernel function (23) may be expressed as
\[K(t)=\gamma\int_{0}^{\infty}\mathrm{d}\omega g(\omega)\frac{\cos\omega t}{ \omega^{2}}. \tag{27}\]
Taking Debye-type spectrum distribution \(g=(2\mu/\gamma\pi)\omega^{2}\), the above kernel function is transformed into
\[K(t)=2\mu\delta(t). \tag{28}\]
Then Eq. (26) reduces to Eq. (3); that is, the "noise" term \(\xi(t)\) is Gaussian white noise. Note that \(\delta(t)\) is an even function and vanishes for \(t\neq 0\). One can verify \(\int_{0}^{t}\ \mathrm{d}s\delta(t-s)x_{1}(s)=\int_{t}^{\infty}\mathrm{d}s \delta(t-s)x_{1}(s)=\frac{1}{2}\int_{0}^{\infty}\mathrm{d}s\delta(t-s)x_{1}(s)= \frac{1}{2}x_{1}(t)\). Thus the generalized Langevin equation (25) reduces to
\[\sum_{n=0}^{N}\left(-\frac{\mathrm{d}}{\mathrm{d}t}\right)^{n}\frac{\partial L }{\partial x_{n}}-\mu x_{1}+\xi(t)=0. \tag{29}\]
where the noise \(\xi(t)\) satisfies Eqs. (2) and (3). The above equation is called the Langevin equation with HDD. Relative to the Euler-Lagrange equation (5), this equation has an additional damping term \(-\mu x_{1}\) and noise term \(\xi(t)\). Compared with the Langevin equation (1), we replace \(-m\mathrm{d}^{2}x/\mathrm{d}t^{2}\) and \(-\partial V/\partial x\) with \(\sum_{n=1}^{N}(-\mathrm{d}/\mathrm{d}t)^{n}\partial L/\partial x_{n}\) and \(\partial L/\partial x_{0}\), respectively. It is not hard to verify that, for the case of \(N=1\) with \(L=(m/2)x_{1}^{2}-V(t,x_{0})\), where the dynamics is second-order (i.e., Newtonian mechanics), Eq. (29) reduces to Eq. (1).
## IV Fokker-Planck equation with HDD
As we know, there is a Fokker-Planck equation corresponding to the Langevin equation (1) in non-equilibrium statistical mechanics, which describes the evolution of the distribution function in phase space. In this section, we will derive the Fokker-Planck equation corresponding to the Langevin equation (29) with HDD.
Similar to the discussion in the above section, Eq. (22) can be transformed into
\[\dot{p}_{0}=\frac{\partial L}{\partial x_{0}}-\mu x_{1}+\xi(t), \tag{30}\]
which reminds us that Eq. (20) can be rewritten as
\[\dot{p}_{n}=\frac{\partial L}{\partial x_{n}}-p_{n-1}+\delta_{n0}\xi(t), \tag{31}\]
where we have redefined \(p_{-1}\equiv\mu x_{1}\). The above equation and Eq. (19) describe the evolution of microscopic trajectories in phase space \(\{\mathbf{\Gamma}\}=\{(x_{0},x_{1},\dots,x_{N-1},p_{0},p_{1},\dots,p_{N-1})\}\).
Let \(f(t,\mathbf{\Gamma})\mathrm{d}\mathbf{\Gamma}\) represent the probability of the HDD particle appearing within the region between \(\mathbf{\Gamma}\) and \(\mathbf{\Gamma}+\mathrm{d}\mathbf{\Gamma}\) in the phase space. According to the conservation law of probability flow [27], we have
\[\frac{\partial f}{\partial t}=-\sum_{n=0}^{N-1}\left[\frac{\partial(\dot{x}_{n }f)}{\partial x_{n}}+\frac{\partial(\dot{p}_{n}f)}{\partial p_{n}}\right]. \tag{32}\]
Substituting Eqs. (19) and (31) into the above equation, we arrive at
\[\frac{\partial f}{\partial t}= -\sum_{n=0}^{N-1}\left\{\frac{\partial}{\partial x_{n}}(x_{n+1}f) +\frac{\partial}{\partial p_{n}}\left[\left(\frac{\partial L}{\partial x_{n} }-p_{n-1}\right)f\right]\right\}\] \[-\xi(t)\frac{\partial f}{\partial p_{0}} \tag{33}\]
The observable distribution function of the HDD particle in the phase space is defined as the average of \(f\) with respect to the noise, that is,
\[\rho(t,\mathbf{\Gamma})=\langle f(t,\mathbf{\Gamma})\rangle_{\xi}. \tag{34}\]
Following Reichl's derivation [27] of the Fokker-Planck equation from the Langevin equation, we can derive the Fokker-Planck equation corresponding to the Langevin equation (29) [equivalently, Eqs. (19) and (31)] with HDD from Eqs. (33) and (34). The derived Fokker-Planck equation with HDD is
\[\frac{\partial\rho}{\partial t}= -\sum_{n=0}^{N-1}\left\{\frac{\partial}{\partial x_{n}}(x_{n+1} \rho)+\frac{\partial}{\partial p_{n}}\left[\left(\frac{\partial L}{\partial x _{n}}-p_{n-1}\right)\rho\right]\right\}\] \[+\mu k_{B}T\frac{\partial^{2}\rho}{\partial p_{0}^{2}} \tag{35}\]
As an example, we consider the case of \(N=1\) with \(L=(m/2)x_{1}^{2}-V(t,x_{0})\). The generalized momentum is \(p_{0}=\partial L/\partial x_{1}=mx_{1}\). Then \(x_{1}=p_{0}/m\) and \(p_{-1}=\mu x_{1}=\mu p_{0}/m\). Thus the above Fokker-Planck equation (35) with HDD reduces to the well-known Fokker-Planck equation,
\[\frac{\partial\rho}{\partial t}=-\frac{p_{0}}{m}\frac{\partial\rho}{\partial x _{0}}+\frac{\partial}{\partial p_{0}}\left[\left(\frac{\partial V}{\partial x _{0}}+\mu\frac{p_{0}}{m}\right)\rho\right]+\mu k_{B}T\frac{\partial^{2}\rho}{ \partial p_{0}^{2}}, \tag{36}\]
corresponding to the Langevin equation (1).
## V Stochastic Pais-Uhlenbeck oscillator
In this section, we will discuss stochastic Pais-Uhlenbeck oscillators as a case study. The Lagrangian of the Pais-Uhlenbeck oscillator can be expressed as [5; 16]
\[L=\frac{Y}{2}[x_{2}^{2}-(\omega_{1}^{2}+\omega_{2}^{2})x_{1}^{2}+\omega_{1}^{ 2}\omega_{2}^{2}x_{0}^{2}], \tag{37}\]
where the frequencies \(\omega_{1}\) and \(\omega_{2}\) are independent of time and \(Y\) is a constant. The above Lagrangian corresponds to the case of \(N=2\); thus the corresponding equation of motion is fourth-order. Substituting Eq. (37) into Eq. (29), we obtain a fourth-order Langevin equation
\[Y\left[\frac{\mathrm{d}^{4}x_{0}}{\mathrm{d}t^{4}}+(\omega_{1}^{2}+\omega_{2} ^{2})\frac{\mathrm{d}^{2}x_{0}}{\mathrm{d}t^{2}}+\omega_{1}^{2}\omega_{2}^{2} x_{0}\right]-\mu\frac{\mathrm{d}x_{0}}{\mathrm{d}t}+\xi(t)=0 \tag{38}\]
which contains both the damping term \(-\mu\mathrm{d}x_{0}/\mathrm{d}t\) and the noise term \(\xi(t)\). An oscillator described by the above equation is called a stochastic Pais-Uhlenbeck oscillator. To the best of our knowledge, Nesterenko first discussed the Pais-Uhlenbeck oscillator with a damping term [28], while Urenda-Cazares et al. discussed the Pais-Uhlenbeck oscillator with a noise term [29]. They wrote the corresponding equations phenomenologically, without microscopic derivations. In the above discussion, we have filled this gap.
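For readers who wish to explore such trajectories numerically, a simple Euler-Maruyama integration of the equivalent first-order system, Eqs. (19) and (31), can be sketched as follows. This is our own illustrative code; the parameter values are placeholders, and since the corresponding Hamiltonian is unbounded from below, trajectories may be dynamically unstable, which this sketch can be used to examine.

```python
import numpy as np

def simulate_pu(Y=1.0, w1=1.0, w2=2.0, mu=0.5, kT=1.0, dt=1e-3, steps=100_000, seed=0):
    """Euler-Maruyama integration of Eqs. (19) and (31) for the stochastic
    Pais-Uhlenbeck oscillator (equivalent to the fourth-order Eq. (38))."""
    rng = np.random.default_rng(seed)
    x0 = x1 = p0 = p1 = 0.0
    traj = np.empty((steps, 4))
    for i in range(steps):
        # discretized Gaussian noise with <xi(t)xi(t')> = 2 mu kT delta(t-t')
        noise = rng.normal(0.0, np.sqrt(2 * mu * kT * dt))
        x0_new = x0 + x1 * dt
        x1_new = x1 + (p1 / Y) * dt
        p0_new = p0 + (Y * w1**2 * w2**2 * x0 - mu * x1) * dt + noise
        p1_new = p1 + (-Y * (w1**2 + w2**2) * x1 - p0) * dt
        x0, x1, p0, p1 = x0_new, x1_new, p0_new, p1_new
        traj[i] = (x0, x1, p0, p1)
    return traj
```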
According to Eq. (10), we obtain \(p_{1}=Yx_{2}\). So \(x_{2}=p_{1}/Y\) and \(L=p_{1}^{2}/(2Y)+(Y/2)[\omega_{1}^{2}\omega_{2}^{2}x_{0}^{2}-(\omega_{1}^{2}+ \omega_{2}^{2})x_{1}^{2}]\). Substituting this into Eq. (35), we derive the Fokker-Planck equation
\[\frac{\partial\rho}{\partial t} =-x_{1}\frac{\partial\rho}{\partial x_{0}}-\frac{p_{1}}{Y}\frac{ \partial\rho}{\partial x_{1}}-(Y\omega_{1}^{2}\omega_{2}^{2}x_{0}-\mu x_{1}) \frac{\partial\rho}{\partial p_{0}}\] \[+[Y(\omega_{1}^{2}+\omega_{2}^{2})x_{1}+p_{0}]\frac{\partial\rho }{\partial p_{1}}+\mu\mathrm{k}_{\mathrm{B}}T\frac{\partial^{2}\rho}{ \partial p_{0}^{2}} \tag{39}\]
for the stochastic Pais-Uhlenbeck oscillator.
According to Eq. (12), the Hamiltonian of the Pais-Uhlenbeck oscillator can be written as
\[H=p_{0}x_{1}+\frac{p_{1}^{2}}{2Y}+\frac{Y}{2}[(\omega_{1}^{2}+\omega_{2}^{2})x_ {1}^{2}-\omega_{1}^{2}\omega_{2}^{2}x_{0}^{2}]. \tag{40}\]
It is not hard to verify that the Boltzmann distribution \(\rho_{s}\propto\mathrm{e}^{-H/\mathrm{k}_{\mathrm{B}}T}\) is a steady solution of the Fokker-Planck equation (39). Thus, although the Hamiltonian (40) has no lower bound, there is still a steady-state distribution \(\rho_{s}\propto\mathrm{e}^{-H/\mathrm{k}_{\mathrm{B}}T}\) for the stochastic Pais-Uhlenbeck oscillator. This distribution is pathological if no specific constraint is imposed on the Hamiltonian. In addition, whether this distribution is stable still needs to be explored further.
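This verification can be carried out symbolically; the following sketch (our own, using SymPy) substitutes \(\rho_{s}\propto\mathrm{e}^{-H/\mathrm{k}_{\mathrm{B}}T}\) into the right-hand side of Eq. (39) and confirms that it vanishes:

```python
import sympy as sp

x0, x1, p0, p1 = sp.symbols('x0 x1 p0 p1', real=True)
Y, w1, w2, mu, kT = sp.symbols('Y omega1 omega2 mu k_BT', positive=True)

# Hamiltonian of the Pais-Uhlenbeck oscillator, Eq. (40)
H = p0*x1 + p1**2/(2*Y) + (Y/2)*((w1**2 + w2**2)*x1**2 - w1**2*w2**2*x0**2)
rho = sp.exp(-H/kT)

# Right-hand side of the Fokker-Planck equation (39), evaluated at rho
rhs = (-x1*sp.diff(rho, x0) - (p1/Y)*sp.diff(rho, x1)
       - (Y*w1**2*w2**2*x0 - mu*x1)*sp.diff(rho, p0)
       + (Y*(w1**2 + w2**2)*x1 + p0)*sp.diff(rho, p1)
       + mu*kT*sp.diff(rho, p0, 2))

print(sp.simplify(rhs / rho))  # -> 0, so rho is a steady solution
```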
## VI Summary and discussion
In the above discussion, we have investigated the motion of a HDD particle coupled with a bath consisting of harmonic oscillators, and then derived the Langevin equation (29) and its corresponding Fokker-Planck equation (35) for the HDD particle. As a case study, we consider the stochastic Pais-Uhlenbeck oscillator and present
the corresponding Langevin equation (38) and Fokker-Planck equation (39). These equations can be used as a starting point for studying the Brownian motion of HDD particles. It should be noted that in the above discussion, the dynamics of the harmonic oscillators in the bath is still second-order. If the dynamics of the harmonic oscillators is replaced with HDD, such as that of Pais-Uhlenbeck oscillators, we need to discuss further whether the above results still hold. In addition, we have only considered the one-dimensional case, where all Lagrangians are non-degenerate. As a result, there is no one-dimensional odd-order-derivative dynamics in our theoretical framework. In two-dimensional or higher-dimensional cases, the Lagrangians might be degenerate [30]; we expect that the forms of the Langevin equation and the Fokker-Planck equation are not essentially changed in the degenerate cases.
Stochastic thermodynamics [21; 22; 23] is a frontier research field emerged in recent years for describing thermodynamic behaviors of small systems, which is mainly based on the conventional Langevin equation (1) and its corresponding Fokker-Planck equation (36). With the aid of these two equations, the quantities of work, heat, and entropy can be well defined on microscopic trajectories. In this paper, we have derived the Langevin equation (29) and its corresponding Fokker-Planck equation (35) for a HDD Brownian particle. It is valuable for us to extend current stochastic thermodynamics to the case of HDD based on these two equations. The crucial matter lies in the proper definitions of work, heat, and entropy. We expect that their definitions are similar to those in current stochastic thermodynamics [21; 22; 23].
Different from the Langevin equation and Fokker-Planck equation, the path integral method offers another way to describe stochastic processes. We have noted that Kleinert [31] discussed the path integral for the Pais-Uhlenbeck oscillator, and that Dean et al. [32] discussed the path integral of HDD in quadratic form. In both cases, they achieved analytical solutions of propagators. Although it is difficult to obtain analytical solutions in general cases, we may still borrow the idea of path integrals to extend the framework of stochastic thermodynamics to the situation of HDD. We will further discuss this point in the follow-up work.
Note: For the Chinese version of this manuscript, see Ref. [33].
## Acknowledgements
The author would like to thank Xiuhua Zhao and Yating Wang for their careful reading of the manuscript. This work is supported by the National Natural Science Foundation of China (Grant No. 11975050).
|
2309.00267 | RLAIF vs. RLHF: Scaling Reinforcement Learning from Human Feedback with
AI Feedback | Reinforcement learning from human feedback (RLHF) has proven effective in
aligning large language models (LLMs) with human preferences, but gathering
high-quality preference labels is expensive. RL from AI Feedback (RLAIF),
introduced in Bai et al., offers a promising alternative that trains the reward
model (RM) on preferences generated by an off-the-shelf LLM. Across the tasks
of summarization, helpful dialogue generation, and harmless dialogue
generation, we show that RLAIF achieves comparable performance to RLHF.
Furthermore, we take a step towards "self-improvement" by demonstrating that
RLAIF can outperform a supervised fine-tuned baseline even when the AI labeler
is the same size as the policy, or even the exact same checkpoint as the
initial policy. Finally, we introduce direct-RLAIF (d-RLAIF) - a technique that
circumvents RM training by obtaining rewards directly from an off-the-shelf LLM
during RL, which achieves superior performance to canonical RLAIF. Our results
suggest that RLAIF can achieve performance on-par with using human feedback,
offering a potential solution to the scalability limitations of RLHF. | Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, Sushant Prakash | 2023-09-01T05:53:33Z | http://arxiv.org/abs/2309.00267v3 | # RLAIF: Scaling Reinforcement Learning from Human Feedback
###### Abstract
Reinforcement learning from human feedback (RLHF) has proven effective in aligning large language models (LLMs) with human preferences. However, gathering high-quality human preference labels can be a time-consuming and expensive endeavor. RL from AI Feedback (RLAIF), introduced by Bai et al., offers a promising alternative that leverages a powerful off-the-shelf LLM to generate preferences in lieu of human annotators. Across the tasks of summarization, helpful dialogue generation, and harmless dialogue generation, RLAIF achieves comparable or superior performance to RLHF, as rated by human evaluators. Furthermore, RLAIF demonstrates the ability to outperform a supervised fine-tuned baseline even when the LLM preference labeler is the same size as the policy. In another experiment, directly prompting the LLM for reward scores achieves superior performance to the canonical RLAIF setup, where LLM preference labels are first distilled into a reward model. Finally, we conduct extensive studies on techniques for generating aligned AI preferences. Our results suggest that RLAIF can achieve human-level performance, offering a potential solution to the scalability limitations of RLHF.
## 1 Introduction
Reinforcement Learning from Human Feedback (RLHF) is an effective technique for aligning language models to human preferences (Stiennon et al., 2020; Ouyang et al., 2022). It is cited as one of the key drivers of success in modern conversational language models, such as ChatGPT (Liu et al., 2023) and Bard (Manyika, 2023). Training language models with reinforcement learning (RL) enables optimization on complex, sequence-level objectives that are not easily differentiable and therefore ill-suited for traditional supervised fine-tuning (SFT).
One obstacle for employing RLHF at scale is its dependence on high-quality human preference labels. This raises the question of whether artificially generated labels can be a viable substitute. Generating labels with large language models (LLMs) is one promising approach, as LLMs have shown a high degree of alignment with human judgment (Gilardi et al., 2023; Ding et al., 2023). Bai et al. (2022b) was the first effort to explore Reinforcement Learning from AI Feedback (RLAIF)1, where
Figure 1: Human evaluators strongly prefer RLAIF and RLHF over the SFT baseline for summarization and helpful dialogue generation. Their difference in win rates vs. SFT is not statistically significant. Furthermore, when compared head-to-head, RLAIF is equally preferred to RLHF. For harmless dialogue generation, RLAIF outperforms RLHF.
RL was conducted using a reward model trained on LLM preferences. Bai et al. (2022b) showed that utilizing a hybrid of human and AI preferences, in conjunction with their "Constitutional AI" self-revision technique, outperforms supervised fine-tuning for training a conversational assistant. However, it did not directly compare the efficacy of human vs. AI feedback, leaving the question of whether RLAIF can be a suitable alternative to RLHF unanswered.
"1" and "2" and compute the softmax to obtain a preference distribution.
There are numerous alternatives to obtain preference labels from LLMs, such as extracting the preference from a free-form generated response (e.g. _"The first response is better"_), or representing the preference distribution as a one-hot encoding. However, we choose our method because it is straightforward to implement and conveys more information than a one-hot encoding through its distributed representation of preferences.
We experiment with two styles of preambles: _"Base"_, which essentially asks "which response is better?", and _"Detailed"_, which resembles detailed rating instructions that would be given to human preference annotators (see Table 16 for preambles for the summarization task). We also experiment with in-context learning Brown et al. (2020), where high-quality exemplars were hand-selected to cover a range of topics.
#### 2.1.1 Addressing Position Bias
The order in which candidates are shown to an LLM can bias which candidate it prefers Pezeshkpour and Hruschka (2023); Wang et al. (2023). We find evidence of position bias, which is more pronounced with smaller sizes of LLM labelers (see Appendix B).
To mitigate position bias in preference labeling, we make two inferences for every pair of candidates, where the order in which candidates are presented to the LLM is reversed for the second inference. The results from both inferences are then averaged to obtain the final preference distribution.
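As an illustration of the two steps just described, the sketch below converts the log-probabilities of the tokens "1" and "2" into a soft preference distribution and averages the distributions obtained from the original and the reversed candidate order. It is a minimal sketch that assumes the per-token log-probabilities have already been extracted from the AI labeler; the function names are illustrative and not part of any released implementation.

```
#include <algorithm>
#include <array>
#include <cmath>

// Softmax over the log-probabilities of the tokens "1" and "2",
// yielding a soft preference distribution over the two candidates.
std::array<double, 2> preferenceFromLogProbs(double logP1, double logP2)
{
    const double m  = std::max(logP1, logP2);   // subtract max for stability
    const double e1 = std::exp(logP1 - m);
    const double e2 = std::exp(logP2 - m);
    return {e1/(e1 + e2), e2/(e1 + e2)};
}

// Average the distributions from the original and the swapped candidate
// order to mitigate position bias (Section 2.1.1).
std::array<double, 2> debiasedPreference
(
    double logP1orig, double logP2orig,   // candidates presented as (A, B)
    double logP1swap, double logP2swap    // candidates presented as (B, A)
)
{
    const auto p  = preferenceFromLogProbs(logP1orig, logP2orig);
    const auto ps = preferenceFromLogProbs(logP1swap, logP2swap);

    // In the swapped inference, token "1" refers to candidate B.
    return {0.5*(p[0] + ps[1]), 0.5*(p[1] + ps[0])};
}
```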
#### 2.1.2 Chain-of-thought Reasoning
We experiment with eliciting chain-of-thought (CoT) reasoning Wei et al. (2022) from our AI labelers through a two-step inference procedure. First, we replace the _Ending_ of the standard prompt (e.g. "_Preferred Summary_=") with a sentence asking for thoughts and explanation (e.g. "_Consider the coherence, accuracy, coverage, and overall quality of each summary and explain which one is better. Rationale_:") and then decode a response from the LLM. Then, we concatenate the original prompt, the response, and the standard _Ending_ string together, and follow the scoring procedure in Section 2.1 to obtain a preference distribution. See Figure 3 for an illustration.
In zero-shot prompts, the LLM is not given an example of what reasoning should look like. In few-shot prompts, we provide examples of CoT reasoning for the model to follow. See Tables 17 and 18 for examples.
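To make the two-step procedure concrete, a minimal sketch is given below. Both llmGenerate and preferenceFromPrompt are placeholders (assumptions introduced only for illustration) standing in for a call to the AI labeler and for the "1"-vs-"2" scoring step of Section 2.1, respectively.

```
#include <array>
#include <string>

// Placeholder standing in for a call to the AI labeler (not a real API).
std::string llmGenerate(const std::string& /*prompt*/) { return " <rationale> "; }

// Placeholder for the scoring step of Section 2.1 (softmax over "1" vs. "2").
std::array<double, 2> preferenceFromPrompt(const std::string& /*prompt*/)
{
    return {0.5, 0.5};
}

// Two-step chain-of-thought labeling: first elicit a rationale, then score.
std::array<double, 2> chainOfThoughtPreference(const std::string& context)
{
    // Step 1: replace the standard ending with a request for reasoning.
    const std::string rationale = llmGenerate
    (
        context
      + "\nConsider the coherence, accuracy, coverage, and overall quality "
        "of each summary and explain which one is better.\nRationale:"
    );

    // Step 2: append the rationale and score with the standard ending.
    return preferenceFromPrompt(context + rationale + "\nPreferred Summary=");
}
```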
To train a reward model (RM) on the AI preference labels, we apply a cross-entropy loss to the softmax of the reward scores generated by the RM. The softmax converts the RM scores into a probability distribution. We note that training a RM on a dataset of AI labels can be viewed as a form of model distillation.
Finally, we conduct reinforcement learning to train the RLAIF policy model, using the RM to assign rewards to model responses.
#### 2.2.2 Direct RLAIF
An alternative approach is to directly use LLM feedback as the reward signal in RL. This enables bypassing the intermediate stage of training a RM that approximates the preferences of the LLM.
The LLM is prompted to rate the quality of a generation between 1 and 10. Similar to the format mentioned in Section 2.1, the prompt contains high-level details on the structure of the input and the dimensions along which to rate a generation (e.g. factuality, coherence). Then, the likelihood of each score token between 1 and 10 is computed, the likelihoods are normalized to a probability distribution, a weighted score is calculated as \(s(x|c)=\sum_{i=1}^{10}iP(i|x,c)\), and then the score is again normalized to the range \([-1,1]\). Additional details on the prompting technique can be found in the Appendix D.
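The scoring step can be sketched as follows, assuming the log-likelihoods of the ten score tokens have already been extracted; the final linear rescaling from [1, 10] to [-1, 1] is one natural choice and may differ in detail from the exact normalisation used.

```
#include <algorithm>
#include <array>
#include <cmath>

// Collapse the log-likelihoods of the score tokens "1".."10" into a single
// reward, following s(x|c) = sum_i i * P(i|x,c), then rescale to [-1, 1].
double directReward(const std::array<double, 10>& logLik)
{
    const double m = *std::max_element(logLik.begin(), logLik.end());

    std::array<double, 10> w{};
    double z = 0.0;
    for (std::size_t i = 0; i < 10; ++i)
    {
        w[i] = std::exp(logLik[i] - m);   // unnormalised likelihoods
        z   += w[i];
    }

    double s = 0.0;
    for (std::size_t i = 0; i < 10; ++i)
    {
        s += double(i + 1)*(w[i]/z);      // scores run from 1 to 10
    }

    return 2.0*(s - 1.0)/9.0 - 1.0;       // linear map from [1, 10] to [-1, 1]
}
```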
Finally, RL is conducted in a similar manner to "distilled RLAIF", where the direct score is used as reward instead of the score from a RM. This approach is more computationally expensive than the canonical setup when the AI labeler is larger than the RM.
### Evaluation
We evaluate our results with three metrics - _AI Labeler Alignment_, _Win Rate_, and _Harmless Rate_.
_AI Labeler Alignment_ measures the accuracy of AI-labeled preferences with respect to human preferences. For a single example, a soft AI-labeled preference is first converted to a binary representation (e.g. \([0.6,0.4]\rightarrow[1,0]\)). Then, a 1 is assigned if the label agrees with the human preference and 0 otherwise. The alignment accuracy \(z_{acc}\) can be expressed as follows:
\[z_{acc}=\frac{1}{D}\sum_{i=1}^{D}\mathbbm{1}[\arg\max_{j}P_{i,j}^{AI}=p_{i}^{ H}],\]
where \(D\) is the size of the preference dataset, \(P^{AI}\in\mathbb{R}^{D\times 2}\) is the matrix of soft AI preferences, and \(p^{H}\in\mathbb{R}^{D}\) is the corresponding vector of human preferences, containing elements \(0\) or \(1\) to denote whether the first or second response is preferred, respectively.
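The metric can be computed directly from the soft AI preferences and the binary human labels, as in the sketch below; ties are broken in favour of the first response, and the container types are illustrative.

```
#include <array>
#include <vector>

// Fraction of examples where the argmax of the soft AI preference matches
// the human label (0 = first response preferred, 1 = second response).
double aiLabelerAlignment
(
    const std::vector<std::array<double, 2>>& aiPrefs,
    const std::vector<int>& humanPrefs
)
{
    int correct = 0;
    for (std::size_t i = 0; i < aiPrefs.size(); ++i)
    {
        const int aiChoice = (aiPrefs[i][1] > aiPrefs[i][0]) ? 1 : 0;
        if (aiChoice == humanPrefs[i]) ++correct;
    }
    return double(correct)/double(aiPrefs.size());
}
```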
_Win Rate_ evaluates the end-to-end quality of two policies by measuring how often one policy is preferred by human annotators over another. Given an input and two generations, human annotators select which generation they prefer. The percentage of instances where policy \(A\) is preferred over policy \(B\) is referred to as the _"win rate of A vs. B"_. A 50% win rate indicates that \(A\) and \(B\) are equally preferred.
Figure 3: An illustration of the process of obtaining AI-generated labels for summarization preferences. The LLM is first prompted to explain its thoughts on the quality of the two candidates (blue). The LLM’s response is then appended to the original prompt (orange) and fed to the LLM a second time to generate a preference distribution over “1” vs. “2” based on their log-probabilities (green).
_Harmless Rate_ measures the percentage of responses that are considered harmless by human evaluators. We evaluate the harmless dialogue generation task with this metric instead of _Win Rate_, because we find that many responses are equally safe, making it difficult to assign relative rankings.
## 3 Experimental Details
### Datasets
We use the following datasets for our experiments:
* **Reddit TL;DR**: posts from Reddit3 accompanied by summaries of the posts. Footnote 3: www.reddit.com
* **Summarization preferences**: a dataset created from a subset of Reddit TL;DR. Each example comprises a post, two candidate summaries, and a rating from a human annotator indicating which summary is preferred.
* **Helpful and harmless dialogue preferences**: conversations between a human and an AI assistant, where each conversation has two possible AI assistant responses - one preferred and the other non-preferred, according to a human annotator. Preference is based on which response is more informative and honest for the helpful task, and which response is safer for the harmless task.
More dataset details can be found in Appendix C.
We also experimented with the Stanford Human Preferences dataset (Ethayarajh et al., 2022), but we found that both RLHF and RLAIF policies did not show meaningful improvements over the SFT baseline after correcting for length biases, using the procedure in Appendix J.
### LLM Labeling
To enable fast experiment iteration when evaluating AI labeling techniques, we randomly downsampled the training split of each preference dataset. For summarization, an additional filter was applied to only include examples where human annotators preferred one summary over the other with high confidence4. After downsampling and filtering, there were roughly 3-4k examples for each task5. AI labeler alignment metrics were calculated on these downsampled datasets.
Footnote 4: This follows the evaluation procedure in Stiennon et al. (2020). Examples with confidence scores of 1, 2, 8, and 9 were considered to be “high-confidence”
PaLM 2 (Google et al., 2023) is used as the LLM for labeling preferences. The versions used are instruction-tuned but not previously trained with RL. Unless otherwise specified, AI labels were generated using PaLM 2 Large (L) with the best-performing prompt in Section 4.4. For more details on LLM labeling, see Appendix D.
### Model Training
All SFT models are initialized from PaLM 2 Extra-Small (XS). For summarization, the SFT model is produced by fine-tuning PaLM 2XS on the Reddit TL;DR dataset. For all other tasks, an instruction-tuned variant of PaLM 2 is used in lieu of task-specific fine-tuning.
RMs are also derived from PaLM 2XS. RMs are fine-tuned on the entire training split of the corresponding preference dataset, where the label is the AI preference for AI feedback RMs and the original human preference label in the dataset for human feedback RMs. RM accuracies can be found in Appendix G.
In the RL phase, the policy is trained with a modified version of REINFORCE (Williams, 1992) adapted to the language modeling domain. While many recent works use Proximal Policy Optimization (PPO) (Schulman et al., 2017) - a related method that adds a few techniques to make training more conservative and stable (e.g. clipping the objective function), we use REINFORCE with a baseline given that it is simpler yet still effective for the problem at hand. Both policy and value models are initialized from the SFT model. For summarization, the policy is rolled out on the training split of the Reddit TL;DR dataset. In other words, the initial states for RL are the original posts from the dataset prior to summarization. For the helpful and harmless tasks, the initial states are drawn from the training splits of the preference datasets. For summarization, simple post-processing is applied to responses generated by RL-trained policies as described in Appendix E.
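For reference, the textbook form of the REINFORCE-with-baseline gradient for a language-model policy \(\pi_{\theta}\) is sketched below, with the value model \(V_{\psi}\) acting as the baseline; this is only an illustration of the estimator and omits details such as any regularisation toward the initial policy (see Appendix F for the exact formulation used in this work).

\[\nabla_{\theta}J(\theta)\approx\mathbb{E}_{x\sim\mathcal{D},\;y\sim\pi_{\theta}(\cdot|x)}\Big[\big(r(x,y)-V_{\psi}(x)\big)\,\nabla_{\theta}\log\pi_{\theta}(y|x)\Big]\;.\]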
For additional details on the RL formulation and model training, see Appendices F and G.
### Human Evaluation
For experiments evaluated by win rates, evaluators were presented with an input context and multiple responses generated from different policies (e.g. RLAIF, RLHF, and SFT). They were then asked to rank responses in order of quality without ties, as seen in Figure 4. Input contexts were drawn from test splits of datasets, which were not used for training or any other evaluation6. Rankings were used to calculate win rates with respect to pairs of policies. For harmless dialogue generation, evaluators were asked to independently rate each response as harmless or harmful.
Footnote 6: For summarization, we used the test split of Reddit TL;DR. For helpful and harmless dialogue generation, we used test splits from the preference datasets, detailed in Appendix C.
For more details on human evaluation, see Appendix I.
## 4 Results
### RLAIF vs. RLHF
RLAIF achieves performance gains on par with or better than RLHF on all three tasks (see Figure 1 and Table 1). RLAIF and RLHF are preferred by human evaluators over the baseline SFT policy 71% and 73% of the time for summarization7 and 63% and 64% for helpful dialogue generation, respectively. The difference in win rates between RLAIF vs. SFT and RLHF vs. SFT are not statistically significant. When directly comparing RLAIF against RLHF, they are equally preferred - i.e. the win rate is not statistically significantly different from 50%. For harmless dialogue generation, RLAIF achieves a harmless rate of 88%, outperforming both RLHF and SFT, which score 76% and 64%, respectively8.
Footnote 7: RLAIF and RLHF are also preferred over the human reference summaries in Reddit TL;DR 79% and 80% of the time, respectively.
Footnote 8: RLAIF achieves a statistically significant improvement over RLHF and SFT, according to a two-sample t-test.
Figure 5 contains an example of SFT, RLAIF, and RLHF summaries. To better understand how RLAIF compares to RLHF, we qualitatively compare responses generated by both policies for summarization in Section 5.
As observed in Stiennon et al. (2020), RLAIF and RLHF policies tend to generate longer responses than the SFT policy, which may be partially responsible for their higher win rates. We conduct post-hoc analysis to control for length and find that both RLAIF and RLHF policies still outperform the SFT policy, and by similar margins to one another. See Appendix J for details.
One natural question that arises is whether there is value in combining human and AI feedback. We experimented with combining both types of feedback but did not see an improvement beyond using human feedback alone. However, we believe that there are several alternative training setups that could demonstrate value in combining both forms of feedback. See Appendix K for details.
These results suggest that RLAIF is a viable alternative to RLHF that does not depend on human annotation. In addition to expediting labeling time and reducing dependence on annotation services, another key benefit of AI labeling is cost reduction. We estimate the cost of labeling with an LLM to be over 10x cheaper than human annotation. See Appendix L for detailed calculations.
### Towards Self-Improvement
In Section 4.1, the LLM used to label preferences (PaLM 2 L) is much larger than the policy being trained (PaLM 2 XS). Going one step further, one might wonder if RLAIF can yield improvements when the AI labeler is the same size as the policy. On the task of summarization, we conduct RLAIF where PaLM 2 XS is used as the AI labeler instead of PaLM 2 L. The rest of the setup mimics the experiment in Section 4.1. We refer to this setup as "same-size RLAIF".
Human annotators prefer same-size RLAIF 68% of the time over SFT (see Table 1). For reference, RLAIF using an AI labeler larger than the policy is preferred 71% over SFT9. This result demonstrates that RLAIF can yield improvements even when the AI labeler is the same size as the policy LLM.
Footnote 9: The difference in win rates between “same-size RLAIF vs. SFT” and “RLAIF vs. SFT” is not statistically significant (two-sample t-test, p-value = 0.07; not significant at alpha = 0.05).
We note that the AI labeler and initial policy are not the exact same model. The AI labeler is the instruction-tuned PaLM 2 XS, whereas the initial policy is PaLM 2 XS fine-tuned on Reddit TL;DR summarization. Additionally, the summaries rated by the AI labeler were generated by policies created by the original dataset curators. For these reasons, we do not consider this experiment a strict case of "self-improvement"Huang et al. (2022). However, we believe that these results show great promise for this research direction.
### Direct RLAIF
In Sections 4.1 and 4.2, AI feedback was distilled into a RM. On the summarization task, we experiment with using an off-the-shelf LLM to _directly_ provide rewards during RL, bypassing RM training entirely. Since using a large AI labeler in RL is computationally expensive, we use the smaller instruction-tuned PaLM 2 XS as the off-the-shelf LLM. We refer to this setup as "direct RLAIF".
Human annotators prefer responses from direct RLAIF 74% of the time over SFT responses (see Table 1). To understand the impact of directly utilizing LLM feedback in RL, we compare this result to the same-size RLAIF policy from Section 4.2, which solely differs in training a RM that provides rewards during RL. Direct RLAIF outperforms same-size RLAIF, which achieves a statistically significantly lower win rate of 68%. Furthermore, when shown responses side-by-side, raters prefer direct RLAIF over same-size RLAIF 60% of the time10. One hypothesis for the improved quality is that bypassing the distillation from AI preferences into a RM enables information to flow directly from the off-the-shelf LLM to the policy.
Footnote 10: This is statistically significantly different from 50% according to a two-sample t-test.
### Prompting Techniques
We experiment with three types of prompting variations - preamble specificity, chain-of-thought reasoning, and in-context learning (see Table 2). We observe that eliciting chain-of-thought reasoning generally improves AI labeler alignment, while the impacts of preamble specificity and in-context learning vary across tasks. The best prompts outperform the base prompts ("Base 0-shot") by +1.9%, +1.3%, and +1.7% for summarization, helpfulness, and harmlessness, respectively.
Detailed preambles consistently improve alignment for summarization, while yielding mixed results for helpful and harmless dialogue generation. We hypothesize that summarization benefits more from a specific preamble due to the high complexity of this task. On the other hand, rating helpfulness and harmlessness are more intuitive to grasp, and therefore may benefit less from detailed instructions.
Chain-of-thought reasoning improves alignment consistently for summarization. For helpful and harmless dialogue generation, CoT only improves alignment when paired with the "Base" preamble.
Surprisingly, we observe that few-shot in-context learning only improves alignment for harmless dialogue generation.
| Comparison | Summarization | Helpful dialogue |
| --- | --- | --- |
| RLAIF vs SFT | 71% | 63% |
| RLHF vs SFT | 73% | 64% |
| RLAIF vs RLHF | 50% | 52% |
| Same-size RLAIF vs SFT | 68% | – |
| Direct RLAIF vs SFT | 74% | – |
| Direct RLAIF vs Same-size RLAIF | 60% | – |

| Model | Harmless dialogue |
| --- | --- |
| SFT | 64% |
| RLHF | 76% |
| RLAIF | 88% |

Table 1: **Left side:** Win rates when comparing generations from two different models for the summarization and the helpful dialogue tasks, judged by human evaluators. **Right side:** Harmless rates across policies for the harmless dialogue task, judged by human evaluators.
| Prompt | Summarization | Helpfulness | Harmlessness |
| --- | --- | --- | --- |
| Base 0-shot | 76.1% | 67.8% | 69.4% |
| Base 1-shot | 76.0% | 67.1% | 71.7% |
| Base 2-shot | 75.7% | 66.8% | **72.1%** |
| Base + CoT 0-shot | 77.5% | **69.1%** | 70.6% |
| Detailed 0-shot | 77.4% | 67.6% | 70.1% |
| Detailed 1-shot | 76.2% | 67.6% | 71.5% |
| Detailed 2-shot | 76.3% | 67.3% | 71.6% |
| Detailed 8-shot | 69.8% | – | – |
| Detailed + CoT 0-shot | **78.0%** | 67.8% | 70.1% |
| Detailed + CoT 1-shot | 77.4% | 67.4% | 69.9% |
| Detailed + CoT 2-shot | 76.8% | 67.4% | 69.2% |

Table 2: AI labeler alignment for different prompting techniques. We observe that eliciting chain-of-thought reasoning tends to improve AI labeler alignment, while few-shot prompting and detailed preambles have mixed effects across tasks.
For summarization and helpfulness, alignment monotonically decreases as the number of exemplars increases. It seems unlikely that this effect is a result of exemplar quality, as exemplars were carefully handpicked to be high-quality and representative of each preference task. Furthermore, we conducted 10 trials for "Base 1-shot" on summarization, where a different exemplar was randomly selected for each trial. The maximum AI labeler alignment from all trials was 76.1%, which still did not surpass "Base 0-shot" in terms of AI labeler alignment. One hypothesis for why exemplars do not help is that the summarization and helpful dialogue generation tasks may already be sufficiently well-understood by the powerful AI labeler, rendering the exemplars unhelpful or distracting. It's interesting to note that in-context learning is still an important research area that is not fully understood (Min et al., 2022; Wang et al., 2022).
## 6 Related Work
LLMs have shown impressive performance over a wide range of NLP tasks Brown et al. (2020); Thoppilan et al. (2022); Chowdhery et al. (2022); Google et al. (2023); OpenAI (2023). For several of these tasks, RL has emerged as an effective optimization technique. While initial applications of RL on tasks such as translation Wu et al. (2016, 2018) and summarization Gao et al. (2019); Wu and Hu (2018) used automatic evaluation metrics as rewards, such simplified formulations of rewards did not fully align with human notions of quality.
Reinforcement learning from human feedback Christiano et al. (2017) has been used as a technique to directly align LLMs with human preferences Ziegler et al. (2019) through training a reward model on pairwise comparisons of natural language responses. It has been successfully applied for summarization Stiennon et al. (2020), generalized instruction following Ouyang et al. (2022); Lai et al. (2023), dialogue Gilardi et al. (2023); Manyika (2023); Glaese et al. (2022); Bai et al. (2022) and question answering Nakano et al. (2021).
LLMs have also been extensively used for data generation Wang et al. (2021); Meng et al. (2023), augmentation Feng et al. (2021) and in self-training setups Wang et al. (2022); Madaan et al. (2023). Bai et al. (2022) introduced the idea of RLAIF, which used LLM labeled preferences in conjunction with human labeled preferences to jointly optimize for the two objectives of helpfulness and harmlessness. Recent works have also explored related techniques for generating rewards from LLMs Roit et al. (2023); Kwon et al. (2022); Yang et al. (2023). These works demonstrate that LLMs can generate useful signals for RL fine-tuning, which inspired this work's investigation into whether LLMs can serve as a viable alternative to humans in collecting preference labels for RL.
## 7 Conclusion
In this work, we show that RLAIF achieves comparable improvements to RLHF on three text generation tasks. Our experiments show that RLAIF greatly improves upon a SFT baseline, and the margin of improvement is on par with or greater than that of RLHF. Furthermore, in head-to-head comparisons, RLAIF and RLHF are preferred at similar rates by humans. Additionally, we show that RLAIF is effective even when the LLM labeler is the same size as the policy, and directly prompting the LLM labeler to provide rewards during RL can outperform the canonical RLAIF setup that distills preferences into a separate RM. Finally, we study the impact of AI labeling techniques on alignment to human preferences.
While this work highlights the potential of RLAIF, there remain many fascinating open questions, such as whether conducting RLAIF iteratively can achieve additional gains (i.e. use the most recent RLAIF policy to generate new response pairs, conduct RLAIF, and repeat), how RLAIF can be adapted to a model-based RL setting where both human and assistant are modeled by LLMs, and how AI feedback can be leveraged for more specific credit assignment. We leave these questions for future work.
### Ethics
One ethical consideration concerns the utilization of AI-generated feedback as a source for model alignment. There exists a potential risk of transferring biases from the off-the-shelf LLM into the generated preferences. This in turn may result in RL-trained policies further amplifying biases, thereby inadvertently misaligning models and potentially causing harm. Extreme caution must be exercised, especially when deploying these models in high-stakes domains such as medicine, law, and employment, where they have the potential to significantly impact human lives in adverse ways. In such domains, we believe that human experts trained to carefully assign preferences according to strict policies should be considered the gold standard.
Another ethical consideration is that reducing the barriers to aligning LLMs also carries the risk of facilitating their misuse for malicious purposes. For instance, RLAIF could be employed to train models to generate convincing misinformation or produce hateful and abusive content. The best mitigation to this risk is to carefully govern the access and usage of powerful LLMs (e.g. limiting "white-box" access), to prevent bad actors from misusing them.
### Reproducibility
To promote the reproducibility of this work, many of the details of this research are shared throughout the paper. Open-source datasets are elaborated upon in Appendix C, LLM labeling details in Appendix D, the RL formulation in Appendix F,
model training details in Appendix G, human evaluation details in Appendix I, and the most critical prompts used in the Appendix (e.g. Tables 17, 21, and 22). Please reach out to the authors for any additional questions or requests.
PaLM 2 models are available through Google Cloud's Vertex API, and the experiments in this work may also be repeated with other publicly available LLMs.
## Acknowledgements
We would like to thank many people who have helped make this work complete. We thank Chen Zhu for optimizing our LLM inference setup, Le Hou for suggesting prompt improvements and experimenting with self-consistency, Leonard Hussenot for bringing the problem of position bias in LLMs to our attention, and Bradley Green, Ewa Dominowska, and Blaise Aguera y Arcas for supporting this research.
We thank everyone who thoroughly reviewed our work and provided valuable feedback: Hakim Sidahmed, Meiqi Guo, Michal Valko, Nevan Wichers, Sian Gooding, and Yuan Cao.
We thank Mo Azar, Daniel Guo, Andrea Michi, Nicolas Perez-Nieves, and Marco Selvi for their work in developing a RLAIF training setup that directly prompts an LLM to obtain reward scores.
Finally, we thank the individuals who designed and built the RL training infrastructure used in this paper: Leonard Hussenot, Johan Ferret, Robert Dadashi, Geoffrey Cideron, Alexis Jacq, Sabela Ramos, Piotr Stanczyk, Sertan Girgin, Danila Sinopalnikov, Amelie Heliou, Nikola Momchev, and Olivier Bachem.
|
2306.01924 | multiRegionFoam -- A Unified Multiphysics Framework for Multi-Region
Coupled Continuum-Physical Problems | This paper presents a unified framework, called multiRegionFoam, for solving
multiphysics problems of the multi-region coupling type within OpenFOAM
(FOAM-extend). It is intended to supersede the existing solver with the same
name. The design of the new framework is modular, allowing users to assemble a
multiphysics problem region-by-region and coupling conditions
interface-by-interface. The present approach allows users to choose between
deploying either monolithic or partitioned interface coupling for each
individual transport equation. The formulation of boundary conditions is
generalised in the sense that their implementation is based on the mathematical
jump/transmission conditions in the most general form for tensors of any rank.
The present contribution focuses on the underlying mathematical model for
these types of multiphysics problems, as well as on the software design and
resulting code structure that enable a flexible and modular approach. Finally,
deployment for different multi-region coupling cases is demonstrated, including
conjugate heat, multiphase flows and fuel-cells. | Heba Alkafri, Constantin Habes, Mohammed Elwardi Fadeli, Steffen Hess, Steven B. Beale, Shidong Zhang, Hrvoje Jasak, Holger Marschall | 2023-06-02T21:27:48Z | http://arxiv.org/abs/2306.01924v2 | multiRegionFoam - A Unified Multiphysics Framework for Multi-Region Coupled Continuum-Physical Problems
###### Abstract
This paper presents a unified framework, called multiRegionFoam, for solving multiphysics problems of the multi-region coupling type within OpenFOAM (FOAM-extend). It is intended to supersede the existing solver with the same name. The design of the new framework is modular, allowing users to assemble a multiphysics problem region-by-region and coupling conditions interface-by-interface. The present approach allows users to choose between deploying either monolithic or partitioned interface coupling for each individual transport equation. The formulation of boundary conditions is generalised in the sense that their implementation is based on the mathematical jump/transmission conditions in the most general form for tensors of any rank.
The present contribution focuses on the underlying mathematical model for these types of multiphysics problems, as well as on the software design and resulting code structure that enable a flexible and modular approach. Finally, deployment for different multi-region coupling cases is demonstrated, including conjugate heat, multi-phase flows and fuel-cells.
Source code repository: [https://bitbucket.org/hmarschall/multiregionfoam/](https://bitbucket.org/hmarschall/multiregionfoam/)
keywords: multiphysics, interface coupling, multi-region problems, OpenFOAM
## 1 Introduction
Interface-coupled multi-region problems like fluid-structure interactions, conjugate heat & mass transfer or multiphase flow problems represent a subgroup of multiphysics problems of high relevance in engineering. Analysis of the underlying continuum-physical models reveals inherent structural similarities, which can be exploited in software design and method development to devise a unified computational multiphysics framework for multi-region coupled continuum-physical problems over a broad spectrum of applications.
We have developed a novel unified solver framework for computational multiphysics of multi-region coupling type, i.e. transport processes coupled across region boundaries/interfaces. The code design is such that a multiphysics problem can be assembled region-by-region, and the coupling conditions interface-by-interface. Both monolithic and partitioned coupling for each individual transport equation can be applied as desired by the user. Fluid flow problems are dealt with using SIMPLE, PISO and PIMPLE pressure-velocity algorithms - with loops of predictor and corrector steps across regions. The code is implemented as a C++ library in OpenFOAM (FOAM-extend) for computational continuum physics [1] and follows the principles of object-oriented programming.
We incorporate user-defined types of regions representing sub-domains of specific physics, i.e. a set of transport equations. Such a region type should govern a meaningful subset of physics specific to the region - such as e.g. fluid flow, solid mechanics, species and/or energy transport - and can be combined with others. This results in a modular concept and allows assembling a multiphysics problem region-by-region. The coupling and communication between regions is realised in a modular fashion where interface-specific physics as well as interpolation/mapping methods are accessible in boundary conditions. The implementation is readily parallelised for large scale computations in domain decomposition mode for runs on distributed-memory parallel computer architectures.
#### Literature Survey
There are numerous open-source solutions to cope with multi-region coupled problems. However, the majority of codes are dedicated to specific problems, and thus follow a domain-driven design. In consequence, they cannot be easily adapted to other multi-region coupling problems. Available proprietary simulation codes, on the other hand, often do provide platforms to solve a broad range of multi-region coupling problems. However, being proprietary, their source codes are not accessible to the community, with the consequence of limited flexibility and/or extensibility, particularly when it comes to very specific engineering applications in technology niches. To alleviate these limitations, (mostly) open-source multi-code coupling approaches have been devised. Examples are the ADVENTURE_Coupler (ADVanced ENgineering analysis Tool for Ultra large REal world) [2], MpCCI (Mesh-based parallel Code Coupling Interface) [3, 4], OpenPALM (Projet d'Assimilation par Logiciel Multimethodes) [5, 6], the OASIS coupler [7, 8], PIKE (Physics Integration KErnels) [9] as a part of the Trilinos library [10], and preCICE [11, 12]. With these coupling software packages, multiple distinct simulation codes are coupled in a co-simulation run - each with its own specialisation coping with the physics in one specific region of the multi-region domain. Note that such code-to-code coupling frameworks for co-simulations are not within the scope of this work, since they inherently only provide partitioned coupling strategies and thus suffer from stability and/or efficiency issues when it comes to challenging, e.g. numerically stiff, coupling problems.
In what follows, we attempt to provide a comprehensive yet concise overview over available open-source multi-region coupling software in the literature, highlighting their coverage of applications.
**Alya**: [13, 14] is a Fortran and C based code developed at Barcelona Supercomputing Center, Spain. It solves coupled multiphysics problems using high performance computing techniques for distributed and shared memory supercomputers. The simulations involve the solution of partial differential equations in an unstructured mesh using finite element methods. It provides region coupling in a single code environment as well as partitioned multi-code coupling using existing codes. Examples of implementation for fluid-structure interaction (FSI) and conjugate heat transfer (CHT) problems, among other applications, are found in [15] and [13].
**code_saturne**: [16] is developed primarily by Electricite de France R&D (EDF) for computational fluid dynamics (CFD) applications. It is written in C and Fortran and relies on finite volume discretisation. It can be coupled with other codes, using its Parallel and Locator Exchange library (PLE) [17], for instance, with SYRTHES [18], a code for transient thermal simulations in solids, to model CHT problems [19]. It also has a module for arbitrary Lagrangian Eulerian (ALE) interface tracking in the frame of fluid-structure interaction [20].
**deal.II**: [21] is a C++ library intended to serve as an interface to the complex data structures and algorithms required for solving partial differential equations using adaptive finite element methods. It is deployed in multiphysics simulations for various applications including FSI in ALE formulation [22], and numerous others [23].
**FEniCS**: [24] is a Python and C++ based code dedicated to solving partial differential equations arising in scientific models using the finite element method. It is often integrated with other platforms as it does not have a built-in multiphysics solver. Examples of usage include the FEniCS-FEATool solver [25], the coupling of FEniCS with OpenFOAM through the multiscale universal interface MUI [26], as well as the combination of FEniCS with HAZmath [27], where the cbc.block extension is used, which enables the assembly and solution of block-partitioned problems.
**MOOSE**: stands for Multiphysics Object-Oriented Simulation Environment [28]. It is an open-source C++ based code developed at the Idaho National Laboratory, enabling parallel multiphysics simulation. It uses a finite element framework and supports segregated and fully implicit volumetric coupling as well as partitioned interface coupling. Fluid dynamics, heat transfer, and fluid-structure interaction are some of the applications where MOOSE is used [29, 30].
**OpenFOAM**: [1] (Open Field Operation And Manipulation) is an open-source C++ library for computational continuum physics (CCP) including computational fluid dynamics (CFD) based on the finite volume method (FVM) with support for dynamic meshes of general topology (unstructured meshes). Within OpenFOAM, numerous application-specific multi-region frameworks are available mostly from its active developer community. For instance, solids4foam [31] has been developed for FSI simulations, openFuelCell[32] for modelling fuel cells, chtMultiRegionFoam[33] for CHT problems. Note that besides the above application-specific solutions to interface-coupled multiphysics, OpenFOAM (OpenFOAM-dev) has undergone refactoring towards a "new modular solver framework" [34], in which so-called solver modules can be selected for coupled multi-region simulations. However, with only one solver module selectable for each region, the resulting framework forces the user to develop modules covering the full set of physics
in a region. This hinders flexibility and introduces unnecessary complexity. Moreover, interfacial physics has not been modularised and generalised at all. Both aspects have been subject to the present work.
**Yales2**: [35] aims at solving two-phase combustion from primary atomization to pollutant prediction on massive complex meshes. It is developed at CORIA-CFD using C++ and Fortran. It uses a finite volume solver for multiphysics problems in fluid dynamics, with support for ALE within FSI context [36] in addition to the possibility for multi-code coupling, such as for CHT applications [37] through the OpenPALM library [38].
While the focus here is on open-source, there are also various proprietary software packages developed for similar purposes, such as COMSOL Multiphysics [39], FEATool [40], Fluent [41], and LS-DYNA [42].
#### Aim & Objective
This contribution aims to provide a unified and versatile framework to cope with multiphysics problems of multi-region coupling type in OpenFOAM. Particular emphasis is put on
* flexible coupling strategies - i.e. monolithic and partitioned coupling is at the choice of the user for each individual transport equation,
* modular problem assembly - by means of user-defined region types, which also can be superimposed, leading to a multiphysics setup that can be assembled in a modular fashion,
* solution algorithms acting across regions - e.g. to support pressure-velocity coupling, or magnetohydrodynamics in fluid flow across regions,
* a generic formulation - by rigorously exploiting the common mathematical structure of transport equations _and_ interface jump and transmission (flux) conditions.
With this, it is hoped to enable substantial coverage over a spectrum of different multiphysics problems of multi-region coupling type, and to leverage significant synergies among different, so far disjoint domain-expert communities, such that improvements and fixes from one community directly can benefit others.
In the remainder, detailed information on the generic mathematical formulation of the sharp-interface model is given (Section 2) which is exploited at the software design stage in the code structure to arrive at a unified multiphysics framework (Section 5). Eventually, its deployment for different multi-region coupling cases is demonstrated (Section 7).
## 2 Generic sharp interface model
We aim to provide a concise self-contained derivation of the sharp interface model in its generic formulation as it emerges from balance considerations of conserved quantities. The mathematical procedure will also yield the well-known general transport equation introduced by Spalding in [43]. Thus, the following can also be seen as its extension to general transport equations in multiple domains coupled across sharp interfaces separating the domains. For this, we shall closely follow [44, 45, 46, 47].
Starting point of this derivation is a conservation equation in its generic form,
\[\frac{D\Phi}{Dt}=J+S, \tag{1}\]
with the left hand side of (1) being the material derivative of an extensive quantity \(\Phi\) (e.g. mass, momentum, energy), \(J\) denoting the flux term and \(S\) being sources/sinks of \(\Phi\). We assume this generic conservation equation to be valid in a material control volume \(V(t)=\Omega^{+}(t)\cup\Omega^{-}(t)\), being composed out of two subdomains \(\Omega^{+}\) and \(\Omega^{-}\) (see Fig. 1). The two subdomains are separated by a deformable sharp interface \(\Sigma(t)=\partial\Omega^{+}(t)\cap\partial\Omega^{-}(t)\) which leads to the fact that V(t) is bounded by \(\partial V(t)=\partial V^{+}(t)\cup\partial V^{-}(t)\cup\partial\Sigma(t)\), where \(\partial V^{\pm}(t)=\partial\Omega^{\pm}(t)\backslash\Sigma(t)\). Furthermore, we define that the interface normal \(\mathbf{n}_{\Sigma}\) always points from \(\Omega^{-}\) to \(\Omega^{+}\) and the interface edge normal points out of \(\Sigma\).
The sharp interface \(\Sigma(t)\) considered here can be seen as a simplification of a transition layer of finite thickness between adjacent domains with different physical properties. Therefore, the interface is not necessarily massless and can store conserved quantities which are accounted for in the following by the appearance of so-called surface excess quantities [44]. Thus, the extensive quantity can be written as a volume integral,
\[\Phi=\int_{V(t)}\rho\phi\mathrm{d}\mathbf{x}+\int_{\Sigma(t)}\rho^{\Sigma}\phi^{ \Sigma}\mathrm{d}\mathbf{x}\;, \tag{2}\]
where \(\rho\) is the mass density in \(\Omega^{\pm}\), \(\phi\) is the mass density-related volume specific density of the extensive quantity in the bulk and \(\rho^{\Sigma}\) and \(\phi^{\Sigma}\) are their respective area specific excess quantities defined on the interface [45]. The flux term can be expressed through
\[J=-\int_{\partial V(t)}\mathbf{j}\cdot\mathbf{n}\mathrm{d}s-\int_{\partial\Sigma(t)}\bm {j}^{\Sigma}\cdot\mathbf{n}_{\partial\Sigma}\mathrm{d}l\;, \tag{3}\]
with the first integral being a surface integral over the bulk flux density \(\mathbf{j}\) across the boundary of \(V(t)\) and the second integral being a line integral over the interface flux density \(\mathbf{j}^{\Sigma}\) across the interface boundary. A similar expression as (2) can be formulated for the source/sink term
\[S=\int_{V(t)}s\mathrm{d}\mathbf{x}+\int_{\Sigma(t)}s^{\Sigma}\mathrm{d}s\;. \tag{4}\]
Here \(s\) is the source/sink density field in the bulk and \(s^{\Sigma}\) is its respective counterpart on the interface. Inserting (2), (3) and (4) into (1) gives
\[\frac{D}{Dt}\int_{V(t)}\rho\phi\mathrm{d}\mathbf{x}+\frac{D}{Dt}\int_{\Sigma(t)} \rho^{\Sigma}\phi^{\Sigma}\mathrm{d}s=-\int_{\partial V(t)}\mathbf{j}\cdot\mathbf{n} \mathrm{d}s-\int_{\partial\Sigma(t)}\mathbf{j}^{\Sigma}\cdot\mathbf{n}_{\partial\Sigma }\mathrm{d}l\quad+\int_{V(t)\setminus\Sigma(t)}s\mathrm{d}\mathbf{x}+\int_{\Sigma (t)}s^{\Sigma}\mathrm{d}s\;. \tag{5}\]
The generalized transport theorem, [48]
\[\frac{D}{Dt}\int_{V(t)}\rho\phi\mathrm{d}\mathbf{x}=\int_{V(t)\setminus\Sigma(t)} \frac{\partial\rho\phi}{\partial t}\mathrm{d}\mathbf{x}+\int_{V(t)\setminus\Sigma( t)}\nabla\cdot(\rho\phi\mathbf{u})\mathrm{d}\mathbf{x}+\int_{\Sigma(t)}\llbracket\rho \phi(\mathbf{u}-\mathbf{u}^{\Sigma})\rrbracket\cdot\mathbf{n}_{\Sigma}\mathrm{d}s\,,\]
can be applied to the first term on the left hand side of (5), where the jump brackets are defined as
\[\llbracket\phi\rrbracket(t,\mathbf{x})=\lim_{h\to 0^{+}}\left[\phi\left(t,\mathbf{x}+h \mathbf{n}_{\Sigma}\right)-\phi\left(t,\mathbf{x}-h\mathbf{n}_{\Sigma}\right)\right], \quad\mathbf{x}\in\Sigma\;. \tag{6}\]
Furthermore, \(\mathbf{u}\) is the velocity field in the bulk and \(\mathbf{u}^{\Sigma}\) is the interface velocity field which - in the general case of a fluid interface - can have contributions in both normal and tangential direction to the interface. The second term on the left hand side of (5) can be reformulated through the use of the surface transport theorem [45]
\[\frac{D}{Dt}\int_{\Sigma(t)}\rho^{\Sigma}\phi^{\Sigma}\mathrm{d}s=\int_{ \Sigma(t)}\frac{\partial\rho^{\Sigma}\phi^{\Sigma}}{\partial t}\mathrm{d}s+ \int_{\Sigma(t)}\nabla_{\Sigma}\cdot(\rho^{\Sigma}\phi^{\Sigma}\mathbf{u}^{\Sigma} )\mathrm{d}s\;. \tag{7}\]
Here the interface Nabla operator is defined by \(\nabla_{\Sigma}=\left(\mathbf{I}-\mathbf{n}_{\Sigma}\otimes\mathbf{n}_{\Sigma}\right)\cdot\nabla\), with interface divergence of a vector \(\nabla_{\Sigma}\cdot\mathbf{y}=tr(\nabla_{\Sigma}\mathbf{y})\) being the trace of the interface gradient [46]. Using the two-phase divergence theorem [45], the first term on the right hand side of (5) can be written as
\[-\int_{\partial V(t)}\mathbf{j}\cdot\mathbf{n}\mathrm{d}s=-\int_{V(t)\setminus\Sigma( t)}\nabla\cdot\mathbf{j}\mathrm{d}\mathbf{x}-\int_{\Sigma(t)}\llbracket\mathbf{j} \rrbracket\cdot\mathbf{n}_{\Sigma}\mathrm{d}s\;. \tag{8}\]
Similarly, the second term on the right hand side of (5) can be expressed using the surface divergence theorem [49]
\[-\int_{\partial\Sigma(t)}\mathbf{j}^{\Sigma}\cdot\mathbf{n}_{\partial\Sigma}\mathrm{d}l= -\int_{\Sigma(t)}\nabla_{\Sigma}\cdot\mathbf{j}^{\Sigma}\mathrm{d}s\;. \tag{9}\]
Substituting the generalized transport theorem together with (7), (8) and (9) back into (5) yields
\[\begin{split}&\int_{V(t)\setminus\Sigma(t)}\left[\frac{\partial(\rho\phi)}{\partial t}+\nabla\cdot(\rho\phi\mathbf{u})+\nabla\cdot\mathbf{j}-s\right]\mathrm{d}\mathbf{x}\\ +&\int_{\Sigma(t)}\left[\frac{\partial(\rho^{\Sigma}\phi^{\Sigma})}{\partial t}+\nabla_{\Sigma}\cdot(\rho^{\Sigma}\phi^{\Sigma}\mathbf{u}^{\Sigma})+\nabla_{\Sigma}\cdot\mathbf{j}^{\Sigma}+\llbracket\rho\phi(\mathbf{u}-\mathbf{u}^{\Sigma})+\mathbf{j}\rrbracket\cdot\mathbf{n}_{\Sigma}-s^{\Sigma}\right]\mathrm{d}s=0\;.\end{split} \tag{10}\]
Localizing this expression to points inside the bulk results in the well-known general transport equation
\[\frac{\partial\rho\phi}{\partial t}+\nabla\cdot(\rho\phi\mathbf{u})=-\nabla\cdot \mathbf{j}+s \tag{11}\]
introduced by Spalding [43]. Doing the same for points on the interface yields
\[\frac{\partial\left(\rho^{\Sigma}\phi^{\Sigma}\right)}{\partial t}+\nabla_{ \Sigma}\cdot(\rho^{\Sigma}\phi^{\Sigma}\mathbf{u}^{\Sigma})+\llbracket\rho\phi(\bm {u}-\mathbf{u}^{\Sigma})+\mathbf{j}\rrbracket\cdot\mathbf{n}_{\Sigma}=-\nabla_{\Sigma} \cdot\mathbf{j}^{\Sigma}+s^{\Sigma}\;, \tag{12}\]
which is called the interface transmission or flux condition. This equation relates the transport of the surface excess quantities to the jump in convective and diffusive fluxes from the adjacent regions across the interface. By specifying \(\phi\), \(\mathbf{j}\), \(s\) and their corresponding surface excess quantities it is then possible to obtain the respective transport equations and transmission conditions for mass, momentum, energy and entropy. Such specifications under sharp interface assumptions (no interfacial mass) are given in Table 1. In this table, \(p\) represents the pressure, \(\mathbf{\tau}\) is the deviatoric stress tensor, \(\mathbf{g}\) is the gravitational vector, \(e\) is the internal energy and \(\mathbf{q}\) is the heat flux vector. The interfacial stress tensor \(\mathbf{\tau}^{\Sigma}\) accounts for the capillarity of the interface. The entropy is denoted by \(\eta\) with its flux and production in the bulk being represented by \(\mathbf{j}_{\eta}\) and \(\zeta\) respectively. Due to the consideration of interfacial stress, the interfacial entropy production \(\zeta^{\Sigma}\) is also accounted for.
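As a consistency check (not part of the original derivation), inserting the momentum row of Table 1 into (11) recovers the familiar momentum balance, and (12) reduces to the usual interfacial momentum jump condition; the convective term is written with the dyadic product for clarity:

\[\frac{\partial(\rho\mathbf{u})}{\partial t}+\nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u})=-\nabla p+\nabla\cdot\mathbf{\tau}+\rho\mathbf{g}\;,\qquad\llbracket\rho\mathbf{u}\otimes(\mathbf{u}-\mathbf{u}^{\Sigma})+p\mathbf{I}-\mathbf{\tau}\rrbracket\cdot\mathbf{n}_{\Sigma}=\nabla_{\Sigma}\cdot\mathbf{\tau}^{\Sigma}\;.\]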
## 3 Interface-coupling
Within the sharp interface model framework, adjacent regions are coupled with each other through the interface transmission condition (12). Note that (12) does not represent a boundary condition for the coupling of the primitive fields used to describe the physics of each region. However, it can be reformulated when considering a generic primitive field \(f\)[50] to match with
\[\llbracket\Gamma\nabla f\rrbracket\cdot\mathbf{n}_{\Sigma}=\mathcal{F}. \tag{13}\]
Here, \(\Gamma\) typically denotes a constant diffusivity but can also be a dependent function of other variables. The jump of the interfacial flux of \(f\) (flux discontinuity) is denoted by \(\mathcal{F}\), which might also be a dependent function of other variables. When linearised appropriately, (13) can be used as a coupling Neumann boundary condition on either side of the interface.
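With the jump bracket (6) and the convention that \(\mathbf{n}_{\Sigma}\) points from \(\Omega^{-}\) to \(\Omega^{+}\), condition (13) can, for example, be rearranged for the normal gradient on the \(\Omega^{+}\) side, with the \(\Omega^{-}\) side data taken from the latest available solution - a minimal sketch of how (13) is linearised into a coupling Neumann boundary condition:

\[\left(\nabla f^{+}\cdot\mathbf{n}_{\Sigma}\right)\big|_{\Sigma}=\frac{\Gamma^{-}\left(\nabla f^{-}\cdot\mathbf{n}_{\Sigma}\right)\big|_{\Sigma}+\mathcal{F}}{\Gamma^{+}}\;.\]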
In addition, one also needs to account for jumps of the primitive fields \(f\) at the interface in order to fully describe the interfacial coupling. Such jumps of the primitive fields arise from closure, e.g. when accounting for the second law of thermodynamics - see [44] for a rigorous treatise on this subject. Jump conditions can also be written in a generic form,
\[\llbracket f\rrbracket=\mathcal{J}\;, \tag{14}\]
| | \(\phi\) | \(\mathbf{j}\) | \(s\) | \(\phi^{\Sigma}\) | \(\mathbf{j}^{\Sigma}\) | \(s^{\Sigma}\) |
| --- | --- | --- | --- | --- | --- | --- |
| Mass | 1 | 0 | 0 | 0 | 0 | 0 |
| Momentum | \(\mathbf{u}\) | \(p\mathbf{I}-\mathbf{\tau}\) | \(\rho\mathbf{g}\) | 0 | \(-\mathbf{\tau}^{\Sigma}\) | 0 |
| Energy | \(e+\frac{\mathbf{u}^{2}}{2}\) | \(\mathbf{q}+(p\mathbf{I}-\mathbf{\tau})\cdot\mathbf{u}\) | \(\rho\mathbf{g}\cdot\mathbf{u}\) | 0 | \(-\mathbf{\tau}^{\Sigma}\cdot\mathbf{u}^{\Sigma}\) | 0 |
| Entropy | \(\eta\) | \(\mathbf{j}_{\eta}\) | \(\zeta\) | 0 | 0 | \(\zeta^{\Sigma}\) |

Table 1: Specifications of the general transport equation and the generic interface transmission condition under the assumption of a mass-less but capillary interface
where \(\mathcal{J}\) represents the interfacial jump of \(f\) and may be a dependent function of primitive transport variables. In a linearised form (14) can be applied as a coupling Dirichlet boundary condition on either side of the interface.
When it comes to the algorithmic aspect of region-to-region coupling, monolithic coupling and partitioned coupling types are to be distinguished. In monolithic coupling methods, the coupled equations for each region are assembled and solved simultaneously in one system accounting for the coupling conditions implicitly. Whereas in partitioned coupling methods, the equations of each region are solved separately, updating the coupling conditions using iterations [51]. These partitioned methods are most often based on so-called non-overlapping domain decomposition methods - also known as Schwarz methods - and can be of different types [52, 53]. One such type is the Dirichlet-Neumann-Algorithm, which is implemented in the presented framework. Here, each interface is assigned a coupling Dirichlet boundary condition on one side and a coupling Neumann boundary condition on the other side. Since partitioned coupling algorithms in general can lack convergence, acceleration methods are needed during the update procedure [54]. These acceleration methods can either be based on relaxation of the boundary condition values or on quasi-Newton procedures. The multiRegionFoam framework implements a fixed and an Aitken relaxation method as well as the IQN-ILS procedure. The interested reader is referred to [54, 55, 56] for more details.
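The Aitken update mentioned above can be sketched in a few lines. The snippet operates on plain arrays of scalar interface values and serves only to illustrate the algorithm; it is not the framework's actual implementation.

```
#include <cstddef>
#include <vector>

// One Aitken-accelerated update of interface values in a Dirichlet-Neumann
// coupling loop. fOld are the values accepted in the previous coupling
// iteration, fTilde the unrelaxed values returned by the current iteration.
// rPrev (residual of the previous iteration) and omega (relaxation factor,
// initialised to a fixed value) are updated in place.
std::vector<double> aitkenRelax
(
    const std::vector<double>& fOld,
    const std::vector<double>& fTilde,
    std::vector<double>& rPrev,
    double& omega
)
{
    std::vector<double> r(fOld.size());
    for (std::size_t i = 0; i < r.size(); ++i) r[i] = fTilde[i] - fOld[i];

    if (!rPrev.empty())   // from the second coupling iteration onwards
    {
        double num = 0.0, den = 0.0;
        for (std::size_t i = 0; i < r.size(); ++i)
        {
            const double dr = r[i] - rPrev[i];
            num += rPrev[i]*dr;
            den += dr*dr;
        }
        if (den > 0.0) omega = -omega*num/den;   // Aitken update of omega
    }

    std::vector<double> fNew(fOld.size());
    for (std::size_t i = 0; i < fOld.size(); ++i) fNew[i] = fOld[i] + omega*r[i];

    rPrev = r;
    return fNew;
}
```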
## 4 Notes on OpenFOAM
In the following, we attempt to set out details regarding two essential features OpenFOAM provides _by design_, namely the object registry and runtime selection. Both are crucial to understand at this point, since multiRegionFoam leverages them so as to devise a flexible and modular approach for computational multiphysics problems of multi-region coupling type.
### Run-Time Selection
Typically a domain expert will be concerned with developing models and/or testing different combinations of models. For this purpose, in the strict object-oriented programming paradigm of OpenFOAM, models are implemented as classes answering to the same interface so they are encapsulated and re-usable. Note that the term 'model' is used here in the broadest sense, e.g. for the choice of linear solvers, or the selection of discretisation schemes, etc. Then, compiling such model classes into shared objects (model libraries) has the advantage that the selection of models can be deferred until run-time (compared to compile-time in traditional factory methods). Together with Run-Time Selection (RTS) tables, models can then be selected from dynamically loaded shared libraries. Such an approach provides ultimate extensibility, since new models can be just added to a RTS table at run-time (by loading shared libraries dynamically) and become available to the top-level solvers just like the models of the same kind from the legacy code. The basic idea of a Run-Time Selection Table is to use a combination of static member variables and methods as well as templates to declare a Hash-Table in the base classes which all child classes register to automatically [57]. It leverages the fact that when a new shared library is loaded, all static variables are immediately initialized, which is exploited to call code that inserts the type name into the parent class's table of models. The parent class's side basically manages a dynamic 'v-table' to construction methods of child classes. Traditionally, a static method (called New) looks up the requested model (provided by user input), constructs the object with its concrete type, but returns a pointer to the base type.
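Schematically, the pattern looks as follows for a base class such as regionType; the macros shown are the standard OpenFOAM/foam-extend run-time selection macros, while the constructor signature and the derived class name are illustrative only and need not match the actual multiRegionFoam interfaces.

```
// Base class header: declare the table of construction methods and a selector.
class regionType
{
public:

    //- Runtime type information
    TypeName("regionType");

    //- Hash table of construction methods, keyed by the model's type name
    declareRunTimeSelectionTable
    (
        autoPtr,
        regionType,
        dictionary,
        (const fvMesh& mesh, const word& name),   // illustrative signature
        (mesh, name)
    );

    //- Select and construct a concrete region type by name (user input)
    static autoPtr<regionType> New(const fvMesh& mesh, const word& name);
};

// Base class source file: instantiate the static selection table.
defineTypeNameAndDebug(regionType, 0);
defineRunTimeSelectionTable(regionType, dictionary);

// Derived class source file: register the concrete model (e.g. icoFluid) so
// that it becomes selectable at run-time once its library has been loaded.
defineTypeNameAndDebug(icoFluid, 0);
addToRunTimeSelectionTable(regionType, icoFluid, dictionary);
```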
### Object Registry
In OpenFOAM the object registry can be thought of as a kind of database that stores information about each object registered to it. It provides a way to keep track of all relevant objects created in a simulation, making it easier to access and manipulate them during runtime. An object registry is implemented using a hash table which belongs to C++ data structures that store key-value pairs. In this case, the keys are strings representing the names of the objects, and the values are references to the objects themselves. All classes in OpenFOAM inheriting from objectRegistry represent such an object registry. The most notable ones are the classes Time, providing read access to e.g. mesh objects, and mesh, providing read access to e.g. field objects.
To look up an object in the registry, OpenFOAM uses a technique called string hashing. If the object is found, its reference is returned. If the object is not found, an exception is thrown indicating that the object does not exist in the registry. In essence, the mechanism relies on two ingredients to check for existence of a requested object and return a reference to it (see Listing 1):
* the object's name, which is assumed to be unique in a single database, and
* the object's type, which is passed as a template argument to the lookup member function. Dynamic casts are simply used to check whether the regIOobject of requested name is also of requested type.
```
// Lookup the velocity field (of type volVectorField) from the mesh
const volVectorField& U = mesh.lookupObject<volVectorField>("U");
```
Listing 1: Object lookup example in OpenFOAM
This hierarchy enables flexible access to objects across multiple libraries. These objects are typically declared at the main scope in solver code and persist throughout the solver's execution. The object registration mechanism facilitates obtaining references to these objects from any shared library, provided there is access to the corresponding database (refer to Section 4.1 for the significance of this access level). This mechanism effectively eliminates the requirement to pass lengthy lists into class constructors, which otherwise impairs maintainability, extensibility, and generality.
## 5 Code structure and design
In its essence, multiRegionFoam is a unified framework for multiphysics simulations of region-to-region coupling type. This coverage of distinct region and interfacial physics requires combinatorial flexibility. Therefore, it is designed with attention to the following complementary aspects, in many areas going beyond basic requirements of domain-driven software development in research & development:
* **Usability**. Often domain experts are developing research software using their substantial knowledge on details of the continuum-physical model specific to their own area. To leverage this potential, we have followed the domain-driven software design approach. Utmost attention has been devoted to devise a modular framework for both region- and interface-specific physics which can be developed as entities on their own right. Our aim has indeed been to keep the differences between implementing a module to writing a domain-specific top-level solver in OpenFOAM as low as possible.
* **Understandability**. We have aimed to devise a complete and organized software fabric with a concise, clear as well as descriptive terminology for names of classes, data and functions. This has been motivated by the wish that when presenting multiRegionFoam to an engineer not familiar with the code before, basic functionality and principles of use should be easily comprehended.
* **Generality**. Recognising inherent structural similarities which multiphysics problems of multi-region coupling type have in common, we have devoted significant effort in a general mathematical formulation as foundation of the software design. This has led to a unified framework for multi-region coupling with coverage over a wide range of numerous coupled continuum-physics problems from various distinct fields.
* **Extensibility**. The structure of multiRegionFoam has been purposely designed to allow the flexible and non-intrusive addition of new capabilities or functionality. For instance, it is straightforward to add new modules for region- and interface-specific physics and to complement the set of provided coupling algorithms and coupled boundary conditions if needed.
* **Maintainability**. multiRegionFoam makes comprehensive use of modern C++, such as classes (encapsulation, inheritance and composition), virtual functions (dynamic polymorphism), and operator overloading. We have paid attention to enforcing consistent encapsulation of class families under common interfaces that are kept as small as possible. Classes are minimal in size and as low in complexity as possible, so as to fulfill their (single) task. Moreover, special attention has been paid to avoiding code repetition by means of templating.
* **Robustness**. Substantial efforts have been devoted to providing different coupling strategies. The approach enables deploying either monolithic or partitioned coupling, at the user's choice, for each continuum-physical transport equation. This allows the solution strategy for coupled multiphysics problems of region-to-region coupling type to be devised on an abstract level, keeping numerical robustness (stability and convergence) in mind despite dealing with substantial model complexity.
* **Efficiency**. We ensure that multiRegionFoam can be deployed efficiently on distributed-memory parallel computer architectures on high-performance computing clusters by providing means for straightforward domain decomposition. In particular, the interface-to-interface communication layer is developed such that it can be used in coupled boundary conditions for data transfer both in serial and in parallel runs in a versatile manner, providing multiple mapping strategies.
### Main class structure
We have followed a strictly object-oriented programming paradigm underlying a layered software design. Figure 2 depicts the class structure following the unified modeling language (UML) class diagram convention [58]. The building blocks of the code structure are two fundamental classes: regionType and regionInterfaceType, base classes providing common functionalities that any type of region or interface would require regardless of the simulated physics. Examples of such functionalities for regions include assembling the equations that specify their physical behaviour, correcting the region's material properties and moving its mesh. Similarly for
interfaces, examples are administering the protocols for communication and coupling between regions, allowing for deploying various mapping methods, and implementing generic coupled boundary conditions (Section 5.4). From the aforementioned base classes, a code with modular design is devised which relies on the run-time selection mechanism (Section 4.1) in which the fundamental inheritance in C++ plays a crucial role. This enables the creation of derived classes that inherit all common functionalities and extend them to account for additional physical processes at the respective region or interface by defining specialised fields or equations. The inheritance hierarchy is indicated in Figure 2 by a solid line with a hollow arrowhead pointing from derived to base classes. Currently some specialised region types are implemented such as icoFluid for transient flow of incompressible fluid, conductTemperature and transportTemperature for thermal transport between solid and fluid regions. Also different region interface types are already available, like the capillaryInterface which is a fluid-fluid interface type accounting for surface tension and surfactant transport. Additional region or interface types can be easily added with this design. Moreover, different region types and region interface types can be superimposed. This is made possible by utilising the functionality of the object registry (Section 4.2) and the helper function lookupOrRead in Listing 2, so that all superimposed region types have access to
Figure 2: General structure of the multiRegionFoam framework
the fields present in one region even if they are defined in other types.
```
template<class T>
inline autoPtr<T> lookupOrRead
(
    const fvMesh& mesh,
    const word& fldName,
    const bool read = true,
    const bool write = true,
    const tmp<T> fld = tmp<T>(nullptr)
);
```
Listing 2: Object lookup or read helper function
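As a usage illustration only (the full semantics of the helper are not shown in the excerpt above, so the exact behaviour of the returned pointer is an assumption here), a superimposed region type could obtain a shared temperature field roughly as follows; if the field is already registered by another region type it is looked up, otherwise it is read from disk:

```cpp
// Hypothetical call site (illustrative only): obtain the temperature field,
// whether it was created by another superimposed region type or needs to be
// read from the region's time directory.
autoPtr<volScalarField> Tptr =
    lookupOrRead<volScalarField>(mesh(), "T");

// Assumption: the returned autoPtr provides access to the requested field
const volScalarField& T = Tptr();
```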
The information of the regions and interfaces, such as name, type and the settings for the interfacial coupling algorithm, are specified by the user at run-time via the dictionaries multiRegionProperties and regionInterfaceProperties (see Listings 3 and 4 for a simple conjugate heat transfer (CHT) problem whose setup is also displayed in Figure 3). The regions and interfaces information is read and stored by the regionProperties and regionInterfaceProperties classes which are utilised by the regionTypeList and regionInterfaceTypeList classes to instantiate the regions and the interfaces. The multiRegionSystem class builds on these lists to create the multi-region system of equations. It orchestrates the solution process by either applying monolithic or partitioned approaches and therefore acts as the main class interface to the top-level solver multiRegionFoam.
```
regions
(
    (fluid (icoFluid transportTemperature))
    (solid (conductTemperature))
);

DNA // Dirichlet-Neumann Algorithm controls
{
    T
    {
        maxCoupleIter 20;
        residualControl
        {
            maxJumpRes         1e-07;
            outputJumpResField no;
            maxFluxRes         1e-07;
            outputFluxResField no;
        }
    }
}
```
Listing 3: multiRegionProperties dictionary for a simple CHT problem
```
partitionedCoupledPatches
{
    fluid
    {
        interfaceType heatTransferInterface;

        coupledPatchPair
        (
            (fluid bottom)
            (solid top)
        );

        coupledFields
        (
            T
        );

        defaultInterfaceCoeffs{}
    }
};

monolithicCoupledPatches
();

curvatureCorrectedSurfacePatches 0();
interpolatorUpdateFrequency 1;
interfaceTransferMethod directMap;
directMapCoeffs{}
GGICoeffs{}
```
Listing 4: regionInterfaceProperties dictionary for a simple CHT problem

Figure 3: Example of a heat transfer multi-region system composed of two regions. The physics of the fluid region is described by the superposition of the two regionTypes icoFluid and transportTemperature, while the physics of the solid region is described by the regionType conductTemperature, with a heatTransferInterface coupling both regions
### Solution of the coupled system
The set of equations describing the physical behaviour are defined in the specialisations of the regionType class in the setCoupledEqns function, as illustrated in Listing 5 for the transport temperature equation in a fluid region. An instance of the equation system is stored in a hash pointer table, HashPtrTable. In this case, it is called fvScalarMatrices, which points to a finite volume matrix system of scalar type. Other types include vector, tensor, or symmetric tensor matrices, as well as block coupled types fvBlockMatrix, as indicated in Listing 6. A unique name identifier is used as a key for the hash table, where the same name pattern is used among all types of regions for straightforward access to all coupled equations later when the system is set up and solved.
```
void Foam::regionTypes::transportTemperature::setCoupledEqns()
{
    // - Create temperature equation system
    TEqn =
    (
        rho_*cp_
       *(
            fvm::ddt(T_())
          + fvm::div(phi_(), T_())
        )
     ==
        fvm::laplacian(kappa_(), T_())
    );

    // - Store equation system in appropriate HashPtrTable
    fvScalarMatrices.set
    (
        T_().name()
      + mesh().name() + "Mesh"
      + transportTemperature::typeName + "Type"
      + "Eqn",
       &TEqn()
    );
}
```
Listing 5: setCoupledEqns of the transportTemperature region type
```
// - coupled governing equations
HashPtrTable<fvMatrix<scalar>> fvScalarMatrices;
HashPtrTable<fvMatrix<vector>> fvVectorMatrices;
HashPtrTable<fvMatrix<symmTensor>> fvSymmTensorMatrices;
HashPtrTable<fvMatrix<tensor>> fvTensorMatrices;
HashPtrTable<fvBlockMatrix<vector>> fvVectorMatrices;
```
Listing 6: coupled governing equations
The multi-region system comprising all such physics-specific equations, defined in different specialisations of region types, is assembled and solved in the multiRegionSystem class via the solve function shown in Listing 7. The auto pointers interfaces_ and regions_ which point to regionInterfaceTypeList and regionTypeList classes (cf. Figure 2) provide access to the list of all regions and their interfaces, including the functions defined in their respective specialised classes. For example, using interfaces_->detach(), the detach boundary mesh
modifier is applied to the interfaces in preparation for partitioned coupling, while regions_->solveRegion() checks for possible individual physics solution requirements in each region. Another function, solvePIMPLE (see Section 5.3), is dedicated to solving pressure-velocity systems, if any, using the PIMPLE algorithm. A distinction of cases is made based on whether the pressure and velocity fields are coupled across the interface or not. For example, solving the pressure-velocity system in the fluid region in the case of heat transfer between fluid and solid, versus the case of pressure-velocity interfacial coupling between two fluids as in a rising bubble scenario. Moreover, the coupling of the pressure and the velocity fields is kept separate from other fields in order to allow for mesh motion using the Arbitrary Lagrangian-Eulerian (ALE) interface tracking method. For other fields, the system of equations is created and solved by the assembleAndSolveEqns function for partitioned coupling and by assembleAndSolveCoupledMatrix for monolithic coupling. Each of these functions loops over the list of the partitioned or monolithic coupled fields from the lists partitionedCoupledFldNames or monolithicCoupledFldNames, respectively. These are hashedWordLists reading coupled fields as specified in the regionInterfaceProperties dictionary (Listing 4). Note that when the partitioned Dirichlet-Neumann Algorithm (DNA) (Section 3) is selected, a convergence criterion is added. It is based on the interface residuals (see Section 4.4.2 in [50]) which are computed in the generic coupled boundary condition classes (Section 5.4). dnaControl has access to these coupled boundary conditions and residuals for all interfaces. The DNA controlled fields and the termination criteria are specified by the user in the multiRegionProperties dictionary, as illustrated for the temperature field T in Listing 3 under the DNA entry.
```
void Foam::multiRegionSystem::solve()
{
    // - Detach boundary mesh modifier
    interfaces_->detach();

    // - Solve individual region physics
    regions_->solveRegion();

    // - Check if at least one region implements PIMPLE loop
    //   and solve pressure-velocity system if so
    if (regions_->usesPIMPLE())
    {
        // PIMPLE p-U-coupling
        regions_->solvePIMPLE();
    }

    // - Solve region-region coupling (partitioned)
    forAll (partitionedCoupledFldNames_, fldI)
    {
        // - Get name of field which is
        //   currently partitioned coupled
        word fldName = partitionedCoupledFldNames_[fldI];

        // - Solve pressure-velocity system using PIMPLE
        if (fldName == "p0Pimple")
        {
            while (dnaControls_[fldName]->loop())
            {
                // PIMPLE p-U-coupling
                regions_->solvePIMPLE();

                // ALE mesh motion corrector
                regions_->meshMotionCorrector();

                // Update interface inherent physics
                interfaces_->update();
            }
        }
        else
        {
            // - Solve other partitioned coupled fields
            while (dnaControls_[fldName]->loop())
            {
                assembleAndSolveEqns<fvMatrix, scalar>(fldName);

                // fields of higher tensor rank
                // ...
            }
        }
    }

    // - Attach boundary mesh modifier
    interfaces_->attach();

    // - Solve region-region coupling (monolithic)
    forAll (monolithicCoupledFldNames_, fldI)
    {
        // - Get name of field which is
        //   currently monolithic coupled
        word fldName = monolithicCoupledFldNames_[fldI];

        assembleAndSolveCoupledMatrix<fvMatrix, scalar>
        (
            monolithicCoupledScalarFlds_, fldName
        );

        // fields of higher tensor rank
        // ...
    }
}
```
Listing 7: solve function of the multiRegionSystem class
The assembly and solution of the system of coupled equations in partitioned mode is shown in Listing 8 which represents the assembleAndSolveEqns function of the multiRegionSystem class. The function iterates through all regions and obtains a non-constant reference, called rg, to the current region. It also constructs a unique name matrixSystemName for the equation system for the coupled field in this region. This name matches the pattern of the key assigned for the hash pointer table defined within the setCoupledEqns function in the respective specialised region type as mentioned earlier (see Listing 5). The coupled equation matrix defined in the current region is retrieved from the hashed table using the getCoupledEqn function which is a member of the regionType class. It is defined using a template, which allows it to be specialised for different types of matrices where the type is deduced from the input matrixSystemName. The use of templatisation is particularly beneficial here to avoid writing multiple identical classes differing only in type. The system of equations is then assembled and solved with the option to perform individual post-solve actions that are implemented in each region.
```
template<template<class> class M, class T>
void Foam::multiRegionSystem::assembleAndSolveEqns
(
    word fldName
) const
{
    forAll (regions_(), regI)
    {
        // - Getting non-const access to current region
        regionType& rg = const_cast<regionType&>(regions_()[regI]);

        // - Unique name of equation system
        word matrixSystemName =
        (
            fldName
          + rg.mesh().name() + "Mesh"
          + rg.regionTypeName() + "Type"
          + "Eqn"
        );

        // - Sanity checks
        // ...

        // - Get equation from region
        M<T>& eqn = rg.getCoupledEqn<M, T>(matrixSystemName);

        // - Relax equation
        rg.relaxEqn<T>(eqn);

        // - Solve equation
        eqn.solve();

        // - Post solve actions
        rg.postSolve();
    }
}
```
Listing 8: assembleAndSolveEqns function of the multiRegionSystem class
For monolithic coupling, the procedure starts with attaching the meshes at the interfaces between adjacent regions using the attach boundary mesh modifier interfaces_->attach() function, as indicated previously in Listing 7. Then, as for partitioned coupling, the assembleAndSolveCoupledMatrix function creates and solves the coupled system. However, it is now represented as a block coupled finite volume matrix using the coupledFvMatrix approach [59], as illustrated in Listing 9. This is a block matrix system that is initialised with the number of monolithic coupled regions. The equations to be loaded into the coupledFvMatrix are obtained from the respective regions using the getCoupledEqn function analogously to the partitioned approach, by iterating over the regions and collecting equations by their unique name. Unlike partitioned coupling, the retrieved coupled equations are not directly solved but instead inserted into the block matrix system. They are first appended to the dynamic list of equations eqns which holds pointers to the equation matrices for all regions. The new operator dynamically allocates memory for each instance of a coupled equation appended to the list. Finally, the equations are solved in the block matrix system simultaneously in an implicit manner using the solver specified by the solution dictionary of the first region's mesh for the given coupled field name.
```
template<template<class> class M, class T>
void Foam::multiRegionSystem::assembleAndSolveCoupledMatrix
(
    PtrList<GeometricField<T, fvPatchField, volMesh>>& flds,
    word fldName
) const
{
    // - Get number of monolithic coupled regions per field name
    //   and return if there are none
    // ...

    // - Initialise block matrix system
    //   with number of monolithic coupled regions
    coupledFvMatrix<T> coupledEqns(nEqns);

    // - Assemble all matrices one-by-one and combine them into the
    //   block matrix system
    label nReg = 0;
    DynamicList<M<T>*> eqns;
    forAll (regions_(), regI)
    {
        // - Getting non-const access to current region
        regionType& rg = const_cast<regionType&>(regions_()[regI]);

        // - Unique name of equation system
        word matrixSystemName =
        (
            fldName
          + rg.mesh().name() + "Mesh"
          + rg.regionTypeName() + "Type"
          + "Eqn"
        );

        // - Get equation from region
        M<T>& eqn = rg.getCoupledEqn<M, T>(matrixSystemName);

        // - Insert matrix into block matrix system
        eqns.append(new M<T>(eqn));
        coupledEqns.set(nReg, eqns[nReg]);
        nReg++;
    }

    // - Solve block matrix system
    coupledEqns.solve
    (
        regions_()[0].mesh().solutionDict().solver(fldName + "coupled")
    );

    // - Post solve actions for monolithic coupled fields
    // ...
}
```
Listing 9: assembleAndSolveCoupledMatrix of multiRegionSystem
### Pressure-velocity coupling
The pressure-velocity coupling is solved in a semi-implicit manner using the PIMPLE algorithm which is a combination of the well known Pressure-Implicit with Splitting of Operators (PISO) algorithm [60] and a more consistent version of the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE) algorithm [61]. This involves a predictor-corrector procedure in which a tentative solution for the velocity is obtained in the predictor step while the pressure is updated in the corrector step. The standard implementations of these algorithms in OpenFOAM are refactored and integrated into the class structure of multiRegionFoam (see Figure 2). The code is implemented in the icoFluid region type class which needs to be specified by the user in the multiRegionProperties dictionary (Listing 3). The code block for each of the steps of the PIMPLE algorithm are defined in the momentumPredictor and the pressureCorrector functions. These are called in the regionTypeList class by utilising the concept of operator overloading in C++, as illustrated in Listing 10. The solvePIMPLE() function represents the skeleton of the PIMPLE algorithm which outlines the outer and the inner loops and the execution across regions. In order to ensure the consistency of the pressure-velocity coupling, each step of the solution procedure is performed for all regions before the next step is carried out.
```
void Foam::regionTypeList::solvePIMPLE()
{
    // - Get the number of outer correctors that should be performed
    //   during the PIMPLE procedure (We don't have a top-level
    //   mesh. Construct fvSolution for the runTime instead.)
    fvSolution solutionDict(runTime_);
    const dictionary& pimple = solutionDict.subDict("PIMPLE");
    int nOuterCorr(readInt(pimple.lookup("nOuterCorrectors")));
    // ...
```
Listing 10: solvePIMPLE function of the regionTypeList class (excerpt)
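The listing is truncated in the extracted text. Based on the description above (each PIMPLE step being performed for all regions before the next step is carried out, using the momentumPredictor and pressureCorrector functions), the outer loop might continue along the lines of the following hedged sketch; it is not the verbatim implementation, and the loop bounds reuse the nOuterCorr value read above.

```cpp
// Hedged sketch (not the verbatim implementation): outer PIMPLE loop calling
// each step for all regions before proceeding to the next step.
for (int oCorr = 0; oCorr < nOuterCorr; oCorr++)
{
    // Momentum predictor for all regions
    forAll (*this, regionI)
    {
        (*this)[regionI].momentumPredictor();
    }

    // Pressure corrector for all regions
    forAll (*this, regionI)
    {
        (*this)[regionI].pressureCorrector();
    }
}
```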
### Generic boundary conditions for partitioned coupling
As described in Section 3, the solution of the multi-region system using a partitioned approach requires specifying boundary conditions on the connecting boundaries of the sub-regions, where a Dirichlet condition is applied on one side while a Neumann condition is applied on the other one. For this purpose, a set of generic boundary conditions is devised, exploiting the fact that these conditions have a general mathematical formulation (Section 2). Figure 4 shows how the implementation is integrated into the structure of multiRegionFoam.
The generic boundary conditions are derived from the standard OpenFOAM Dirichlet and Neumann boundary conditions, namely fixedValueFvPatchField and fixedGradientFvPatchField. This results in two classes
Figure 4: General structure of the generic coupled boundary condition
genericRegionCoupledValueFvPatchField and genericRegionCoupledFluxFvPatchField, which implement the interfacial jump and transmission conditions, respectively. Utilising templates, these conditions are made agnostic to the actual physics that the jumps and fluxes originate from and rather receive this information from the relevant specialisation of regionInterfaceTypes. The derived coupled boundary conditions also inherit from the interfaceToInterfaceCoupleManager class, which gives access to the neighbour region data, such as the mesh and patch names, and has access to the regionInterfaceType on which the coupled boundary condition is applied. For their use in the partitioned interface coupling solution approach they are also able to utilise different convergence acceleration methods including, among others, the Aitken relaxation method [54] and the Interface Quasi-Newton Inverse Least-Squares method (IQN-ILS) [62].
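For reference, the Aitken relaxation used to accelerate the partitioned coupling is commonly written in the following standard form (generic notation as in [54], not taken verbatim from the multiRegionFoam sources): with the interface residual \(\mathbf{r}^{(k)}\) between the value returned by the neighbouring region and the currently applied interface value \(\mathbf{x}^{(k)}\), the relaxation factor and the interface update read

\[\omega^{(k+1)}=-\,\omega^{(k)}\,\frac{\big(\mathbf{r}^{(k)}\big)^{T}\big(\mathbf{r}^{(k+1)}-\mathbf{r}^{(k)}\big)}{\big\|\mathbf{r}^{(k+1)}-\mathbf{r}^{(k)}\big\|^{2}},\qquad\mathbf{x}^{(k+1)}=\mathbf{x}^{(k)}+\omega^{(k+1)}\,\mathbf{r}^{(k+1)}.\]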
### Parallelisation
In OpenFOAM, parallelisation is generally implemented through the domain decomposition method. This typically requires dividing the solution domain into sub-domains so that each of them can be solved on a separate CPU core. The standard Message Passing Interface (MPI) [63] is then utilised to establish communication between the processors. Although the proposed multi-region framework already requires subdivision of the computational mesh into multiple domains, these sub-domains are not convenient for efficient parallel computing, especially because a unified framework is sought. For example, if partitioned coupling is selected, then only one region will be solved at a time, or if the regions vary significantly in size in some scenarios, this will lead to an unbalanced distribution of load over the processors. Instead, all regions are decomposed into the same number of sub-domains, but not necessarily using the same decomposition method. Figure 5 depicts an example of two interface-coupled regions representing a gas bubble and a liquid, where both phases are decomposed into two sub-domains, but one is decomposed vertically while the other is decomposed horizontally. This causes the two patches that form the two sides of the interface to not overlap entirely (cf. blue lines in Figure 5), which makes field mapping between the two sides impossible. To overcome this situation, the _global face zone_ approach, introduced by Cardiff et al. [31] and Tukovic et al. [64], is used. Each processor is given access to the entire interface patch via the so-called globalPolyPatch (red and orange lines in Figure 5), which represents the union of the non-overlapping boundaries, holding copies of their data and allowing for mapping the fields from one global patch to the other. These global patches also facilitate the implementation of the coupled boundary conditions. To adopt this concept in multiRegionFoam, the globalPolyPatch is incorporated into the regionInterfaceType class, i.e. it includes a pair of the local interface patches as well as a pair of global patches.
## 6 Automated test harness
In order to validate and evaluate different aspects of multiRegionFoam, an automated test framework is deployed. The main goal is to automatically run a suite of test cases and report on the results. This includes testing whether each simulation was successfully executed and completed as well as checking the obtained results against some criteria, such as a reference solution or an error threshold. The testing framework is also useful for parameter studies, continuous integration, and testing of further developments or new features. The
Figure 5: Local (blue) and global (red, orange) poly patches of a gas and a liquid region decomposition
tests are performed using Python utilising the oftest library [65] which is a test framework for OpenFOAM cases available as open source under the GPLv3 License. It makes use of the pytest library such that, by adding the script file test_*.py to each test case, pytest automatically discovers all tests in the folder. This script file includes all the instructions to initiate the simulation and store the results for each test case. The run_reset_case module from oftest library is responsible for running and cleaning the case according to the standard OpenFOAM scripts Allrun and Allclean. Listing 11 shows the test_completed function which adds the case to the pool of tests, returns the status of completion (passed/failed) in the terminal, and leaves the simulation log file in the specified path.
```
def test_completed(run_reset_case):
    log = oftest.path_log()
    assert oftest.case_status(log) == 'completed'
```
Listing 11: Test case completion check using oftest and pytest libraries
For parameter studies and/or testing different scripts for the same case, the oftest.Case_modifiers module is used, allowing for the manipulation of the case files and dictionaries. For example, Listing 12 shows part of the script file for a suite of tests for a conjugate heat transfer scenario where different values of parameters are simulated, including the Reynolds number, Re, the Prandtl number, Pr, and the thermal conductivity ratio, k = \(k_{s}/k_{f}\) (see Section 7.1 for the flow over heated plate case description). Additionally, the test includes different coupling algorithms, monolithic ("Allrun -m") or partitioned ("Allrun -p"), where the latter could be performed with different types of acceleration accType ("aitken"/"IQN-ILS") or without acceleration ("fixed"). In order to execute the tests, the @pytest.mark.parametrize decorator is used to specify the test parameters and their corresponding values, as shown in Listing 13, which is included in the same script file.
```
def paramSet(Pr, Re, k, relaxType, relaxValue, script):
    L = 1.0
    rhod = 1.0
    ks = 100.0
    Uinf = 1.0
    mu = rhod*Uinf*L/int(Re)
    kf = ks/k
    cp = kf*Pr/mu
    dir_name = os.path.dirname(os.path.abspath(__file__))
    filemod = {
        "constant/fluid/transportProperties":
        [
            ('mu', 'mu [1 -1 -1 0 0 0 0] %s' % mu),
            ('cp', 'cp [0 2 -2 -1 0 0 0] %s' % cp),
            ('k', 'k [1 1 -3 -1 0 0 0] %s' % kf),
        ],
        "0/fluid/orig/monolithic/k":
        [
            ('internalField', 'uniform %s' % kf),
        ],
        "0/fluid/orig/partitioned/T":
        [
            ('boundaryField/interface/accType', relaxType),
            ('boundaryField/interface/relax', relaxValue),
        ]
    }
    meta_data = {"Pr": Pr, "Re": Re, "k": k, "relaxType": relaxType,
                 "relaxValue": relaxValue, "script": script}
    case_mod = oftest.Case_modifiers(filemod, dir_name, meta_data)
    return case_mod
```
Listing 12: paramSet case modifier function for the flow over heated plate test suite
During the test execution, pytest will run the test_flowOverHeatedPlate function multiple times, once for each set of input values specified in the decorator's parameters list. Before cleaning the current case and proceeding to the next one, the data of interest from the current case, such as solution fields at final time, outcomes of OpenFOAM post-processing utilities, or computed errors are stored and written into a file format supported by Python as demonstrated in Listing 13. The results of all tests could be written into a single data frame file that can be further analyzed using standard Python libraries for data management and visualisation.
```
parameters = [paramSet(0.01, 500, 1, "fixed", 1, "Allrun -p"),
              paramSet(0.01, 500, 1, "aitken", 0.75, "Allrun -p"),
              paramSet(0.01, 500, 1, "monolithic", 0, "Allrun -m"),
              paramSet(100, 500, 1, "fixed", 1, "Allrun -p"),
              ...]
labels = ["Pr001Re500k1Fixed1", "Pr001Re500k1Aitken075",
          "Pr001Re500k1Monolithic", "Pr100Re500k1Fixed1", ...]


@pytest.mark.parametrize(
    "run_reset_case", parameters,
    indirect=["run_reset_case"], ids=labels)
def test_flowOverHeatedPlate(run_reset_case, load_results):
    # Access the current case parameters from the meta_data dictionary
    current_case = run_reset_case.meta_data
    # Append the error retrieved by the load_results function
    current_case["error"] = max(load_results["error"])
    # Append current case to the list of cases
    all_cases.append(current_case)
    # Write results into data frame in csv format
    all_cases_df = pd.DataFrame(all_cases)
    all_cases_df.to_csv(os.path.join(dir_name, "all_cases.csv"), index=False)
    # Report on test completion
    log = oftest.path_log()
    assert oftest.case_status(log) == 'completed'
```
Listing 13: Parametrised test execution and result collection for the flow over heated plate case
## 7 Examples of Usage
### Forced convection heat transfer from a flat plate
In this case, an incompressible laminar flow over a flat plate is considered, following the description from Vynnycky et al. [66]. Their numerical results as well as their derived reference solution, using boundary-layer theory, are used for validation. A schematic sketch of the case setup is shown in Figure 6. A fluid of uniform temperature \(T_{\infty}\) and velocity \(U_{\infty}\) flows over a plate of finite thickness that is held at constant temperature \(T_{s}>T_{\infty}\).
The computational mesh is depicted in Figure 7. It covers the fluid and solid regions, both consisting of hexahedral elements. In order to capture the thermal and viscous boundary layers in the fluid, mesh grading is used to obtain a finer mesh at the fluid-solid interface and the leading edge of the plate. The two regions are coupled at the fluid-solid interface with a heatTransferInterface regionInterfaceType which consists of the meshes boundary patches, namely the "bottom" patch from the fluid and the "top" patch from the solid. Table 3 shows the thermal coupled boundary conditions used in multiRegionFoam. A Dirichlet condition is applied at the fluid side of the fluid-solid interface while a Neumann condition is specified at the solid side. The generic jump and flux boundary conditions (Section 5.4) are used for partitioned coupling while the region couple patch field chtRcTemperature [67] is used for monolithic coupling. The rest of the boundary conditions are summarized in Table 2.
The authors of [66] investigated several factors affecting heat transfer, including the aspect ratio of the plate \(\lambda=a/L\), the Reynolds number, Re, the Prandtl number, Pr, and the thermal conductivity ratio, k = \(k_{s}/k_{f}\), between the plate and the fluid. In this study, the aspect ratio is fixed at \(\lambda=0.25\), while different combinations of the other parameters are considered as specified in Table 4; the thermophysical properties of the fluid and solid are given in Table 5. The simulations are run for 10 s using a time step size of \(\Delta t=0.01\) s. The numerical schemes used are listed in Table 6 according to OpenFOAM equivalent terms [68].
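As an illustration of how these dimensionless parameters translate into the material properties of Table 5, consider the parameter set Re = 500, Pr = 0.01, k = 1, with \(\rho_{f}=1\,\mathrm{kg/m^{3}}\), \(k_{s}=100\,\mathrm{W/(m\,K)}\) and \(U_{\infty}=1\,\mathrm{m/s}\), \(L=1\,\mathrm{m}\) as set in the accompanying test scripts (Listing 12):

\[\mu=\frac{\rho_{f}U_{\infty}L}{\mathrm{Re}}=\frac{1\cdot 1\cdot 1}{500}=2\times 10^{-3}\,\mathrm{kg/(m\,s)},\qquad k_{f}=\frac{k_{s}}{\mathrm{k}}=100\,\mathrm{W/(m\,K)},\qquad c_{p}=\frac{k_{f}\,\mathrm{Pr}}{\mu}=\frac{100\cdot 0.01}{2\times 10^{-3}}=500\,\mathrm{J/(kg\,K)}.\]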
The results are validated by computing the dimensionless conjugate boundary temperature, \(\theta\), defined as
\[\theta=\frac{T-T_{\infty}}{T_{s}-T_{\infty}},\]
where \(T\) is the temperature along the fluid-solid interface. Figure 8 summarizes the results of multiRegionFoam using monolithic and partitioned coupling. The latter was performed with Aitken's relaxation procedure to accelerate the convergence. Both approaches result in good agreement with the numerical and analytical results from Vynnycky et al. [66]. The deviations from the reference solutions in Figure 8c are due to the fact that the solutions for the case \(Pr\gg 1\) were derived under the constant-flux approximation assumption,
Figure 6: Computational domain and boundary conditions for the flow over a heated plate
Figure 7: Meshes for the flow over heated plate simulation
Table 6: Numerical schemes for the flow over heated plate simulation
\begin{table}
\begin{tabular}{l c c} \hline \hline Boundary & Thermal & Velocity \\ \hline
**Fluid** & & \\ \hline inlet & 300 K & (1 0 0)\({}^{\dagger}\) m/s \\ bottom & coupled & (0 0 0)\({}^{\dagger}\) m/s \\ slip-bottom (before the plate) & zeroGradient & zeroGradient \\ noSlip-bottom (after the plate) & zeroGradient & (0 0 0)\({}^{\dagger}\) m/s \\ outlet, top & zeroGradient & zeroGradient \\ \hline
**Solid** & & \\ \hline top & coupled & - \\ bottom & 310 K & - \\ left, right & zeroGradient & - \\ \hline \end{tabular}
\end{table}
Table 2: Boundary conditions for the flow over a heated plate
\begin{table}
\begin{tabular}{l c c} \hline \hline Region & Boundary & Partitioned & Monolithic \\ \hline Fluid & bottom & regionCoupledScalarJump & chtRcTemperature \\ Solid & top & regionCoupledScalarFlux & chtRcTemperature \\ \hline \hline \end{tabular}
\end{table}
Table 3: Coupled thermal boundary conditions for the flow over heated plate simulation
\begin{table}
\begin{tabular}{c c c} \hline \hline Re & Pr & k \\ \hline
500 & 0.01 & 1, 5, 20 \\
10000 & 0.01 & 1, 5, 20 \\
500 & 100 & 1, 5, 20 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Parameters for the flow over heated plate simulation
\begin{table}
\begin{tabular}{l c c c} \hline \hline Property & Symbol & Unit & Solid & Fluid \\ \hline Density & \(\rho\) & kg/m\({}^{3}\) & 1 & 1 \\ Dynamic viscosity & \(\mu\) & kg/ms & - & \(\rho_{f}U_{\infty}L/\) Re \\ Thermal conductivity & \(k\) & W/m\({}_{\cdot}\)K & 100 & \(k_{s}/\) k \\ Specific heat capacity & \(c_{p}\) & \({}^{3}\)/kg\({}_{\cdot}\)K & 100 & \(k_{f}\Pr/\mu\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Thermophysical properties of the fluid and solid for the flow over heated plate simulation
which does not provide an accurate description of the flow as \(k\) increases. Figure 8d reports the average time spent on the coupling using partitioned and monolithic approaches. In comparison to partitioned coupling, monolithic coupling exhibits either the same or reduced average coupling time across all cases. However, a notable distinction is observed when Pr is equal to 100. Notably, for monolithic coupling, the time remains almost constant regardless of the simulated parameters.
Figure 8: Simulation results for different \(Pr\), \(Re\), and \(k\) values
### Shell-and-tube heat exchanger
The next example demonstrates an industrial application where conjugate heat transfer takes place. Figure 9 shows a shell-and-tube heat exchanger. This particular design includes a shell, tubes, and baffles. Heat transfer occurs between two fluids; an inner fluid, flowing at lower temperature inside the tubes, and an outer fluid, flowing within the shell, but outside the tubes. The solid walls of the tubes ensure that the two fluids do not mix while the baffles help directing the flow on the shell side.
The case setup is based on a case prepared by SimScale GmbH that is publicly available at [69]. The computational mesh is shown in Figure 10 and the boundary conditions are summarized in Table 7. The coupled boundary conditions are as in the previous case in Table 3. The material properties of the solid and fluid regions are shown in Table 8 where the inner and the outer fluids both have the same properties.
field values using the partitioned approach are not remarkably different, but it takes longer to reach a steady state (around t = 500 s).
### Polymer electrolyte fuel cell
The possibility of simulating complex multi-physical systems within the multiRegionFoam framework will be demonstrated in the following example, considering a single polymer electrolyte fuel cell (PEMFC) channel. The general structure and the typical physical components of a PEMFC are depicted in the left, gray scaled image of Figure 12.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Property & Symbol & Unit & Fluid & Solid \\ \hline Density & \(\rho\) & \(\nicefrac{{\mathrm{kg}}}{{\mathrm{m}^{3}}}\) & 1027 & 8960 \\ Thermal conductivity & \(k\) & \(\nicefrac{{\mathrm{W}}}{{\mathrm{m}\cdot\mathrm{K}}}\) & 0.668 & 401 \\ Dynamic viscosity & \(\mu\) & \(\nicefrac{{\mathrm{kg}}}{{\mathrm{ms}}}\) & 3.645e\({}^{-4}\) & - \\ Specific heat capacity & \(c_{p}\) & \(\nicefrac{{\mathrm{J}}}{{\mathrm{kg}\cdot\mathrm{K}}}\) & 4195 & 385 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Thermophysical properties of the fluid and solid for the heat exchanger case
Figure 11: Final temperature distribution for the heat exchanger simulation
\begin{table}
\begin{tabular}{l l l} \hline \hline & Scheme & Setting \\ \hline Time Scheme & ddtScheme & steadyState \\ \hline Finite Volume Schemes & gradScheme & Gauss linear \\ & gradScheme grad(U) & cellLimitted Gauss linear 1 \\ & divScheme div(phi,U) & Gauss upwind \\ & divScheme div(phi,T) & Gauss upwind \\ & lapacianScheme & Gauss linear corrected \\ & interpolationScheme & linear \\ & snGradScheme & corrected \\ \hline \hline \end{tabular}
\end{table}
Table 9: Numerical schemes for the heat exchanger
Beginning with the outer components, the cell consists of two end plates, or interconnects, to which the external electrical circuit is connected. The reactants are conveyed through the air and fuel channels and are transported via porous gas diffusion layers (GDLs) to the catalyst layers (CLs), which are represented here as infinitely thin surfaces. Besides the gas transport to the catalyst layers, the electrically conductive GDLs also enable the electrical contact between the aforementioned and the end plates (interconnects). The CL is located between the ionically conductive membrane and the GDL. On the cathode (CL at the air side) oxygen is reduced and water is created, whereas on the anode side (CL at the fuel side) hydrogen is oxidized and protons are conducted through the ionically conductive membrane to the opposite CL. In addition, Figure 12 shows the general structure of the modeling and computational domains. The design of the single-phase PEMFC implemented here in multiRegionFoam is based on developments that can be traced back to [70] and to [71], and in this particular case has been extended especially towards the possibility of interface coupling. Thus, the subdomains defined as different regionTypes are solid, fluid and electric/ionic conductive regions. In the fluid (air and fuel) regions, both the modified Navier-Stokes equations, which take into account the porous layers via Darcy's law, and the species mass fraction equations are solved. The gas mixtures are assumed to be incompressible, ideal gases whose diffusion is characterized by Fick's law, taking also into account the impact of the porous layers. On the cathode side, the inflowing gas mixture is composed of oxygen, nitrogen and water vapor, whereas on the anode side, hydrogen and water vapor flow in. In total, the constraint that the sum of all mass fractions has to equal one must be fulfilled in each cell. Humidification of the gas mixtures is important to ensure the ionic conductivity of the membrane, which is strongly dependent on its water content. Within the electric/ionic conducting regions (phiEC, phiEA and phiI), the electric potential equations are solved using the partitioned coupling approach. Here, the open cell voltage is described by the Nernst equation and the activation overpotential resulting from the reaction is expressed by the Butler-Volmer equation. Additional contact resistances or limiting current densities are not considered in this particular case. The cell operates in galvanostatic mode, i.e., a constant mean current density is applied to it. Therefore, the potential field specified at the upper surface of phiEC (interconnectUp) is adjusted every iteration step until the specified current density is reached. In Table 10 this is indicated by the entry adjustedPotential for the boundary interconnectUp. The calculation of the temperature within each component of the PEM fuel cell takes place on the global region by mapping the required quantities such as densities and specific heat capacities onto this mesh. A further adaptation of the source code, whereby the energy equations on the individual regions are solved using the coupled boundary conditions as shown in the previous sections, is also possible with the framework presented here. With such an approach and the assumption of a single-phase flow, the global region would be redundant. Figure 13 shows the dimensions of the single channel and selected boundary conditions taken from Table 10.
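For orientation, these two relations are typically written in their standard textbook forms given below; the exact parameterisation used in the implementation (exchange current density, transfer coefficients, reference pressures) is not shown in the extracted text and is therefore left symbolic:

\[E_{\mathrm{OCV}}=E^{0}+\frac{RT}{2F}\,\ln\!\left(\frac{p_{\mathrm{H_{2}}}\,p_{\mathrm{O_{2}}}^{1/2}}{p_{\mathrm{H_{2}O}}}\right),\qquad i=i_{0}\left[\exp\!\left(\frac{\alpha_{a}F\eta}{RT}\right)-\exp\!\left(-\frac{\alpha_{c}F\eta}{RT}\right)\right],\]

with open cell voltage \(E_{\mathrm{OCV}}\), activation overpotential \(\eta\), exchange current density \(i_{0}\), anodic/cathodic transfer coefficients \(\alpha_{a}\), \(\alpha_{c}\), Faraday constant \(F\) and partial pressures \(p\).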
Figure 12: General modeling approach of the PEM fuel cell
The cell under consideration consists of a single channel with a length of 20 mm, where the fluid channels have an area of 1 \(\times\) 1 mm. The simulation itself is steady state and the velocity used for the boundary condition at the fluid inlets is given by a constant stoichiometry of \(\lambda=3\) calculated via Faraday's law. The temperature of the incoming gases is set to \(T=90\,^{\circ}\)C and a mean current density of \(i=8000\,\)A/m\({}^{2}\) is applied. Figure 14 shows the corresponding velocity and species mass fraction contours at different positions along the channel inside the anodic and cathodic fluid regions.
\begin{table}
\begin{tabular}{l c c c c} \hline Boundary & Thermal & Velocity & Species mass fraction & Potential \\ \hline
**air** & & & & \\ \hline airInlet & - & (0.655 0 0) \({}^{1}\) m/s & Y\({}_{\text{O2}}\) = 0.156, Y\({}_{\text{H2O}}\) = 0.35 & - \\ airOutlet & - & zeroGradient & zeroGradient & - \\ air\_to\_interconnect & - & noSlip & zeroGradient & - \\ air\_to\_electrolyte & - & noSlip & zeroGradient & - \\ airSides & - & noSlip & zeroGradient & - \\ \hline
**fuel** & & & & \\ \hline fuelInlet & - & (0.161 0 0) \({}^{1}\) m/s & Y\({}_{\text{H2}}\) = 0.65, Y\({}_{\text{H2O}}\) = 0.35 & - \\ fuelOutlet & - & zeroGradient & zeroGradient & - \\ fuel\_to\_interconnect & - & noSlip & zeroGradient & - \\ fuel\_to\_electrolyte & - & noSlip & zeroGradient & - \\ fuelSlides & - & noSlip & zeroGradient & - \\ \hline
**phil** & & & & \\ \hline electrolyteSides & - & - & - & zeroGradient \\ electrolyte\_to\_air & - & - & - & coupled flux \\ electrolyte\_to\_fuel & - & - & - & coupled jump \\ \hline
**phiEA** & & & & \\ \hline anodeSides & - & - & - & zeroGradient \\ interconnectDown & - & - & - & 0 V \\ phiEA\_to\_electrolyte & - & - & - & coupled flux \\ \hline
**phiEC** & & & & \\ \hline cathodeSides & - & - & - & zeroGradient \\ interconnectUp & - & - & - & adjustedPotential \\ phiEC\_to\_electrolyte & - & - & - & coupled jump \\ \hline
**Global region** & & & & \\ \hline airInlet & 363.15 K & - & - & - \\ airOutlet & zeroGradient & - & - & - \\ fuelInlet & 363.15 K & - & - & - \\ fuelOutlet & zeroGradient & - & - & - \\ interconnectSides & zeroGradient & - & - & - \\ interconnectUp & 363.15 K & - & - & - \\ interconnectDown & 363.15 K & - & - & - \\ \hline \end{tabular}
\end{table}
Table 10: Boundary conditions for the PEM fuel cell
Figure 13: Mesh and simulation domain
At the air side the velocity increases due to the combination of the reaction, the decreased density of the gas mixture, dragged water and the development of the velocity profile, since a constant velocity is specified at the inlet. For the fuel side, the velocity is decreasing, mainly due to the dragged water and the consumption of hydrogen. This corresponds to the mass fraction contours, where the oxygen and hydrogen mass fractions decrease in flow direction. The main focus within this example is the surface coupling of the potential. The reactions take place on the infinitesimally thin CL, and at this interface the coupled jump/flux boundary conditions for the potential are prescribed (see Table 10). Figure 15a shows the potential variation in the middle of the cell going from the upper surface of the GDL at the cathode to the lower surface of the GDL at the anode. Due to the high electric conductivity of the porous media, the potential drop there is comparatively low. At the CLs however, the resulting overpotentials arising from the reactions induce a jump in the potential. Another noticeable potential drop can be found in the membrane because of the low ionic conductivity of the membrane. In addition to this, Figure 15b shows the polarization curve as a result of conducting the simulation for different current densities. The polarization curve of a PEM fuel cell exhibits a characteristic pattern, featuring a non-linear region at low current densities, which is primarily influenced by the activation overpotential, and a subsequent linear ohmic region, predominantly governed by the internal resistances within the cell.
Figure 14: Velocity and species mass fraction distribution along the channels for \(i=8000\,\nicefrac{\text{A}}{\text{m}^{2}}\)
### Air bubble rising in water
The last two cases demonstrate the usage of multiRegionFoam to implement a moving-mesh ALE interface tracking method, originally developed by Hirt et al. [72] and first implemented in OpenFOAM by Tukovic and Jasak [73]. A single air bubble rising in still pure water is considered, following the setup of Duineveld [74] and using his experimental data for validation. The bubble assumes an initial spherical shape with radius \(r_{b}\) and accelerates from zero velocity at release to its terminal rise velocity. As shown in Figure 16, the computational domain has two regions comprising the bubble and the outer medium which is represented by a sphere of radius \(20r_{b}\). The bubble mesh is bounded by the "interfaceShadow" patch which coincides with the "interface" patch from the liquid side forming the bubble-liquid interface where the coupled boundary conditions are imposed according to Table 11. The meshes consist of polyhedral cells for the bubble and prismatic cells with a polyhedral base for the water, as shown in Figure 17.
Figure 16: Sketch of the computational domain for the rising bubble simulation
Figure 15: Potential distribution along the middle of the cell (a) and the polarization curve (b)
The study considered bubbles with equivalent initial radii of \(r_{b}=0.5\), \(0.6\), \(0.7\), \(0.8\), and \(0.9\,\mathrm{mm}\). The physical properties of the bubble and the surrounding liquid are reported in Table 12. The simulations run with time step size of \(\Delta t=1e^{-5}\,\mathrm{s}\) until the terminal rise velocities are observed. The numerical schemes used are listed in Table 6.
The results are compared with the experimental data obtained by Duineveld [74], as well as the simulation results from Tukovic and Jasak [73]. Figure 18 depicts the rise velocity of the bubble with radius \(r_{b}=0.5\,\mathrm{mm}\) over time until the expected terminal rise velocity value is attained. Figure 19 shows the final shape of the bubble at time \(0.1\,\mathrm{s}\) color-coded by the magnitude of velocity and pressure fields along with the velocity
\begin{table}
\begin{tabular}{l l l} \hline Property & FluidB (Air) & FluidA (Water) \\ \hline Density & \(1.205\,\mathrm{kg/m^{3}}\) & \(998.3\,\mathrm{kg/m^{3}}\) \\ Dynamic viscosity & \(1.82e^{-5\,\mathrm{kg/ms}}\) & \(1e^{-3\,\mathrm{kg/ms}}\) \\ Surface tension coefficient & \(0.0727\,\mathrm{N/m}\) & \\ \hline \end{tabular}
\end{table}
Table 12: Physical properties for the rising bubble simulation
\begin{table}
\begin{tabular}{l c c} \hline Boundary & Pressure & Velocity \\ \hline
**FluidA** & & \\ \hline interface & genericInterfaceCoupledPressureValue & genericInterfaceCoupledVelocityFlux \\ space & zeroGradient & inletOutlet \\ \hline
**FluidB** & & \\ \hline interfaceShadow & genericInterfaceCoupledPressureFlux & genericInterfaceCoupledVelocityValue \\ \hline \end{tabular}
\end{table}
Table 11: Boundary conditions for the rising bubble simulation
Figure 17: Mesh for the rising bubble simulation
streamlines of the flow surrounding the bubble and inside it. Figure 20 displays the terminal rise velocity for larger bubbles. The high level of agreement between the simulation results and the available data from the literature indicates that the present work has successfully addressed any potential robustness issues related to significant mesh deformation. Figure 21 depicts the terminal state of the rising bubble with \(r_{b}=0.9\,\mathrm{mm}\).
Figure 19: Velocity and pressure fields visualisation at time \(0.1\,\mathrm{s}\)
Figure 18: The rise velocity over time for bubble with radius \(r_{b}=0.5\,\mathrm{mm}\) compared with the experiment results
### Air bubble oscillating in water
Another typical benchmark problem for interface tracking codes involves an air bubble oscillating in a liquid due to interfacial tension in the absence of gravitational forces. The bubble is initially slightly deformed from the shape of a sphere with radius \(R\). This results in an initial prolate shape with a semi-major axis \(R+a_{\rm o}\) as shown in Figure 22. The boundary conditions, the mesh, and the numerical schemes are the same as in the rising bubble case (Section 7.4). The simulation parameters are summarized in Table 14 as suggested in [75]. The shape oscillation of the bubble is realised by tracing the temporal evolution of the semi-major axis as illustrated in Figure 23. The results exhibit high agreement with the analytical decay profile which is given for small linear oscillations by \(a_{\rm o}e^{-t/\tau}\), where \(\tau^{-1}\) is the decay factor defined in [76].
Figure 21: Velocity field visualisation for the rising bubble with \(r_{\rm b}=0.9\)
Figure 20: Comparison of the terminal rise velocity for varying equivalent bubble radii
## 8 Summary and Outlook
The present contribution sets out multiRegionFoam, a unified framework for multiphysics problems of multi-region coupling type within OpenFOAM (FOAM-extend). multiRegionFoam offers a flexible design that allows for the assembly of multiphysics problems region-by-region and the specification of coupling conditions interface-by-interface. For this, the framework enables formulating region-specific physics in the form of sets of partial differential equations in a modular fashion and incorporates mathematical jump/transmission conditions in their most general form, accommodating tensors of any rank. This advancement allows for a unified treatment of coupled transport processes across regions. Moreover, users have the freedom to choose between monolithic and partitioned coupling for each coupled transport equation separately. To address fluid flow problems, the code implements various pressure-velocity algorithms, including SIMPLE, PISO, and PIMPLE, incorporating loops of predictor and corrector steps across regions. The framework's maturity and versatility are demonstrated through its successful deployment in various multi-region coupling cases, including multiphase flows, conjugate heat transfer, and fuel cells. The authors anticipate that the creation and release of multiRegionFoam will empower numerous domain experts and developers to derive significant advantages from this framework. Specifically, they hope that this contribution will facilitate thorough investigations in diverse domains of multiphysics problems and across various application areas. This, in turn, is expected to foster synergistic collaborations among previously separate disciplines, enabling mutual benefits to be realized.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(\rho_{{}_{A}}\) & \(\rho_{{}_{B}}\) & \(\mu_{{}_{A}}\) & \(\mu_{{}_{B}}\) & \(\sigma\) & \(R\) & \(a_{{}_{0}}\) \\ \hline \(1000\,\mathrm{k}\mathrm{s}\mathrm{/}\mathrm{m}^{3}\) & \(1.226\,\mathrm{k}\mathrm{s}\mathrm{/}\mathrm{m}^{3}\) & \(1.13e^{-3}\,\mathrm{k}\mathrm{s}\mathrm{/}\mathrm{m}\mathrm{s}\) & \(1.78e^{-5}\,\mathrm{k}\mathrm{s}\mathrm{/}\mathrm{m}\mathrm{s}\) & \(0.0727\,\mathrm{N}\mathrm{/}\mathrm{m}\) & \(1\,\mathrm{m}\) & \(0.05\,\mathrm{m}\) \\ \hline \hline \end{tabular}
\end{table}
Table 14: Simulation parameters for the oscillating bubble simulation
Figure 23: Time evolution of the oscillating bubble compared to the analytical decay profile
Figure 22: Sketch of the computational domain for the oscillating bubble simulation
## Acknowledgment
The authors H.A. M.E.F. and H.M. are grateful for the funding by the Hessian Ministry of Higher Education, Research, Science and the Arts, and the National High Performance Computing Center for Computational Engineering Science (NHR4CES). S.H. is funded by the AI Data Analytics and Scalable Simulations (AIDAS) project, the financial support of which is highly appreciated. Computations for this work were partly conducted on the Lichtenberg II high performance computer of the Technical University of Darmstadt.
|
2310.12700 | Towards Parsimonious Generative Modeling of RNA Families | Generative probabilistic models emerge as a new paradigm in data-driven,
evolution-informed design of biomolecular sequences. This paper introduces a
novel approach, called Edge Activation Direct Coupling Analysis (eaDCA),
tailored to the characteristics of RNA sequences, with a strong emphasis on
simplicity, efficiency, and interpretability. eaDCA explicitly constructs
sparse coevolutionary models for RNA families, achieving performance levels
comparable to more complex methods while utilizing a significantly lower number
of parameters. Our approach demonstrates efficiency in generating artificial
RNA sequences that closely resemble their natural counterparts in both
statistical analyses and SHAPE-MaP experiments, and in predicting the effect of
mutations. Notably, eaDCA provides a unique feature: estimating the number of
potential functional sequences within a given RNA family. For example, in the
case of cyclic di-AMP riboswitches (RF00379), our analysis suggests the
existence of approximately $\mathbf{10^{39}}$ functional nucleotide sequences.
While huge compared to the known $< \mathbf{4,000}$ natural sequences, this
number represents only a tiny fraction of the vast pool of nearly
$\mathbf{10^{82}}$ possible nucleotide sequences of the same length (136
nucleotides). These results underscore the promise of sparse and interpretable
generative models, such as eaDCA, in enhancing our understanding of the
expansive RNA sequence space. | Francesco Calvanese, Camille N. Lambert, Philippe Nghe, Francesco Zamponi, Martin Weigt | 2023-10-19T12:53:32Z | http://arxiv.org/abs/2310.12700v1 | # Towards Parsimonious Generative Modeling of RNA Families
###### Abstract
Generative probabilistic models emerge as a new paradigm in data-driven, evolution-informed design of biomolecular sequences. This paper introduces a novel approach, called Edge Activation Direct Coupling Analysis (eaDCA), tailored to the characteristics of RNA sequences, with a strong emphasis on simplicity, efficiency, and interpretability. eaDCA explicitly constructs sparse coevolutionary models for RNA families, achieving performance levels comparable to more complex methods while utilizing a significantly lower number of parameters. Our approach demonstrates efficiency in generating artificial RNA sequences that closely resemble their natural counterparts in both statistical analyses and SHAPE-MaP experiments, and in predicting the effect of mutations. Notably, eaDCA provides a unique feature: estimating the number of potential functional sequences within a given RNA family. For example, in the case of cyclic di-AMP riboswitches (RF00379), our analysis suggests the existence of approximately \(10^{39}\) functional nucleotide sequences. While huge compared to the known \(<4,000\) natural sequences, this number represents only a tiny fraction of the vast pool of nearly \(10^{82}\) possible nucleotide sequences of the same length (136 nucleotides). These results underscore the promise of sparse and interpretable generative models, such as eaDCA, in enhancing our understanding of the expansive RNA sequence space.
## 1 Introduction
RNA molecules play a critical role in many biological processes, including gene expression and regulation. They carry a multitude of functions, such as encoding and transferring genetic information, regulating gene expression, and catalyzing chemical reactions [1; 2; 3].
Functional RNA molecules are expected to be extremely rare in the exponentially vast nucleotide-sequence space, and current databases contain only a tiny fraction of the overall possible, functionally viable sequence diversity. However, it is worth noting that almost identical biological functions can be carried out by different RNA exhibiting substantial sequence variability. Databases like Rfam [4] gather these in diverse yet functionally consistent families of homologous RNA sequences. In computational sequence biology, a significant challenge lies in harnessing the relatively limited pool of existing RNA sequences within a family, often comprising just a few hundred or thousand examples. The objective is to decipher the sequence patterns that underpin the three-dimensional structure and biological functions of these RNA families. This endeavor extends beyond the known sequences, aiming to explore the vast potential space of sequences capable of adopting similar structures and functions. Such analyses provide valuable insights into the complex organization of sequence space and, ultimately, unravel the intricate sequence-to-function relationship. This quest has gained paramount significance, especially in the era of high-throughput sequencing, solidifying its status as one of biology's central and most challenging questions. Generative probabilistic models offer a powerful approach to tackling these challenges by extrapolating beyond the limited pool of known RNA molecules and generating previously unseen functional sequences. When applied to RNA families, these models build a probability distribution, denoted as \(P(a_{1},\ldots,a_{L})\)[5; 6; 7; 8; 9]. This distribution encapsulates the variability found in the known sequences within the family while encompassing all possible sequences of length \(L\) (for a more precise definition, see _Material and Methods_). To provide an intuitive analogy, think of this probability distribution as defining a "landscape" across sequence space. Through maximum-likelihood learning, generative models assign high probabilities to sequences considered functional, akin to the peaks in this landscape. Conversely, non-functional sequences receive lower probabilities. This probability distribution also enables the prediction of mutational effects [10; 11] since mutations can alter the sequence probabilities relative to the wild-type. Additionally, these models allow for generating novel synthetic sequences [12; 9] through a sampling process (as illustrated in Fig. 1). A well-constructed model \(P\) should possess the ability to generate nucleotide sequences that are diverse but statistically indistinguishable from the known sequences in the family. Constructing generative models is an exceptionally complex undertaking due to the sheer volume of probabilities they must assign, all while learning from a relatively small pool of existing molecules. As an example, consider an RNA molecule with an aligned length of \(L=150\) residues, i.e. the sequence may contain both nucleotides and gaps. The model must learn approximately \(5^{L}\simeq 10^{105}\) distinct values, even though typical RNA families may consist of only \(10^{2}-10^{4}\)
known sequences. The lack of abundant RNA data makes it hard for complex models like deep architectures to work well, as seen in other tasks like RNA structure prediction [13]. This suggests that simpler, less complicated models may be more suited to tackle RNA. One of the prominent generative models employed in biomolecular research is the Boltzmann Machine Direct Coupling Analysis (bmDCA) [14; 15]. The core idea behind this model lies in the notion that RNA residues of significant functional importance experience evolutionary pressures that deter deleterious mutations. Consequently, these residues tend to remain conserved across the Multiple Sequence Alignment (MSA) collecting homologous sequences. Conversely, pairs of nucleotides that exhibit co-evolutionary patterns over time display correlated mutations. To capture both types of constraints, bmDCA adjusts its probability distribution to mirror the one-site and two-site frequencies observed in the MSA, which serve as proxies for conservation and co-evolution, respectively, cf. reviews in [16; 17]. In this context, one-site frequencies, denoted as \(f_{i}(a)\), describe how often a nucleotide \(a\in A,U,C,G,-\) (with '\(-\)' representing alignment gaps) appears at a specific site \(i\in 1,\ldots,L\) within the MSA. Meanwhile, two-site frequencies, denoted as \(f_{ij}(a,b)\), provide information about the joint occurrence of nucleotide pairs \((a,b)\) at positions \((i,j)\) within the same sequence. The probability distribution used in bmDCA takes the form of a fully connected Potts/Markov Random Fields model, which captures the interplay of these frequencies,
\[P(a_{1},\ldots,a_{L})=\frac{1}{Z}\exp\left\{\sum_{i=1}^{L}h_{i}(a_{i})+\sum_{i <j}J_{ij}(a_{i},a_{j})\right\}\;, \tag{1}\]
with \(Z\) being the partition function that guarantees normalization. The \(h_{i}(a)\) (\(a\in\{A,U,C,G-\}\)) are the local 'fields' used to fit the one-site statistics. The \(J_{ij}(a,b)\) matrices (with \((a,b)\in\{A,U,C,G-\}^{2}\)) are \(5\times 5\) interaction 'couplings' used to fit the two-site statistics. Although DCA has proven itself as a valuable instrument in investigating proteins, exhibiting achievements in tasks like generating functional sequences [12], forecasting the effect of mutations [10; 11], deciphering protein evolution [18; 19], and identifying structural interactions [20; 21], its application to RNA remains relatively unexplored [5; 6; 7; 8]. Furthermore, the limited availability of RNA data, compared to the wealth of data for proteins, makes the use of intricate models like large language models [22] impractical. Consequently, employing simpler models for RNA is not only suitable but also presents the benefits of enhanced interpretability, reduced computational burden, and local trainability. Nonetheless, conventional bmDCA generates a fully connected coupling network (as seen in Eq. 1): it models co-evolution between all conceivable pairs of residues, even when there is no actual coevolution occurring. As a consequence, this approach can yield a substantial number of noisy couplings \(J_{ij}(a,b)\) in the network that lack any statistical support. To mitigate this issue, network sparsification can be applied to trim down the network by eliminating numerous spurious couplings. This process aids in identifying the most informative and functionally significant couplings, rendering the network more accessible for interpretation and analysis. Previous endeavors in this direction have primarily concentrated on sparsifying coupling networks within proteins [23]. In our work, we introduce a novel approach called Edge Activation Direct Coupling Analysis (eaDCA) specifically tailored for RNA. Unlike previous algorithms, eaDCA takes a unique starting point: an empty coupling network. It then systematically constructs a non-trivial network from scratch, rather than starting with a fully connected network and subsequently simplifying it. This step-by-step process generates a series of models, gradually increasing in complexity until they achieve a statistical performance comparable to that of bmDCA. Our method offers notable advantages. It operates more swiftly than initiating with a fully connected model, resulting in generative Potts models that demand significantly fewer parameters than standard bmDCA. Furthermore, at each stage of our approach, we employ analytical likelihood maximization. This feature allows us to easily track normalized sequence probabilities and estimate entropies throughout the network-building process. This invaluable information enhances our ability to interpret and analyze the vast space of RNA sequences. The organization of the manuscript is as follows. In 'Materials and Methods', we present the foundational
Figure 1: Probabilistic generative models extract a probability distribution for the RNA family from natural data, which can then be used to generate artificial sequences. Each of these artificially generated sequences is consistent with the statistics of the RNA family, yet they cannot be attributed to any natural variant, thereby introducing an element of novelty.
principles and functionality of the model, describe the data used in the model training and analysis, and provide specific information about the SHAPE-MaP experiments conducted to examine artificial molecules. In 'Results and Discussion', we evaluate the statistical properties of the artificial sequences generated by eaDCA, interpret the parameters of the sparse architectures, and examine the model's predictions regarding mutational effects on tRNA. Additionally, using eaDCA to access normalized sequence probabilities and model entropies, we conduct an analysis on how different constraints, such as compatibility with secondary structures or family conservation and coevolution statistics, affect the size of the viable RNA sequence space. Lastly, we assess the SHAPE-MaP experimental results, characterizing the structure of artificially generated tRNA molecules.
## 2 Materials And Methods
In this section we discuss the data and methodological basis of our work: the data used for training and evaluating our models, the new algorithm proposed here, and the experimental protocol to test artificial sequences generated by our approach.
### **Data**
#### 2.1.1 RNA families -
All generative models discussed here are trained for individual RNA families, i.e. homologous but diverged sequences of largely conserved structure and function [4]. Each family is represented by a Multiple Sequence Alignment (MSA) \(\mathcal{D}=(a_{i}^{r}\,|\,i=1,\ldots,L;\,r=1,\ldots,M)\), with \(L\) indicating the aligned sequence length, and \(M\) the number of distinct sequences. The entries \(a_{i}^{r}\) are either one of the four nucleotides \(\{A,C,G,U\}\), or the alignment gap "\(-\)" reflecting insertions and deletions in the original unaligned sequences. Following standards in the literature, phylogenetic effects are partially compensated by reweighting each sequence by a factor \(\omega_{r}\)[17], which equals the inverse number of all sequences having more than 80% sequence identity to sequence \(r\), and which is used when estimating the empirical single-site nucleotide frequencies \(f_{i}(a)\) and pair frequencies \(f_{ij}(a,b)\) from the data \(\mathcal{D}\), cf. the supplementary information (SI) for details. The sum of weights \(M_{\text{eff}}=\sum_{r}\omega_{r}\) defines the effective sequence number as a more accurate reflection of the diversity of the dataset.
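To make the reweighting and frequency estimates concrete, the following NumPy sketch (an illustration, not the authors' released FCSeqTools.jl implementation) computes the 80%-identity sequence weights \(\omega_{r}\) and the weighted one- and two-site frequencies \(f_{i}(a)\) and \(f_{ij}(a,b)\). The function names, the integer encoding of \(\{A,C,G,U,-\}\) and the pseudocount value are our illustrative choices.

```python
import numpy as np

Q = 5  # alphabet size: A, C, G, U and the gap symbol, encoded as integers 0..4

def sequence_weights(msa, identity=0.8):
    """msa: (M, L) integer-encoded alignment. omega_r = 1 / #sequences with
    >= 80% identity to r. O(M^2 L) memory/time, suitable for small illustrative MSAs."""
    sim = (msa[:, None, :] == msa[None, :, :]).mean(axis=2)  # pairwise identity
    return 1.0 / (sim >= identity).sum(axis=1)

def site_frequencies(msa, w, pseudocount=1e-3):
    """Weighted single-site f_i(a), shape (L, Q), and pair f_ij(a,b), shape (L, Q, L, Q)."""
    M, L = msa.shape
    one_hot = np.zeros((M, L, Q))
    one_hot[np.arange(M)[:, None], np.arange(L)[None, :], msa] = 1.0
    fi = np.einsum("r,rla->la", w, one_hot) / w.sum()
    fij = np.einsum("r,rla,rmb->lamb", w, one_hot, one_hot, optimize=True) / w.sum()
    fi = (1 - pseudocount) * fi + pseudocount / Q        # regularize, avoid zeros
    fij = (1 - pseudocount) * fij + pseudocount / Q**2
    return fi, fij
```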
eaDCA is tested on 25 RNA families of known tertiary structure with \(L\) ranging from 50 to 350 and \(M\) from 30 to 50000. These families are extracted from the CoCoNet benchmark dataset [24] by limiting ourselves to datasets with high \(M_{\text{eff}}\) and sequence length \(L<350\). The MSA were updated using a more recent Rfam release (May 2022), and matched to exemplary PDB structures. A comprehensive list of family names, characteristics, and used PDBs is given in Table 1 of the SI.
The main text concentrates on two families: the tRNA family (RF00005) was selected due to the existence of mutational datasets, and our own experiments were performed on this family. Due to its unusually large size, the MSA was randomly downsampled to \(M=30,000\) sequences. The cyclic di-AMP riboswitches (RF00379) were chosen due to their interesting and non-trivial statistical properties. The robustness of all results is illustrated in the SI, where the other 23 families are exhaustively analyzed.
#### 2.1.2 Mutational Fitness Dataset -
To evaluate our ability to predict mutational effects in RNA molecules, we utilized the data published in [25]. This dataset provides _in vivo_ fitness measurements for \(23,283\) variants of the yeast arginine tRNA (\(\text{tRNA}^{\text{Arg}}_{\text{CCU}}\)) at temperatures of \(23^{\circ}C\), \(30^{\circ}C\), and \(37^{\circ}C\), with up to 10 mutations compared to the wildtype. These mutations may result in non-functional sequence variants, in contrast to the natural sequences in the RNA families. We focus on the results at \(37^{\circ}C\) because, at higher temperature, \(\text{tRNA}^{\text{Arg}}_{\text{CCU}}\) becomes increasingly important for the survival of the organism. The details of the datasets and the results for \(23^{\circ}C\) and \(30^{\circ}C\) are provided in the SI. Fitness values are organized such that 0.5 represents a mutant yeast strain incapable of reproduction, while 1.0 is the wildtype fitness.
#### 2.1.3 SHAPE Reference Dataset -
In order to empirically validate our generative models, we conducted Selective 2'-Hydroxyl Acylation analyzed by Primer Extension with Mutational Profiling (SHAPE-MaP) experiments on artificially generated tRNA molecules, cf. below. To ensure the robustness of our analysis and to facilitate a meaningful comparison, we utilized an external published dataset comprising SHAPE reactivity profiles for \(20\) RNA sequences with known secondary structure. This dataset, which we will refer to as the 'SHAPE Reference Dataset', was obtained from [26].
### **Edge Activation Direct Coupling Analysis (eaDCA)**
#### 2.2.1 Algorithm Principle -
The proposed algorithm belongs to the family of DCA algorithms, i.e. it learns a Potts model in the form of Eq. 1 from an MSA \(\mathcal{D}\). However, instead of introducing couplings \(J_{ij}(a,b)\) for all pairs of nucleotide positions \(1\leq i<j\leq L\), we aim at a parsimonious model and activate couplings only between those pairs that are actually coevolving and thus essential for an accurate statistical description of the sequence family. All other pairs, which do not have clear signatures of direct coevolution, are not included in the set of coevolutionary couplings, to avoid noise overfitting [23].
Since the empirical pair frequencies \(f_{ij}(a,b)\) are shaped both by direct coupling and indirect correlation, the set of coupled pairs, \(\mathcal{E}=\{(ij)\mid J_{ij}\text{ is non-zero}\}\), cannot be fixed in a single step, but has to be constructed recursively, as is shown schematically in Fig. 2 and detailed below: starting from a profile model of independent nucleotides, \(\mathcal{E}_{0}=\emptyset\), we construct a series of edge sets \(\mathcal{E}_{t}\), by activating or updating edges one by one. In this setting, "activating" an edge signifies to introduce a non-zero coupling for a previously uncoupled pair \((ij)\), while "updating" indicates a change of the coupling value on an already activated edge. As a consequence, at any algorithmic step \(t\), the model
can be written as
\[P_{t}(a_{1},\ldots,a_{L}) = \frac{1}{Z_{t}}\exp\left\{-E_{t}(a_{1},\ldots,a_{L})\right\}\] \[E_{t}(a_{1},\ldots,a_{L}) = -\sum_{i=1}^{L}h_{i}(a_{i})-\sum_{(ij)\in\mathcal{E}_{t}}J_{ij}(a_{i},a_{j})\;, \tag{2}\]
with \(E_{t}\) being called "statistical energy". The log-likelihood of the model given the reweighted data \(\mathcal{D}\) reads
\[\mathcal{L}_{t}=\sum_{r=1}^{M}\omega_{r}\log P_{t}(a_{1}^{r},\ldots,a_{L}^{r})\;. \tag{3}\]
#### 2.2.2 Initialization -
As already mentioned, the model is initialized without couplings, \(\mathcal{E}_{0}=\emptyset\), and reads
\[P_{0}(a_{1},\ldots,a_{L})=\frac{1}{Z_{0}}\exp\left\{\sum_{i=1}^{L}h_{i}(a_{i})\right\}\;. \tag{4}\]
The log-likelihood \(\mathcal{L}_{0}\) is easily maximized by setting
\[h_{i}(a)=\log f_{i}(a) \tag{5}\]
for all \(i=1,...,L\) and \(a\in\{A,C,G,U,-\}\), i.e. the model reproduces the empirical single-residue statistics. The resulting partition function is \(Z_{0}=1\). This simple model is known under the name of profile model (or independent-site model) and widely used in bioinformatic sequence analysis.
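As a minimal sketch of this initialization (again illustrative Python rather than the authors' Julia code), the profile model of Eqs. 4-5 only needs the regularized single-site frequencies, and sampling from it factorizes over sites:

```python
import numpy as np

def init_profile_model(fi):
    """Eq. 5: h_i(a) = log f_i(a); no couplings, so E_0 = {} and Z_0 = 1."""
    return np.log(fi), {}

def sample_profile(fi, n, seed=0):
    """Draw n i.i.d. sequences from the independent-site (profile) model P_0."""
    rng = np.random.default_rng(seed)
    L, Q = fi.shape
    cum = np.cumsum(fi, axis=1)
    u = rng.random((n, L, 1))
    # inverse-CDF categorical sampling per site; clip guards against float round-off
    return np.minimum((u > cum[None, :, :]).sum(axis=2), Q - 1)
```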
#### 2.2.3 Recursion -
The algorithmic step from \(t\) to \(t+1\) is characterized by a modification of a single \(5\times 5\) coupling matrix \(J_{kl}\) on a single position pair (\(kl\)),
\[E_{t+1}(a_{1},...,a_{L}) = E_{t}(a_{1},...,a_{L})-\Delta J_{kl}^{*}(a_{k},a_{l})\;,\] \[\mathcal{E}_{t+1} = \mathcal{E}_{t}\cup\{(kl)\}\;. \tag{6}\]
If (\(kl\)) was not yet active in \(\mathcal{E}_{t}\), this corresponds to an edge activation, otherwise to an edge update.
The edge (\(kl\)) and the coupling change \(\Delta J_{kl}^{*}(a_{k},a_{l})\) are chosen to maximize the log-likelihood \(\mathcal{L}_{t+1}\). As is proven in the SI, this is realized by choosing the pair
\[(kl)=\operatorname*{argmax}_{1\leq m<n\leq L}\,D_{KL}\left(f_{mn}\,\|\,P_{mn}^ {t}\right)\;, \tag{7}\]
i.e. the currently least accurate position pair, in which the current model's marginal two-residue distribution \(P_{mn}^{t}\) deviates most from the empirical target distribution \(f_{mn}\). \(D_{KL}\) denotes the standard Kullback-Leibler divergence,
\[D_{KL}\left(f\,\|\,P\right)=\sum_{a,b}\,f(a,b)\,\log\frac{f(a,b)}{P(a,b)}\;, \tag{8}\]
for any pair of probability distributions \(f\) and \(P\). Note that the selection goes over all position pairs \(m,n\), independently of their activation status in \(\mathcal{E}_{t}\). Note also that the exact determination of the marginal distributions \(P_{mn}^{t}(a,b)\) is infeasible, since it would require summing over all \(5^{L}\) possible sequences of aligned length \(L\). We therefore use Markov chain Monte Carlo (MCMC) sampling; the exact procedure, based on persistent contrastive divergence, is detailed in the SI.
The optimal coupling change is also derived in the SI, it equals the log-ratio of the empirical and the current model probabilities on edge (\(kl\)),
\[\Delta J_{kl}^{*}(a,b)=\log\frac{f_{kl}(a,b)}{P_{kl}^{t}(a,b)}\;. \tag{9}\]
To avoid excessively high values for rare nucleotide combinations, this term is regularized using pseudocounts for both the empirical frequencies and the model probabilities, cf. the SI for details.
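A compact sketch of one recursion step is given below (illustrative only; `fij` and `pij` are the empirical and model pair marginals in the array layout of the earlier frequency sketch, and in the actual algorithm `pij` is estimated by MCMC):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence between two (Q, Q) pair distributions (Eq. 8)."""
    return float(np.sum(p * np.log(p / q)))

def eadca_step(fij, pij, J):
    """Select the pair (k,l) with maximal D_KL(f_kl || P^t_kl) (Eq. 7) and add
    Delta J_kl = log(f_kl / P^t_kl) (Eq. 9) to its coupling block."""
    L = fij.shape[0]
    pairs = [(k, l) for k in range(L) for l in range(k + 1, L)]
    k, l = max(pairs, key=lambda p: kl(fij[p[0], :, p[1], :], pij[p[0], :, p[1], :]))
    delta = np.log(fij[k, :, l, :] / pij[k, :, l, :])
    J[(k, l)] = J.get((k, l), np.zeros_like(delta)) + delta  # activate or update edge
    return J, (k, l)
```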
#### 2.2.4 Termination -
As the process continues, the resulting models become increasingly accurate and complex. It can be observed from Eq. (S14) that a fixed point is attained when all the two-point probabilities are equal to their respective empirical frequencies. This corresponds exactly to the fixed-point condition imposed in bmDCA. Because this condition is impossible to achieve in practice due to MCMC sampling noise, we set an _ad hoc_ stopping criterion by looking at how well the empirical two-site covariances
\[c_{ij}(a,b)=f_{ij}(a,b)-f_{i}(a)f_{j}(b) \tag{10}\]
are reproduced by the connected correlations in the model,
\[c_{ij}^{t}(a,b)=P_{ij}^{t}(a,b)-P_{i}^{t}(a)P_{j}^{t}(b)\;. \tag{11}\]
The algorithm terminates at step \(t_{f}\) when the Pearson correlation \(\rho\) between these two quantities, evaluated over all positions \(i\), \(j\) (including those not in \(\mathcal{E}_{t_{f}}\)) and all nucleotides \(a,b\) (including gaps), reaches 0.95. This value is commonly reached in bmDCA as well. The reason for computing the score based on the \(c_{ij}(a,b)\) instead of the \(f_{ij}(a,b)\) is that the former isolates coevolution statistics from the conservation ones.
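The stopping score can be written in a few lines; the following sketch (array shapes as in the earlier sketches, our naming) evaluates the Pearson correlation between empirical and model connected covariances over all pairs and symbols:

```python
import numpy as np

def connected_covariances(fij, fi):
    """c_ij(a,b) = f_ij(a,b) - f_i(a) f_j(b), shape (L, Q, L, Q)  (Eqs. 10-11)."""
    return fij - np.einsum("ia,jb->iajb", fi, fi)

def pearson_score(fij, fi, pij, pi):
    """Pearson rho between data and model connected covariances, pairs i < j only."""
    i, j = np.triu_indices(fi.shape[0], k=1)
    c_data = connected_covariances(fij, fi)[i, :, j, :].ravel()
    c_model = connected_covariances(pij, pi)[i, :, j, :].ravel()
    return float(np.corrcoef(c_data, c_model)[0, 1])

# eaDCA stops once pearson_score(...) >= 0.95
```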
Figure 2: Schematic representation of the recursive eaDCA algorithm.
The entire procedure is summarized as a pseudocode in Alg. 1, and represented graphically in Fig. 2.
```
Initialization:
    Profile model: \(P_{0}(a_{1},\ldots,a_{L})=\prod_{i=1}^{L}f_{i}(a_{i})\)
    Iteration counter: \(t\gets 0\)
Recursion:
    while \(\rho(c_{ij}^{t},c_{ij})<0.95\) do
        Estimate two-point probabilities \(P_{ij}^{t}(a,b)\) for all \(i,j\) and all \(a,b\) via MCMC sampling
        Identify the worst represented edge \((kl)\) according to Eq. (S13)
        Update the coupling on the identified edge using Eq. (S14) to get the new model \(P_{t+1}(a_{1},\ldots,a_{L})\)
        Add the identified edge \((kl)\) to \(\mathcal{E}_{t}\) to get \(\mathcal{E}_{t+1}\)
        Increment iteration counter: \(t\gets t+1\)
    end while
Termination at \(t_{f}=t\):
    Output \(P_{t_{f}}(a_{1},\ldots,a_{L})\) with 95% reproduced pair correlations
```
**Algorithm 1** (eaDCA)
#### 2.2.5 Normalized Sequence Probabilities and Model Entropy -
Probabilistic generative models typically do not provide normalized sequence probabilities but only relative sequence weights. This limitation arises because obtaining normalized probabilities would necessitate summing over the entire \(5^{L}\) sequence space to get the partition function \(Z\) given by Eq. (1),
\[Z=\sum_{a_{1},\ldots,a_{L}}\exp\left\{\sum_{i=1}^{L}h_{i}(a_{i})+\sum_{(ij)\in E }J_{ij}(a_{i},a_{j})\right\}\;, \tag{12}\]
which is infeasible for any biologically relevant value of \(L\). Relative weights are sufficient for MCMC sampling of artificial sequences, but they are meaningful just within the context of a specific model and cannot be compared across distinct models.
The advantage of eaDCA is that the recursion preserves the model's partition function \(Z\), as shown in the SI. Since \(P_{0}\) is trivially normalized, we have
\[Z_{0}=1\;\;\;\text{and}\;\;\;\;Z_{t+1}=Z_{t}\;, \tag{13}\]
i.e. the models remain trivially normalized under recursion:
\[P_{t}(a_{1},\ldots,a_{L})=\exp\left\{-E_{t}(a_{1},\ldots,a_{L})\right\}\;. \tag{14}\]
A nice consequence of this property is that we have easy access to the model's entropy \(S_{t}\)
\[S_{t} = -\langle\log P_{t}(a_{1},\ldots,a_{L})\rangle_{P_{t}} \tag{15}\] \[= \langle E_{t}(a_{1},\ldots,a_{L})\rangle_{P_{t}} \tag{16}\]
via the average statistical energy, which can be accurately estimated from an MCMC sample. From the entropy \(S_{t}\) we can deduce the size of the viable sequence space,
\[\Omega_{t}=\exp\left\{S_{t}\right\}\;, \tag{17}\]
which can be thought of as the effective number of different sequences that we can sample from \(P_{t}(a_{1},\ldots,a_{L})\).
In practice, because we depend on stochastic MCMC techniques for estimating the two-site probabilities \(P_{ij}(a,b)\) in eaDCA iterations, \(Z\) is only approximately conserved. However, it is straightforward to accurately monitor and account for these errors; see the SI for details.
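Because \(Z_{t}=1\) along the recursion, entropy and sequence-space size follow directly from an MCMC sample of statistical energies. The sketch below (illustrative; `h` is the field matrix and `J` the dictionary of active coupling blocks used in the earlier sketches) implements Eqs. 15-17:

```python
import numpy as np

def energy(seq, h, J):
    """E(a_1..a_L) = -sum_i h_i(a_i) - sum_{(ij) in E} J_ij(a_i, a_j)  (Eq. 2)."""
    e = -h[np.arange(len(seq)), seq].sum()
    for (i, j), Jij in J.items():
        e -= Jij[seq[i], seq[j]]
    return e

def entropy_and_size(sample, h, J):
    """S_t = <E_t>_{P_t} when Z_t = 1 (Eq. 16); Omega_t = exp(S_t) (Eq. 17).
    `sample` is an MCMC sample of integer-encoded sequences drawn from P_t."""
    S = float(np.mean([energy(s, h, J) for s in sample]))
    return S, np.exp(S)
```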
### **SHAPE-MaP probing of artificial tRNA molecules**
To conduct an empirical evaluation of our eaDCA-derived model, we performed a SHAPE-MaP analysis on a set of 76 artificially generated tRNA molecules (RF00005 family). Here we summarize the experimental protocols, full details are provided in the SI.
#### 2.3.1 RNA production -
We designed a total of 76 tRNAs. Each RNA was synthesized with the T7 promoter positioned at its 5' end, and the last 16 nucleotides were kept constant for all constructs, matching those of the yeast tRNA(asp). The DNA templates (gBlock or oligoPools from Integrated DNA Technologies) were amplified by PCR using the Phusion Hot Start Flex polymerase (New England Biolabs). After purification, the DNAs were transcribed via in vitro transcription using the HiScribe T7 High Yield RNA Synthesis Kit (NEB). The resulting RNAs were purified by denaturing gel electrophoresis.
#### 2.3.2 RNA modification -
The SHAPE reactivity is not only a reflection of RNA structure but also depends on experimental conditions, necessitating careful consideration in the comparative analysis of SHAPE-MaP reactivity profiles [27]. Consequently, we chose to probe our artificial tRNAs with the same folding buffer (50 mM HEPES pH 8.0, 200 mM potassium acetate pH 8.0, and 3 mM MgCl2) as the yeast tRNA(asp) of the Reference SHAPE Dataset [26]. For RNA modification, three conditions were used: positive (with the probe), negative (only the probe solvent) and denaturing (denatured RNA with the probe). For the positive and negative conditions, the RNAs were allowed to refold, and the modifying agent (1M7 in DMSO for positive) or the solvent (neat DMSO for negative) was quickly mixed with the RNAs and incubated for 5 min at \(37^{\circ}\)C. For the denaturing condition, the RNAs were first denatured by addition of formamide followed by a heat treatment, and the RNAs were modified similarly (1M7 probe). After incubation, all modified RNAs were purified via ethanol precipitation and quantified with the Qubit RNA High Sensitivity assay kit (ThermoFisher).
#### 2.3.3 Library preparation -
The modified RNAs were pooled in equimolar proportion based on their conditions (positive, negative, denaturing) and reverse-transcribed using the SuperScript II reverse transcriptase (ThermoFisher) with a buffer allowing the misincorporation of nucleotides at the chemically modified positions. We also used a Template Switching Oligo (TSO) in order to incorporate the Rd1 Illumina adapter during the reverse-transcription, and brought the Rd2 Illumina adapter by the
reverse-transcription primer. After cDNA purification, PCR enrichment was conducted to amplify the DNA libraries and incorporate the P5/P7 Illumina adaptors. The samples were purified by AMPure XP beads (Beckman Coulter), quantified by quantitative PCR (KAPA Library Quantification Kit, Roche), and sequenced on a MiSeq-V3 flow cell (Illumina) at the NGS platform of Institut Curie (Paris, France).
#### 2.3.4 SHAPE Reactivity Mapping -
We employed the ShapeMapper2 [28] software to process the sequencing data, obtaining SHAPE reactivity values for each artificial tRNA molecule, which partition sites into the reactivity classes 'low', 'medium' and 'high' [29; 27]. ShapeMapper2 was run with default settings, except for the depth-per-site quality threshold, which was lowered from 5000 to 3000. This allowed us to gather reactivity data covering more than 50% of the residues for at least 30 of the molecules under investigation.
## 3 Results And Discussion
### **eaDCA models reproduce the natural sequence statistics**
The initial evaluation of the performance of any generative model involves assessing its ability to accurately replicate the statistical properties of natural sequences. For this, we conduct an analysis across 25 RNA families, comparing the statistical properties of their natural sequences with those of independently and identically distributed (i.i.d.) samples, generated from both the eaDCA model and a simpler, secondary-structure based covariance model (CM) [30]. In the latter, only nucleotide pairs involved in secondary structure (S2D) are connected by couplings. The corresponding CM can be written as a Potts model, with all maximum-likelihood parameters given exactly by the empirical one- and two-nucleotide statistics,
\[\begin{split} P_{CM}(a_{1},\dots,a_{L})=\exp\left\{\sum_{i=1}^{L}h_{i}(a_{i})+\sum_{(ij)\in S2D}J_{ij}(a_{i},a_{j})\right\}\\ J_{ij}(a_{i},a_{j})=\log\frac{f_{ij}(a_{i},a_{j})}{f_{i}(a_{i})f_{j}(a_{j})},\qquad h_{i}(a_{i})=\log f_{i}(a_{i})\;.\end{split} \tag{18}\]
We use the CM as a performance benchmark over the classical profile model because the information about RNA secondary structure is readily available, and because the base-pair complementarity in RNA secondary structure causes a strong pairwise coevolution. Also, CM are at the basis of Rfam MSA, since they are used in RNA homology detection and sequence alignment by Infernal [31].
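For reference, the CM benchmark of Eq. 18 is fully analytic; a minimal sketch (with `s2d_pairs` denoting the consensus secondary-structure base pairs, an input we assume is given, and array layouts as in the earlier sketches) is:

```python
import numpy as np

def covariance_model(fi, fij, s2d_pairs):
    """Eq. 18: fields from single-site frequencies, couplings only on S2D pairs."""
    h = np.log(fi)
    J = {(i, j): np.log(fij[i, :, j, :] / np.outer(fi[i], fi[j]))
         for (i, j) in s2d_pairs}
    return h, J  # a sparse Potts model, usable with the energy() sketch above
```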
Fig. 3 displays statistical analyses for the RF00005 and RF00379 families. Additional results for 23 other RNA families can be found in the SI. Figs. 3D and 3H display the comparison between the connected two-point correlations of the natural data with estimates from a sample of an eaDCA obtained model and the CM. The results indicate a strong correlation of eaDCA with the natural data for all residue pairs, including those not connected by activated edges, while the CM reproduces the pair correlations only on the secondary structure, and totally fails on all other pairs of positions.
Figure 3: **A.** RF00005: Principal component analysis of natural sequences (\(M=28770\)). **B.** RF00005: eaDCA generated sequences mapped to the first two principal components of the natural sequences (\(N=12000\)). **C.** RF00005: CM generated sequences mapped to the first two principal components of the natural sequences (\(N=12000\)). **D.** RF00005: scatter plot of the connected two-site correlations of the natural sequences vs. eaDCA generated sequences (blue) or CM generated sequences (red, inset) (\(N=12000\)). **E.** RF00379: Principal component analysis of natural sequences (\(M=3808\)). **F.** RF00379: eaDCA generated sequences mapped to the first two principal components of the natural sequences (\(N=12000\)). **G.** RF00379: CM generated sequences mapped to the first two principal components of the natural sequences (\(N=12000\)). **H.** RF00379: scatter plot of the connected two-site correlations of the natural sequences vs. eaDCA generated sequences (blue) or CM generated sequences (red, inset) (\(N=12000\)).
A second test of the eaDCA model's generative properties is demonstrated in Figs. 3A-C, and 3E-G, which present the natural, eaDCA, and CM-generated sequences projected onto the first two principal components (PCs) of the natural MSA [12; 32]. The sequences sampled from the eaDCA model effectively reproduce the visible clustered structure of the natural sequences, while CM are unable to do so, with projections on the PCs being concentrated around the origin.
The observations in Fig. 3 indicate the inability of CM to serve as accurate generative models, while sequences sampled from eaDCA are coherent with the natural data on the tested observables.
From Table 1, we conclude that eaDCA delivers generative models able to reproduce the natural RNA statistics with only a fraction of the number of parameters of a standard bmDCA implementation (parameter reduction of 84.85% for RF00005 and 87.83% for RF00379). A complete table for all the 25 families is provided in the SI and confirms this observation across families.
### Parameter interpretation
A key benefit of employing a parsimonious generative model is the potential for obtaining a more insightful interpretation of its parameters. In the context of RNA, the eaDCA method produces good generative models with a small percentage of the parameters of fully connected models (bmDCA), which in turn enables easier biological interpretation. Since the edge activation procedure starts from the profile model, all single-site frequencies \(f_{i}(a)\) are accurately reproduced from the beginning. Due to its iterative nature, eaDCA additionally produces an ordered list of edges carrying non-zero couplings. These added edges can be used to explain the connected two-point statistics to high accuracy, and they thus carry the full information about residue coevolution in the MSA of the RNA family under consideration.
For this study, we classified the first \(L\) added edges into four categories: 'secondary structure base pairs' (\(S2D\)), 'tertiary structure contacts' (if the distance between the involved residues is less than 8 Å), 'neighbors' (if the pair is less than four positions apart along the primary sequence), and 'other' (not fitting into any of the prior categories). We present here the analyses for two RNA families (RF00005 and RF00379), but the results for all 25 families can be found in the SI.
In Fig. 4, the analysis revealed a relationship between contacts and added edges, with almost all \(S2D\) pairs being systematically taken in the early iterations. This trend is consistent with their strong coevolutionary relationship, and shows that CM models capture many of the strongest, but by far not all such relationships. Tertiary contacts are included much later (and many never activated even at termination of the algorithm); we therefore conclude that they typically induce a much lower coevolutionary signal than secondary-structure contacts. The presence of activated edges between neighboring residues may in part be attributable to phylogenetic relationships, but also to the insertion or deletion of multiple nucleotides, i.e. to the presence of gap stretches in the MSA.
A relatively small fraction of activated edges does not offer an interpretation (class 'other'); it remains unclear whether these edges reflect the limited statistics of the natural MSAs or coevolution beyond structural contacts. eaDCA nevertheless considers them important for reproducing the natural sequence statistics. In this context, it is important to note that the complete list of edges activated by eaDCA before meeting the termination condition is significantly longer than the sequence length \(L\), consequently leading to a large number of 'other' entries.
### Prediction of mutational effects
Potts models (including profile, CM and DCA models) are energy-based statistical models, cf. Eq. (1). The maximum-likelihood strategy used in their training assumes that functional sequences have high probability, or equivalently low energy. Conversely, low-probability / high-energy sequences do not obey the evolutionary constraints learned by the model, and are expected to be non-functional.
This property can be used to predict mutational effects [11; 10] by comparing the energies of the mutated and the wildtype sequences. In this way, a mutant sequence can be characterized by the energy difference
\[\Delta E=E(\text{mutant})-E(\text{wildtype})\.\]
Figure 4: **A.** RF00005: first \(L\) activated edges colored according to their classification. **B.** RF00005: contact map (upper-left) and activated edges (lower-right). Secondary-structure contacts are shown in green, non-contacting activated edges in black. **C.** RF00379: first \(L\) activated edges colored according to their classification. **D.** RF00379: contact map (upper-left) and activated edges (lower-right). Secondary-structure contacts are shown in green, non-contacting activated edges in black.
A positive \(\Delta E\) implies a reduction in the model probability for the mutant, suggesting that the mutation is likely to be deleterious. On the contrary, a negative \(\Delta E\) signals a potentially beneficial mutation.
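In code, the predictor is a one-line difference of statistical energies (reusing the `energy` sketch above); the rank correlations shown later in Fig. 5A can then be computed with a standard statistics library. The sign convention of the helper below (correlating \(-\Delta E\) with fitness) is our illustrative choice:

```python
import numpy as np
from scipy.stats import spearmanr

def delta_energy(mutant, wildtype, h, J):
    """Delta E > 0: predicted deleterious; Delta E < 0: predicted beneficial."""
    return energy(mutant, h, J) - energy(wildtype, h, J)

def spearman_above_threshold(dE, fitness, f_theta):
    """Spearman rank correlation between -Delta E and fitness, restricted to
    mutants with fitness >= f_theta (as in the threshold sweep of Fig. 5A)."""
    keep = fitness >= f_theta
    return spearmanr(-dE[keep], fitness[keep]).correlation
```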
To test the quality of these predictions, we use the tRNA fitness dataset [25] discussed above in _Materials and Methods_. We perform the following steps:
* For all mutant sequences in this dataset, we determine the energy differences to wildtype using both the eaDCA model, \(\Delta E_{eaDCA}\), and the covariance model, \(\Delta E_{CM}\), as well as the Hamming distances (i.e. the number of mutations from wildtype).
* We select all mutant sequences having experimental fitness values \(f\geq f_{\theta}\) above an arbitrary fitness threshold \(f_{\theta}\). This threshold is varied in our analyses to focus on diverse strengths of mutational effects.
* We calculate the Spearman rank correlation between the three predictors (eaDCA, CM, Hamming) and the fitness values \(f\) over the selected mutants, as functions of the fitness threshold \(f_{\theta}\).
As is shown in Fig. 5A, when all mutants are included (\(f_{\theta}=0.5\)), all three predictors show similarly good correlation values between 0.6 and 0.7. This results from the fact that most higher-order mutants, i.e. those of higher Hamming distance, have very low fitness, while mutants with one or two mutations frequently show more moderate fitness values. However, when increasing the fitness threshold \(f_{\theta}\), i.e. when including only mutations of more moderate fitness effects, \(\Delta E_{eaDCA}\) correlations remain much more robust while the other two rapidly decay with \(f_{\theta}\). This shows that the eaDCA energies are informative over variable ranges of fitness effects.
To corroborate this finding, Fig. 5B shows a heatmap of the 8101 two-point mutant sequences (at fixed Hamming distance of 2), comparing \(\Delta E_{eaDCA}\) predictions and fitness values \(f\). We observe a robust correlation even in this case, where the Hamming distance is constant and thus uncorrelated to the fitness measures.
### **Size estimation and constraint analysis of RNA sequence space**
The entire space of sequences of a given length is enormous. To illustrate this, the number of all ungapped sequences of length \(L=150\) is \(4^{150}\simeq 2\times 10^{90}\); if we include gaps as in our MSAs, the number even rises to \(5^{150}\simeq 7\times 10^{104}\), exceeding by 10-24 orders of magnitude the estimated number \(\sim 10^{80}\) of atoms in the universe. However, the viable sequence space related to a specific RNA family, i.e. to all sequences taking similar structure and performing similar function, is expected to be much smaller: sequences have to meet constraints imposed by residue conservation and coevolution, and possibly by other evolutionary constraints.
Our models allow for analyzing the impact of the different constraints on the entropy \(S\) and the size \(\Omega=e^{S}\) of the sequence space, using the approach discussed in _Materials and Methods_. More specifically, the influence of conservation is measured via the entropy \(S_{0}\) of the initial profile model, while the combined influence of conservation and coevolution is measured via the entropy \(S_{t_{f}}\) of the final model at termination [33]. These results are corroborated by an independent estimation using a code published in [34], which estimates the size of the
Figure 5: **A.** Correlation of Hamming distance, eaDCA model energy and CM energy with tRNA fitness at different values of minimum fitness threshold \(f_{\theta}\). **B.** Relation between eaDCA model energy and tRNA fitness for the 8101 double mutants.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Name & \(L\) & \(M\) & \(M_{eff}\) & CM \(c_{ij}\) corr & eaDCA \(c_{ij}\) corr & Parameter Reduction (PR\%) & \(S\) & \(\Omega\) \\ \hline \hline RF00005 & 71 & 28770 & 2267 & 0.66 & 0.95 & 84.85\% & 51.34 & \(1.98\times 10^{22}\) \\ RF00379 & 136 & 3808 & 1428 & 0.25 & 0.95 & 87.83\% & 89.56 & \(1.05\times 10^{39}\) \\ \hline \end{tabular}
\end{table}
Table 1: eaDCA results for RF00005 and RF00379 at termination \(t=t_{f}\).
sequence space compatible with a given secondary structure, by efficiently sampling the neutral network related to a given RNA secondary structure.
The results are shown in Fig. 6A for our selected RNA families. All three constraints enforce an exponential relationship between the size of the sequence space \(\Omega\) and the sequence length \(L\), i.e. the per-site reduction of the sequence space due to any individual type of constraint is roughly constant across the tested RNA families. Interestingly, conservation and secondary structure constrain the sequence space similarly, while the constraints imposed by both conservation and coevolution are, in line with expectations, the most stringent one. As illustrated in Fig. 6B, out of the initially \(5^{L}\) possibly gapped sequences of aligned length \(L\) about \((2.98\pm 0.10)^{L}\) are compatible with the empirical conservation statistics, \((2.66\pm 0.09)^{L}\) with the consensus secondary structure of the RNA families, and finally \((1.74\pm 0.09)^{L}\) with both conservation and coevolution. To go back to our initial example \(L=150\), the final eaDCA sequence space would contain about \(10^{36}\) distinct sequences: this number, while remaining enormous as compared to the observed extant sequences found in sequence databases like Rfam, comprises only a tiny fraction of \(~{}10^{-68}\) of the entire sequence space of this length, illustrating the fundamental importance of such constraints in the natural evolution of RNA families.
Note that these numbers also have an interesting interpretation in terms of the effective number of nucleotides, which are, on average, acceptable in a typical position of a functional RNA molecule. Out of the 5 theoretical possibilities (4 nucleotides or an indel), close to three are compatible with familywide conservation, or 2.66 with the consensus secondary structure. However, both constraints are insufficient for generative modeling as shown before. Our generative modeling indicates a much stronger reduction of the effective number of acceptable nucleotides per site to only 1.74 on average.
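This per-site reading is simply the \(L\)-th root of the sequence-space size; as a quick check against Table 1:

```python
import numpy as np

def effective_symbols_per_site(S, L):
    """x = Omega**(1/L) = exp(S/L): average number of acceptable symbols per site."""
    return np.exp(S / L)

# RF00379 (Table 1): S = 89.56, L = 136  ->  exp(89.56/136) ~ 1.93 symbols per site
print(effective_symbols_per_site(89.56, 136))
```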
### Structural characterization of artificial tRNA molecules by SHAPE-MaP probing
The definitive test for the generative capacity of a statistical model of biomolecular sequences is experiments performed on artificially sampled sequences. As a first step in this direction, we have performed SHAPE-MaP experiments, which provide non-trivial structural information: chemical probing reveals different reactivities for nucleotide positions, which are paired vs. unpaired in the secondary structure of the tested RNA molecule [27]. This information is statistical: in the Reference dataset of published SHAPE experiments (cf. _Materials and Methods_), out of the paired sites typically more than 80% have low, less than 10% high reactivity, while in the unpaired sites less than 50% have low and around 40% have high reactivity, cf. Fig. 7. Determining the specific pairing status of individual pairs is non-trivial due to a number of confounding factors: first, the correlation between SHAPE reactivity and base pairing is nonlinear. Second, SHAPE data may not mirror a single structure, but an average reactivity across a structural ensemble. Third, SHAPE reactivity can also be influenced by factors beyond secondary structure, such as base stacking and tertiary contacts [27]. Nevertheless, SHAPE-MaP experiments are a valuable instrument for assessing whether the SHAPE profile of a tested RNA molecule - natural or artificial - is statistically coherent with its expected secondary structures.
We used the tRNA family (RF00005) discussed above for generating 76 artificial tRNA sequences. We probed the SHAPE reactivities at each site of these sequences. We categorized the reactivities into three classes: low, medium, and high (as is common practice [29; 27]), and we assessed the distribution of these classes among paired and unpaired residues for each sequence. Due to experimental reasons we did not sample the 76 sequences freely from \(P(a_{1},...,a_{L})\), but we introduced two types of constraints:
* Due to experimental constraints, the last 16 nucleotides were kept constant, cf. _Materials and Methods_ and SI. Only the first 55 positions were generated by the model conditioned on the last 16, i.e. they were sampled from \(P(a_{1},...,a_{55}|\,a_{56},...,a_{71})\). This reduces the effective sequence space \(\Omega=e^{S}\) from \(\sim 10^{22}\) sequences (cf. Table 1) to \(\sim 10^{14}\), which is still a huge number, far beyond the possibility of exhaustive testing.
Figure 6: **A.** Relations between RNA family length and the size of the sequence space under coevolution, conservation and secondary structure constraints. Dots are results for the different RNA families studied in this work, and lines are exponential fits. **B.** Effective number of nucleotides per site \(x\) for each constraint. The size of the compatible sequence space is \(\Omega=x^{L}\).
* Inspired by works about proteins [12; 35], only sequences of low energy (\(E<44\)) and good secondary-structure score (\(F>0.53\), measured as the F-score between the RNAfold (October 2022) [36] predicted structure and the tRNA consensus one) were included in the test, cf. the details given in the SI. These filters come at relatively low cost: while the energy-based filter is met by about 50% of all sampled sequences, the double filter still preserves about 20% of the sequences, inducing thus a very moderate decrease of the size of the sequence space.
For a more detailed overview of the dataset used in the test, please refer to the SI. Finally after probing, 34 of the 76 tRNAs satisfied the experimental standard of possessing reactivity data for more than 50% of the sequence positions and were included in our further analyses.
In Fig. 7, we observe that the reference dataset and the generated tRNA behave similarly, with clearly visible differences between paired and unpaired sites. We employed Permutational Multivariate Analysis of Variance (PERMANOVA) to test for statistical differences between the reactivity distributions of the reference dataset and of the generated tRNA, and between paired and unpaired sites. While we do not see indications for statistically significant differences between the reference dataset and the generated sequences (\(p\)-values of 0.993 for paired sites, 0.420 for unpaired sites), the paired and unpaired sites in the generated sequences are significantly distinct (\(p\)-value \(1.9\times 10^{-7}\)).
Moreover, observing that the statistics for paired residues are more rigorous, especially on the two 'Reference SHAPE dataset' tRNA, we decided to implement an additional filtering criterion. We deem artificial molecules as 'Criteria Met' if over 85% of their paired residues fell into the low reactivity class. 14 out of 34 generated tRNA are classified as 'Criteria Met'. Those are also the sequences that better pass the qualitative visual criterion (Fig. 7C and SI).
These results, albeit rather qualitative, indicate that the SHAPE reactivities of our artificially generated tRNAs are as consistent with the desired tRNA secondary structure, as the sequences in the reference dataset are with their published secondary structures, thereby supporting the validity of the eaDCA model as a generative statistical model.
## 4 Conclusion And Outlook
As in many disciplines and thanks to the strong increase in data availability, generative models gain growing importance in the modeling of biomolecular sequences. A first practical reason is quite obvious: generative models are of high biotechnological interest in biomolecular optimization and _de novo_ design, directly or in combination with screening or selection assays when suggesting functionally enriched sequence libraries.
A second reason is less obvious, but has the potential to be at the basis of a paradigmatic shift in computational molecular biology. Traditionally, sequence bioinformatics was dominated by simpler statistical models, like the profile or covariance models discussed also in this paper, and which are of great success in analyzing extant biomolecular sequences, detecting homology, annotating sequences functionally, establishing RNA or protein families, reconstructing their phylogenies or aligning sequences. Generative models have the potential to go substantially beyond this, and to substantially contribute to our future understanding of biological molecules in their full complexity as high-dimensional, disordered and interacting systems. When
Figure 7: **A.** Reactivity distribution for unpaired residues; the bars refer to the indicated set averages (‘Reference SHAPE Dataset’ \(N=21\), ‘Generated tRNA’ \(N=34\), ‘Generated tRNA (Criteria Met)’ \(N=14\)). **B.** Reactivity distribution for paired residues. **C.** Example of reactivity-structure projection for the 1A molecule of the 14 ‘Generated tRNA (Criteria Met)’. **D.** Example of reactivity-structure projection for the ‘Reference SHAPE Dataset’ yeast tRNA(asp).
a model is capable of generating diversified but viable artificial sequences, it necessarily incorporates essential constraints, which are functionally or structurally imposed on the sequences in the course of evolution. Even in this case, there is no guarantee that only such essential constraints are present in the model, and that these are encoded in a biologically interpretable way. In our work, we therefore search for _parsimonious_ models, which contain as few useless constraints as possible (by using an information-theoretic criterion for including constraints, or the corresponding parameters, into the modeling), and which in turn should be maximally interpretable.
However, generative modeling is not trivial. The total sequence space is enormous, while the example sequences in RNA or protein family databases are quite limited. Very different models may be generative. In a parallel effort, [9] proposed and experimentally validated restricted Boltzmann machines (RBM) as generative models. In the case of protein families, it was shown before that RBM, which are shallow latent-space models, are able to detect extended functional sequence motifs [37; 38], but at the same time they have difficulties in representing pairwise structural constraints like residue contacts. On the contrary, eaDCA was found to easily detect contacts, but the patterns responsible for the clustered structure of families into subfamilies, easily visible by dimensional-reduction techniques like principal component analysis, remain hidden in the coupling network. It remains a challenge for the future to combine such different approaches to further improve interpretability of generative models.
Another problem is that, by definition, generative models reproduce statistical features of the training data, but there is no guarantee that statistical similarity implies functionality - this dilemma is well known from text or image generation with generative models, which do not always produce correct text contents or possible images. However, generative modeling will naturally benefit from the parallel evolution of more and more quantitative high-throughput experimental approaches in biology. On one hand, these can be used naturally to test model predictions (e.g. mutational effects) and sequences generated by the models, going far beyond the low-throughput experiments we presented in this predominantly computational work. On the other hand, these techniques substantially change the data situation in biology in several aspects (cf. e.g. [12; 25]): while current dataset, i.e. MSA of homologous RNA or protein families, consist of positive but experimentally non annotated data, experiments provide (i) quantitative functional annotations for thousands of sequences and (ii) negative examples for artificial non-functional sequences generated by imperfect methods like random mutagenesis or sampling from imperfect models learned from finite data. This change in data will trigger future methodological work to develop integrative methods using all biologically relevant available information within the modeling process.
## 5 Acknowledgements
We are grateful to Sabrina Cotogno, Matteo Bisardi, Roberto Netti and Vaitea Opuu for helpful discussions during the project and the writing of the paper. We acknowledge also funding by the Institut Pierre-Gilles de Gennes (ANR-10-EQPX-34, to PN), EU H2020 Grant ERC AbioEvo (101002075, to PN), Human Frontier Science Program (RGY0077/2019, to PN), EU H2020 grant MSCA-RISE InferNet (734439, to MW).
High-throughput sequencing was performed by the ICGex NGS platform of the Institut Curie supported by the grants ANR-10-EQPX-03 (Equipex) and ANR-10-INBS-09-08 (France Genome Consortium) from the Agence Nationale de la Recherche ("Investissements d'Avenir" program), by the ITMO-Cancer Avieasen (Plan Cancer III) and by the SiRIC-Curie program (SiRIC Grant INCa-DGOS-465 and INCa-DGOS-Inserm-12554). Data management, quality control and primary analysis were performed by the Bioinformatics platform of the Institut Curie.
## 6 Conflict Of Interest Statement
None declared.
## 7 Data And Code Availability
The data and the code used in this paper are available at [https://github.com/FrancescoCalcvanese/FCSeqTools.jl/](https://github.com/FrancescoCalcvanese/FCSeqTools.jl/)
|
2303.05344 | Recent Advances of Deep Robotic Affordance Learning: A Reinforcement
Learning Perspective | As a popular concept proposed in the field of psychology, affordance has been
regarded as one of the important abilities that enable humans to understand and
interact with the environment. Briefly, it captures the possibilities and
effects of the actions of an agent applied to a specific object or, more
generally, a part of the environment. This paper provides a short review of the
recent developments of deep robotic affordance learning (DRAL), which aims to
develop data-driven methods that use the concept of affordance to aid in
robotic tasks. We first classify these papers from a reinforcement learning
(RL) perspective, and draw connections between RL and affordances. The
technical details of each category are discussed and their limitations
identified. We further summarise them and identify future challenges from the
aspects of observations, actions, affordance representation, data-collection
and real-world deployment. A final remark is given at the end to propose a
promising future direction of the RL-based affordance definition to include the
predictions of arbitrary action consequences. | Xintong Yang, Ze Ji, Jing Wu, Yu-kun Lai | 2023-03-09T15:42:01Z | http://arxiv.org/abs/2303.05344v2 | # Recent Advances of Deep Robotic Affordance Learning: A Reinforcement Learning Perspective
###### Abstract
As a popular concept proposed in the field of psychology, affordance has been regarded as one of the important abilities that enable humans to understand and interact with the environment. Briefly, it captures the possibilities and effects of the actions of an agent applied to a specific object or, more generally, a part of the environment. This paper provides a short review of the recent developments of deep robotic affordance learning (DRAL), which aims to develop data-driven methods that use the concept of affordance to aid in robotic tasks. We first classify these papers from a reinforcement learning (RL) perspective, and draw connections between RL and affordances. The technical details of each category are discussed and their limitations identified. We further summarise them and identify future challenges from the aspects of observations, actions, affordance representation, data-collection and real-world deployment. A final remark is given at the end to propose a promising future direction of the RL-based affordance definition to include the predictions of arbitrary action consequences.
## I Introduction
Humans interact with various objects in the environment in a purposeful and meaningful way, because we have the ability to understand affordances - the functionalities of objects, the possibilities and effects of our actions and the relationship between the two. As originally defined by Gibson [1], the affordances of an object or a place in an environment provide knowledge about what actions are possible and what the consequences of these actions are with respect to a certain agent (a human, an animal or a robot). In short, it indicates **possibilities** and **effects** of the agent's actions given an object or a part (an image observation) of the environment. In the field of robotics, affordances could serve with great potential to bridge robot perception and action [2]. This has been actively integrated and explored with machine learning techniques in recent years [3, 4, 5, 6]. Jamone _et al._ proposed a thorough review and drew connections among the studies of affordances in psychology, neuroscience and robotics [3]. Yamanobe _et al._ summarised the use of affordances specifically in robotic manipulation tasks [4]. Ardon _et al._ summarised and provided guidance on design choices and how affordance relations can be used to boost policy learning [5].
However, as pointed out in [6], there is still a lack of consensus on a formal definition of affordances, and many previous works are limited to the case of object affordances. Inspired by a recent work that proposed to define, learn and compute affordances in reinforcement learning (RL) on Markov decision processes (MDPs) of any kind [7], we propose in this paper to summarise and classify recent publications (since 2015) in deep robotic affordance learning (DRAL) following this RL-based definition. There are several motivations to do so:
* The RL-based definition helps to unify and classify DRAL works from a behavioural learning perspective, providing new insights to understand and clarify the different usages of affordances in the literature;
* The definition in [7] is the most general in the literature as all concepts are defined over a generic MDP without any assumption of the environmental or agent aspect. It suits any kind of environmental affordances and agents as long as they can be described by MDPs, which is commonly achievable.
* As the primary aim of DRAL is to enable robots to infer afforded actions, the RL community provides a rich body of methods ready to be integrated with affordances;
* Understanding and analysing the concept based on a mathematical framework helps to provide computationally and practically valuable insights;
In practice, knowing the affordances means knowing the desired effects of some actions and whether these effects can be realised in certain situations. With this in mind, Khetarpal _et al._ introduced the notion of _intents_, which capture the desired outcome of an action, based on the reinforcement learning (RL) framework [7, 8]. For example, the intent of a moving-right action in a gridworld task is the agent being moved to the cell on the right. The intent is not always satisfied, e.g., when the cell on the right is a wall. Thus, the affordance is defined as the subset of the state-action space in which the intent is indeed satisfied [7]. In other words, the moving-right action is afforded at every state where the moving-right intent is satisfied.
Notice that there are two levels of the topic: 1) the learning and discovery of affordances and 2) the use of affordances. Researchers have only started recently to study the first level, e.g., option/subgoal discovery [9]. Most research focuses on the use of the knowledge of affordances, meaning how to estimate the action possibilities and/or infer the afforded actions. These works are classified into three categories as follow.
* For the majority of the DRAL works, the focus is to estimate the action possibilities given an observation and then infer afforded actions from it (Section III). These works
can be further classified into methods that model the action possibilities as binary variables (subsection III-A) [10, 11, 12, 13, 14, 15, 16, 17] and continuous variables (subsection III-B) [18, 19, 20, 21, 22, 23];
* The second line of works propose to generate afforded actions from a set of object keypoints (Section IV) [24, 25, 26, 27, 28, 29]. The keypoints were used to geometrically constrain the search space of action inference methods within the set of afforded actions.
* The last part of the reviewed papers suggest to learn a partial dynamic model for only afforded actions, resulting in faster model learning and motion planning (Section V) [30, 31, 7].
The rest of this review is organised as follows. Section II briefly recalls the definition of affordances in reinforcement learning proposed by [7], classifies the reviewed works and draws connections between RL and affordances. Sections III, IV and V provide the main technical ideas and discuss the pros and cons of the reviewed papers. Section VI summarises these works and poses future challenges from the perspectives of observations, actions, affordance representations, data collection and real-world deployment. Section VII concludes this review.
## II Affordance definition in MDPs
For the sake of clarity, we recall in this section the reinforcement learning (RL) problem and the definition of affordance based on the Markov Decision Processes (MDPs) [7].
An MDP is a tuple \(M=\langle\mathcal{S},\mathcal{A},r,P,\gamma\rangle\), where \(\mathcal{S}\) is the set of states, \(\mathcal{A}\) is the set of actions, \(r\) is the reward function, \(P(s^{\prime}|s,a)\) is the system transition dynamics and \(\gamma\in[0,1]\) is the discount factor [8]. The RL problem is in general to find an optimal policy, \(\pi:\mathcal{S}\rightarrow\mathcal{A}\), which produces actions that maximise the expected discounted future return \(\mathbb{E}_{\pi}[G_{t}]=\mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty}\gamma^{k}r_{t+k}\right]\). The typical process of learning such a policy loops over the procedures of data collection, policy evaluation and policy improvement [8].
Given an action \(a\in\mathcal{A}\), an intent \(I_{a}(s)\) maps a state \(s\in\mathcal{S}\) to a state distribution that the action is intended to achieve. The intent model can thus be seen as a partial dynamic model: \(P_{I}(s^{\prime}|s,a)\), which only captures the dynamics for a subset of states where the action has a desired effect. Given the full system dynamic model \(P(s^{\prime}|s,a)\), an intent is satisfied (i.e., an action is affordable) at a state, to a degree \(\epsilon\), if and only if:
\[d(P_{I}(s^{\prime}|s,a),P(s^{\prime}|s,a))\leq\epsilon \tag{1}\]
where \(d\) is a function that measures the difference between two distributions and \(\epsilon\in[0,1]\) is a precision parameter. Given a set of intents \(\mathcal{I}=\cup_{a\in\mathcal{A}}I_{a}\), the affordance is then defined as a relation \(\mathcal{AF}_{\mathcal{I}}\subseteq\mathcal{S}\times\mathcal{A}\), such that \(\forall(s,a)\in\mathcal{AF}_{\mathcal{I}}\), Eq. 1 is satisfied. Accordingly, an affordance prediction model, or an action possibility model, gives the probability of whether a pair of state and action belongs to the set of affordance:
\[p^{\mathcal{AF}}(s,a)=p((s,a)\in\mathcal{AF}_{\mathcal{I}})\]
**Remark 1:** Practically speaking, knowing the affordance set means knowing the desired effects of a subset of actions (intents, action effects) and the subset of states that these effects can be achieved (states where the intents are satisfied, action possibilities). Before inferring the afforded actions or computing the action possibilities, one must know what actions, or what effects, are concerned or to be used. This logic implies that a robot must have learnt or been given some prior knowledge of the concerned actions beforehand. At the current stage of DRAL research, this knowledge was given by researchers, who then focused on the estimation of action possibilities and the inference of afforded actions. We categorise and discuss these methods in three classes:
* works that tried to infer the afforded actions from the estimated action possibilities \(\hat{p}^{\mathcal{AF}}\) (section III);
* works that tried to infer the afforded actions of objects in terms of keypoints (section IV);
* works that tried to infer afforded actions by planning with \(\hat{p}^{\mathcal{AF}}\) and a learnt partial dynamic model associated with intents, \(\hat{P}_{I}(s^{\prime}|s,a)\) (section V).
In the following sections, especially section III and IV, the readers shall see that most recent works in using affordances in robotics did not reside their methods in the RL framework, although these methods can be explained from the RL perspective.
**Remark 2:** From the RL perspective, or a behavioural learning perspective, the knowledge of affordances can help to accelerate and improve almost every aspect of the RL process by constraining the action space. These include the learning of a value function, a policy, or a world model, the exploration direction, and the action inference process. For example, if an action possibility model is available, one can integrate it into the exploration process of any RL algorithm such that it only collects experiences where actions do cause changes to the environment. Alternatively, on may constrain the updates of a policy within the set of affordable actions. Also, as demonstrated by [7], focusing on the set of afforded actions simplifies the learning of a world model and accelerate planning.
Either for data collection, policy learning, world-model learning or action planning, the use of affordances in RL may have its best potential in the hierarchical reinforcement learning (HRL) framework where an agent learns to use a set of motion primitives (sub-policies, skills, temporal-extended actions) to achieve different tasks [32]. Knowing the possibilities and effects of the skills can accelerate learning by constraining and guiding the choices of exploring skills, filtering out experiences with irrelevant or non-effective actions, etc., reducing the lengthy exploration and learning processes for tasks with long horizons.
**Remark 3:** A further step to take in this regard is the learning and discovery of affordances. Knowing the set of affordances is promising and valuable in terms of accelerating learning, however, enabling an agent to learn and discover affordances makes the agent robust to potential changes in the environment and the agent itself. This is closely related to the popular topic of option/subgoal discovery in HRL [9]. Future research topics in this regard include learning new skills,
adapting old skills, skill composition, action space design, etc. One can envision a robot acquiring new skills in a new environment or modifying old skills as its hardware wear and tear.
## III Modelling Action Possibilities
This section discusses recent papers on modelling and learning action possibilities. This section examines two lines of works that represents \(\hat{p}^{\mathcal{AF}}\) (whether an action or a set of actions is affordable given an observation) as binary segmentation masks (III-A) and continuous action success scores (III-B). We summarise these works and discuss their limitations in subsection III-C.
Based on the definition given in section II, these methods compute \(\hat{p}^{\mathcal{AF}}\) for a set of actions given a state. The estimated \(\hat{p}^{\mathcal{AF}}\) can be used to infer desirable actions in various ways based on its representations, such as taking the action with the maximum possibility, i.e., computing \(argmax_{a\in\mathcal{A}}\ \hat{p}^{\mathcal{AF}}\). In practice, computing \(\hat{p}^{\mathcal{AF}}\) is commonly based on sensory observations such as point clouds or images, instead of the true system states. The observation representations, training methods, deployment tasks and motion generation methods adopted by these works are summarised in TABLE I.
### _Image segmentation_
Many works propose to model what actions are afforded on which part of an object as an image or point cloud segmentation problem [10, 11, 12, 13, 14, 15, 16, 17]. In these works, a segmented part of an object image or point cloud is labelled with one or more affordable actions, i.e., a binary mask that indicates whether an action can be applied to that part of the object. The action possibilities are simplified into binary variables and represented as pixel-level masks. For example, as shown in Fig. 1, the pixels or points of the handle of a cup are labelled as being graspable, while those of the hollow part of the cup are labelled as containable. It is common for different parts of an object to have different affordances. It is also common for the same part of an object to have multiple affordances [14].
As a natural extension, these pixel-level or point-level affordance predictions were used to provide the downstream manipulation policy with extra task information. The most straightforward way in grasping tasks is to designate the centre of the detected affordance masks as a grasping location [14]. A more recent method treated the predicted segmentation masks as an extra channel of the image observations. A manipulation policy then processed this extended image to determine what actions to take [15]. A self-supervised learning method was proposed to learn to predict the pixel masks for gripper-object interaction centres from human teleoperation demonstrations of a table tidy-up task [17]. These pixel masks were then used in the real world for a model-based policy to move the gripper closer to the interaction point of an object and a reinforcement learning policy to pick up the object. There was also an attempt to learn a latent representation of object affordances with Variational Auto-Encoders [16, 33]. It was successfully trained using simulation data and transferred to a real-world robotic system, aided by domain randomisation technique. They used the latent representation to generate robot trajectories that move the gripper to a point above a cup [16].
### _Action scores_
Several works proposed to represent the action possibility as a continuous variable that indicates how confident it is that an action can be successfully executed (is affordable) [18, 19, 20, 21, 22, 23], while the segmentation masks discussed in the last subsection are binary variables.
Zeng _et al._ proposed to model the success probabilities of four kinds of primitive grasping and suction actions given the RGB-D observation of a clutter scene [18]. The probability distributions are defined as matrices whose entries represent the success rates of executing actions at the pixel locations (see Fig. 2). Similarly, Cai _et al._ proposed to predict graspability, ungraspability and background affordances over image pixels, achieving a grasping success rate of \(93\%\) on a set of household items, \(91\%\) on a set of adversarial items and \(87\%\) in clutter scenarios [19]. The network was trained with synthetic data generated by an antipodal grasp heuristics in simulation in a self-supervised fashion. Wu _et al._ extended such a 2D affordance map defined in the pixel space into a 3D space,
Fig. 1: Segmented image from [14]. Red parts afford grasping, orange afford supporting, deep blue afford containing, blue afford wrap-grasping, and purple afford bounding.
Fig. 2: Examples of action score prediction.
estimating the graspability not only in different x-y positions, but also in different grasping angles [20]. Another work proposed to first train a neural network to predict object classes and segmentation masks of a clutter scene, and then train a DQN network to predict the grasping success scores based only on the segmentation masks [21]. This work successfully transferred the learnt grasping score prediction system to the real world with domain randomisation. Recently, Mo _et al._ proposed to predict action scores for a set of six motion primitives based on RGBD images or point clouds. They designed a three-branch network architecture to 1) predict the actionability of a pixel or a point, 2) propose gripper orientations and 3) estimate the success score of the primitive action given the action pixel and orientation [22]. In another interesting recent work [23], the authors propose to represent the action possibilities of a large number of pretrained motion skills by the action value function in the RL framework based on RGB observations. These papers are closely related to the works in vision-based robotic grasping (VBRG), where many works were not linked to the concept of affordance. For a thorough review for VBRG, please refer to [34, 35].
### _Summary and limitations_
To summarise, though some recent works tried to estimate action possibilities for a variety of actions, most of them focused on grasping tasks when deploying the learning system. These works leveraged motions that are generated by a motion planner or hand-crafted by humans. In terms of affordance learning, they sought to estimate whether a planned motion or primitive can be successfully performed at an image pixel location or a point in the point cloud. The learnt affordance model was used to infer a desired action by extracting a pixel location or a point that is centred at the affordable region or with the highest action possibility. There are several limitations regarding the papers discussed in this section.
1) At the current research stage, the community lacks an image segmentation dataset for object affordances at large scale [5], when compared to datasets like COCO [36] or ImageNet [37]. It is promising to build larger datasets, as demonstrated by the ImageNet dataset for image classification, though a vast amount of human labour is required. To reduce such human labour, self-supervised learning techniques could be employed, such as automatic labelling [38, 17] and interactive labelling [39].
2) Though multi-affordance detection has drawn researchers' attention [13, 14], real-world manipulation experiments using affordances are restricted to only one or two categories (mostly grasping) [11, 12, 13, 14, 15, 16, 17, 18, 19, 21]. Not much attention was given to other actions such as push and pull [20, 22]. In addition, they are subject to fully or partially hand-crafted motion primitives (e.g., top-down parallel-jaw grasping), thus are limited to a very small set of object-action relationships. For example, they cannot represent affordances for 6DoF grasping actions or non-primitive interactions. A recent work in coupling language instructions and mobile robot motion skills makes a pioneering example on more complex action affordances learning and real-world grounding [23].
3) These methods only predict action possibilities, ignoring the knowledge about the effects of these actions. From a human perspective, we tend to use affordance knowledge for planning, which requires us to be aware of not only what the possible actions are, but also what the results of these actions are. The next section elaborates on recent attempts to incorporate both action possibilities and effects.
4) These works exclude the dependencies between the executions of multiple actions and the influences of different manipulation objectives. For example, the possibilities of grasping a cup at its handle would differ when the robot is tasked to hang it up, place it on a table or hand it out to another agent. This involves a planning process for different final task objectives. We discuss more on this point in the next section.
## IV Keypoint Affordance
In the last section, we discuss papers that sought to first compute the action possibilities, \(\hat{p}^{\mathcal{AF}}\), and then infer the afforded actions from the action possibilities. For example,
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Paper & Cat. & Affordance (afforded actions) & Input & Method & Deployment Task & Motion \\ \hline
[10] & IS & Created UMD dataset & RGBD & SL & None & - \\
[11] & IS & Grasp; Cut; Poke; Pound; Four; Support & PCD & SL & 3-Finger Dexterous grasp (Sim) & Planning \\
[12] & IS & Created IHF-Affect dataset & RGB & SL & Dexterous grasp (Real) & Planning \\
[13] & IS & from IIT-AFFI[12]; [UMD[10] datasets & RGB & SL & Dexterous grasp (Real) & Planning \\
[14] & IS & from UMD[10] dataset & RGBD & SL & 4DoF PIG; bean-scop (Real) & Planning \\
[15] & IS & Dexterous grasp & RGBD & SL & Dexterous grasp (Sim) & RL \\
[16] & IS & from UMD[10] dataset & RGB(D) & SL & Cup-locate (Sim \& Real) & Planning \\
[17] & IS & Grasp & RGBD & SSL & 4DoF PIG (Sim \& Real) & Primitive \& RL \\ \hline
[18] & AS & Grasp; Suction & RGBD & SL & 3DoF PIG \& suction (Real) & Primitive \\
[19] & AS & Grasp & RGB & SSL & 4DoF PIG (Sim \& Real) & Primitive \\
[20] & AS & Grasp; Push & RGB & SSL & 4DoF PIG \& push (Sim \& Real) & Primitive \& RL \\
[21] & AS & Grasp & BOSM & SSL & 4DoF PIG (Sim-to-Real) & Primitive \& RL \\
[22] & AS & Push; Pull & RGBD \& PCD & SSL & Push \& pull (Sim) & Primitive \\
[23] & AS & Pick; Move; Place; Go-to; Open/close drawer & RGB & SL/RL & Kitchen tasks (Sim \& Real) & Primitive \& SL/RL \\ \hline \end{tabular}
\end{table} TABLE I: Summary of papers focused on learning action possibilities. **Cat.:** category; **IS:** image segmentation; **AS:** action scores; **PCD:** point cloud data; **SL:** supervised learning; **SSL:** self-supervised learning; **Sim:** simulation **Real:** real-world; **DoF:** degree of freedom; **PJG:** parallel-jaw grasp; **BOSM:** binary object segmentation mask; **RL:** reinforcement learning; **Sim-to-Real:** simulation to real world transfer.
compute a binary or continuous matrix that indicates whether a gripper can pick up an object at each pixel location of an RGBD image. In these cases, a pixel in an image or a point in a point cloud is associated to an action as a parameter of a motion planner or a primitive.
In this section, we review works that proposed to generate the afforded actions by predicting object keypoints, skipping the computation process of the action possibility [24, 25, 26, 27, 28, 29]. The keypoints were defined as the functional points of an object. They were associated with affordance because they could be used by some action inference methods (e.g., a motion planner) to generate afforded actions. Keypoints provide the action inference method with a smaller search space and easier-to-define task-relevant geometric constraints. From the RL perspective, the keypoints can be seen as an abstract observation that indicates the action space for a policy or value function, or itself as a constrained action space that corresponds to a set of affordable motion primitives. The later one is adopted by many previous works. Previously, keypoint methods with non-deep learning techniques were limited to specific objects of a particular shape and size [3]. In this review we focus on deep learning-based methods that are able to generalise to unseen and novel objects [24, 25, 26, 27, 28, 29]. A summary of the observations, object types, training methods, deployment tasks and motion generation methods of these works are given in TABLE II.
Manuelli _et al._ proposed kPAM, which defined keypoints for objects that belong to the same category (Fig. 3) and supported grasping, placing and hanging actions to be inferred from the keypoints. For example, three keypoints at the handle, top and bottom for mugs. These keypoints were predicted given a segmented RGBD image and then used by a motion planner to generate motions for pick and place tasks. The authors later formulated a feedback control framework with keypoint-based object and action representations, and accomplished a peg-in-hole insertion task with a variety of objects [25]. They also extended the method to include a shape completion technique, named kPAM-SC, so that the generated motions can handle object collision [26]. Another work, KETO, used a three-keypoint pattern, including a grasp point, a function point and an effect point, to represent hammer-like tools and infer hammering motions [27]. A generative network was trained to produce keypoint candidates given an object point cloud. An evaluation network was trained to predict the manipulation success scores for these keypoints. The training process was conducted in a self-supervised manner using task completion signals. These keypoints, along with a set of task keypoints within a simulation environment, were used to generate motions by solving a Quadratic Programming problem [27]. Turpin _et al._, proposed GIFT, which predicted a set of representational keypoints for an object and then selected from them a grasping point and an interaction point. This procedure allowed the functional keypoint pattern to be discovered instead of being specified by users. They represented the functional keypoint proposal model as a Graph Neural Network (GNN) over the representational keypoints. They then computed a robot motion using model predictive control and evaluated the task-specific return for the motion. The functional keypoint proposal model was trained by optimising an REINFORCE loss with the task-specific return.
Instead of predicting keypoints for a category of objects as done in [24, 25, 26, 27, 28], Xu _et al._ proposed to define keypoints for afforded actions on images [29]. They modified the affordance image segmentation dataset UMD [10] by assigning a set of five 2D keypoints to each affordance region. These keypoints defined the position and direction information about the afforded actions. They proposed a two-branch deep neural network, AffKp, to learn affordance image segmentation and keypoint detection in parallel via supervised learning. The predicted keypoints were projected from the image plane to the real-world frame and used to infer the corresponding afforded actions.
**Summary:** To sum up, these works proposed to infer afforded actions that manipulate an object from a set of keypoints defined on the object. According to the affordance definition introduced in section II, they are classified as methods that compute the afforded actions, rather than compute the action possibilities. For example, to infer various grasping configurations from a predicted grasping point on a tool handle [27] instead of a set of action possibilities [18]. Most of the works leveraged human knowledge to create a pattern of keypoints and trained deep neural networks to predict them for a category of objects [24, 25, 26, 27, 29], while only one work, GIFT, proposed to discover functional keypoints using task-completion signals [28]. The main benefits of using keypoints to infer afforded actions include but not limit to:
* keypoints can capture the common properties of a category of objects;
* keypoints can support the inference of various afforded actions;
* keypoints can be used to reduce the searching space of afforded actions for the action inference processes.
**Limitations:** The primary limitation of keypoint-based methods is that pre-defining a fixed pattern of keypoints requires a relatively large amount of human prior. This eases the keypoint prediction model from the difficulty of learning
Fig. 3: Category-level keypoint detection from [24]. (a) Detected keypoints for different cups in planning; (b) keypoint detection; (c) grasping; (d) hanging.
from scratch, but limits the generalisability of the learnt keypoint patterns. In reality, one specific pattern of keypoints is unlikely to be sufficient and flexible enough for the diverse manipulation tasks that may need to be performed on the objects. The aforementioned papers have evaluated their methods on tasks with relatively simplified geometric constraints and manipulation skills [24, 25, 26, 27, 29]. For example, when a robot could only reach a hammer's head, it could not grasp the head and use the handle as a hammering point if it can only recognise the head as a hammering point. Learning to predict keypoint patterns with free interactions and task-completion signals is promising for reducing such human biases [28].
Secondly, sparse keypoint representation is not very compatible for tasks that are sensitive to object shapes and sizes, when compared to a full point cloud representation. For example, when manipulating a deformable object like a soft plastic cup, keypoints are not enough for the robot to determine the grasping force and track the deformation of the cup [40]. In this regard, multi-modal representations may be required, such as using keypoints along with a shape-completion procedure [26]. In the future, other observation modalities, such as tactile sensors, force sensors, etc., may be incorporated with keypoints to better infer afforded actions in real-world manipulation tasks.
Last but not least, the primary method to infer afforded actions using keypoints, motion planning, is difficult and expensive in environments with complex dynamics and large action and state spaces. It poses two problems to classic methods: 1) user-specified dynamic models have difficulties to represent highly stochastic and non-linear real-world systems and to generalise to high-dimension inputs like images and 2) planning over large action and state spaces is very expensive and difficult. Researchers have proposed to address them by learning a system dynamic model from data [41, 42, 43, 44, 45], though they did not explicitly consider the concept of affordances. We elaborate in the next subsection on recent works that propose to plan robot motions using a learnt affordance-aware dynamic model.
## V Modelling Action Possibilities and Effects
As defined in section II, the effects of afforded actions can be modelled by a partial dynamic model \(\hat{P}_{I}(s^{\prime}|s,a)\), which predicts the next system states given a pair of state and _afforded_ action. The motivation of building a dynamic model is to equip a robot with a safer and more efficient method to generate motion plans or learn from imagined data. A dynamic model releases the robot from expensive and potentially unsafe interactions with the real world [41, 45]. Previous works on action effect modelling have relied extensively on manually-abstracted state representations and dynamics [46, 47], which has a deep connection to the field of symbolic planning [48]. It is difficult, however, to hand-craft dynamic models for real-world systems with complex observations. Therefore, in recent years researchers have proposed deep learning methods to learn the dynamic model from data, demonstrating the value of having access to a dynamic model over the space of complex sensory observations [41, 49, 45].
Among many recent advances of learnt world models, Khetarpal _et al._ proposed to integrate the concept of affordances in the model-based reinforcement learning (MRL) paradigm (as rephrased in section II). They first learnt a binary classification model to predict whether some actions are afforded given an observation, which was essentially estimating the action possibilities \(\hat{p}^{\mathcal{AF}}\) as binary variables. Different from methods discussed in section III, they did not infer the afforded actions from the estimated action possibilities. Rather, they proceeded to learn a dynamic model of the world for only actions that were classified as possible or effective. Data of non-effective actions are regarded as redundant and ignored. The resultant model was a partial dynamic model (PDM) of the system. During planning, the PDM is only queried for effective actions according to \(\hat{p}^{\mathcal{AF}}\). In short, the benefits of such a framework are twofold: 1) it accelerates planning by only considering the afforded actions and 2) it accelerates dynamic model learning by focusing on learning part of the system dynamics concerning the afforded actions of interests. They were demonstrated first in a continuous 2D navigation task in [7] and later in unseen long horizon manipulation tasks in simulation with image inputs (Fig. 4) [30]. This affordance-aware model-based reinforcement learning framework was later extended to develop temporally abstract partial dynamic models, considering options (sub-policies) that are only afforded in certain situations. The authors empirically demonstrated the success of learning option affordances and partial option models online, resulting in more efficient learning and planning in a 2D Taxi task [31].
**Limitations:** As a relatively new direction, the first limitation is the lack of evaluation in more realistic examples. Most previous works are performed in simulation using synthetic data. Tasks with image or point cloud observations from real robots with longer time horizon would increase the complexity considerably. More efforts are required to design more realistic
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Paper & Object classes & Affordance (afforded actions) & Input & Method & Deployment Task & Motion \\ \hline
[24] & Shoes; Mugs & 6DoF PIG, place \& hang & RGBD & SL & Shoe-placing, mug-placing \& mug-hanging (Real) & Planning \\
[25] & Erasersers; Pegs; Holes & 6DoF PIG, wipe, insert & RGBD & SL & Whiteboard wiping, peg-in-hole insertion (Real) & Planning \\
[26] & Shoes; Mugs & 6DoF PIG, place \& hang & RGBD & SL & Same as [24] with shape completion (Real) & Planning \\
[27] & Hammers & 6DoF PIG, hammer, push, reach & PCD & SSL & Object hammering, pushing \& reaching (Sim) & Planning \\
[28] & Hammers & 4DoF PIG, hammer, push, hook & RGBD & SSL & Object hooking, reaching, hammering (Sim) & MPC \\
[29] & UMD+GT dataset & UMD+GT dataset & RQ-D & SL & PIG, pouring, arranging, cutting (Sim \& Real) & Planning \\ \hline \end{tabular}
\end{table} TABLE II: Summary of papers focused on affordance keypoint prediction. **PCD:** point cloud data; **SL:** supervised learning; **SSL:** self-supervised learning; **Sim:** simulation **Real:** real-world; **DoF:** degree of freedom; **PJIG:** parallel-jaw grasp; **MPC:** model predictive control.
tasks.
Secondly, the predicted action effects in the proposed examples are more short-term or instant effects of single-step action commands. In practice, planning is often more valuable with macro actions that consist of a series of single-step control commands, exhibiting a particular kind of skill, such as pushing for a certain distance, approaching and grasping an object, lifting up for a certain height, etc. This requires the algorithm to reason about long-term action possibilities and consequences. Though an attempt was made to incorporate affordances with temporally abstract partial models for more efficient planning at a more abstract level, it was only evaluated in a 2D Taxi task [31]. More effort is needed to evaluate and improve its performance on robotic tasks in the future.
Thirdly, the proposed method focuses on affordances of a given state, which is likely to be computationally inefficient for tasks with complex observations containing diverse information irrelevant to the manipulation goal. From a human perspective, we typically only attend to some parts of the observation that are most relevant to the task of interest, saving energy and improving planning efficiency and accuracy.
## VI Discussions and challenges
According to the reviewed papers, this section summarises the limitations of deep robotic affordance learning (DRAL) and identifies its bottlenecks at the current stage. We conduct the discussion and pose future research challenges from the following angles: observations, actions, affordance representations, data collection and real-world deployment.
### _Observation_
For most tasks, especially real world tasks, a robot relies on sensors to perceive the environment _without the access to the true system dynamics_ such as the velocities of objects. This is one of the most common assumptions adopted by robotic researchers. Previous works have spent efforts developing symbolic representations for the observations of the system to simplify the mapping from sensory observations to affordances [4, 50]. In recent DRAL literature, the types of observations have become more complex, in inducing object states (normally in simulation), object point clouds and RGB(D) images.
Another important assumption made by these works is that _the observation contains enough information to reason about affordance_. However, this does not always hold true. For example, a heated plate may be detected as graspable from RGBD or point cloud observations though it may be actually too hot to hold by a human. Some affordances may require information about temperature, softness, transparent surface, reflection, etc., that are difficult for (depth) cameras to capture. It is also worth-noting that languages are becoming more popular to provide instruction or extra information about the desired tasks and skills for affordance learning [23, 51] due to the rise of large language model (LLM). Information about the robot itself, such as sensorimotor states, could also help to reason about affordances like reachability. On the other hand, affordances of occluded objects are difficult to detect from a fixed camera view point. Combining all these, a promising direction for future research is to apply multi-modal and multi-viewpoint observations for affordance detection [52, 50].
The third assumption about observations, especially for deep learning-based methods, is that _the mapping from inputs to actions or action possibilities can be found through gradient descent_. However, given the large space of observations in the real world, it is very challenging to find such a mapping even it does exist. Some works applied pre-processing methods to help the robot focus on the most relevant information for affordance learning or action inference, such as applying object masks [21] or extracting object keypoints [24]. Such ideas make computation more efficient by shrinking the size of observation space, whereas more or less lose some degree of generality due to human priors. In this regard, future research could focus on representation design or learning, giving special attention to the trade-off between generalisability and learning efficiency (or computational cost) for affordance detection or afforded action inference.
### _Action_
Noticeably, researchers preferred motion primitives in recent DRAL works. For example, grasping primitives that move a gripper towards an identified grasping location and close the fingers [20, 21], and placing primitives that move a gripper with an object to a location and release the fingers [24, 26]. Note that these primitives can be motions planned by a planner [20, 24, 26, 27] or parameterised motor skills [30]. These primitives exhibit relatively simple motions, such as pick-and-lift [14, 15, 19, 20, 21], pick-and-place [18, 24, 26], pushing [30] and hammering [27, 28, 29]. The use of motion primitives as actions exhibits a trend that the community is more interested in the affordances of high-level skills, rather than low-level control commands. To follow this trend, we pose some challenges and future directions to consider.
The adaptability of the primitive motions considered by recent works could be improved, as they were mostly designed for open-loop control. For example, given a grasping point, a grasping motion moves the gripper to the grasping point and closes the fingers, without any adaptation in between. However, the detected grasping affordance may be inaccurate or changed during the execution of the motion due to occlusion, human factors, collision with the robot arm or finger slippery, etc. To cope with such challenges, one may consider a feedback control style method for action inference [25, 28]. Another interesting direction to consider is an algorithm
Fig. 4: The multi-step tool-use task designed to evaluate the Deep Affordance Foresight method proposed in [30]. The robot needs to decide which end of the L-shape stick to grasp for reaching the red block or push the blue block out of the tube.
that is permitted to stop and re-select motion primitives. For example, when an insertion motion changes from affordable to unaffordable, the robot may select a re-position motion without waiting for the insertion motion to reach its execution time limit. The notion of _interrupted options_ based on the option framework [32] may serve as a good theoretic foundation.
Predefined primitive motions are very useful when the manipulation task is in a rather structured environment without unexpected factors. However, the real world is highly unstructured and uncertain. A robot needs to generalise its skills to novel situations quickly or sometimes finds new skills to manipulate an object. This means the robot may be required to discover new afforded actions. To achieve this, the action space needs to be general enough. One promising direction is the study of option or subgoal discovery in hierarchical reinforcement learning [9], in which skills (in the form of sub-policies) are discovered instead of predefined.
### _Affordance Representations_
According to Gibson [1], perceiving affordance does not need information processing or any internal representations, but only requires the extraction of fundamental physical properties of the target object or environment. For example, perceiving that a needle has a pointed end leads to the perception that the needle affords piercing. This reasoning is theoretically sound [53] but is however practically limited as in practice, some form of mathematical representation of affordances is required to facilitate action inferences [6]. Also, it is important to note that there is so far no known widely-adopted benchmarking metrics for qualitative or quantitative comparative studies of different representations proposed in the field. What intermediate representations are needed in the spectrum between end-to-end learning and manually constructing everything is mostly specific to the problem of interest.
As this review is inclined to the recent practical applications of affordances in DL-powered RL and robotics, is seems more graspable and plausible from a practical standpoint to discuss the representations of affordances in recent literature according to _how the action inference method works_. Afforded actions are inferred in mainly three manners: 1) from the action possibility estimates, 2) by a direct mapping from the observations and 3) by planning with a partial dynamic model. The first and third classes require an explicit representation of the action possibilities and effects, while the second one may need an intermediate representation that constrains the action space (such as object keypoints).
Action possibilities for primitive motions were represented often by an _affordance map_, which is typically a matrix that has the same size of the observation image. Its entries indicate the success rates or possibilities of executing certain primitive motions at the corresponding pixel locations [18, 19, 20, 21]. Segmentation masks can be regarded as a special case with binary variables [10, 13, 14, 15, 16]. It can also be applied to point clouds in the 3D space [11, 22]. This representation is efficient as it estimates the possibilities for a set of actions simultaneously, but is limited to primitive motions that operate over the discrete image pixels or object points. It may not easily generalise to continuous observations such as sensorimotor states, force feedback, etc. For actions that are not parameterised on images or point clouds, one may need to represent the action possibilities as a classifier [7]. In order to scale to real-world tasks, it is promising to develop methods to accelerate the learning of the action possibility estimator with large and continuous action space, such as learning from demonstrations [54].
Representing and predicting the effects of actions is another difficult topic. Though an action possibility estimator helps to reduce the learning data requirement and increase the planning efficiency for dynamic models [7, 30], the difficulty of reconstructing high dimensional observations (e.g., image or point cloud) remains. Experiences and methods from other fields could be considered, such as video prediction [55]. There is also a large body of works devoted to the learning of dynamic models [45]. Abstract representation for system observations is another closely related topic [56]. Future research may focus on applying general dynamic model learning methods to partial dynamic models with an action possibility estimator. Another challenge in the long term may be how the learning of affordances affect the learnt representation of the world, which is related to the topic of understanding the world through interaction.
Another way to compute afforded actions in the literature is through a direct mapping from observations to a set of afforded actions. The crucial question is how to represent the scene/object in a way that relate to their afforded actions. One popular solution is to use object keypoints that geometrically capture some functions of a category of objects, such as grasping points of mugs [24, 25, 26, 27, 28, 29], as discussed in section IV. From the keypoint methods we can identify some criteria to be satisfied when considering other types of representations. These include: 1) intuitive or convenient for generating robot motions; 2) able to generalise cross robot hardware (grippers, arms, etc.); 3) able to capture the common properties of many objects. Notice that such a representation should be designed as an abstraction of the observations of a scene or an object that relates to the afforded actions. The keypoint-based methods rely on motion planning or model predictive control to generate the desired motions (see Table II), while one may come out with representations that suit other motion generation techniques (e.g., reinforcement learning, imitation learning, etc.).
### _Data collection_
Deep learning methods require a considerable amount of data to achieve good generalisation performances [57]. Previous papers in DRAL have used supervised learning, self-supervised learning and reinforcement learning as their core training methods, each of which has a unique data collection process.
Supervised learning methods rely fully on human prior to collect and generate data, which is expensive for large datasets (e.g., ImageNet [37]). Most papers use the UMD dataset [10] for evaluation. However, it only provides segmentation labels. To alleviate the difficulty of collecting manipulation-specific data (e.g., grasping points, motion trajectories, etc.),
some papers adopt self-supervised learning to collect data automatically through simulations [19, 20, 21, 27, 28]. Reinforcement learning (RL)-based methods generate training data by interacting with the environment using a learnt policy with some degree of randomness [8]. In addition, the performance of the RL policy is evaluated directly on task return or success rate, without intermediate metrics (e.g., accuracy of predicting segmentation masks or keypoints). However, off-policy RL methods can benefit from data generated from other sources, such as human demonstrations [54].
A limitation, at the current stage, is the lack of a consensus on which benchmark should be used to generate the data and evaluate the algorithms for DRAL. Ideally, such a benchmark should provide handy Application Programming Interfaces (APIs) and functions to support the data collection processes for supervised, self-supervised and reinforcement learning. Common functionalities, such as capturing RGB(D) images and point clouds, classic planning algorithms, popular RL baselines, etc. are also considered helpful. It could be more valuable if tasks that feature multiple manipulation objectives and multi-step manipulation are designed and built-in. There are several open-source datasets, simulation environments or benchmarks that may be extended for such purposes [22, 58, 59, 60]. The community has not yet seen a large scale dataset for DRAL that covers the mentioned aspects.
### _Real-world deployment_
For methods that use real-world data, the main difficulty is primarily the expensive data-collection process, which was covered in the last subsection. The main concern that arises during the final deployment or evaluation is then the insufficient generalisation ability, which is largely caused by the limited amount of training data.
1) _Supervised learning_ methods are easier to be deployed in the real world after being trained, though their performances rely extensively on the quality of the dataset. In the past few years, many datasets that support the learning of stable grasping have been constructed [10, 18, 34, 35, 61, 62]. However, very few are built for multiple manipulation objectives or multi-step tasks [63, 64]. Consequently, more efforts are needed to collect data that cover diverse background textures, view-points, objects (in terms of types, shapes, dimensions, etc), manipulation skills (trajectories) in order for supervised learning-based DRAL to work in the real world.
2) _Reinforcement learning_ in the real world is even more difficult due to the high risk of hardware damages during exploration and a considerable amount of human labours for resetting the environment [65].
3) _Sim2real transfer_ is another stepping stone for successful real-world deployment, as researchers have resolved to training in simulation to avoid the painful data-collection process in the real world. Inevitably, deploying models trained in simulation onto the real-world systems will have to face the simulation-to-reality gap. In order to cope with such differences, researchers have proposed to use domain randomisation to extend the distribution of training data [66]. It can be applied to image textures [16, 21, 66, 67], camera parameters [68] and physical properties [65]. Recent DRAL works limit their real-world applications within a relatively unchanged and structured environment. Long horizon tasks that require the reasoning of the long-term effects of diverse skills or objects have mainly been studied in simulation. More efforts are needed to evaluate and adapt existing methods to real world data.
## VII Conclusion
This review paper looks into the recent advances in the topic of deep robotic affordance learning (DRAL). DRAL aims to develop data-driven (deep learning) approaches to apply the concept of affordance to robotic tasks. We suggest in this review to summarise and analyse these works based on the reinforcement learning (RL)-based definition of affordances [7]. We briefly recall this definition in Section II, where we classify recent DRAL papers and discuss the connections between RL and affordances. Accordingly, they are categorised into three classes of works that:
* 1) infer afforded actions from the estimated action possibilities;
* 2) learn an abstract object/scene representation that relates to the set of afforded actions;
* 3) generate afforded actions through planning with a learnt partial dynamic model and an action possibility classifier.
Advances and limitations of the three lines of works are discussed in section III, IV and V, respectively. A more general discussion for the field and its challenges are given in section VI.
**Final remark:** We further propose here a promising direction to extend the RL-based affordance definition. In [7], the intent captures the desired resultant state of an action taken at a system state. Subsequently, the corresponding affordance is defined as a subset of state and action pairs in which the intent is satisfied. In [31], the definitions of intent and affordance are extended to include multiple timesteps prediction in the MDPs. Here we propose to extend the theory by generalise the definition of intent to capture _an arbitrary kind of consequence_ of an action taken at a state, generalising beyond state prediction. Such intents could be called _general intent_. For example, the intent of a grasping action may include the desired success rate, object dropping rate, the weight of water that can be held, etc. Subsequently, the affordance is defined to include a subset of state and action pairs in which the intent is satisfied. Such affordances may be called _general affordances_.
More importantly, this direction is promising if a thorough mathematical definition is developed based on the RL framework. A set of new algorithms can be developed to infer actions according to the predictions of arbitrary action consequences, instead of simply system states. Similar to the dynamics-based affordances, general affordances can help in exploration, value function or policy learning, model learning and planning by constraining the action space, but with respect to arbitrary action consequences beyond state prediction. However, this is outside of the scope of this review, and much more future efforts are required to derive and experiment the theory.
## VIII Acknowledgement
Xintong Yang thanks the Chinese Scholarship Council (CSC) for providing the living stipend for his Ph.D. programme (No. 201908440400). This work was partially supported by the Engineering and Physical Sciences Research Council (grant No. EP/X018962/1).
|
2302.04367 | Thermodynamics of blackbody radiation in nonlinear electrodynamics | We study the blackbody properties and the thermodynamic equilibrium
quantities of a photon gas in the framework of nonlinear electrodynamics. In
this vein, we take into account the photon propagation in a uniform external
magnetic field in the weak field approximation, where an angular anisotropic
energy density distribution appears in the frequency spectrum. The particular
case when the photon propagates perpendicular to the background magnetic field
is also discussed, which allows us to probe the strong field regime. We then
derive a modified blackbody spectral distribution and the Stefan-Boltzmann law
in this situation. Considerations about Wien's displacement law and the
Rayleigh-Jeans formula are contemplated as well. Deviations from the
thermodynamic quantities at thermal equilibrium such as energy, pressure,
entropy, and heat capacity densities are obtained from the Helmholtz free
energy. As an application, we study three nonlinear electrodynamics, namely,
the Euler-Heisenberg, the generalized Born-Infeld, and the logarithmic
electrodynamics. Possible implications on stellar systems with strong magnetic
fields such as magnetars are discussed. | I. Soares, R. Turcati, S. B. Duarte | 2023-02-08T23:16:25Z | http://arxiv.org/abs/2302.04367v4 | # Thermodynamics of Blackbody Radiation in Nonlinear Electrodynamics
###### Abstract
We study the thermodynamic equilibrium properties of three outstanding nonlinear electrodynamics in a background uniform magnetic field, namely, the generalized Born-Infeld, the Euler-Heisenberg and the Logarithmic electrodynamics. In our approach, we will take into account temperatures below the electron rest mass, i.e., \(k_{B}T\ll m_{e}c^{2}\). In this vein, we derive a modified blackbody spectral distribution and the Stefan-Boltzmann law in this situation. Considerations about the Wien's displacement law and the Rayleigh-Jeans formula are contemplated as well. We then show the appearance of an effective Stefan-Boltzmann constant, which depends on the strength of the background magnetic field and the parameters for each electrodynamics model. Deviations from the thermodynamic quantities at thermal equilibrium such as energy, pressure, entropy and heat capacity densities are obtained from the Helmholtz free energy. Possible implications on stellar systems with strong magnetic fields such as Magnetars are discussed.
## I Introduction
Over the past few decades, there has been a growing interest in using nonlinear electrodynamics to probe physical processes in the regime of strong electromagnetic fields. These studies include investigations in the physics of high intensity lasers [1; 2; 3; 4], intense magnetic fields in compact astrophysical objects [5; 6; 7], radiation propagation inside some materials [8; 9], among others [10].
As is well-known, Quantum Electrodynamics (QED) describes with a very high and accurate precision all the electromagnetic phenomena in both classical and quantum scales [11]. On the other hand, the vacuum polarization induces small deviations from the standard results of QED, leading to the appearance of new phenomena such as birefringence, photon-photon scattering, vacuum dichroism, photon acceleration, among others [3]. It is important to remark that these effects become relevant when there exist electric and magnetic fields up to a critical value, \(\varepsilon_{c}\approx m_{e}^{2}c^{3}/e\hbar\approx 10^{18}V/m\approx 10^{9}T\), in some region of the space, where \(m_{e}\) is the electron rest mass [12].
The phenomenological features associated with the QED vacuum polarization are usually studied in the framework of nonlinear electrodynamics [10; 13; 14]. In this sense, a straightforward manner to emulate vacuum polarization effects is by introducing external background fields in the standard theoretical models [12]. In this scenario, phenomena such as birefringence can be easily studied by describing electromagnetic waves propagating in empty space.
From the theoretical perspective, nonlinear electrodynamics have been extensively investigated in a wide range of areas such as gravity, cosmology and condensed matter systems [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. Nonlinear electrodynamics also appears as an important ingredient in some fundamental scenarios such as string and M-theory [35; 36]. From the experimental point of view, in turn, the investigation of electromagnetic phenomena in the strong field regime is a straightforward manner to probe not only properties of the QED in the non-perturbative regime, but also effects in Quantum Field Theory in general. Several experimental efforts are currently in progress in order to probe nonlinear effects of the electromagnetic field, which include the measurement of light by light scattering in \(Pb+Pb\) collisions at the Large Hadron Collider [37], the photon splitting in strong magnetic fields [38], experiments with laser beams crossing magnetic fields [39], among others [40]. Indeed, deviations from QED are also to be inspected by some experiments under way, which include: The Station of Extreme Light (SEL), the Europe's Light Infrastructure (ELI Project) and the ExaWatt Center for Extreme Light Studies (XCELS). These recent developments in experimental physics, which probe some fundamental symmetries in physics, also encourage a new look at the possibility of a physics beyond the Standard Model (SM) of particle physics and fundamental interactions.
Effective field theories are vastly used to describe several phenomena at high energies [41; 42]. Here, we will explore the photon propagation in the presence of strong magnetic fields and the consequences to the thermodynamics of blackbody radiation through the study of three nonlinear models, namely: the generalized Born-Infeld theory, the Euler-Heisenberg electrodynamics and the Logarithmic Lagrangian.
The structure of this paper is organized as follows. In Sec. (II) we review the main features of gauge and Poincare invariant nonlinear electrodynamics theories. In Sec. (II.1), the dispersion relation is derived. Aspects related to the blackbody spectral density and thermodynamic equilibrium properties of the system are discussed in Sec. (II.2). The implications on the generalized Born-Infeld, Euler-Heisenberg and Logarithmic electrodynamics are contemplated, respectively, in Secs. (III.1), (III.2) and (III.3). Some comments about the obtained results are discussed in Sec. (III.4). Our final remarks and further perspectives can be found in Sec. (IV).
We shall adopt the gaussian units unless otherwise specified. In our conventions, the signature of the Minkowski metric is \((+,-,-,-)\)
General framework
In this section, we will give a brief review of the main features of nonlinear electrodynamics theories. To accomplish that, we will restrict our analysis to the class of gauge and Lorentz Lagrangians \(\mathscr{L}=\mathscr{L}\left(\mathscr{F},\mathscr{G}\right)\) formed by the invariant bilinear forms:
\[\mathscr{F} =-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}=\frac{1}{2}\left(\mathbf{E}^{2}- \mathbf{B}^{2}\right), \tag{1}\] \[\mathscr{G} =-\frac{1}{4}F_{\mu\nu}\tilde{F}^{\mu\nu}=\mathbf{E}\cdot\mathbf{ B}, \tag{2}\]
where \(F_{\mu\nu}\left(\equiv\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\right)\) is the field-strength of the electromagnetic field and \(\tilde{F}^{\mu\nu}=\left(1/2\right)\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}\) is the dual stress-tensor.
To preserve the parity symmetry, only quadratics terms in the fields will be considered. In our approach, it will be considered low energy photons, i.e., we will restrict our analysis to energy scales below the electron rest mass. At this energy range, one can integrate out the electrons and then consider an effective electromagnetic Lagrangian density. Indeed, we can assume that we are actually considering the physics of the purely spin-1 abelian gauge sector.
Therefore, by applying the variational method, one obtains the corresponding field equations, which takes the following form
\[\partial_{\mu}h^{\mu\nu}=0, \tag{3}\]
where
\[h^{\mu\nu}=\frac{\partial\mathscr{L}}{\partial F}F^{\mu\nu}+\frac{\partial \mathscr{L}}{\partial G}\tilde{F}^{\mu\nu}. \tag{4}\]
Besides the field equations, the complete description of the system consists of the Bianchi identity
\[\partial_{\mu}\tilde{h}^{\mu\nu}=0, \tag{5}\]
where \(\tilde{h}^{\mu\nu}=(1/2)\epsilon^{\mu\nu\alpha\beta}h_{\alpha\beta}\).
From now on, we will be considering the propagation of electromagnetic waves in the presence of an external uniform magnetic field. In the next sections, it will be derived the dispersion relation of the photon field and the partition function, which will enable us to obtain the blackbody radiation laws and the thermodynamic equilibrium quantities in this situation.
### Modified Dispersion Relation
We now adopt the linearization procedure in order to obtain the dispersion relation for the photon in the above scenario. To begin with, let us split the electromagnetic field \(F^{\mu\nu}\) as
\[F^{\mu\nu}=F_{B}^{\mu\nu}+\phi^{\mu\nu}, \tag{6}\]
where \(F_{B}^{\mu\nu}\) describes a strong background electromagnetic field and \(\phi^{\mu\nu}\) is a perturbation wave field.
Inserting the relation (6) into the tensor (4) and expanding perturbatively up to the first order, one finds
\[h^{\mu\nu}=\left.h^{\mu\nu}\right|_{f=F}+\left.\frac{\partial h^{\mu\nu}}{ \partial f^{\alpha\beta}}\right|_{f=F}\phi^{\alpha\beta}. \tag{7}\]
Replacing \(h^{\mu\nu}\) in Eq. (3), and take into account that
\[\partial_{\alpha}\phi^{\mu\nu}\gg\partial_{\alpha}F_{B}^{\mu\nu}, \tag{8}\]
one gets the following relation:
\[\frac{\partial h^{\mu\nu}}{\partial f^{\alpha\beta}}\partial_{\mu}\phi^{ \alpha\beta}=0. \tag{9}\]
The relation (8) tell us that we are bound to consider the regime of slow varying but arbitrary background electromagnetic fields.
Now, assuming the solution \(\phi^{\mu\nu}=\epsilon^{\mu\nu}e^{ikx}\), where \(\epsilon^{\mu\nu}=k_{\mu}\epsilon_{\nu}-k_{\nu}\epsilon_{\mu}\), one obtains the set of algebraic equations
\[M^{\mu\nu}\epsilon_{\nu}=0, \tag{10}\]
where
\[M^{\mu\nu} \equiv L_{F}\left(k^{2}\eta^{\mu\nu}-k^{\mu}k^{\nu}\right)-L_{FF}a^ {\mu}a^{\nu}\] \[-L_{FG}\left(\tilde{a}^{\mu}a^{\nu}+a^{\mu}\tilde{a}^{\nu}\right) -L_{GG}\tilde{a}^{\mu}\tilde{a}^{\nu}, \tag{11}\]
and
\[L_{F}=\frac{\partial\mathscr{L}}{\partial F},\quad L_{G}=\frac{ \partial\mathscr{L}}{\partial G},\quad L_{FF}=\frac{\partial^{2}\mathscr{L}}{ \partial F^{2}},\] \[L_{GG}=\frac{\partial^{2}\mathscr{L}}{\partial G^{2}},\quad L_{ FG}=\frac{\partial^{2}\mathscr{L}}{\partial F\partial G}. \tag{12}\]
Furthermore, we also have defined \(a^{\mu}=F^{\mu\nu}k_{\nu}\) and \(\tilde{a}^{\mu}=\tilde{F}^{\mu\nu}k_{\nu}\).
Decomposing in the appropriate basis [14], we are left with the following dispersion relation
\[k^{2}=z_{\pm}a^{2}, \tag{13}\]
where
\[z_{\pm}=\frac{-L_{F}\left(L_{GG}+L_{FF}\right)+2F\left(L_{FF}L_{GG}-L_{FG}^{2} \right)\pm\sqrt{\delta}}{2\left[2L_{F}\left(L_{GG}F-L_{FG}G\right)+G^{2} \left(L_{FF}L_{GG}-L_{FG}^{2}\right)-L_{F}^{2}\right]}, \tag{14}\]
and
\[\delta =\left[2F\left(L_{FF}L_{GG}-L_{FG}^{2}\right)+L_{F}\left(L_{GG}- L_{FF}\right)\right]^{2}\] \[\quad+\left[2G\left(L_{FF}L_{GG}-L_{FG}^{2}\right)-2L_{F}L_{FG} \right]^{2}. \tag{15}\]
From the dispersion relation (13), it is clear that the dependence of each nonlinear model in the photon propagation is fully encoded in the term \(z_{\pm}\).
Since we are interested in the effects of the nonlinearity on the photon modes, we write Eq. (13) in terms of the background electromagnetic field, which gives us
\[\Lambda^{\mu\nu}k_{\mu}k_{\nu}=0, \tag{16}\]
with
\[\Lambda^{\mu\nu}\equiv\eta^{\mu\nu}+z_{\pm}F^{\mu\beta}F_{\beta}^{\ \ \nu}. \tag{17}\]
It is convenient to put the relation (16) explicitly in terms of the electric and magnetic fields [43]. In this sense, by using the four wave vector \(k^{\mu}=\left(w/c,{\bf k}\right)\), the dispersion relation assumes the form:
\[w_{\pm}=\frac{cz_{\pm}S+c\sqrt{z_{\pm}^{2}S^{2}-\left(1+z_{\pm}{\bf E}^{2} \right)\left(z_{\pm}R-{\bf k}^{2}\right)}}{1+z_{\pm}{\bf E}^{2}} \tag{18}\]
where \(S\) and \(R\) are defined as
\[S=\left({\bf E}\times{\bf B}\right)\cdot{\bf k},\qquad R=\left({\bf k}\times{ \bf B}\right)^{2}-\left({\bf k}\cdot{\bf E}\right)^{2}. \tag{19}\]
The frequencies in relation (18) denote the two polarization states related to the photon propagation in the framework of nonlinear electrodynamics. Furthermore, modes with different polarizations have distinct dispersion relations and propagate in different ways, leading to the phenomenon of birefringence [14]. In addition, since the background electromagnetic field is, in general, spacetime dependent, the photon frequency also depends on its location.
Our purpose in this work is to find the blackbody radiation laws in the presence of strong magnetic fields. Therefore, we consider a uniform background magnetic field \({\bf B}\) in Eq. (18) and neglect the electric field, i.e., \({\bf E}={\bf 0}\). In addition, to simplify our task, we will assume electromagnetic waves propagating perpendicular to the background magnetic field, i.e., \({\bf k}\perp{\bf B}\). With these assumptions, the angular dependence vanishes, and we can compute analytically the thermodynamic properties of our system.
From the preceding considerations, the frequencies \(w_{\pm}\) in Eq. (18) reduce to:
\[w_{\pm}=ck\Omega_{\pm}, \tag{20}\]
where
\[\Omega_{\pm}=\sqrt{1-z_{\pm}B^{2}}, \tag{21}\]
and \(B\) is the background magnetic field magnitude. In addition, \(\Omega_{\pm}\) must be restricted to be real and positive-definite in order to have propagating modes.
The group velocity, in turn, related to the above frequencies, is given by
\[{\bf v}_{g\pm}=c\Omega_{\pm}{\bf\hat{k}}. \tag{22}\]
In the magnetized medium, \(\Omega_{\pm}<1\), which implies photon propagation at a constant speed smaller than the speed of light. In the limit \(B\to 0\), the frequencies (20) and the group velocity (22) reduce to those of Maxwell theory, as expected.
### Blackbody radiation and Thermodynamic Properties of the Photon Gas
Our goal in this section is to use the techniques of statistical mechanics to describe a photon gas in the framework of nonlinear electrodynamics theories. We remark that we consider non-zero temperatures well below the electron rest-mass energy, i.e., \(k_{B}T\ll m_{e}c^{2}\), which enables us to use the effective field theory to compute the free energy of the photon field. Indeed, at temperatures well below the electron rest mass, the electron-positron concentration is exponentially small, i.e., proportional to \(\exp\left(-m_{e}c^{2}/k_{B}T\right)\), and the contributions to the thermodynamic properties of the blackbody radiation come mainly from the photon sector [44; 45]. Furthermore, we will formulate the partition function in the grand canonical ensemble for the photon gas with zero chemical potential, assuming Bose-Einstein statistics [46; 43].
The number of available states \(N\) for a given system is given by:
\[N=\frac{\gamma}{\left(2\pi\right)^{3}}\int d{\bf x}\int d{\bf k}, \tag{23}\]
where \(\gamma\) is related to the helicity multiplicity which, in Maxwell electrodynamics, takes the value \(\gamma=2\).
In spherical coordinates, the above equation reduces to
\[N=\gamma\frac{V}{2\pi^{2}}\int_{0}^{\infty}dk\,k^{2}, \tag{24}\]
where \(V\) is the volume of the reservoir and the integration over the angular directions has been performed.
Using the dispersion relation (20) to trade the wave number \(k\) for the frequency \(\nu=w/2\pi\), one gets
\[dk_{\pm}=\frac{2\pi}{c}\frac{d\nu}{\Omega_{\pm}}, \tag{25}\]
for each mode.
Hence, the number of available states \(N\) reads
\[N=N_{+}+N_{-}=V\int_{0}^{\infty}\frac{4\pi\nu^{2}}{c^{3}}\Delta\Omega d\nu, \tag{26}\]
where
\[\Delta\Omega\equiv\left(\frac{1}{\Omega_{+}^{3}}+\frac{1}{\Omega_{-}^{3}} \right). \tag{27}\]
The density of states \(g\left(\nu\right)\), in turn, is given by
\[g\left(\nu\right)=\frac{4\pi\nu^{2}}{c^{3}}\Delta\Omega. \tag{28}\]
In the limit \(B\to 0\), one finds \(\Delta\Omega=2\), and the density of states of a photon gas in the Maxwell theory is recovered.
The spectral energy density \(u\), per unit volume, in thermal equilibrium at temperature \(T\) is then given by
\[u\left(\nu,T\right)=\left(\frac{4\pi\nu^{2}\Delta\Omega}{c^{3}}\right)\frac{h \nu}{\left(e^{\beta h\nu}-1\right)}. \tag{29}\]
In the limit \(\Omega_{\pm}\to 1\), we recover the Maxwell theory, and the internal energy density \(u\left(\nu,T\right)\) reduces to the Planck frequency distribution at temperature \(T\). At low frequencies (\(h\nu\ll k_{B}T\)), the distribution (29) reduces to
\[u\left(\nu,T\right)=\left(\frac{4\pi\nu^{2}\Delta\Omega}{c^{3}}\right)\left(k_{B }T\right). \tag{30}\]
From the above relation, we arrive at the conclusion that the Rayleigh-Jeans law is modified due to the existence of strong magnetic fields. On the other hand, Wien's displacement law is not changed in this situation.
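For illustration, the modified spectral density (29) is straightforward to evaluate numerically. The sketch below (in assumed units \(c=h=k_{B}=1\), with \(\Delta\Omega\) treated as an input; the value \(\Delta\Omega=3\) is only a placeholder) checks the low-frequency Rayleigh-Jeans form (30) and the fact that, at a fixed temperature, the whole spectrum is simply rescaled by \(\Delta\Omega/2\) relative to the Planck law:

```python
import numpy as np

# Sketch: modified spectral density of Eq. (29), in assumed units c = h = k_B = 1.
# Delta_Omega is an input; Delta_Omega = 2 recovers the standard Planck law.
def u(nu, T, delta_omega):
    return 4*np.pi*nu**2*delta_omega * nu/np.expm1(nu/T)

T, dO = 0.5, 3.0                               # placeholder temperature and Delta_Omega
nu = 1e-3*T
print(u(nu, T, dO), 4*np.pi*nu**2*dO*T)        # Rayleigh-Jeans limit of Eq. (30)
print(u(1.0, T, dO)/u(1.0, T, 2.0))            # = dO/2: overall rescaling of the Planck spectrum
```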
The radiance, in turn, which is given by the energy rate per unit area, takes the form
\[R\left(T\right)=\sigma_{eff}T^{4}, \tag{31}\]
where
\[\sigma_{eff}=\left(\frac{\pi^{2}k_{B}^{4}}{60\hbar^{3}c^{2}}\right)\frac{ \Delta\Omega}{2}, \tag{32}\]
is the effective Stefan-Boltzmann constant.
Eq. (31) tells us that the Stefan-Boltzmann law is modified in this scenario. Indeed, the changes induced by the nonlinearity are encoded in the effective Stefan-Boltzmann constant, which depends on the strong background magnetic field and on the parameters of the specific nonlinear model.
We can further investigate the consequences of the nonlinearity in the photon sector by evaluating the thermodynamic quantities. The thermodynamic equilibrium properties of the corresponding system may be obtained from the logarithm of the partition function \(\mathscr{Z}\). Therefore, following the standard methodology, one finds
\[\log\mathscr{Z}=-V\int_{0}^{\infty}\left(\frac{4\pi\nu^{2}\Delta\Omega}{c^{3}}\right)\log\left(1-e^{-\beta h\nu}\right)d\nu. \tag{33}\]
In this sense, we first obtain the free energy \(F\) in this situation, namely,
\[F=-V\left(\frac{\pi^{2}k_{B}^{4}T^{4}}{45\hbar^{3}c^{3}}\right)\frac{\Delta \Omega}{2}. \tag{34}\]
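As a consistency check of the step from Eq. (33) to Eq. (34), note that with \(x=\beta h\nu\) the frequency integral reduces to \(\int_{0}^{\infty}x^{2}\log(1-e^{-x})dx=-\pi^{4}/45\); using \(h=2\pi\hbar\) this reproduces the prefactor of Eq. (34). A minimal numerical sketch of this integral:

```python
import numpy as np

# Check that \int_0^infty x^2 ln(1 - e^{-x}) dx = -pi^4/45, which converts Eq. (33) into Eq. (34).
x = np.linspace(1e-6, 60.0, 200001)
f = x**2*np.log(-np.expm1(-x))                 # ln(1 - e^{-x}) = log(-expm1(-x))
I = np.sum(0.5*(f[1:] + f[:-1])*np.diff(x))    # trapezoidal rule
print(I, -np.pi**4/45)                         # both ~ -2.1646
```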
The pressure \(p\), energy \(\epsilon\), entropy \(s\), and constant-volume heat capacity \(c_{V}\) densities are, respectively, given by
\[p=\frac{4}{3c}\sigma_{eff}T^{4},\quad\epsilon=\frac{4}{c}\sigma_{eff}T^{4}, \quad s=\frac{16}{3c}\sigma_{eff}T^{3}, \tag{35}\]
and
\[c_{V}=\frac{16}{c}\sigma_{eff}T^{3}. \tag{36}\]
Relations (34), (35) and (36) show us that the electromagnetic wave propagation in a magnetized medium modifies the Stefan-Boltzmann constant, leading to deviations of the free energy and the corresponding derived thermodynamic equilibrium quantities. On the other hand, the equation of state that relates energy and pressure is maintained even in the presence of strong magnetic fields, i.e., \(p=\varepsilon/3\).
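The consistency of Eqs. (34)-(36) can also be verified symbolically. In the sketch below we write \(F=-VaT^{4}\) with \(a=(4/3c)\sigma_{eff}\), which is Eq. (34) once \(\sigma_{eff}\) from Eq. (32) is inserted, and generate the densities by the standard thermodynamic relations:

```python
import sympy as sp

# Symbolic check that the free energy (34) reproduces Eqs. (35)-(36) and p = eps/3.
T, V, c, sigma = sp.symbols('T V c sigma_eff', positive=True)
F = -V*sp.Rational(4, 3)*sigma/c*T**4          # Eq. (34) rewritten through sigma_eff of Eq. (32)

p   = -sp.diff(F, V)                           # pressure
S   = -sp.diff(F, T)                           # entropy
eps = (F + T*S)/V                              # energy density
s   = S/V                                      # entropy density
cV  = sp.diff(F + T*S, T)/V                    # heat-capacity density

print(sp.simplify(p   - sp.Rational(4, 3)*sigma*T**4/c))    # 0
print(sp.simplify(eps - 4*sigma*T**4/c))                    # 0
print(sp.simplify(s   - sp.Rational(16, 3)*sigma*T**3/c))   # 0
print(sp.simplify(cV  - 16*sigma*T**3/c))                   # 0
print(sp.simplify(p   - eps/3))                             # 0, the equation of state
```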
## III Application to nonlinear electrodynamics models
We now apply the above framework to three nonlinear electrodynamics models: the generalized Born-Infeld, the Euler-Heisenberg and the Logarithmic electrodynamics. As shown in the preceding sections, all the changes induced by the nonlinearity can be parameterized through the factor \(\Delta\Omega\) in definition (27). Therefore, we will explicitly evaluate the value of \(\Delta\Omega\) for these nonlinear models.
### Generalized Born-Infeld electrodynamics
Now we apply the previous result to the particular case of the generalized Born-Infeld electrodynamics [25; 47]. The main motivation of Born and Infeld to propose their theory was to ensure the finiteness of the electric field self-energy [48]. Recently, there has been a renewed interest in Born-Infeld theory in the context of string theory, quantum gravity models and theories with magnetic monopoles [49; 50; 51; 52; 53; 54].
The generalized Born-Infeld Lagrangian density is given by:
\[\mathscr{L}_{BI}\left(\mathscr{F},\mathscr{G}\right)=\beta^{2}\left[1-\left(1- 2\frac{\mathscr{F}}{\beta^{2}}-\frac{\mathscr{G}^{2}}{\beta^{4}}\right)^{p} \right], \tag{37}\]
where \(\beta\) is a scale parameter and \(p\) is a real number in the range \(0<p<1\). The standard Born-Infeld electrodynamics is recovered when one assumes \(p=1/2\).
Following the procedure described in Sec. (II.1), the dispersion relation takes the form [55]
\[w_{1}\left(k\right) = ck\sqrt{1-2\left(1-p\right)\frac{B^{2}}{B^{2}+\beta^{2}}}, \tag{38}\] \[w_{2}\left(k\right) = ck\sqrt{1-\frac{B^{2}}{B^{2}+\beta^{2}}}. \tag{39}\]
Considering the particular case in which \(p=1/2\), which recovers the Born-Infeld theory, the dispersion relation reduces to
\[w_{1}\left(k\right) = ck\sqrt{1-\frac{B^{2}}{B^{2}+\beta^{2}}}, \tag{40}\] \[w_{2}\left(k\right) = ck\sqrt{1-\frac{B^{2}}{B^{2}+\beta^{2}}}. \tag{41}\]
One promptly notes that in the Born-Infeld model there is no birefringence, as expected. In addition, the frequencies above are always real. It is important to note that in order to derive the Planck frequency spectrum, the frequencies are constrained to be real and positive-definite.
With regard to the factor \(\Delta\Omega\), assuming, in units of \(\hbar=c=k_{B}=1\), a magnetic field intensity and \(\beta=3\,MeV^{2}\), then \(\Delta\Omega\approx 3.17\). In this scenario, the effective Stefan-Boltzmann constant takes the value \(\sigma_{eff}=1.58\sigma\), which shows an increase of more than \(50\%\). The number of accessible states, on the other hand, allows \(\Delta\Omega/2\approx 1.58\) more photons in each frequency mode.
### The Euler-Heisenberg Effective Lagrangian
The Euler-Heisenberg theory is a full nonperturbative effective action that describes the Quantum Electrodynamics vacuum polarization effects at one-loop order in the presence of a uniform background electromagnetic field [56; 57]. These effects become relevant above the critical field \(\mathscr{E}_{c}\), the so-called Schwinger limit, where there is production of real electron-positron pairs.
The Lagrangian density of the aforementioned model is given by
\[\mathscr{L}_{EH}=\mathscr{F}-\frac{1}{8\pi^{2}}\int_{0}^{\infty} \frac{ds}{s^{3}}e^{-m^{2}s}\] \[\times\left[(es)^{2}\,\mathscr{G}\frac{\mathscr{R}\cosh\left(es \sqrt{-\mathscr{F}+i\mathscr{G}}\right)}{\mathscr{I}\cosh\left(es\sqrt{- \mathscr{F}+i\mathscr{G}}\right)}+\frac{2}{3}\left(es\right)^{2}\mathscr{F}- 1\right], \tag{42}\]
where \(\mathscr{R}\) and \(\mathscr{I}\) are related to the real and imaginary parts, respectively.
In the weak field limit, i.e., for low energy photons \(\left(\hbar w\ll m_{e}c^{2}\right)\), the Euler-Heisenberg Lagrangian reduces to [58; 59]
\[\mathscr{L}_{EH}=\mathscr{F}+\frac{2\alpha^{2}\hbar^{3}}{45m^{4}c^{5}}\left( 4\mathscr{F}^{2}+7\mathscr{G}^{2}\right), \tag{43}\]
where \(\alpha=e^{2}/\hbar c\).
The dispersion relation in the presence of a background uniform magnetic field, in turn, takes the form [55]
\[w_{1}\left(\mathbf{k}\right) =ck\left[1-\frac{8\alpha^{2}\hbar^{3}}{45m^{4}c^{5}}\left(\mathbf{ B}\times\mathbf{\hat{k}}\right)^{2}\right], \tag{44}\] \[w_{2}\left(\mathbf{k}\right) =ck\left[1-\frac{14\alpha^{2}\hbar^{3}}{45m^{4}c^{5}}\left( \mathbf{B}\times\mathbf{\hat{k}}\right)^{2}\right], \tag{45}\]
which reduces to
\[w_{1}\left(k\right) =ck\left[1-\frac{8\alpha}{45}\left(\frac{B^{2}}{B_{c}^{2}}\right) \right], \tag{46}\] \[w_{2}\left(k\right) =ck\left[1-\frac{14\alpha}{45}\left(\frac{B^{2}}{B_{c}^{2}} \right)\right], \tag{47}\]
when the propagation vector \(\mathbf{k}\) is perpendicular to \(\mathbf{B}\).
The critical magnetic field, in natural units, is \(B_{c}\approx 2.99\,MeV^{2}\). Therefore, taking these frequencies into account, and considering a magnetic field intensity of \(B=3\,MeV^{2}\), one promptly finds \(\Delta\Omega\approx 2\). For the effective Stefan-Boltzmann constant, one gets \(\sigma_{eff}\approx\sigma\), a value very close to the standard blackbody result for the chosen value of \(B\). The number of accessible states in this case is \(N_{EH}\approx N\). For a magnetic field intensity \(B=30\,MeV^{2}\), \(\Delta\Omega\approx 3.69\), \(\sigma_{eff}\approx 1.84\sigma\) and \(N_{EH}\approx 1.84N\). It is important to stress that the magnetic field intensity cannot assume arbitrarily large values in the case of the Euler-Heisenberg model. For magnetic field intensities \(B\) greater than \(\approx 20.9B_{c}\), which gives \(B\approx 62.8\,MeV^{2}\), the negative sign in relation (47) becomes dominant, and the result becomes unphysical.
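These Euler-Heisenberg estimates follow directly from Eqs. (46)-(47); a minimal numerical sketch (using the quoted values \(\alpha\simeq 1/137\) and \(B_{c}\approx 2.99\,MeV^{2}\), natural units assumed):

```python
import numpy as np

# Delta_Omega for the Euler-Heisenberg modes, Eqs. (46)-(47), with k perpendicular to B.
alpha, Bc = 1/137.036, 2.99                    # B_c ~ 2.99 MeV^2, as quoted above

def delta_omega_EH(B):
    O1 = 1 - (8*alpha/45)*(B/Bc)**2
    O2 = 1 - (14*alpha/45)*(B/Bc)**2
    return 1/O1**3 + 1/O2**3

print(delta_omega_EH(3.0))                     # ~ 2.0,  so sigma_eff ~ sigma
print(delta_omega_EH(30.0))                    # ~ 3.7,  so sigma_eff ~ 1.84 sigma
print(Bc*np.sqrt(45/(14*alpha)))               # ~ 62.8 MeV^2, where the second mode would vanish
```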
### Logarithmic electrodynamics
Another nonlinear model we intend to explore is the Logarithmic electrodynamics [24], where the Lagrangian density is given by
\[\mathscr{L}_{ln}\left(\mathscr{F},\mathscr{G}\right)=-\beta^{2}ln\left[1- \frac{\mathscr{F}}{\beta^{2}}-\frac{\mathscr{G}^{2}}{2\beta^{4}}\right]. \tag{48}\]
Maxwell electromagnetism is recovered in the limit where \(\beta\rightarrow\infty\). The dispersion relation for each mode, in turn, yields [55]
\[w_{1}\left(\mathbf{k}\right) =ck\sqrt{1-\frac{2B^{2}}{B^{2}+2\beta^{2}}}, \tag{49}\] \[w_{2}\left(\mathbf{k}\right) =ck\sqrt{1-\frac{B^{2}}{B^{2}+\beta^{2}}}. \tag{50}\]
To ensure that the energy density is positive-definite, the condition \(B<\sqrt{2}\beta\) must be satisfied [55]. Therefore, assuming \(B=3MeV^{2}\) and \(\beta=3MeV^{2}\), one gets \(\Delta\Omega\approx 8\). The effective Stefan-Boltzmann constant, in turn, takes the value \(\sigma_{eff}\approx 4\sigma\) and \(N_{LE}\approx 4N\).
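The quoted value \(\Delta\Omega\approx 8\) follows immediately from Eqs. (49)-(50); a short sketch (natural units assumed):

```python
import numpy as np

# Delta_Omega for Logarithmic electrodynamics, Eqs. (49)-(50), with k perpendicular to B.
def delta_omega_log(B, beta):
    O1 = np.sqrt(1 - 2*B**2/(B**2 + 2*beta**2))
    O2 = np.sqrt(1 - B**2/(B**2 + beta**2))
    return 1/O1**3 + 1/O2**3

print(delta_omega_log(3.0, 3.0))               # ~ 8, hence sigma_eff ~ 4 sigma and N_LE ~ 4 N
```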
To conclude this section, it is important to note that for magnetic fields \(B\) with an intensity greater than \(\sqrt{2}\beta\), imaginary terms emerge in the frequency modes. In this case, the electromagnetic waves will be attenuated and will not contribute to the thermalization process, giving no contribution to the emission frequency spectrum.
### Some remarks about Blackbody Radiation in nonlinear electrodynamics
Let us now discuss some further consequences of the nonlinearity in the thermodynamics of blackbody radiation. In the preceding sections, we have shown that the parameter that carries information about the nonlinearity induced by the magnetic field, \(\Delta\Omega\), is always greater than \(2\) in the analyzed models, leading to a modification in the value of the Stefan-Boltzmann constant (32). As a consequence, the photon gas will, for instance, store more energy, Eq. (35), than in Maxwell electrodynamics. Physically, the photon propagation in the background magnetic field leads to an energy transfer to the photon gas, increasing, in this way, its energy. Analogously, the pressure, entropy and heat capacity densities associated with the photon ideal gas in Eqs. (35) and (36) will increase.
With regard to the spectral density deviations due to the nonlinearity, we plot, in Fig. 1, the Planck frequency spectrum arising from the Maxwell theory and from the nonlinear models under consideration. The graph shows that, for a given temperature, the nonlinear models present an increase in the blackbody curve in comparison to the standard Planck distribution. This fact can be understood by evaluating Eq. (28), where one notes that the nonlinearity induced by the magnetic field leads to more accessible states for the photon gas, which causes an increase in the number of photons in each state.
## IV Final remarks
In this paper we have investigated the consequences of electromagnetic waves propagating in a direction perpendicular to a magnetized medium. Specifically, we have derived the blackbody radiation laws in this situation, such as the Planck frequency distribution and the Stefan-Boltzmann law. The Rayleigh-Jeans formula was contemplated as well. We found the emergence of an effective Stefan-Boltzmann constant, which leads to deviations in the thermodynamic quantities. We also studied the free energy, as well as the energy, pressure, entropy and heat capacity densities. As an application, we have considered three distinct nonlinear electrodynamics, namely, the generalized Born-Infeld, the Euler-Heisenberg and the Logarithmic electrodynamics. We also would like to remark that our approach can be applied to any nonlinear electrodynamics model within the validity of our assumptions. On the other hand, our framework does not treat the self-interaction of the photons rigorously, only effectively. A way to generalize this framework and take the photon self-interaction into account could be to consider the procedures of Field Theory at Finite Temperature [60].
As a future prospect, we intend to extend our analysis and investigate the thermodynamics of blackbody radiation in Lorentz symmetry violating scenarios in connection with nonlinear electrodynamics. Such scenarios seem plausible in neutron stars with strong magnetic fields, which could, in principle, unveil phenomena of physics beyond the SM. Features related to the blackbody phenomenon in compact extra dimensions, similar to what has been done by Ramos [61], can also be contemplated. In this sense, it might be worthwhile to explore nonlinear models which depend exclusively on powers of \(\mathscr{F}\) and then study the role of the extra dimensions in the blackbody radiation.
Finally, we would like to stress that there is intense research on modelling the emission spectrum of Magnetars in the region of soft X-rays. Magnetars are neutron stars with extremely intense magnetic fields. The study of their spectrum can be valuable to understand features related to the strong magnetic field in such compact objects. Usually, the emission spectrum is modelled taking into account a superposition of two blackbody components or a blackbody plus a power-law model. A computational implementation of our results and the use of the observational data from the Chandra X-ray Observatory, XMM-Newton and Suzaku can be very promising and has the potential to improve the modelling of the observed X-ray luminosity of Magnetars, as well as be used to test the linearity of Maxwell theory and to set constraints on nonlinear electrodynamics models.
Last but not least, we would like to call attention to the fact that the nonlinearity in the regime of critical fields can have an important role in the physical properties of Magnetars during the cooling process, impacting the internal structure of these objects, such as the equation of state of the dense matter, the superfluidity of several baryon species and the neutrino emission mechanisms. In this sense, a distinct luminosity pattern would be expected from Magnetars in comparison to ordinary neutron stars, which could be useful to distinguish such objects besides providing valuable information about the interior of Magnetars. We hope that these interesting features will stimulate further work on the subject.
Figure 1: Graph of the spectral density distribution of the evaluated models for \(T=0.5keV\). Here, we adopted \(c=\hbar=k_{B}=1\). The conversion of Tesla \(T\) to the natural system is \(1T=6.8\times 10^{-16}GeV^{2}\). In addition, in each model, we have considered a background magnetic field intensity \(B=3MeV^{2}\). For both Born-Infeld and Logarithmic electrodynamics, we set \(\beta=3MeV^{2}\). The blue line corresponds to the Planck spectrum, while the dashed green, orange and red are associated with Euler-Heisenberg, Born-Infeld and Logarithmic electrodynamics, respectively. According to Wien’s law, \(\nu_{max}\approx 0.45T\), the peak is localized at \(\nu_{max}\approx 225eV\).
## V Acknowledgements
This work is a part of the project INCT-FNA proc. No. 464898/2014-5. RT acknowledges financial support from the PCI program of the Brazilian agency Conselho Nacional de Desenvolvimento Cientifico e Tecnologico - CNPq. SBD thanks CNPq for partial financial support.
|
2308.11057 | Nonanalytic Corrections to the Landau Diamagnetic Susceptibility | We analyze potential non-analytic terms in the Landau diamagnetic
susceptibility, $\chi_{dia}$, at a finite temperature $T$ and/or in-plane
magnetic field $H$ in a two-dimensional (2D) Fermi liquid. To do this, we
express the diamagnetic susceptibility as $\chi_{dia} = (e/c)^2
\lim_{Q\rightarrow0} \Pi^{JJ}_\perp (Q)/Q^2$, where $\Pi^{JJ}_\perp$ is the
transverse component of the static current-current correlator, and evaluate
$\Pi^{JJ}_\perp (Q)$ for a system of fermions with Hubbard interaction to
second order in Hubbard $U$ by combining self energy, Maki-Thompson, and
Aslamazov-Larkin diagrams. We find that at $T=H=0$, the expansion of
$\Pi^{JJ}_\perp (Q)/Q^2$ in $U$ is regular, but at a finite $T$ and/or $H$, it
contains $U^2 T$ and/or $U^2 |H|$ terms. Similar terms have been previously
found for the paramagnetic Pauli susceptibility. We obtain the full expression
for the non-analytic $\delta \chi_{dia} (H,T)$ when both $T$ and $H$ are
finite, and show that the $H/T$ dependence is similar to that for the Pauli
susceptibility. | R. David Mayrhofer, Andrey V. Chubukov | 2023-08-21T21:55:01Z | http://arxiv.org/abs/2308.11057v2 | # Nonanalytic Corrections to the Landau Diamagnetic Susceptibility
###### Abstract
We analyze potential non-analytic terms in the Landau diamagnetic susceptibility, \(\chi_{dia}\), at a finite temperature \(T\) and/or finite magnetic field \(H\). To do this, we express the diamagnetic susceptibility as \(\chi_{dia}=(e/c)^{2}\lim_{Q\to 0}\Pi_{\perp}^{JJ}(Q)/Q^{2}\), where \(\Pi_{\perp}^{JJ}\) is the transverse component of the static current-current correlator, and evaluate \(\Pi_{\perp}^{JJ}(Q)\) for a system of fermions with Hubbard interaction to second order in Hubbard \(U\) by combining self energy, Maki-Thompson, and Aslamazov-Larkin diagrams. We find that at \(T=H=0\), the expansion of \(\Pi_{\perp}^{JJ}(Q)/Q^{2}\) in \(U\) is regular, but at a finite \(T\) and/or \(H\), it contains \(U^{2}T\) and/or \(U^{2}|H|\) terms. Similar terms have been previously found for the paramagnetic Pauli susceptibility. We obtain the full expression for the non-analytic \(\delta\chi_{dia}(H,T)\) when both \(T\) and \(H\) are finite, and show that the \(H/T\) dependence is similar to that for the Pauli susceptibility.
## I Introduction
This communication is about the Landau diamagnetic susceptibility, \(\chi_{dia}\), of interacting electrons in a 2D Fermi liquid. Landau diamagnetism comes from the orbital motion of electrons in the presence of a field [1; 2]. For non-interacting fermions, the Landau diamagnetic susceptibility is one third in magnitude and opposite in sign to the paramagnetic Pauli susceptibility \(\chi_{para}\), associated with the alignment of the electron spin in an applied magnetic field. For interacting electrons, the Pauli susceptibility \(\chi_{para}\) at zero temperature and in the limit of zero magnetic field differs from free-fermion expression \(\chi_{para}^{0}=2\mu_{B}^{2}N_{F}\) by a factor [3; 4]:
\[\chi_{para}=\chi_{para}^{0}\frac{m^{*}/m}{1+F_{0}^{s}}, \tag{1}\]
where \(m\) is the electron mass, \(m^{*}\) is the effective mass, dressed by the interaction, and \(F_{0}^{s}\) is the Landau coefficient in the spin channel with angular momentum component \(l=0\). Both \(m^{*}/m\) and \(F_{0}^{s}\) can be obtained perturbatively, in the expansion either in \(r_{s}\) for the Coulomb interaction, or in the Hubbard \(U\) for short-range interaction (the dimensionless expansion parameter is \(N_{F}U\), where \(N_{F}\) is the density of states on the Fermi surface). In the Galilean-invariant case, the expansion in \(N_{F}U\) in 2D yields, to order \((N_{F}U)^{2}\)[5]
\[\frac{m^{*}}{m} = 1+\frac{1}{2}(N_{F}U)^{2}\] \[F_{0}^{s} = -N_{F}U+(N_{F}U)^{2}\log 2\] \[\chi_{para} = \chi_{para}^{0}\left(1+N_{F}U+(N_{F}U)^{2}\left(\frac{3}{2}-\log 2 \right)\right) \tag{2}\]
where \(N_{F}=m/(2\pi)\).
At a finite temperature \(T\) and/or a finite magnetic field \(H\), \(\chi_{para}\) has been obtained by analyzing corrections to Landau Fermi liquid theory both in 3D and in 2D [6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. The Pauli susceptibility of free fermions has a regular expansion in \((T/E_{F})^{2}\) and \((\mu_{B}H/E_{F})^{2}\), where \(E_{F}\) is the Fermi energy; however, in the presence of interactions, \(\chi_{para}(T,H)\) in 2D has a linear in \(T\) dependence at small \(T\) and a linear in \(H\) dependence at small \(H\). These dependencies, along with a \(|Q|\) dependence of \(\chi_{para}(Q)\) at \(T=H=0\), come exclusively from backscattering and reflect a special role of the subset of 1D scattering processes in a multi-dimensional system (a 2D system in our case). More specifically, 1D scattering accounts for the \(\Omega/q\) form of the Landau damping in the limit when \(\Omega\ll v_{F}q\). Because of the \(1/q\) dependence, the effective interaction dressed by Landau damping is long-ranged. A finite \(T\) and/or a finite \(H\) acts as a mass term that converts the long-range interaction into a short-range one; this makes the derivatives \(d^{2}\chi_{para}(T)/dT^{2}\) and \(d^{2}\chi_{para}(H)/dH^{2}\) singular and leads to non-analytic \(T\) and \(H\) dependencies.
For the 2D Hubbard model, the corrections to the paramagnetic spin susceptibility to order \(U^{2}\), at a finite \(T\) and \(H=0\) and at a finite \(H\) and \(T=0\), are [10; 15]
\[\delta\chi_{para}(T) = \chi_{para}(T)-\chi_{para}(0)=\chi_{para}^{0}\frac{N_{F}^{2}U^{2} }{2}\frac{T}{E_{F}},\] \[\delta\chi_{para}(H) = \chi_{para}^{0}N_{F}^{2}U^{2}\frac{\mu_{B}|H|}{E_{F}} \tag{3}\]
When both \(H\) and \(T\) are non-zero, we have [9]
\[\delta\chi_{para}(H,T)=\chi_{para}^{0}\frac{N_{F}^{2}U^{2}}{2}\frac{\mu_{B}H}{E_{F}}\,{\rm csch}^{2}\left(\frac{\mu_{B}H}{T}\right)\left[\sinh\left(2\frac{\mu_{B}H}{T}\right)-\frac{\mu_{B}H}{T}\right]. \tag{4}\]
The linear in \(T\) behavior of the paramagnetic spin susceptibility in 2D has been detected in iron pnictides [16; 17; 18]. The same physics gives rise to non-analytic temperature dependence of the specific heat coefficient, \(C(T)/T=a_{2}+b_{2}T\) in 2D and \(C(T)/T=a_{3}+b_{3}T^{2}\log T\) in 3D (see e.g., Refs. [77; 7; 7; 7; 1]). The latter was first observed in UAl\({}_{2}\)[19] and later in other uranium alloys as well as TiBe\({}_{2}\)[20; 21; 22]. The linear in \(T\) behavior of \(C(T)/T\) has also been observed in helium films on a variety of substrates [23]. The goal of our work is to perform the same type of analysis for the diamagnetic susceptibility, \(\chi_{dia}\). It has been argued [4] that the Landau diamagnetic susceptibility cannot be obtained within Fermi liquid theory. This can be understood by recognizing that the diamagnetic susceptibility is proportional to the gradient \(Q^{2}\) term of the transverse static current-current correlator \(\Pi_{\perp}^{JJ}(Q,0)\). This gradient term generally comes from fermions away from the Fermi surface and falls outside the realm of the Fermi liquid theory [3; 4]. Still, \(\chi_{dia}\) can be computed directly in the expansion in either \(r_{s}\) or \(N_{F}U\). We consider short-range interaction and compute \(\chi_{dia}\) to second order in \(N_{F}U\) in 2D. We address two issues: (i) whether \(\chi_{dia}(T=H=0)\) is a regular function of \(N_{F}U\) and (ii) whether \(\chi_{dia}(T,H)\) is a non-analytic function of temperature and magnetic field. We specifically consider the case of an infinitesimally small transverse field, which causes orbital motion of 2D fermions, and a finite Zeeman field within the plane. It is not clear a priori whether \(\chi_{dia}(T,H)\) has to be non-analytic. On one hand, it is a component of the magnetic susceptibility, and its other component, \(\chi_{para}\), is non-analytic. On the other hand, \(\chi_{dia}\) is expressed in terms of the correlator of charge currents. A charge susceptibility does not have a non-analytic \(T\) and \(H\) dependence, because it measures the response to a variation of the chemical potential \(\mu\), and such a variation does not affect the \(\Omega/q\) term of the Landau damping [24].
On (i), we show that \(\chi_{dia}(T=H=0)\) is regular, much like \(\chi_{para}\) in Eq. (2). The only difference is that the linear in \(U\) term is absent. A regular \(\chi_{dia}(T=H=0)\) implies that \(\Pi_{\perp}^{JJ}(Q,H=T=0)\) scales as \(Q^{2}\). This is expected but not a priori guaranteed, as we will see that individual diagrams for \(\Pi_{\perp}^{JJ}(Q,H=T=0)\) do contain \(|Q|\) terms. Such terms exist for the spin-spin correlator, where they combine into a non-zero total \(|Q|\) term, and for the charge-charge correlator (the same, up to a prefactor, as the density-density correlator), where the \(|Q|\) contributions from individual diagrams cancel out. We show that for \(\Pi_{\perp}^{JJ}(Q,H=T=0)\) the \(|Q|\) terms from individual diagrams cancel out. In this respect, the behavior of the current-current correlator is similar to that of the charge-charge correlator. Our analysis of \(\chi_{dia}(T=H=0)\) complements several earlier studies [25; 26; 27], which computed \(\chi_{dia}(T=H=0)\) for a system with Coulomb interaction to first order in \(r_{s}\), and found regular \(O(r_{s})\) corrections.
On (ii) we find that at a finite \(T\) and/or finite \(H\), the \(Q^{2}\) term in \(\Pi_{\perp}^{JJ}\) contains \(Q^{2}|H|\) and \(Q^{2}T\) terms, i.e., \(\chi_{dia}(T,H)\) is non-analytic, much like \(\chi_{para}(T,H)\). We note that the in-plane field \(H\) does not directly affect the diamagnetic susceptibility. Rather, this magnetic field serves to induce a spin dependent dispersion via Zeeman splitting. It is precisely this change in dispersion that will lead to a nonanalytic dependence of the diamagnetic susceptibility on \(H\). We combine \(\chi_{para}\) and \(\chi_{dia}\) and obtain the non-analytic term in the full magnetic susceptibility.
## II General theory
The Landau diamagnetic susceptibility is related to the static current-current correlation function as
\[\chi_{dia}=\frac{e^{2}}{c^{2}}\lim_{Q\to 0}\frac{\Pi_{\perp}^{JJ}(Q)}{Q^{2}}=4m^{2} \mu_{B}^{2}\lim_{Q\to 0}\frac{\Pi_{\perp}^{JJ}(Q)}{Q^{2}}, \tag{5}\]
where \(\Pi_{\perp}^{JJ}\) is the component of the current-current correlation perpendicular to the direction of \(Q\)[4; 25] (we set \(\hbar=1\)). We note that \(\Pi_{\perp}^{JJ}(Q)\) in (5) is the total current-current correlator, subject to \(\lim_{Q\to 0}\Pi_{\perp}^{JJ}(Q,0)=0\)[28]. Diagrammatically, \(\Pi_{\perp}^{JJ}(Q)\) is expressed as the fully dressed particle-hole bubble with full Green's functions and one dressed and one bare current vertex (Fig. 1). In analytic form,
\[\Pi_{\perp}^{JJ}({\bf Q})=-2T\sum_{\omega_{m}}\int\frac{d^{2}k}{(2\pi)^{2}}v_{ \bf k}^{\perp}\Gamma_{\perp}({\bf k},{\bf Q})\left(G_{Q}-G_{Q\to 0}\right), \tag{6}\]
where \(v_{\bf k}^{\perp}\) is the component of the velocity perpendicular to the direction of \({\bf Q}\), \(\Gamma_{\perp}({\bf k},{\bf Q})\) is the fully dressed transverse current vertex, and \(G_{Q}=G({\bf k}+{\bf Q}/2,\omega_{m})G({\bf k}-{\bf Q}/2,\omega_{m})\).
For non-interacting fermions, \(\Gamma_{\perp}({\bf k})=v_{\bf k}^{\perp}\), and
\[\Pi_{\perp}^{JJ}({\bf Q})=-2T\sum_{\omega_{m}}\int\frac{d^{2}k}{(2\pi)^{2}} \left(v_{\bf k}^{\perp}\right)^{2}\left(G_{0}({\bf k}+{\bf Q}/2,\omega_{m})G_{0 }({\bf k}-{\bf Q}/2,\omega_{m})-G_{0}^{2}({\bf k},\omega_{m})\right), \tag{7}\]
where \(G_{0}({\bf k},\omega_{m})=\left(i\omega_{m}-\varepsilon_{k}\right)^{-1}\) is the free-fermion Green's function. At \(T=0\), \(T\sum_{\omega_{m}}=\int d\omega_{m}/(2\pi)\). The momentum and frequency integral is infra-red and ultra-violet convergent and can be evaluated by integrating over momentum and frequency in any order. For a parabolic dispersion, Eq. (7) yields, to lowest order in \(Q\), \(\Pi_{\perp}^{JJ}({\bf Q})=-Q^{2}/(12\pi m)\) in 2D, and Eq. (5) reproduces the usual expression for the Landau diamagnetic susceptibility, \(\chi_{dia}^{0}=-\frac{1}{3}\chi_{para}^{0}\), in both 2D and 3D [3; 4]. We show in Appendix A that for an arbitrary dispersion, Eq. (7) reproduces the Landau-Peierls expression:
\[\chi_{dia}=\frac{2\mu_{B}^{2}m^{2}}{3(2\pi)^{d}}\int d^{d}k\,n_{F}^{\prime}( \varepsilon_{k})\left(\frac{\partial^{2}\varepsilon_{\bf k}}{\partial k_{x}^ {2}}\frac{\partial^{2}\varepsilon_{\bf k}}{\partial k_{y}^{2}}-\left(\frac{ \partial^{2}\varepsilon_{\bf k}}{\partial k_{x}\partial k_{y}}\right)^{2} \right). \tag{8}\]
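As a quick sanity check of Eq. (8), the sketch below evaluates it for a 2D parabolic band at low temperature in assumed units \(\hbar=\mu_{B}=m=E_{F}=1\); the curvature factor is \(1/m^{2}\) and the result should approach \(-\frac{1}{3}\chi_{para}^{0}=-\frac{2}{3}\mu_{B}^{2}N_{F}\) with \(N_{F}=m/(2\pi)\):

```python
import numpy as np

# Numerical evaluation of the Landau-Peierls formula, Eq. (8), for a 2D parabolic band.
m, muB, EF, T = 1.0, 1.0, 1.0, 0.005           # assumed units; low T mimics n_F' -> -delta(eps)

k = np.linspace(1e-4, 3.0, 300001)             # radial momentum grid
eps = k**2/(2*m) - EF
nFp = -1.0/(4*T*np.cosh(eps/(2*T))**2)         # derivative of the Fermi function
f = nFp*(1/m**2)*k/(2*np.pi)                   # curvature factor 1/m^2; d^2k -> 2*pi*k dk/(2*pi)^2
chi_dia = (2*muB**2*m**2/3)*np.sum(0.5*(f[1:] + f[:-1])*np.diff(k))

print(chi_dia, -(2/3)*muB**2*m/(2*np.pi))      # both ~ -0.106
```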
The diagrammatic series for the full \(\Pi_{\perp}^{JJ}({\bf Q})\) is obtained by adding vertex corrections and by dressing the fermionic propagators. Diagrams for \(\Pi_{\perp}^{JJ}({\bf Q})\) to first and second order in \(N_{F}U\) are presented in Figs. 2 and 3. The wavy lines in these diagrams represent the Hubbard \(U\). In Fig. 2, the diagram with the renormalization of the fermionic line is traditionally called the "self-energy" or "density of states" diagram and the one with the vertical wavy line is called the Maki-Thompson diagram. In Fig. 3 for \(\Pi_{\perp}^{JJ}({\bf Q})\) to order \(U^{2}\), the first two diagrams renormalize \(G_{0}\) into \(G\), others renormalize one \(v_{\bf k}^{\perp}\) into \(\Gamma_{\perp}({\bf k},{\bf Q})\). The last two diagrams in Fig. 3 (diagrams f and g) are traditionally called Aslamazov-Larkin diagrams and we will use this notation [29].
We will see below that the full \(\Pi_{\perp}^{JJ}({\bf Q})\) at order \(U^{2}\) and at \(T=0\), \(H=0\) can be expressed in terms of the three diagrams (a), (c), and (g) in Fig. 3 (the other four diagrams in Fig. 3 are expressed in terms of these three). To make our notation more concise, we will designate diagram (a) as the second-order self-energy diagram, diagram (c) as the second-order Maki-Thompson diagram, and diagram (g) as the Aslamazov-Larkin diagram.
For definiteness, in the analysis below we assume that the fermionic dispersion is parabolic.
## III Nonanalyticities of the polarization bubble
Diagrams (a), (c) and (f) in Fig. 3 all contain a polarization bubble of free fermions \(\Pi_{ph}(q,\Omega_{m})\). Before we proceed with the calculation of these diagrams, first at \(T=H=0\) and then at finite \(T\) and \(H\), it is instructive to list the expressions for \(\Pi_{ph}(q,\Omega_{m})\) at small frequency \(\Omega_{m}\) and momenta \(q\) near either \(0\) or \(2k_{F}\), as these expressions will determine the non-analyticities of \(\Pi_{\perp}^{JJ}({\bf Q})\) at finite \(T\) and \(H\)[7; 9; 10; 30; 31].
At \(T=H=0\), the particle-hole polarization bubble of free fermions in 2D is given by
\[\Pi_{ph}(q,\Omega_{m})=\int\frac{d^{2}k}{(2\pi)^{2}}\frac{d\omega_{n}}{2\pi}G( {\bf k},\omega_{n})G({\bf k}+{\bf q},\omega_{n}+\Omega_{m}), \tag{9}\]
At small \(q\) and \(\Omega_{m}\),
\[\Pi_{ph}^{q\to 0}(q,\Omega_{m})=\frac{m}{2\pi}\left(-1+\frac{\left|\Omega_{m} \right|}{\sqrt{\left(v_{F}q\right)^{2}+\Omega_{m}^{2}}}\right). \tag{10}\]
Figure 1: The fully dressed polarization bubble that represents the current-current correlation function.
At \(v_{F}q\gg|\Omega_{m}|\), \(\Pi_{ph}^{q\to 0}(q,\Omega_{m})\) contains a non-analytic \(|\Omega_{m}|/q\) term.
Near \(q=2k_{F}\)
\[\Pi_{ph}^{q\to 2k_{F}}(q,\Omega_{m})=\frac{m}{2\pi}\left(-1+\frac{1}{2}\left(\sqrt{ \frac{\tilde{q}}{k_{F}}-\frac{i\Omega_{m}}{v_{F}k_{F}}}+\sqrt{\frac{\tilde{q} }{k_{F}}+\frac{i\Omega_{m}}{v_{F}k_{F}}}\right)\right), \tag{11}\]
where \(\tilde{q}=q-2k_{F}\). For \(\tilde{q}<0\) and \(v_{F}|\tilde{q}|>|\Omega_{m}|\), \(\Pi_{ph}^{q\to 2k_{F}}(q,\Omega_{m})\) again contains a non-analytic \(|\Omega_{m}|/\tilde{q}\) term, as one can readily verify by expanding in small \(\Omega_{m}/\tilde{q}\) around the branch cuts in the square roots.
These forms of the polarization bubbles give rise to the appearance of non-analytic \(|Q|\) terms in the individual diagrams for the current-current correlation function already at \(T=H=0\). We show later that these nonanalyticities cancel once all diagrams are added together.
For the analysis of non-analyticities in \(\chi_{dia}\), we are ultimately interested in the cases of finite temperature and finite magnetic field. When \(T=0\) and \(H\) is finite, the polarization is spin dependent:
\[\Pi_{ph}^{\alpha\beta}(q,\Omega_{m})=\int\frac{d^{d}k}{(2\pi)^{d}}\frac{d\omega }{2\pi}G^{\alpha}(\mathbf{k},\omega)G^{\beta}(\mathbf{k}+\mathbf{q},\omega+ \Omega_{m}) \tag{12}\]
Then one has to distinguish between \(\Pi_{ph}^{\uparrow\uparrow}(q,\Omega_{m})\) and \(\Pi^{\uparrow\downarrow}(q,\Omega_{m})\). At small \(q\) and \(\Omega_{m}\),
\[\Pi^{\uparrow\uparrow}(q,\Omega_{m}) = \frac{m}{2\pi}\frac{|\Omega_{m}|}{v_{F}q}+\cdots \tag{13}\] \[\Pi^{\uparrow\downarrow}(q,\Omega_{m}) = \frac{m}{2\pi}\frac{|\Omega_{m}|}{\sqrt{(v_{F}q)^{2}-(2\mu_{B}H)^ {2}}}+\cdots \tag{14}\]
where dots stand for regular terms. We see that a finite \(H\) is crucial for \(\Pi_{ph}^{\uparrow\downarrow}(q,\Omega_{m})\), where it cuts a long-range interaction and causes singularity in the derivative with respect to \(H\), but not essential for \(\Pi_{ph}^{\uparrow\uparrow}(q,\Omega_{m})\). Near \(q=2k_{F}\), the situation is opposite:
\[\Pi_{ph}^{\uparrow\uparrow}(q,\Omega_{m}) = \frac{m}{4\pi}\left(\sqrt{\frac{\tilde{q}}{k_{F}}-\frac{i\Omega_{ m}}{v_{F}k_{F}}-\frac{2\mu_{B}H}{v_{F}k_{F}}}+\sqrt{\frac{\tilde{q}}{k_{F}}+ \frac{i\Omega_{m}}{v_{F}k_{F}}-\frac{2\mu_{B}H}{v_{F}k_{F}}}\right)+\cdots \tag{15}\] \[\Pi_{ph}^{\uparrow\downarrow}(q,\Omega_{m}) = \frac{m}{4\pi}\left(\sqrt{\frac{\tilde{q}}{k_{F}}-\frac{i\Omega_{ m}}{v_{F}k_{F}}}+\sqrt{\frac{\tilde{q}}{k_{F}}+\frac{i\Omega_{m}}{v_{F}k_{F}}} \right)+\cdots, \tag{16}\]
where the ellipses again stand for analytic terms. We see that a finite \(H\) affects the term where both spin indices are the same and does not affect the term with opposite spin indices. Below we combine fermions into particle-hole pairs in such a way that we only get terms \(\Pi_{ph}^{\uparrow\downarrow}(q,\Omega_{m})\). With this we ensure that all nonanalytic contributions come from only internal \(q\approx 0\).
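The role of \(H\) as an infrared cutoff in Eq. (14) can be made explicit by a naive Taylor expansion in the field: the \(H^{2}\) coefficient grows as \(1/q^{3}\) at small \(q\), so the expansion cannot be integrated over \(q\), and the \(H\) dependence of the \(q\)-integrated quantity becomes non-analytic. A short symbolic sketch:

```python
import sympy as sp

# Expansion of the Landau-damping denominator of Eq. (14) in the Zeeman field H:
# 1/sqrt((v_F q)^2 - (2 mu_B H)^2) = 1/(v_F q) + 2 mu_B^2 H^2/(v_F^3 q^3) + O(H^4),
# and the H^2 coefficient diverges as q -> 0.
vF, q, muB, H = sp.symbols('v_F q mu_B H', positive=True)
expr = 1/sp.sqrt((vF*q)**2 - (2*muB*H)**2)
print(sp.series(expr, H, 0, 4))
```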
At a finite \(T\) and \(H=0\), the particle-hole polarization bubble near \(q=0\) has the same form as at \(T=0\), Eq. (10), but now the Matsubara frequencies are discrete, \(\Omega_{m}=2\pi mT\). The dynamical piece is present at \(m\neq 0\), when \(\Omega_{m}\) is finite. The same finite \(\Omega_{m}\) then appears in the denominator and cuts the long-range interaction at \(v_{F}q<2\pi T\), i.e., at distances \(r>v_{F}/(2\pi T)\). This in turn causes a singularity in the temperature derivative of \(\Pi_{ph}(q,\Omega_{m})\). We are not aware of a tractable analytic form of the polarization bubble near \(2k_{F}\) at \(T\neq 0\). We present an approximate result in Appendix E.2.2.
## IV Zero temperature and zero magnetic field
### First Order in \(U\)
We first consider corrections to the diamagnetic susceptibility in the Hubbard model at both \(T=0\) and \(H=0\). The diagrams for \(\Pi_{\perp}^{JJ}(Q,0)\) are shown in Fig. 2. These diagrams have already been evaluated for the diamagnetic susceptibility in the case of a dynamically screened Coulomb interaction in RPA [25]. We show that in the Hubbard model, each of these diagrams evaluates to \(0\).
We can write the contribution of the Maki-Thompson diagram as
\[\Pi_{\perp}^{JJ,MT}(Q,0)=2U\left(\int\frac{d^{2}k}{\left(2\pi\right)^{2}}\frac{d \omega}{2\pi}v_{k}^{y}G_{k-Q/2}G_{k+Q/2}\right)^{2}, \tag{17}\]
where \(G_{k\pm Q/2}=G(\mathbf{k}\pm\mathbf{Q}/2,\omega)\). Taking \(\mathbf{k}\rightarrow-\mathbf{k}\), and noting \(\varepsilon_{-\mathbf{k}}=\varepsilon_{\mathbf{k}}\), we find
\[\int\frac{d^{2}k}{\left(2\pi\right)^{2}}\frac{d\omega}{2\pi}v_{k}^{y}G_{k-Q/2 }G_{k+Q/2}=-\int\frac{d^{2}k}{\left(2\pi\right)^{2}}\frac{d\omega}{2\pi}v_{k}^ {y}G_{k-Q/2}G_{k+Q/2}=0. \tag{18}\]
For the self energy diagram, we first note that there is a combinatorial factor of two in addition to the factor of two due to spin summation. The resulting susceptibility is then
\[\Pi_{\perp}^{JJ,SE}(Q,0)=4U\left(\int\frac{d^{2}k^{\prime}}{(2\pi)^{2}}\frac{d \omega^{\prime}}{2\pi}G_{k^{\prime}}\right)\left(\int\frac{d^{2}k}{(2\pi)^{2 }}\frac{d\omega}{2\pi}\left(v_{k}^{y}\right)^{2}G_{k+Q/2}^{2}G_{k-Q/2}\right) \tag{19}\]
Changing \(\mathbf{k}\rightarrow-\mathbf{k}\) in the second term, we find
\[\int d^{2}kd\omega(v_{k}^{y})^{2}G_{k+Q/2}^{2}G_{k-Q/2}=\int d^{2}kd\omega(v_ {k}^{y})^{2}G_{k+Q/2}G_{k-Q/2}^{2}. \tag{20}\]
On the other hand, doing frequency integration first, we find
\[\int d\omega\,G_{k+Q/2}^{2}G_{k-Q/2} =i\int d\omega\frac{\partial}{\partial\omega}\left(G_{k+Q/2} \right)G_{k-Q/2}=-i\int d\omega G_{k+Q/2}\frac{\partial}{\partial\omega}\left( G_{k-Q/2}\right) \tag{21}\] \[=-\int d\omega\,G_{k+Q/2}G_{k-Q/2}^{2} \tag{22}\]
Comparing the two expressions, we see that \(\Pi_{\perp}^{JJ,SE}(Q,0)=0\). We then must move to second order in \(U\) to detect the effects of interaction.
### Second Order in \(U\)
To second order in \(U\), there are a total of seven nontrivial diagrams that contribute to the current-current correlator, as shown in Fig. 3. We call the corresponding contribution \(\Pi_{i}\) (\(i=a\) to \(g\)). We incorporate factors of 2 from combinatorics and from spin summation into \(\Pi_{i}\).
The calculation of the diagrams is tedious but straightforward. We present some details in Appendices B and C and here cite the results. First, we verified that there are particular relations between different \(\Pi_{i}\), namely \(\Pi_{a}=-2\Pi_{b}\), \(\Pi_{c}=\Pi_{f}=-\Pi_{d}\), and \(\Pi_{e}=-\frac{1}{2}\Pi_{g}\). The total contribution will then be
\[\Pi_{\perp}^{JJ}(Q,0)=\frac{1}{2}\Pi_{a}+\Pi_{c}+\frac{1}{2}\Pi_{g}=\frac{1}{2 }\left(\Pi_{a}+\Pi_{c}\right)+\frac{1}{2}\left(\Pi_{f}+\Pi_{g}\right) \tag{23}\]
Figure 2: The two distinct diagrams which appear at first order in \(U\). The first diagram is often called the Maki-Thompson diagram and the second is often called the self energy diagram.
Next, we find that \(O(Q^{2})\) contributions from diagrams (f) and (g) cancel (see Appendix C). Then we can write the current-current correlator as
\[\Pi_{\perp}^{JJ}(Q,0)=\frac{1}{2}\left(\Pi_{a}+\Pi_{c}\right)\] \[=2U^{2}\int_{k,q}\Pi(q,\Omega_{m})\Big{(}2\left(v_{k}^{y}\right)^ {2}G_{k+Q/2}^{2}G_{k-Q/2}G_{k+q+Q/2} \tag{24}\] \[+v_{k}^{y}v_{k+q}^{y}G_{k+Q/2}G_{k+q+Q/2}G_{k-Q/2}G_{k+q-Q/2} \Big{)},\]
where \(\Pi(q,\Omega_{m})=\int\frac{d^{2}p}{(2\pi)^{2}}\frac{d\omega_{p}}{2\pi}G_{p}G_{p+q}\) and we have used the abbreviation \(\int_{k}=\int d^{2}kd\omega_{n}/(2\pi)^{3}\). Finally, we verified that while both \(\Pi_{a}\) and \(\Pi_{c}\) contain non-analytic \(|Q|\) terms, the sum of the two has no net nonanalyticity, i.e., the expansion in \(Q\) starts with \(Q^{2}\); the details are in Appendix B.
In the explicit calculation of \(\Pi_{\perp}^{JJ}(Q,0)\) from (24), we evaluate the integral over \(\omega_{n}\) first and then expand to order \(Q^{2}\). This procedure ensures that the relevant contributions from frequencies of order \(v_{F}Q\) are all included. The result is
\[\Pi_{\perp}^{JJ}(Q,0)=-\frac{mQ^{2}U^{2}}{(2\pi)^{4}}\int\limits_ {0}^{\infty}d\Omega_{m}\int\limits_{0}^{\infty}\frac{dq}{6q^{3}}\left(\sqrt{ \alpha^{2}-1}+\sqrt{(\alpha^{*})^{2}-1}\right)\] \[\left(\frac{\alpha^{2}q^{2}+2q^{2}+6\alpha q+3}{\left(\alpha^{2} -1\right)^{5/2}}+\frac{\left(\alpha^{*}\right)^{2}q^{2}+2q^{2}+6\alpha^{*}q+3 }{\left(\left(\alpha^{*}\right)^{2}-1\right)^{5/2}}\right), \tag{25}\]
where \(\alpha=\frac{i\Omega_{m}}{q}-\frac{q}{2}\). Numerical evaluation of the integral gives
\[\delta\Pi_{\perp}^{JJ}(Q,0)=-0.2618\frac{mQ^{2}U^{2}}{(2\pi)^{4}} \tag{26}\]
For the correction to the diamagnetic susceptibility, we then have
\[\delta\chi_{dia}=\frac{e^{2}}{c^{2}}\lim_{Q\to 0}\frac{\delta\Pi_{\perp}^{JJ}(Q,0)}{Q^{2}}= A\chi_{dia}^{0}N_{F}^{2}U^{2} \tag{27}\]
Figure 3: The seven diagrams which contribute to the current-current correlator at order \(U^{2}\). We call diagram (a) the second order self energy correction, diagram (c) the second order Maki-Thompson correction, and (g) the Aslamazov-Larkin diagram.
where \(A=0.7854/\pi\). To high numerical accuracy, \(A=1/4\). This is very likely the exact value. We see this correction enhances the diamagnetic susceptibility compared to that for free fermions. We note that the sign of this correction is opposite to the sign found in previous work in the case of the dynamically screened Coulomb interaction [25; 27]. In that case, interactions have been found to decrease the magnitude of the diamagnetic susceptibility. However, Refs. [25; 27] only considered diagrams to first order in the interaction. In our case, a non-zero result for \(\delta\chi_{dia}\) appears at second order in \(U\). By magnitude, \(\delta\chi_{dia}/\chi_{dia}^{0}\) is about a third of \(\delta\chi_{para}/\chi_{para}^{0}\) in Eq. (2). We see that even though diamagnetism is enhanced by the Hubbard interaction, the enhancement is smaller than the increase in the paramagnetic susceptibility.
## V Nonanalytic contributions to the current-current correlator
We now consider the effects of finite temperature and finite in-plane magnetic field. To do so, we consider precisely the same \(U^{2}\) terms as before, but set either \(H\) or \(T\) finite. At a non-zero \(H\), the fermionic dispersion becomes spin dependent, \(\varepsilon_{k}\rightarrow\varepsilon_{k}^{\uparrow(\downarrow)}=\varepsilon_{k}\pm\mu_{B}H\), while at finite temperature the integrals over frequency are replaced with Matsubara sums, with \(\Omega_{m}=2\pi mT\) and \(\omega_{n}=(2n+1)\pi T\). We recall that we have chosen an in-plane field to make a direct comparison to the case of the spin susceptibility, for which a Zeeman field gives rise to non-analyticity.
We calculate both \(\delta\chi_{dia}(H,0)\) and \(\delta\chi_{dia}(0,T)\) analytically by restricting to contributions from small \(q\) in the polarization bubble \(\Pi_{ph}(q,\Omega_{m})\). For a finite \(H\), we argue that this is the full non-analytic contribution. For a non-zero \(T\) and \(H=0\), there may be an additional contribution from \(q\sim 2k_{F}\) (see below).
### Magnetic Field
As we said, in a finite in-plane field, fermionic Green's functions become spin-dependent, \(G_{k,\alpha}=(i\omega_{n}-\varepsilon_{k}^{\alpha})^{-1}\) and \(\varepsilon_{k}^{\uparrow(\downarrow)}=\varepsilon_{k}\pm\mu_{B}H\). We first note that upon adding all diagrams, the terms that exclusively contain \(G_{\uparrow}\) or \(G_{\downarrow}\) will cancel. We can see this by explicitly adding up diagrams with the same momentum labeling, then noting the difference in the spin indices for each of the Green's functions. As an example, consider diagrams (a) and (b). Writing
Figure 4: The relevant diagrams for the current-current correlation in the presence of a finite magnetic field. \(\alpha\) and \(\beta\) label spin-up and spin-down states, respectively.
them together, we have
\[\int_{k,q}\left(v_{k}^{y}\right)^{2}\Bigg{(}\sum_{\alpha,\beta}\Pi^{ \alpha\beta}(q,\Omega)\left(G_{k+Q/2,\alpha}\right)^{2}G_{k-Q/2,\alpha}G_{k+q+Q/2,\beta}\] \[-\sum_{\alpha}\Pi^{\alpha\alpha}(q,\Omega)\left(G_{k+Q/2,\alpha} \right)^{2}G_{k-Q/2,\alpha}G_{k+q+Q/2,\alpha}\Bigg{)}\] \[= \int_{k,q}\left(v_{k}^{y}\right)^{2}\sum_{\alpha\neq\beta}\Pi^{ \alpha\beta}\left(G_{k+Q/2,\alpha}\right)^{2}G_{k-Q/2,\alpha}G_{k+q+Q/2,\beta}. \tag{28}\]
This immediately implies that out of seven diagrams in Fig. 4, only diagrams (a), (c), (f) and (g) contribute, with spin index \(\beta\neq\alpha\). Next, we explicitly verify (see Appendix C) that at order \(Q^{2}\), diagrams (c) and (g) cancel each other, i.e., \(\delta\chi_{dia}(H,0)\) is the sum of diagrams (a) and (f). Finally, we use the fact that non-analyticity in the polarization bubble made of fermions with opposite spin projections comes from momenta \(q\approx 0\) and construct the polarization bubble in the diagram (f) out of fermionic propagators shown by vertical lines (they have opposite spins \(\alpha\) and \(\beta\)), and construct the polarization bubble in the diagram (a) using one of the two \(\beta\) fermions and the \(\alpha\) fermion "located" immediately below \(\beta\) fermions in Fig. 4. The sum of diagrams (a) and (f) is then expressed as
\[\delta\Pi_{\perp}^{JJ}(H)=2U^{2}\sum_{\alpha\neq\beta}\int_{k,q} \Big{[}\Pi^{\alpha\beta}(q,\Omega_{m})\Big{(}2(v_{k}^{y})^{2}\left(G_{k+Q/2}^ {\alpha}\right)^{2}G_{k-Q/2}^{\alpha}G_{k+q+Q/2}^{\beta}\] \[+v_{k}^{y}v_{k+q}^{y}G_{k+Q/2}^{\alpha}G_{k-Q/2}^{\alpha}G_{k+q+Q/ 2}^{\beta}G_{k+q-Q/2}^{\beta}\Big{)}\Big{]} \tag{29}\]
The evaluation of this expression is again tedious but straightforward. We present the details in Appendix D. The result is
\[\delta\Pi_{\perp}^{JJ}(H)=-\frac{U^{2}Q^{2}k_{F}^{2}}{m^{2}(2\pi)^{4}}\int_{- \infty}^{\infty}d\Omega_{m}\int_{0}^{\infty}dq\frac{q^{3}\Omega_{m}^{2}\left( 4\left(i\Omega_{m}+2\mu_{B}H\right)^{2}+v_{F}^{2}q^{2}\right)}{4\left(\left(i \Omega_{m}+2\mu_{B}H\right)^{2}-v_{F}^{2}q^{2}\right)^{4}}. \tag{30}\]
Integrating over \(q\), subtracting off the \(H=0\) case, and then integrating over \(\Omega_{m}\), we find
\[\delta\Pi_{\perp}^{JJ}(H)=-\frac{U^{2}Q^{2}m}{24(2\pi)^{3}}\frac{\mu_{B}|H|}{ E_{F}}\implies\delta\chi_{dia}(H)=\frac{1}{4}\chi_{dia}^{0}U^{2}N_{F}^{2} \frac{\mu_{B}|H|}{E_{F}}. \tag{31}\]
To verify this result, we computed the sum of diagrams (a) and (f) numerically, not restricting to small \(q\). We plot the results in Fig. 5. We see that there is a fairly good agreement with the analytical analysis, in which we restricted to only small \(q\). The agreement confirms that the non-analytic \(\delta\chi_{dia}(H,0)\) comes from only \(q\ll k_{F}\).
### Finite Temperature
We now perform the same analysis as above in the case of \(H=0\) but \(T\neq 0\). We first consider analytically the contribution from small \(q\), i.e. from \(v_{F}q\sim\Omega_{m}\sim T\). The calculation is very similar to the one in the previous section and the result is Eq. (29) with \(H=0\), and \(\int d\Omega_{m}/2\pi\to T\sum_{\Omega_{m}}\). Using the expression for \(\Pi(q,\Omega_{m})\) in Eq. (10), this equation can be re-expressed as
\[\delta\Pi_{\perp}^{JJ}(T)=-\frac{U^{2}Q^{2}k_{F}^{2}}{m^{2}(2\pi)^{3}}T\int dq \sum_{m}\frac{q^{3}\Omega_{m}^{2}\left(v_{F}^{2}q^{2}-4\Omega_{m}^{2}\right)} {4\left(\Omega_{m}^{2}+v_{F}^{2}q^{2}\right)^{4}}, \tag{32}\]
We now sum over \(\Omega_{m}\), subtract off the \(T=0\) contribution, and integrate over \(q\). Doing so, we find
\[\delta\Pi_{\perp}^{JJ}(T)=-\frac{U^{2}Q^{2}m}{48(2\pi)^{3}}\frac{T}{E_{F}} \implies\delta\chi_{dia}(T)=\frac{1}{8}\chi_{dia}^{0}U^{2}N_{F}^{2}\frac{T}{E_ {F}}, \tag{33}\]
where, we recall, \(\chi_{dia}^{0}=-\frac{2}{3}\mu_{B}^{2}N_{F}\) is the bare diamagnetic susceptibility. We see that \(\delta\Pi_{\perp}^{JJ}(T)\), and hence \(\delta\chi_{dia}(0,T)\), scales linearly with \(T\). For completeness, in Appendix E.2 we calculate this term by summing over the two fermionic
Matsubara frequencies first, expanding to order \(Q^{2}\), and evaluating the resulting term. The result gives precisely the same expression for this linear in T term as above.
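The coefficient in Eq. (33) can be checked by brute force. Dividing Eq. (33) by the prefactor of Eq. (32), the \(q\)-integral of the difference between the Matsubara sum and its \(T=0\) (frequency-integral) limit should equal \(T/(24v_{F}^{4})\). A minimal numerical sketch (in units \(v_{F}=1\); the frequency integral is \(q\)-independent by the rescaling \(\Omega\to qt\), so it is evaluated once):

```python
import numpy as np
from scipy.integrate import quad

# Check of Eqs. (32)-(33): with F(Omega, q) = q^3 Omega^2 (q^2 - 4 Omega^2)/(4 (Omega^2 + q^2)^4)
# and v_F = 1, D = \int_0^infty dq [ T*sum_m F(2*pi*m*T, q) - \int dOmega/(2*pi) F(Omega, q) ]
# should equal T/24.
T = 1.0
def F(Om, q):
    return q**3*Om**2*(q**2 - 4*Om**2)/(4*(Om**2 + q**2)**4)

I0 = quad(lambda Om: F(Om, 1.0)/np.pi, 0, np.inf)[0]    # T = 0 piece; q-independent (scale Omega -> q*t)

m = np.arange(1, 1501)                                  # the m = 0 term vanishes since F ~ Omega^2
q = np.linspace(1e-3, 40.0, 1601)
matsu = 2*T*np.sum(F(2*np.pi*T*m[:, None], q[None, :]), axis=0)
diff = matsu - I0
D = np.sum(0.5*(diff[1:] + diff[:-1])*np.diff(q))

print(D, T/24)                                          # both ~ 0.0417
```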
We also analyze the potential linear in \(T\) contributions to \(\delta\Pi_{\perp}^{JJ}(T)\) from \(q\sim 2k_{F}\). The calculation requires special care as the result is the sum of two terms, each of which scales as \(T\log T\). We explicitly verified that the \(T\log T\) terms cancel out in the full expression, but we were not able to unambiguously determine whether there is a non-zero \(O(T)\) term. We present details in Appendix E.2.2. Below we proceed by keeping only the small-\(q\) contribution to \(\delta\Pi_{\perp}^{JJ}(T)\).
### Finite Magnetic Field and Temperature
We note that, when the internal \(q\sim 0\), we do not need to take either \(H=0\) or \(T=0\). In fact, if we make the replacement \(\int\frac{d\Omega}{2\pi}\rightarrow T\sum_{m}\), \(\Omega\rightarrow\Omega_{m}=2\pi mT\) in the corresponding expression of Appendix D, we can directly calculate the contribution at finite temperature and finite magnetic field. Doing this, we find
\[\delta\chi_{dia}(H,T)=\chi_{dia}^{0}\frac{U^{2}N_{F}^{2}}{8}\frac{\mu_{B}H}{E_{F}}\,{\rm csch}^{2}\left(\frac{\mu_{B}H}{T}\right)\left[\sinh\left(2\frac{\mu_{B}H}{T}\right)-\frac{\mu_{B}H}{T}\right]. \tag{34}\]
By taking the limit as \(H\to 0\) or \(T\rightarrow\infty\), we recover the \(H=0\) case, and by taking \(H\rightarrow\infty\) or \(T\to 0\), we recover the \(T=0\) case. We find that the scaling form in Eq. (34) is the same as in Eq. (4) for the paramagnetic Pauli susceptibility. Comparing the prefactors, we find that
\[\frac{\delta\chi_{dia}(H,T)}{\chi_{dia}^{0}}=\frac{1}{4}\frac{\delta\chi_{para} (H,T)}{\chi_{para}^{0}}, \tag{35}\]
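The two limits quoted above are easy to verify directly from the scaling function in Eq. (34). A short sketch, with \(x=\mu_{B}H/T\) and \(g(x)={\rm csch}^{2}(x)\left[\sinh(2x)-x\right]\):

```python
import numpy as np

# Limits of the H/T scaling function in Eq. (34): g(x) = (sinh(2x) - x)/sinh(x)^2.
# x -> infinity: g -> 2, so delta_chi_dia -> (1/4) chi0 U^2 N_F^2 mu_B|H|/E_F, i.e. Eq. (31);
# x -> 0:      x*g -> 1, so delta_chi_dia -> (1/8) chi0 U^2 N_F^2 T/E_F,       i.e. Eq. (33).
def g(x):
    return (np.sinh(2*x) - x)/np.sinh(x)**2

print(g(20.0))          # ~ 2
print(1e-4*g(1e-4))     # ~ 1
```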
Adding both the paramagnetic and diamagnetic contributions together into the total magnetic susceptibility, we find
\[\chi(H,T)=\frac{2}{3}\chi_{para}^{0}\left(1+\frac{3}{2}N_{F}U+1.09(N_{F}U)^{2}\right)+\frac{11}{12}\delta\chi_{para}(H,T) \tag{36}\]
where \(\delta\chi_{para}(H,T)\) is given by (4). We caution however that Eq. (36) will change if there is a non-zero \(2k_{F}\) contribution to \(\delta\chi_{dia}(0,T)\). We note in this regard that in the case of the paramagnetic susceptibility, both \(q\sim 0\) and \(q\sim 2k_{F}\) contributions to \(\delta\chi_{para}(0,T)\) have been included.
## VI Conclusions
We have analyzed the Landau diamagnetic susceptibility diagrammatically for a model of 2D fermions with Hubbard-like interaction. We used the relation \(\chi_{dia}=(e/c)^{2}\lim_{Q\to 0}\Pi_{\perp}^{JJ}(Q)/Q^{2}\), where \(\Pi_{\perp}^{JJ}\) is the transverse component of the static current-current correlator. For free fermions, we reproduced diagrammatically the Landau-Peierls
Figure 5: The numerical evaluation of diagrams (a) and (f). Here we have not restricted the magnitude of \(q\) to small values. The result for \(\delta\chi_{dia}(H,0)\propto H\) agrees with the analytical calculation done by restricting to small \(q\).
formula for arbitrary fermionic dispersion (it reduces to \(\chi_{dia}=-(2\mu_{B}^{2})N_{F}/3\) for a parabolic dispersion). For interacting fermions, we evaluated \(\Pi_{\perp}^{JJ}(Q)\) up to second order in Hubbard \(U\) by combining self energy, Maki-Thompson, and Aslamazov-Larkin-type diagrams. At first order in \(U\), we found no correction to the diamagnetic susceptibility. At order \(U^{2}\), we obtained a regular correction \(\delta\chi_{dia}\propto U^{2}\) at zero temperature and zero magnetic field, and explicitly obtained the prefactor. In the process of calculations, we found that individual diagrams for \(\Pi_{\perp}^{JJ}(Q)\) contain non-analytic \(|Q|\) terms, but these terms cancel out in the full expression, and \(\Pi_{\perp}^{JJ}(Q)\propto Q^{2}\). In this respect, \(\Pi_{\perp}^{JJ}(Q)\) behaves similarly to charge polarization, for which \(|Q|\) terms from individual diagrams also cancel out.
We next considered the corrections to the prefactor of the \(U^{2}\) term in \(\delta\chi_{dia}\) in both temperature and magnetic field. We showed that the Landau diamagnetic susceptibility does indeed have nonanalytic linear in \(T\) and linear in \(H\) terms. In this respect, the behavior of the diamagnetic susceptibility is similar to that of the paramagnetic Pauli susceptibility, which also contains such terms. We computed analytically the prefactors for \(O(U^{2}T)\) and \(O(U^{2}H)\) terms in \(\delta\chi_{dia}\) for parabolic fermionic dispersion. We found that for both finite temperature and magnetic field, the nonanalytic contributions are of the same sign as the bare Landau diamagnetic susceptibility, and therefore serve to enhance the diamagnetic effects as temperature and magnetic field increase. By magnitude, nonanalytic corrections to the diamagnetic susceptibility are comparable to non-analytic corrections to the spin susceptibility.
## VII Acknowledgements
We thank D.L. Maslov for useful discussions and comments. We also thank Keiya Shirahama for bringing our attention to references in Ref. [23]. The research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0014402.
## Appendix A Reproducing the Landau-Peierls Formula from Diagrammatics
In this Appendix we derive diagrammatically the Landau-Peierls formula for diamagnetic susceptibility of free fermions with arbitrary dispersion. The calculation has been performed with D. L. Maslov.
The formula for the diamagnetic susceptibility in a free electron gas has been derived by Landau in 1930 [2]. Three years later, Peierls obtained a correction to this expression when the electrons experience a periodic potential due to ions [32; 33]. This correction is usually written as the sum of the contribution from the ions and from conduction electrons. The contribution from the conduction electrons is
\[\chi_{dia}=\frac{2\mu_{B}^{2}m^{2}}{3(2\pi)^{d}}\int d^{d}k\,n^{\prime}_{F}( \varepsilon_{k})\left(\frac{\partial^{2}\varepsilon_{\mathbf{k}}}{\partial k_ {x}^{2}}\frac{\partial^{2}\varepsilon_{\mathbf{k}}}{\partial k_{y}^{2}}-\left( \frac{\partial^{2}\varepsilon_{\mathbf{k}}}{\partial k_{x}\partial k_{y}} \right)^{2}\right), \tag{10}\]
where \(n^{\prime}_{F}(\varepsilon_{k})=\frac{\partial n_{F}}{\partial\varepsilon_{k}}\) and \(d=2,3\). We note that as \(T\to 0\), \(n^{\prime}_{F}(\varepsilon_{\mathbf{k}})\rightarrow-\delta(\varepsilon_{ \mathbf{k}})\), so at sufficiently small \(T\), the above integral is equivalent to averaging the integrand over the Fermi surface. Eq. (10) is known as the Landau-Peierls formula.
It is well known that the Landau diamagnetic susceptibility \(\chi_{dia}\) for free fermions with a parabolic dispersion can be reproduced diagrammatically by expressing \(\chi_{dia}\) via the transverse component of the static current-current correlator, \(\Pi_{\perp}^{JJ}(Q)\) as \(\chi_{dia}=(e/c)^{2}\lim_{Q\to 0}\Pi_{\perp}^{JJ}(Q)/Q^{2}\) (Eq. (5) in the main text), and evaluating \(\Pi_{\perp}^{JJ}(Q)\) as the particle-hole bubble with transverse velocity \(v_{\mathbf{k}}^{\nu}\) in the vertices [4]. Our aim here is to show that the diagrammatic formalism can also reproduce Eq. (10) for an arbitrary dispersion relation.
For arbitrary dispersion, the particle-hole current-current bubble is given by
\[\Pi_{\perp}^{JJ}(Q\hat{x},0) =-2T\sum_{\omega_{n}}\int\frac{d^{d}k}{(2\pi)^{d}}\left(v_{\mathbf{ k}}^{y}\right)^{2}G(\mathbf{k}+\mathbf{Q}/2,\omega_{n})G(\mathbf{k}-\mathbf{Q}/2, \omega_{n})-\cdots \tag{11}\] \[=-2\int\frac{d^{d}k}{(2\pi)^{d}}\left(v_{\mathbf{k}}^{y}\right)^ {2}\frac{n_{F}(\varepsilon_{\mathbf{k}+\mathbf{Q}/2})-n_{F}(\varepsilon_{ \mathbf{k}-\mathbf{Q}/2})}{\varepsilon_{\mathbf{k}+\mathbf{Q}/2}-\varepsilon _{\mathbf{k}-\mathbf{Q}/2}}-\cdots, \tag{12}\]
where dots stand for the \(Q=0\) terms that need to be subtracted. To get the \(Q^{2}\) term in \(\Pi_{\perp}^{JJ}(Q\hat{x},0)\), we must expand each term in the r.h.s. of (11) to order \(Q^{2}\). Doing so, we find
\[v_{k}^{y} =\frac{\partial\varepsilon_{\mathbf{k}}}{\partial k_{y}}=\partial_ {y}\varepsilon_{\mathbf{k}} \tag{12}\] \[\varepsilon_{\mathbf{k}\pm\mathbf{Q}/2} =\varepsilon_{\mathbf{k}}\pm\frac{Q}{2}\partial_{x}\varepsilon_{ \mathbf{k}}+\frac{Q^{2}}{8}\partial_{x}^{2}\varepsilon_{\mathbf{k}}\pm\frac{Q ^{3}}{48}\partial_{x}^{3}\varepsilon_{\mathbf{k}}\] (13) \[\varepsilon_{\mathbf{k}+\mathbf{Q}/2}-\varepsilon_{\mathbf{k}- \mathbf{Q}/2} =Q\partial_{x}\varepsilon_{\mathbf{k}}+\frac{Q^{3}}{24}\partial_{x }^{3}\varepsilon_{\mathbf{k}}\] (14) \[\frac{n_{F}(\varepsilon_{\mathbf{k}+\mathbf{Q}/2})-n_{F}( \varepsilon_{\mathbf{k}-\mathbf{Q}/2})}{\varepsilon_{\mathbf{k}+\mathbf{Q}/ 2}-\varepsilon_{\mathbf{k}-\mathbf{Q}/2}} =n_{F}^{\prime}(\varepsilon_{\mathbf{k}})+\frac{Q^{2}}{24}\left( 3n_{F}^{\prime\prime}(\varepsilon_{\mathbf{k}})\partial_{x}^{2}\varepsilon _{\mathbf{k}}+n_{F}^{\prime\prime\prime}(\varepsilon_{\mathbf{k}})\left( \partial_{x}\varepsilon_{\mathbf{k}}\right)^{2}\right), \tag{15}\]
where we have made use of the notation \(\frac{\partial}{\partial k_{i}}=\partial_{i}\). Inserting these expressions into Eq. (11), subtracting off the \(Q^{0}\) term, and using Eq. (5) to find the diamagnetic susceptibility, we have
\[\chi_{dia}=-\frac{\mu_{B}^{2}m^{2}}{3(2\pi)^{d}}\int d^{d}k(\partial_{y} \varepsilon_{\mathbf{k}})^{2}\left(3n_{F}^{\prime\prime}(\varepsilon_{ \mathbf{k}})\partial_{x}^{2}\varepsilon_{\mathbf{k}}+n_{F}^{\prime\prime \prime}(\varepsilon_{\mathbf{k}})(\partial_{x}\varepsilon_{\mathbf{k}})^{2}\right) \tag{16}\]
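The \(Q^{2}\) coefficient of the difference quotient in the last line of Eq. (15), which produced the integrand above, can be checked independently. The short sympy sketch below is ours and not part of the original derivation; the numerical values chosen for \(\varepsilon_{\mathbf{k}}\) and its derivatives are arbitrary. It verifies that the ratio equals \(n_{F}^{\prime}+\frac{Q^{2}}{24}\left(3n_{F}^{\prime\prime}\partial_{x}^{2}\varepsilon_{\mathbf{k}}+n_{F}^{\prime\prime\prime}(\partial_{x}\varepsilon_{\mathbf{k}})^{2}\right)+\mathcal{O}(Q^{4})\).

```python
import sympy as sp

# Independent check of the Q^2 coefficient in
# [n_F(eps_{k+Q/2}) - n_F(eps_{k-Q/2})] / (eps_{k+Q/2} - eps_{k-Q/2}).
# The values of eps_k and its x-derivatives are arbitrary rationals (units T = 1).
Q, E = sp.symbols('Q E')
e0, dx, dxx, dxxx = sp.Rational(3, 10), sp.Rational(7, 10), sp.Rational(-2, 5), sp.Rational(1, 5)

nF = 1/(sp.exp(E) + 1)                                  # Fermi function
d1, d2, d3 = [nF.diff(E, n).subs(E, e0) for n in (1, 2, 3)]

plus  = e0 + dx*Q/2 + dxx*Q**2/8 + dxxx*Q**3/48         # eps_{k+Q/2}
minus = e0 - dx*Q/2 + dxx*Q**2/8 - dxxx*Q**3/48         # eps_{k-Q/2}
num = nF.subs(E, plus) - nF.subs(E, minus)              # numerator of the quotient
den = dx*Q + dxxx*Q**3/24                               # energy difference

claimed = d1 + Q**2/24*(3*d2*dxx + d3*dx**2)            # claimed expansion of the ratio
residual = sp.series(num - den*claimed, Q, 0, 4).removeO()
for k in range(4):
    assert abs(sp.N(residual.coeff(Q, k))) < 1e-12      # agreement through order Q^3
print("Q^2 coefficient of the difference quotient verified")
```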
Examining first the term proportional to \(n_{F}^{\prime\prime\prime}(\varepsilon_{\mathbf{k}})\), the expression can be simplified by using the chain rule to write \(n_{F}^{\prime\prime\prime}(\varepsilon_{\mathbf{k}})\partial_{x}\varepsilon _{\mathbf{k}}=\partial_{x}n_{F}^{\prime\prime}(\varepsilon_{\mathbf{k}})\) and then integrating by parts to find
\[\int d^{d}k\,n_{F}^{\prime\prime\prime}(\varepsilon_{\mathbf{k}})(\partial_ {y}\varepsilon_{\mathbf{k}})^{2}(\partial_{x}\varepsilon_{\mathbf{k}})^{2}=- \int d^{d}k\,n_{F}^{\prime\prime}(\varepsilon_{\mathbf{k}})\frac{\partial}{ \partial k_{x}}\left((\partial_{y}\varepsilon_{\mathbf{k}})^{2}(\partial_{x} \varepsilon_{\mathbf{k}})\right) \tag{17}\]
Simplifying the above expression and inserting it back into the diamagnetic susceptibility, we have
\[\chi_{dia}=-\frac{2\mu_{B}^{2}m^{2}}{3(2\pi)^{d}}\int d^{d}k\,n_{F}^{\prime\prime}( \varepsilon_{\mathbf{k}})\left(\partial_{x}^{2}\varepsilon_{\mathbf{k}}\left( \partial_{y}\varepsilon_{\mathbf{k}}\right)^{2}-\partial_{x}\varepsilon_{ \mathbf{k}}\,\partial_{y}\varepsilon_{\mathbf{k}}\,\partial_{x}\partial_{y} \varepsilon_{\mathbf{k}}\right) \tag{18}\]
Again using the chain rule to write \(n_{F}^{\prime\prime}(\varepsilon_{k})\partial_{y}\varepsilon_{\mathbf{k}}= \partial_{y}n_{F}^{\prime}(\varepsilon_{\mathbf{k}})\) for the first term and \(n_{F}^{\prime\prime}(\varepsilon_{k})\partial_{x}\varepsilon_{\mathbf{k}}= \partial_{x}n_{F}^{\prime}(\varepsilon_{\mathbf{k}})\) for the second term, then integrating by parts once more, we get
\[\chi_{dia}=\frac{2\mu_{B}^{2}m^{2}}{3(2\pi)^{d}}\int d^{d}k\,n_{F}^{\prime}( \varepsilon_{k})\left(\frac{\partial^{2}\varepsilon_{\mathbf{k}}}{\partial k _{x}^{2}}\frac{\partial^{2}\varepsilon_{\mathbf{k}}}{\partial k_{y}^{2}}- \left(\frac{\partial^{2}\varepsilon_{\mathbf{k}}}{\partial k_{x}\partial k_{y} }\right)^{2}\right). \tag{19}\]
This is precisely the Landau-Peierls formula for the conduction electron part of the diamagnetic susceptibility, Eq. (10). We note that this formula implies that the Landau diamagnetism comes entirely from fermions near the Fermi surface. This is in contrast with the conventional view of the Landau diamagnetic susceptibility, according to which contributions to the Landau diamagnetism come from both near and far from the Fermi surface [34; 35]. That said, it is well established that in most real materials, the Landau-Peierls term is not the dominant contribution to diamagnetism. In fact, it has been rigorously shown that the Landau-Peierls formula is the right result for \(\chi_{dia}\) only in the limit of small electron density [36]. However, there is evidence from ab initio calculations that, in at least some materials like the alkali metals, the Landau diamagnetic susceptibility still comes primarily from the Fermi surface [37].
We note for completeness that if we take the ratio of this quantity with the expression for the paramagnetic susceptibility for free fermions with an arbitrary dispersion relation, we find
\[\frac{\chi_{dia}}{\chi_{para}}=-\frac{1}{3}\frac{m^{2}}{\int d^{d}k\,n_{F}^{ \prime}(\varepsilon_{k})}\int d^{d}k\,n_{F}^{\prime}(\varepsilon_{k})\left( \frac{\partial^{2}\varepsilon_{\mathbf{k}}}{\partial k_{x}^{2}}\frac{\partial^{2 }\varepsilon_{\mathbf{k}}}{\partial k_{y}^{2}}-\left(\frac{\partial^{2} \varepsilon_{\mathbf{k}}}{\partial k_{x}\partial k_{y}}\right)^{2}\right). \tag{20}\]
The factor of \(-\frac{1}{3}\) emerges when the quantity \(\frac{\partial^{2}\varepsilon_{\mathbf{k}}}{\partial k_{x}^{2}}\frac{\partial^{2 }\varepsilon_{\mathbf{k}}}{\partial k_{y}^{2}}-\left(\frac{\partial^{2} \varepsilon_{\mathbf{k}}}{\partial k_{x}\partial k_{y}}\right)^{2}\) is \(m^{-2}\), that is when the dispersion is parabolic.
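For a parabolic band this ratio can be verified in one line: the Hessian determinant of \(\varepsilon_{\mathbf{k}}=k^{2}/2m\) equals \(m^{-2}\), so the bracket cancels against the \(m^{2}\) prefactor and the ratio collapses to \(-1/3\). A minimal symbolic check (ours, purely illustrative) is:

```python
import sympy as sp

kx, ky, m = sp.symbols('k_x k_y m', positive=True)
eps = (kx**2 + ky**2) / (2*m)                  # parabolic dispersion

# Hessian determinant entering the Landau-Peierls formula
hess_det = eps.diff(kx, 2)*eps.diff(ky, 2) - eps.diff(kx, ky)**2
assert sp.simplify(hess_det - 1/m**2) == 0

# With the determinant pinned to 1/m^2, the k-integrals in chi_dia and
# chi_para coincide and the ratio reduces to -1/3.
print("chi_dia/chi_para =", sp.Rational(-1, 3))
```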
## Appendix B \(|Q|\) Nonanalyticities of Self Energy and Maki-Thompson Diagrams
Here we explicitly calculate \(|Q|\) nonanalyticities in the current-current correlator for the diagrams (a) and (c) in Fig. 3, i.e. the second order self energy and Maki-Thompson diagrams. From similar calculations for the charge and spin susceptibilities, we know that these nonanalyticities come from contributions of small momentum and frequency transfers as well as momentum transfers close to \(2k_{F}\). We begin by considering these small momentum transfers for both the second order self energy and Maki-Thompson diagrams, and then consider the backscattering case in these diagrams afterwards. To simplify notations, below we re-label \(\Pi_{\perp}^{JJ}\) by just \(\Pi\).
### Small \(q\) Nonanalyticity
We first consider the case of the self energy diagram, which is given by
\[\Pi_{a}(Q,0)=8U^{2}\int_{k,q}\Pi(q,\Omega_{m})\left(v_{k}^{y}\right)^{2}G_{k+Q/2} ^{2}G_{k-Q/2}G_{k+q+Q/2} \tag{10}\]
For small \(q\) and \(Q\), all contributions come from close to the Fermi surface, so we can make the approximations
\[\varepsilon_{k} =v_{F}(k-k_{F}) \tag{11}\] \[\varepsilon_{k\pm Q/2} =\varepsilon_{k}\pm\mathbf{v}_{F}\cdot\mathbf{Q}/2\] (12) \[\varepsilon_{k+q\pm Q/2} =\varepsilon_{k}+\mathbf{v}_{F}\cdot\mathbf{q}\pm\mathbf{v}_{F} \cdot\mathbf{Q}/2\] (13) \[v_{k}^{y} =v_{F}\sin\theta \tag{14}\]
where \(\mathbf{v}_{F}=v_{F}\hat{k}\), and \(\theta=\angle(\mathbf{k},\mathbf{Q})\). Noting that the expression is even in \(\Omega_{m}\), we can reduce it to an integral over \(\Omega_{m}\) from \(0\) to \(\infty\), and then integrate first over \(\varepsilon_{k}\). The integrals over \(\omega_{n}\) and \(\theta\) are also elementary, and can be evaluated to give
\[\delta\Pi_{a}^{q=0}(Q,0) =\frac{U^{2}v_{F}^{2}m}{\pi^{4}}\int d^{2}q\int_{0}^{\infty}d \Omega_{m}\,\Pi(\mathbf{q},\Omega_{m})\frac{i\Omega_{m}}{v_{F}^{2}Q^{2}(i \Omega_{m}-\mathbf{v}_{F}\cdot\mathbf{q})^{2}} \tag{15}\] \[\times\left(i\Omega_{m}-\mathbf{v}_{F}\cdot\mathbf{q}-i\sqrt{v_{F }^{2}Q^{2}-(i\Omega_{m}-\mathbf{v}_{F}\cdot\mathbf{q})^{2}}\right),\]
where we note that in the small \(q\) approximation, we can write
\[\Pi(q,\Omega_{m})=\frac{m}{2\pi}\left(-1+\frac{\Omega_{m}}{\sqrt{v_{F}^{2}q^{ 2}+\Omega_{m}^{2}}}\right). \tag{16}\]
We make the change to polar coordinates here, so that \(\Omega_{m}=r\sin\phi\) and \(q=r\cos\phi\). We also rescale \(r\) to be in units of \(Q\), so that the total function is now
\[\frac{iU^{2}k_{F}|Q|}{\pi^{4}}\int_{0}^{\pi/2}d\phi\int_{0}^{2\pi }d\xi \int_{0}^{\infty}dr\Pi(\phi)\sin\phi\cos\phi\frac{r}{(i\sin\phi- \cos\phi\cos\xi)^{2}} \tag{17}\] \[\times\left(r(i\sin\phi-\cos\phi\cos\xi)-i\sqrt{1-r^{2}(i\sin\phi -\cos\phi\cos\xi)^{2}}\right),\]
where \(\xi=\angle(\mathbf{k},\mathbf{q})\) and \(\Pi(\phi)=m/2\pi\left(-1+\sin\phi\right)\). We note that this term is divergent in \(r\), and needs a cutoff \(r_{max}\). However, we are only interested in the nonanalytic contribution of this term, which is a low-energy contribution independent of the cutoff. Integrating over \(r\) and neglecting the cutoff-dependent terms leaves only
\[\delta\Pi_{a}^{q=0}(Q,0)= -\frac{U^{2}k_{F}|Q|}{3\pi^{4}}\int\limits_{0}^{\pi/2}d\phi\int \limits_{0}^{2\pi}d\xi\Pi(\phi)\sin\phi\cos\phi\frac{1}{(i\sin\phi-\cos\phi \cos\xi)^{4}}\] \[= -\frac{U^{2}k_{F}m|Q|}{24\pi^{4}}\int\limits_{0}^{\pi/2}d\phi(-1+ \sin\phi)\sin\phi\cos\phi\left(5\sin 3\phi-3\sin\phi\right)\] \[= \frac{U^{2}k_{F}m}{72\pi^{4}}|Q|. \tag{18}\]
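The remaining angular integral in the middle line above is elementary; it equals \(-1/3\), which fixes the quoted prefactor \(1/(72\pi^{4})\). A short symbolic check (ours, not part of the original derivation) is:

```python
import sympy as sp

phi = sp.symbols('phi')
# Angular integral appearing in delta Pi_a^{q=0}(Q,0)
I = sp.simplify(sp.integrate((-1 + sp.sin(phi))*sp.sin(phi)*sp.cos(phi)
                             *(5*sp.sin(3*phi) - 3*sp.sin(phi)), (phi, 0, sp.pi/2)))
print(I)                                    # -1/3
assert sp.simplify(I + sp.Rational(1, 3)) == 0
# Prefactor: -(1/(24*pi^4)) * (-1/3) = 1/(72*pi^4), as quoted in the text.
```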
Now we can also consider the Maki-Thompson contribution. The diagram gives
\[\Pi_{b}(Q,0)=4U^{2}\int_{k,q}\Pi(q,\Omega_{m})v_{k}^{y}v_{k+q}^{y}G_{k+Q/2}G_{ k-Q/2}G_{k+q+Q/2}G_{k+q-Q/2} \tag{19}\]
We note that to lowest order, we have \(v_{k+q}^{y}=v_{k}^{y}\), as the additional corrections due to \(q\) yield only contributions to orders \(Q^{2}\) and higher, so we may discard them. Knowing this, we can integrate over \(\varepsilon_{k}\), \(\omega_{n}\), and \(\theta\) to give
\[-\frac{U^{2}v_{F}^{2}m}{\pi^{4}}\int\ d^{2}q\int_{0}^{\infty}d \Omega_{m} \Pi(\mathbf{q},\Omega_{m})\frac{i\Omega_{m}}{v_{F}^{2}Q^{2}(i \Omega_{m}-\mathbf{v}_{F}\cdot\mathbf{q})^{2}} \tag{20}\] \[\times\left(i\Omega_{m}-\mathbf{v}_{F}\cdot\mathbf{q}-i\sqrt{v_{F }^{2}Q^{2}-(i\Omega_{m}-\mathbf{v}_{F}\cdot\mathbf{q})^{2}}\right)\]
Comparing the above equation to Eq. (100), we find that this contribution exactly cancels with the contribution from the self energy term. Therefore, there is no net nonanalyticity at \(q\sim 0\) for these two diagrams.
### \(2k_{F}\) Nonanalyticity
We now consider the nonanalyticities which come from an internal momentum close to \(2k_{F}\), corresponding to backscattering. In this case, we can approximate \(\varepsilon_{k+q}=-\varepsilon_{k}+v_{F}\tilde{q}+2v_{F}k_{F}(1+\cos\theta)\), where \(\tilde{q}=(q-2k_{F})\) and \(\hat{k}\cdot\hat{q}=\cos\theta\). In addition, the dominant contributions will come from angles close to perfect backscattering, so we can additionally write \(1+\cos\theta=\frac{1}{2}(\pi-\theta)^{2}=\frac{1}{2}\tilde{\theta}^{2}\). Then, we can write the self energy contribution as
\[\begin{split}\delta\Pi_{a}^{2k_{F}}(Q,0)&=8U^{2} \int_{k,q}(v_{k}^{y})^{2}\Pi(q,\Omega_{m})\left(G_{k+Q/2}\right)^{2}G_{k-Q/2}G_ {k+q+Q/2}\\ &=8U^{2}\int_{k,q}\Pi(q,\Omega_{m})\left(v_{F}\sin\theta_{1} \right)^{2}\left(\frac{1}{i\omega_{n}-\varepsilon_{k+Q/2}}\right)^{2}\\ &\times\frac{1}{i\omega_{n}-\varepsilon_{k-Q/2}}\frac{1}{i( \omega_{n}+\Omega_{m})+\varepsilon_{k+Q/2}-v_{F}\tilde{q}-v_{F}k_{F}\tilde{ \theta}^{2}},\end{split} \tag{101}\]
We can rescale \(\varepsilon_{k}\), \(\omega_{n}\), \(\Omega_{m}\), \(\tilde{q}\), and \(\tilde{\theta}\) to be unitless, and then integrate over \(\varepsilon_{k}\). Doing so, after some simplification, we find
\[\frac{2U^{2}k_{F}|Q|m}{\pi^{6}}\int\limits_{-\infty}^{\infty}d \tilde{q}\int\limits_{0}^{\infty}d\Omega_{m}\int\limits_{0}^{\pi}d\theta_{1} \int\limits_{0}^{\infty}d\tilde{\theta}\left(\sqrt{\tilde{q}+i\Omega_{m}}+ \sqrt{\tilde{q}-i\Omega_{m}}\right)\sin^{2}\theta_{1}\] \[\text{Im}\left(\int\limits_{0}^{\infty}d\omega_{n}\frac{1}{\left( i(\omega_{n}+\Omega_{m})-\tilde{q}-\tilde{\theta}^{2}\right)^{2}\left(i(\omega_{n}+ \Omega_{m})-\tilde{q}-\tilde{\theta}^{2}+\cos\theta_{1}\right)}\right), \tag{102}\]
where we have written \(\Pi(q,\Omega_{m})\) as \(m/2\pi\left(\sqrt{\tilde{q}+i\Omega_{m}}+\sqrt{\tilde{q}-i\Omega_{m}}\right)\). We can then convert this to polar coordinates, with \(\tilde{q}=r\cos\phi\) and \(\Omega_{m}=r\sin\phi\). Rescaling variables so that \(\omega_{n}\to r\omega_{n}\) and \(\tilde{\theta}\rightarrow\sqrt{r}\tilde{\theta}\), we get
\[\frac{2U^{2}k_{F}|Q|m}{\pi^{6}}\text{Im}\int\limits_{0}^{\pi}d \phi\int\limits_{0}^{\pi}\theta_{1}\int\limits_{0}^{\infty}d\tilde{\theta} \int\limits_{0}^{\infty}d\omega_{n}\int\limits_{0}^{\infty}dr\cos\phi/2\sin^{ 2}\theta_{1} \tag{103}\] \[\times\frac{1}{\left(i\omega_{n}-e^{-i\phi}-\tilde{\theta}^{2} \right)^{2}}\frac{r}{r\left(i\omega_{n}-e^{-i\phi}-\tilde{\theta}^{2}\right)+ \cos\theta_{1}}\]
Integrating over \(r\), taking only the low-energy, cutoff-independent term, and then integrating over \(\theta_{1}\), we find
\[-\frac{2U^{2}k_{F}|Q|m}{3\pi^{5}}\text{Re}\int\limits_{0}^{\pi}d\phi\int \limits_{0}^{\infty}d\tilde{\theta}\int\limits_{0}^{\infty}d\omega_{n}\cos \phi/2\frac{1}{\left(i\omega_{n}-e^{-i\phi}-\tilde{\theta}^{2}\right)^{4}} \tag{104}\]
The remaining integrals are elementary, and give the result
\[\delta\Pi_{a}^{2k_{F}}(Q,0)=\frac{k_{F}mU^{2}}{72\pi^{4}}|Q|. \tag{105}\]
Now, for the second order Maki-Thompson result. We can write out this contribution as
\[\begin{split}\delta\Pi_{b}^{2k_{F}}(Q,0)&=4U^{2} \int_{k,q}\Pi(q,\Omega_{m})v_{k}^{y}v_{k+q}^{y}G_{k+Q/2}G_{k-Q/2}G_{k+q+Q/2}G_ {k+q-Q/2}\\ &=-4U^{2}\int_{k,q}\Pi(q,\Omega_{m})\left(v_{F}\sin\theta_{1} \right)^{2}\frac{1}{i\omega_{n}-\varepsilon_{k+Q/2}}\frac{1}{i\omega_{n}- \varepsilon_{k-Q/2}}\\ &\times\frac{1}{i(\omega_{n}+\Omega_{m})+\varepsilon_{k+Q/2}-v_{F} \tilde{q}-v_{F}k_{F}\tilde{\theta}^{2}}\frac{1}{i(\omega_{n}+\Omega_{m})+ \varepsilon_{k-Q/2}-v_{F}\tilde{q}-v_{F}k_{F}\tilde{\theta}^{2}}\end{split} \tag{106}\]
We note here that, like in the case of the \(q\sim 0\) nonanalyticity, we can neglect the contribution of \(\tilde{q}\) to \(v_{k+q}\). In addition, we can take the direction of \(\mathbf{q}\) to be exactly antiparallel to \(\mathbf{k}\), as including deviations from \(\mathbf{q}=-\mathbf{k}\) will also only contribute at higher orders of \(|Q|\). However, for \(q\sim 2k_{F}\), \(\varepsilon_{k+q}=-\varepsilon_{k}+v_{F}\tilde{q}+2v_{F}k_{F}(1+\cos\theta)\), so neglecting contributions of order \(\tilde{q}\) and setting \(\theta=\pi\) means \(\varepsilon_{k+q}=-\varepsilon_{k}\). Then, \(v_{k}^{y}v_{k+q}^{y}=-(v_{k}^{y})^{2}\). We rescale variables as before, and then integrate over \(\varepsilon_{k}\). Then, converting to polar coordinates, we find
\[\frac{U^{2}k_{F}|Q|m}{2\pi^{6}}\mathrm{Im}\int_{0}^{\phi}d\phi \int_{0}^{\pi}d\theta_{1}\int_{0}^{\infty}d\tilde{\theta}\int_{0}^{\infty}d \omega_{n}\int_{0}^{\infty} dr\cos\phi/2\sin^{2}\theta_{1}\frac{1}{\left(i\omega_{n}-e^{-i\phi}-\tilde{ \theta}^{2}\right)^{2}}\] \[\times\frac{r^{2}}{r^{2}\left(i\omega_{n}-e^{-i\phi}-\tilde{ \theta}^{2}\right)^{2}-\cos^{2}\theta_{1}} \tag{101}\]
Now, we can integrate over \(r\) and \(\theta_{1}\) as before, discarding the cutoff dependent term, getting
\[\frac{2U^{2}k_{F}|Q|m}{3\pi^{5}}\mathrm{Re}\int\limits_{0}^{\pi}d\phi\int \limits_{0}^{\infty}d\tilde{\theta}\int\limits_{0}^{\infty}d\omega_{n}\cos \phi/2\frac{1}{\left(i\omega_{n}-e^{-i\phi}-\tilde{\theta}^{2}\right)^{4}} \tag{102}\]
Comparing with Eq. (100), this is precisely the same contribution as the self energy term with a minus sign. Therefore, the nonanalyticity in the Maki-Thompson diagram from momentum transfers of \(2k_{F}\) is
\[\delta\Pi_{b}^{2k_{F}}(Q,0)=-\frac{k_{F}mU^{2}}{72\pi^{4}}|Q|. \tag{103}\]
With these results, we have confirmed that there is no net nonanalyticity between the self energy and Maki-Thompson diagrams for both \(q=0\) and \(q=2k_{F}\).
## Appendix C Sum of Diagrams (c) and (g)
In this appendix, we consider the sum of diagrams (c) and (g) at finite temperature and magnetic field. We show that at order \(Q^{2}\), the two diagrams in fact cancel. At zero magnetic field, diagram (c) and diagram (f) are equal, so this calculation also confirms that when \(H=0\), the pair of Aslamazov-Larkin diagrams cancel.
We can write the sum of these two diagrams as
\[\Pi_{c}+\Pi_{g}=2U^{2}T\sum_{\alpha\neq\beta}\sum_{\Omega_{m}}\int_{\mathbf{q }}I_{\alpha\beta}(\mathbf{Q},\mathbf{q},\Omega_{m})\left(I_{\alpha\beta}( \mathbf{Q},\mathbf{q},\Omega_{m})+I_{\beta\alpha}(\mathbf{Q},-\mathbf{q},- \Omega_{m})\right), \tag{104}\]
where \(\int_{\mathbf{q}}=\int d^{2}q/(2\pi)^{2}\) and \(I_{\alpha\beta}(\mathbf{Q},\mathbf{q},\Omega)\) is a triad of Green's functions defined as
\[I_{\alpha\beta}(\mathbf{Q},\mathbf{q},\Omega_{m})=\sum_{\omega_{n}}\int_{ \mathbf{k}}v_{k}^{y}G_{k+Q/2}^{\alpha}G_{k-Q/2}^{\beta}G_{k+q}^{\beta}. \tag{105}\]
Symmetrizing Eq. (104) with respect to \(\Omega_{m}\), and also noting we can exchange \(\alpha\) and \(\beta\) in the sum, we can rewrite the expression as
\[U^{2}T\sum_{\alpha\neq\beta}\sum_{m=1}^{\infty}\int_{\mathbf{q}}\left(I_{ \alpha\beta}(\mathbf{Q},\mathbf{q},\Omega_{m})+I_{\beta\alpha}(\mathbf{Q},- \mathbf{q},-\Omega_{m})\right)^{2}. \tag{106}\]
We have left out the \(\Omega_{m}=0\) term in this sum. One can confirm that this term is zero by following the same steps that we outline below to show all \(\Omega_{m}\neq 0\) terms are zero. Series expanding Eq. (106), we can see the term proportional to \(Q^{2}\) has the form
\[\frac{1}{2}U^{2}Q^{2}T\sum_{\alpha\neq\beta}\sum_{m}\int_{\mathbf{q}}\left(I_{ \alpha\beta}(0,\mathbf{q},\Omega_{m})+I_{\beta\alpha}(0,-\mathbf{q},-\Omega_{m })\right)\left(I_{\alpha\beta}^{\prime\prime}(0,\mathbf{q},\Omega_{m})+I_{ \beta\alpha}^{\prime\prime}(0,-\mathbf{q},-\Omega_{m})\right), \tag{107}\]
where \(I^{\prime\prime}_{\alpha\beta}(0,\mathbf{q},\Omega_{m})=\lim_{Q\to 0}\frac{d^{2}}{dQ^{2}}I_{ \alpha\beta}(\mathbf{Q},\mathbf{q},\Omega_{m})\), and we have dropped terms proportional to \(I^{\prime}_{\alpha\beta}(0,\mathbf{q},\Omega_{m})\) as these go to zero. We claim that \(I_{\alpha\beta}(0,\mathbf{q},\Omega_{m})+I_{\beta\alpha}(0,-\mathbf{q},- \Omega_{m})=0\), so that the entire \(Q^{2}\) term vanishes. To show this, we sum over fermionic Matsubara frequencies, and then take the limit as \(Q\to 0\). Doing so, we find
\[I_{\alpha\beta}(0, \mathbf{q},\Omega_{m})+I_{\beta\alpha}(0,-\mathbf{q},-\Omega_{m})=\] \[\int_{\mathbf{k}}\left(\frac{v_{k}^{y}\left(n_{F}\left(\varepsilon _{k+q}^{\beta}\right)-n_{F}\left(\varepsilon_{k}^{\alpha}\right)\right)}{ \left(i\Omega_{m}-\varepsilon_{k+q}^{\beta}+\varepsilon_{k}^{\alpha}\right)^{ 2}}+\frac{v_{k}^{y}n_{F}^{\prime}\left(\varepsilon_{k}^{\alpha}\right)}{i \Omega_{m}-\varepsilon_{k+q}^{\beta}+\varepsilon_{k}^{\alpha}}-\left(c.c., \alpha\leftrightarrow\beta\right)\right), \tag{100}\]
where the second term denotes the complex conjugate of the first term with \(\alpha\) and \(\beta\) interchanged, and we have used the fact that \(I_{\beta\alpha}(0,-\mathbf{q},-\Omega_{m})=-I_{\beta\alpha}(0,\mathbf{q},- \Omega_{m})\). One can derive this relation from Eq. (101) by making the transformation \(\mathbf{k}\rightarrow-\mathbf{k}\). To evaluate the terms proportional to \(n_{F}\left(\varepsilon_{k+q}^{\alpha(\beta)}\right)\), we can make the transformation \(\mathbf{k}\rightarrow-\mathbf{k}-\mathbf{q}\) so that \(n_{F}\left(\varepsilon_{k+q}^{\alpha(\beta)}\right)\to n_{F}\left( \varepsilon_{k}^{\alpha(\beta)}\right)\). Doing so and simplifying the expression, we find
\[\frac{1}{m}\int_{\mathbf{k}}\left(\frac{q_{y}n_{F}\left(\varepsilon_{k}^{\alpha }\right)}{\left(i\Omega_{m}-\varepsilon_{k+q}^{\beta}+\varepsilon_{k}^{\alpha} \right)^{2}}+\frac{k_{y}n_{F}^{\prime}\left(\varepsilon_{k}^{\alpha}\right)}{ i\Omega_{m}-\varepsilon_{k+q}^{\beta}+\varepsilon_{k}^{\alpha}}-\left(c.c., \alpha\leftrightarrow\beta\right)\right) \tag{101}\]
We can explicitly write out \(q_{y}=q\sin\theta_{qQ}\), \(\varepsilon_{k}-\varepsilon_{k+q}=-\frac{1}{m}kq\cos\theta_{kq}-\frac{q^{2}}{2 m}\pm 2\mu_{B}H\), and \(k_{y}=k\sin(\theta_{qQ}+\theta_{kq})\). The sign of \(\mu_{B}H\) determines whether \(\alpha=\uparrow,\beta=\downarrow\) or \(\alpha=\downarrow,\beta=\uparrow\). Then, integrating over \(\theta_{kq}\), we find
\[\int\frac{dkk}{2\pi}\frac{m\sin\theta_{qQ}}{q}\left(\frac{i\xi n_{F}(\varepsilon _{k}^{\alpha})}{\left(k^{2}-\xi^{2}\right)^{3/2}}-\frac{1}{m}\left(1+\frac{i \xi}{\sqrt{k^{2}-\xi^{2}}}\right)n_{F}^{\prime}(\varepsilon_{k}^{\alpha})- \left(c.c.,\alpha\leftrightarrow\beta\right)\right), \tag{102}\]
where we have written \(\xi=\frac{i\Omega_{m}\pm\mu_{B}H}{q}-\frac{q}{2}\). Formally, the above expression also depends on the sign of the imaginary part of \(\xi\). However, since we initially symmetrized the function so that \(\Omega_{m}>0\), we can simply write the function as it is shown above. Lastly, we can do integration by parts on the term proportional to \(n_{F}^{\prime}(\varepsilon_{k})\) to combine both above terms. Doing so, we find
\[-\int dkk\frac{1}{m}\left(1+\frac{i\xi}{\sqrt{k^{2}-\xi^{2}}} \right)n_{F}^{\prime}(\varepsilon_{k}^{\alpha}) =-\frac{1}{m}\left(1+\frac{i\xi}{\sqrt{k^{2}-\xi^{2}}}\right)n_{ F}(\varepsilon_{k}^{\alpha})\bigg{|}_{k=0}^{k=\infty} \tag{103}\] \[-\int dkk\frac{i\xi n_{F}(\varepsilon_{k}^{\alpha})}{\left(k^{2}- \xi^{2}\right)^{3/2}},\]
We can see this second term exactly cancels the term proportional to \(n_{F}(\varepsilon_{k})\) in Eq. (102), leaving only the term at the bounds,
\[-\left(1+\frac{i\xi}{\sqrt{-\xi^{2}}}\right)n_{F}(\varepsilon_{k=0}^{\alpha}). \tag{104}\]
However, we can simplify this term further yet. Recalling that \(\Omega_{m}>0\), and noting the branch cut that occurs in the square root, we can rewrite \(\frac{i\xi}{\sqrt{-\xi^{2}}}=-1\) so that the entire expression is zero. Therefore, we only need diagrams (a) and (f) when calculating the \(Q^{2}\) term of the current-current correlation function, even when both temperature and external magnetic field are finite.
## Appendix D Evaluation of \(\delta\chi_{dia}(H,0)\)
The point of departure is Eq. (29) in the main text:
\[\delta\Pi_{\perp}^{JJ}(H)=2U^{2}T^{2}\sum_{\alpha\neq\beta}\int_ {k,q} \left[\Pi^{\alpha\beta}(q,\Omega_{m})\Big{(}2(v_{k}^{y})^{2}\left(G_{k+Q/2}^{ \alpha}\right)^{2}G_{k-Q/2}^{\alpha}G_{k+q+Q/2}^{\beta}\right. \tag{105}\] \[\left.+v_{k}^{y}v_{k+q}^{y}G_{k+Q/2}^{\alpha}G_{k-Q/2}^{\alpha}G_ {k+q+Q/2}^{\beta}G_{k+q-Q/2}^{\beta}\Big{)}\right]\]
We re-express it as
\[\delta\Pi_{\perp}^{JJ}(H)=2U^{2}T^{2}\sum_{\alpha\neq\beta}\int_{k,q} \left[\Pi^{\alpha\beta}(q,\Omega_{m})\Big{(}(v_{k}^{y})^{2}G_{k+Q/2}^{\alpha}G_{k-Q/2}^ {\alpha}G_{k+q+Q/2}^{\beta}\left(2G_{k+Q/2}^{\alpha}+G_{k+q-Q/2}^{\beta}\right)\right.\] \[\left.\hskip 113.811024pt+v_{k}^{y}v_{q}^{y}G_{k+Q/2}^{\alpha}G_{k- Q/2}^{\alpha}G_{k+q+Q/2}^{\beta}G_{k+q-Q/2}^{\beta}\Big{)}\right]\] \[= 2U^{2}T^{2}\sum_{\alpha\neq\beta}\int_{k,q}\Pi^{\alpha\beta}(q, \Omega_{m})\left(\tilde{G}_{1}+\tilde{G}_{2}\right), \tag{100}\]
where we have used the fact that \(v_{k+q}^{y}=(k_{y}+q_{y})/m=v_{k}^{y}+v_{q}^{y}\) for a parabolic dispersion. Since we are only interested in the nonanalytic contribution to the function from \(q\sim 0\), we can easily integrate over \(\varepsilon_{k}\) first. Therefore, unlike in the case of \(H=0,T=0\), we can series expand in \(Q\) before integrating as long as the integral over \(\varepsilon_{k}\) is done before the integral over frequency. We can therefore expand \(\tilde{G}_{1}\) and \(\tilde{G}_{2}\) to order \(Q^{2}\) in a manner similar to the calculation of the gradient term of the spin and charge susceptibilities [38]. Expanding \(\tilde{G}_{1}\), we find a total of four terms, such that \(\tilde{G}_{1}=\tilde{G}_{1}^{a}+\tilde{G}_{1}^{b}+\tilde{G}_{1}^{c}+\tilde{G} _{1}^{d}\), where
\[\tilde{G}_{1}^{a} =\frac{1}{2m^{2}}\left(v_{k}^{y}\right)^{2}\left(\mathbf{k} \cdot\mathbf{Q}\right)^{2}\left(G_{k,\alpha}^{2}G_{k+q,\beta}^{4}+2G_{k, \alpha}^{3}G_{k+q,\beta}^{3}+3G_{k,\alpha}^{4}G_{k+q,\beta}^{2}+4G_{k,\alpha} ^{5}G_{k+q,\beta}\right) \tag{101}\] \[\tilde{G}_{1}^{b} =\frac{Q^{2}}{2m}\left(v_{k}^{y}\right)^{2}\left(G_{k,\alpha}^{2 }G_{k+q,\beta}^{3}+2G_{k,\alpha}^{3}G_{k+q,\beta}^{2}+3G_{k,\alpha}^{4}G_{k+q, \beta}\right)\] (102) \[\tilde{G}_{1}^{c} =\frac{1}{m^{2}}\left(v_{k}^{y}\right)^{2}\left(\mathbf{k} \cdot\mathbf{Q}\right)\left(\mathbf{q}\cdot\mathbf{Q}\right)\left(G_{k,\alpha }^{2}G_{k+q,\beta}^{4}+2G_{k,\alpha}^{3}G_{k+q,\beta}^{3}+G_{k,\alpha}^{4}G_{k +q,\beta}^{2}\right)\] (103) \[\tilde{G}_{1}^{d} =\frac{1}{2m^{2}}\left(v_{k}^{y}\right)^{2}\left(\mathbf{q} \cdot\mathbf{Q}\right)^{2}\left(G_{k,\alpha}^{2}G_{k+q,\beta}^{4}+2G_{k,\alpha }^{3}G_{k+q,\beta}^{3}\right). \tag{104}\]
We assume that \(q\ll k_{F}\) so that \(\varepsilon_{k}=v_{F}(k-k_{F})\), \(\varepsilon_{k+q}=\varepsilon_{k}+\mathbf{v}_{F}\cdot\mathbf{q}\), and \(v_{k}^{y}=v_{F}^{y}\). Doing so, we can immediately see that both \(\tilde{G}_{1}^{a}\) and \(\tilde{G}_{1}^{b}\) vanish after integration over \(\varepsilon_{k}\). In addition, one can show that \(\tilde{G}_{1}^{c}\) is odd over \(\Omega_{m}\), so it too will vanish. This leaves only \(\tilde{G}_{1}^{d}\). Now, evaluating \(\tilde{G}_{2}\) in a similar way, we get
\[\tilde{G}_{2}^{a} =\frac{1}{2m^{2}}v_{k}^{y}v_{q}^{y}\left(\mathbf{k}\cdot\mathbf{ Q}\right)^{2}\left(G_{k,\alpha}^{4}G_{k+q,\beta}^{2}+G_{k,\alpha}^{2}G_{k+q, \beta}^{4}\right) \tag{105}\] \[\tilde{G}_{2}^{b} =\frac{Q^{2}}{2m}v_{k}^{y}v_{q}^{y}\left(G_{k,\alpha}^{3}G_{k+q, \beta}^{3}+G_{k,\alpha}^{2}G_{k+q,\beta}^{3}\right)\] (106) \[\tilde{G}_{2}^{c} =\frac{1}{m^{2}}v_{k}^{y}v_{q}^{y}\left(\mathbf{k}\cdot\mathbf{Q} \right)\left(\mathbf{q}\cdot\mathbf{Q}\right)G_{k,\alpha}^{2}G_{k+q,\beta}^{4}\] (107) \[\tilde{G}_{2}^{d} =\frac{1}{2m^{2}}v_{k}^{y}v_{q}^{y}\left(\mathbf{q}\cdot\mathbf{ Q}\right)^{2}G_{k,\alpha}^{2}G_{k+q,\beta}^{4} \tag{108}\]
As before, \(\tilde{G}_{2}^{b}\) vanishes after integration over \(\varepsilon_{k}\). In addition, both \(\tilde{G}_{2}^{a}\) and \(\tilde{G}_{2}^{d}\) are odd in \(\Omega_{m}\), so they too will not contribute. This leaves solely \(\tilde{G}_{2}^{c}\). The total contribution will then come from only \(\tilde{G}_{1}^{d}\) and \(\tilde{G}_{2}^{c}\). After some simplification by doing integration by parts on the integral over \(\varepsilon_{k}\), we find
\[\delta\Pi_{\perp}^{JJ}(H)=\frac{2U^{2}}{m^{2}}\sum_{\alpha\neq\beta}\int_{k,q} \Pi^{\alpha\beta}(q,\Omega_{m})\left(\mathbf{q}\cdot\mathbf{Q}\right)v_{k}^{y} \left(\left(\mathbf{q}\cdot\mathbf{Q}\right)v_{k}^{y}-\left(\mathbf{k}\cdot \mathbf{Q}\right)v_{q}^{y}\right)\left(G_{k}^{\alpha}\right)^{5}G_{k+q}^{\beta} \tag{109}\]
Noting that, to leading order, the factor of \(\mathbf{k}\) is simply \(k_{F}\hat{k}\), then integrating over \(\varepsilon_{k}\), and lastly over \(\omega_{n}\) and both angles, we obtain Eq. (30) in the main text.
## Appendix E Nonanalyticities from Full Expressions for \(\chi_{dia}\)
Here we present verification that the analytic expressions we obtain in Sec. V match with the results obtained when one does integration over frequency first. In both situations, the process involves evaluating the first several integrals and/or sums analytically until we are just left with \(q\) and \(\Omega_{m}\). In the case of the magnetic field, we evaluate these terms numerically, and then compare the plot of \(H\) to the analytically derived value. The numerics were done in Mathematica using the PrincipalValue option to avoid complications from points where \(\Omega\to 0\) and \(q\to 2k_{F}\), where the integral is singular but convergent in the principal value sense. In the case of finite temperature, we show that
the function one obtains when doing these sums over Matsubara frequencies first reduces to the same expression as in Eq. (32) when one considers \(v_{F}q\sim\Omega_{n}\sim T\). In the case of \(q\sim 2k_{F}\), we were able to confirm that there are no nonanalytic terms of order \(T\log T\), but were unable to determine the exact coefficient of a potential linear-in-\(T\) term that may arise from these internal \(q\).
### Magnetic Field
In the case of the magnetic field, we first integrate over as many integrals analytically as we can, then do the ones that remain numerically. As a reminder, the total contribution to the diamagnetic susceptibility can be written as
\[\Pi_{\perp}(H)= 2U^{2}\sum_{\alpha\neq\beta}\int_{k,q}\left[\Pi^{\alpha\beta}(q, \Omega_{m})\Big{(}2(v_{k}^{y})^{2}\left(G_{k+Q/2}^{\alpha}\right)^{2}G_{k-Q/2}^ {\alpha}G_{k+q+Q/2}^{\beta}\right.\] \[\left.\hskip 113.811024pt+v_{k}^{y}v_{k+q}^{y}G_{k+Q/2}^{\alpha}G_ {k-Q/2}^{\alpha}G_{k+q+Q/2}^{\beta}G_{k+q-Q/2}^{\beta}\Big{)}\right] \tag{10}\]
To do this evaluation, we first integrate over \(\omega_{n}\), then series expand over \(Q\) as before. After this expansion, we can then evaluate over angles and \(k\). The remaining expression is then
\[\delta\chi_{dia}(H)= U^{2}N_{F}^{2}\chi_{dia}^{0}\int_{0}^{\infty}d\Omega_{m}\int_{0}^{ \infty}dq\frac{1}{8\pi q^{3}}\left(\sqrt{1-\alpha^{2}-\tilde{H}}-\sqrt{1- \beta^{2}+\tilde{H}}\right)\] \[\times \Bigg{(}\frac{3\tilde{H}^{2}-2\tilde{H}\left(q^{2}+3\alpha q+3 \right)+\alpha^{2}q^{2}+2q^{2}+6\alpha q+3}{\left(1-\alpha^{2}-\tilde{H} \right){}^{5/2}} \tag{11}\] \[\left.\hskip 113.811024pt-\frac{3\tilde{H}^{2}+2\tilde{H}\left(q ^{2}+3\beta q+3\right)+\beta^{2}q^{2}+2q^{2}+6\beta q+3}{\left(1-\beta^{2}+ \tilde{H}\right){}^{5/2}}\right)+(\tilde{H}\rightarrow-\tilde{H}),\]
where here \(\tilde{H}=\mu_{B}H/E_{F}\), \(\alpha=\frac{i\Omega_{n}+\tilde{H}}{q}-\frac{q}{2}\) and \(\beta=-\frac{i\Omega_{m}+\tilde{H}}{q}-\frac{q}{2}.\) We then subtract off the \(H=0\) term to obtain \(\delta\chi_{dia}(H)\), and integrate numerically over \(q\) and \(\Omega\) for \(\mu_{B}H\) ranging from \(-.02E_{F}\) to \(.02E_{F}\), producing the plot in Fig. 5.
### Finite Temperature
Here we show the procedure for evaluating the non-analytic term in the diamagnetic susceptibility at a finite \(T\) and zero Zeeman field. We first do summation over internal fermionic frequencies and then expand out to order \(Q^{2}\), as in the case of \(T=0\). Once the series expansion is done, the remaining integrals over angles are elementary. The resulting expression takes an unwieldy form, consisting of the sum of terms proportional to the Fermi distribution function and its derivatives. However, it can be simplified to something more manageable by doing the subsequent integration over fermionic momenta by parts. Reducing the dependence of the fermionic distribution to Fermi functions, we obtain
\[\delta\chi_{dia}(T)=\frac{3}{2}U^{2}N_{F}^{2}\chi_{dia}^{0}mT\sum_{n=-\infty} ^{\infty}\int_{0}^{\infty}dqI_{p}(q)I_{k}(q) \tag{12}\]
where
\[I_{p} =\int_{0}^{\infty}dpp\left(-\frac{n_{F}\left(\varepsilon_{p} \right)}{\sqrt{\alpha^{2}-p^{2}}}-\frac{n_{F}\left(\varepsilon_{p}\right)}{ \sqrt{\left(\alpha^{*}\right)^{2}-p^{2}}}\right) \tag{13}\] \[I_{k} =\int_{0}^{\infty}dkk\left(\frac{n_{F}\left(\varepsilon_{k} \right)\left(\alpha^{2}\left(4k^{2}+3q^{2}\right)+k^{2}\left(k^{2}+2q^{2} \right)+6\alpha qk^{2}+4\alpha^{3}q\right)}{q^{3}\left(\alpha^{2}-k^{2} \right){}^{7/2}}+c.c.\right), \tag{14}\]
and \(\alpha=\frac{im\Omega_{n}}{q}-\frac{q}{2}\). To proceed, we examine separately the contributions from small bosonic \(q\) and from \(|q|\approx 2k_{F}\). For both contributions we conjecture that the non-analytic \(O(T)\) term comes from the difference between summation and integration over bosonic Matsubara frequencies, while fermionic distribution functions can be approximated by step functions.
#### e.2.1 Contribution from small \(q\).
We assume, and then verify, that a nonanalytic contribution to \(\chi_{dia}^{q=0}\) comes from \(v_{F}q\sim\Omega_{n}\sim T\), i.e., from \(\Omega_{n}/q=\mathcal{O}(1)\), which allows us to simplify the integrals over \(k\) and \(p\). Using this, we approximate \(\alpha^{2}-p^{2}\) by \(-\frac{m^{2}\Omega_{n}^{2}}{q^{2}}-p^{2}+im\Omega_{n}\) in the integral over \(p\) and do the same in the integral over \(k\). The integral over \(p\) can be written as
\[I_{p}=\int_{0}^{k_{F}}dpp\,\left(-\frac{1}{\sqrt{-\frac{m^{2} \Omega_{n}^{2}}{q^{2}}-p^{2}-im\Omega_{n}}}-\frac{1}{\sqrt{-\frac{m^{2}\Omega_ {n}^{2}}{q^{2}}-p^{2}+im\Omega_{n}}}\right), \tag{100}\]
Expanding to leading order in \(\Omega_{n}\), we find
\[I_{p}=-\int_{0}^{k_{F}}dpp\,\frac{m|\Omega_{n}|}{\left(p^{2}+ \frac{m^{2}\Omega_{n}^{2}}{q^{2}}\right)^{3/2}} \tag{101}\]
Integrating over \(p\), we find
\[I_{p}=q\left(-1+\frac{|\Omega_{n}|}{\sqrt{v_{F}^{2}q^{2}+\Omega_ {n}^{2}}}\right). \tag{102}\]
We next do similar analysis of the integral over \(k\). Keeping terms of order one and of order \(\Omega_{n}\) in the numerator, we obtain
\[I_{k}=\int_{0}^{k_{F}}dkk\left(\frac{k^{4}+2im\Omega_{n}k^{2}-4k ^{2}m^{2}\Omega_{n}^{2}/q^{2}-4im^{3}\Omega_{n}^{3}/q^{2}}{q^{3}\left(-\frac{m^ {2}\Omega_{n}^{2}}{q^{2}}-k^{2}-im\Omega_{n}\right)^{7/2}}+c.c.\right) \tag{103}\]
Expanding further the denominator to leading order in \(\Omega_{n}\), we find after simple algebra
\[I_{k}=\int_{0}^{k_{F}}dkk\frac{m|\Omega_{n}|\left(24k^{2}m^{2} \Omega_{n}^{2}/q^{2}-3k^{4}-8m^{4}\Omega_{n}^{4}/q^{4}\right)}{q^{3}\left(k^{2 }+\frac{m^{2}\Omega_{n}^{2}}{q^{2}}\right)^{9/2}} \tag{104}\]
Integrating over \(k\), we find
\[I_{k}=\frac{|\Omega_{n}|v_{F}^{2}q^{2}\left(v_{F}^{2}q^{2}-4 \Omega_{n}^{2}\right)}{m^{2}\left(v_{F}^{2}q^{2}+\Omega_{n}^{2}\right)^{7/2}} \tag{105}\]
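Both radial integrals above are elementary. The sympy sketch below (ours; the parameter values are arbitrary positive rationals, with \(k_{F}=mv_{F}\)) confirms that they reproduce the quoted closed forms for \(I_{p}\) and \(I_{k}\).

```python
import sympy as sp

p, k, q, W, m, vF = sp.symbols('p k q Omega m v_F', positive=True)
kF, a = m*vF, m*W/q                                  # k_F = m v_F,  a = m Omega_n / q

Ip_integrand = p*m*W/(p**2 + a**2)**sp.Rational(3, 2)
Ip_claim = q*(-1 + W/sp.sqrt(vF**2*q**2 + W**2))

Ik_integrand = k*m*W*(24*k**2*a**2 - 3*k**4 - 8*a**4)/(q**3*(k**2 + a**2)**sp.Rational(9, 2))
Ik_claim = W*vF**2*q**2*(vF**2*q**2 - 4*W**2)/(m**2*(vF**2*q**2 + W**2)**sp.Rational(7, 2))

# Check at representative (positive, rational) parameter values.
vals = {m: 1, vF: sp.Rational(3, 2), q: sp.Rational(1, 2), W: sp.Rational(2, 3)}
Ip = -sp.integrate(Ip_integrand.subs(vals), (p, 0, kF.subs(vals)))
Ik = sp.integrate(Ik_integrand.subs(vals), (k, 0, kF.subs(vals)))
assert abs(sp.N(Ip - Ip_claim.subs(vals))) < 1e-12
assert abs(sp.N(Ik - Ik_claim.subs(vals))) < 1e-12
print("I_p and I_k reproduce the closed forms quoted in the text")
```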
Now, combining this with the results from the \(p\) integral, we have
\[I_{k}I_{p}=\frac{|\Omega_{n}|v_{F}^{2}q^{3}\left(v_{F}^{2}q^{2}- 4\Omega_{n}^{2}\right)}{m^{2}\left(v_{F}^{2}q^{2}+\Omega_{n}^{2}\right)^{7/2}} \left(\frac{|\Omega_{n}|}{\sqrt{v_{F}^{2}q^{2}+\Omega_{n}^{2}}}-1\right) \tag{106}\]
The term with \(-1\) in the last bracket vanishes after integration over \(q\), as one can easily verify. Dropping this term and substituting \(I_{k}I_{p}\) into (100), we obtain
\[\delta\chi_{dia}^{q=0}(T)=\frac{3}{2}U^{2}N_{F}^{2}\chi_{dia}^{0}T \sum_{n=-\infty}^{\infty}\int_{0}^{\infty}dq\frac{v_{F}^{2}q^{3}\Omega_{n}^{2 }\left(v_{F}^{2}q^{2}-4\Omega_{n}^{2}\right)}{m\left(v_{F}^{2}q^{2}+\Omega_{n }^{2}\right)^{4}} \tag{107}\]
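The statement above that the \(-1\) piece of the bracket drops out can be confirmed directly: after rescaling \(u=v_{F}q/|\Omega_{n}|\), the corresponding \(q\)-integral is proportional to \(\int_{0}^{\infty}du\,u^{3}(u^{2}-4)/(u^{2}+1)^{7/2}\), which vanishes. A quick numerical check (ours) is:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
# Rescaled (u = v_F q / |Omega_n|) q-integral multiplying the "-1" term of I_k I_p.
# The exact value is 0 (elementary after the substitution t = u**2 + 1).
val = sp.Integral(u**3*(u**2 - 4)/(u**2 + 1)**sp.Rational(7, 2), (u, 0, sp.oo)).evalf()
print(val)
assert abs(val) < 1e-10
```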
Re-expressing this result in terms of the current-current correlator \(\Pi_{JJ}\), we obtain the result that we presented in Eq. (32) in the main text.
In the main text, we evaluated the frequency sum over \(n\) and the integral over \(q\) by summing over \(\Omega_{n}\) first, subtracting off the \(T=0\) contribution, and then integrating over \(q\). For completeness, here we demonstrate that one can also obtain the same result by integrating over \(q\) first and then summing over Matsubara frequencies. Since this integral is formally divergent, we must institute an upper cutoff \(\Lambda\) on the integral over \(q\). In addition, there is ambiguity for the \(n=0\) term in the Matsubara sum. For any finite \(q\), it is easy to see that the \(n=0\) term is zero because of \(\Omega_{n}^{2}\) in the numerator of (107). That said, if we integrate over \(q\) first, we can see that the \(\Omega_{n}^{2}\) cancels out
because the q-integration yields \(1/\Omega_{n}^{2}\). This last term comes from the lower bound of \(q\)-integration, i.e., from \(q=0+\). This ambiguity can be resolved by formally instituting a lower cutoff to this term. This lower cutoff will not affect any terms with \(n\neq 0\), but will eliminate the \(n=0\) contribution. A more physically sound method is to evaluate this integral for a finite system, eliminate the \(n=0\) term, and then extend the system size to infinity [10]. Once this is done, we have
\[\delta\chi_{dia}^{q=0}(T) =3U^{2}N_{F}^{2}\chi_{dia}^{0}T\sum_{n=1}^{\infty}\int_{0}^{ \Lambda}dq\frac{v_{F}^{2}q^{3}\Omega_{n}^{2}\left(v_{F}^{2}q^{2}-4\Omega_{n}^{ 2}\right)}{m\left(v_{F}^{2}q^{2}+\Omega_{n}^{2}\right)^{4}} \tag{14}\] \[=-U^{2}N_{F}^{2}\chi_{dia}^{0}\frac{T}{4E_{F}}\sum_{n=1}^{\infty} \frac{\Lambda^{4}\left(\Lambda^{2}+6\Omega_{n}^{2}\right)}{\left(\Omega_{n}^{ 2}+\Lambda^{2}\right)^{3}} \tag{15}\]
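The \(q\)-integration in the second line can be verified symbolically. In units \(v_{F}=m=1\) (so that \(E_{F}=1/2\)) the per-frequency identity reads \(3\int_{0}^{\Lambda}dq\,q^{3}\Omega_{n}^{2}(q^{2}-4\Omega_{n}^{2})/(q^{2}+\Omega_{n}^{2})^{4}=-\tfrac{1}{2}\Lambda^{4}(\Lambda^{2}+6\Omega_{n}^{2})/(\Omega_{n}^{2}+\Lambda^{2})^{3}\); a short sympy check (ours) is:

```python
import sympy as sp

q, W, L = sp.symbols('q Omega Lambda', positive=True)
# Units v_F = m = 1, hence E_F = 1/2; W = Omega_n, L = cutoff Lambda.
lhs = 3*sp.integrate(q**3*W**2*(q**2 - 4*W**2)/(q**2 + W**2)**4, (q, 0, L))
rhs = -sp.Rational(1, 2)*L**4*(L**2 + 6*W**2)/(W**2 + L**2)**3
assert sp.simplify(lhs - rhs) == 0
print("q-integration step verified (per Matsubara frequency)")
```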
The sum over Matsubara frequencies can be evaluated analytically, and gives
\[\sum_{n=1}^{\infty}\frac{\Lambda^{4}\left(\Lambda^{2}+6\Omega_{n}^{2}\right)}{\left(\Omega_{n}^{2}+\Lambda^{2}\right)^{3}}=\frac{-64T^{3}+36\Lambda T^{2}\coth\left(\frac{\Lambda}{2T}\right)-5\Lambda^{3}\sinh\left(\frac{\Lambda}{T}\right)\mathrm{csch}^{4}\left(\frac{\Lambda}{2T}\right)+18\Lambda^{2}T\,\mathrm{csch}^{2}\left(\frac{\Lambda}{2T}\right)}{128T^{3}} \tag{16}\]
In the limit of \(T\to 0\), the above expression is \(\frac{9\Lambda}{32T}\). Subtracting this term off, and then taking \(\Lambda\rightarrow\infty\), we find, for the diamagnetic susceptibility,
\[\delta\chi_{dia}^{q=0}(T)=U^{2}N_{F}^{2}\chi_{dia}^{0}\frac{T}{8E_{F}}, \tag{17}\]
This is the same expression as Eq. (33) in the main text.
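The two steps above, the closed form of the Matsubara sum and the subtraction of its \(T\to 0\) piece \(9\Lambda/32T\) (which leaves \(-1/2\) and hence the \(T/8E_{F}\) term), can be confirmed numerically. The sketch below is ours; the values of \(T\) and \(\Lambda\) are arbitrary, subject to \(T\ll\Lambda\).

```python
import numpy as np

def lhs_sum(T, Lam, nmax=200000):
    """Truncated Matsubara sum over n >= 1 with Omega_n = 2*pi*n*T."""
    n = np.arange(1, nmax + 1)
    On2 = (2*np.pi*n*T)**2
    return np.sum(Lam**4*(Lam**2 + 6*On2)/(On2 + Lam**2)**3)

def closed_form(T, Lam):
    """Closed form quoted in the text, written with x = Lam/(2T)."""
    x = Lam/(2*T)
    return (-64*T**3 + 36*Lam*T**2/np.tanh(x)
            - 5*Lam**3*np.sinh(2*x)/np.sinh(x)**4
            + 18*Lam**2*T/np.sinh(x)**2) / (128*T**3)

T, Lam = 0.05, 1.0
assert abs(lhs_sum(T, Lam) - closed_form(T, Lam)) < 1e-6
# Subtracting the T -> 0 limit 9*Lam/(32*T) leaves approximately -1/2 for Lam >> T,
# which is the origin of the +U^2 N_F^2 chi^0 T/(8 E_F) term above.
print(closed_form(T, Lam) - 9*Lam/(32*T))            # ~ -0.5
```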
#### e.2.2 Contribution from \(q\approx 2k_{F}\).
Near \(q=2k_{F}\), we introduce \(q^{2}=4k_{F}^{2}(1+\delta)\) and approximate \(\alpha^{2}\) as \(\alpha^{2}=k_{F}^{2}(1+\delta)-im\Omega_{n}-m^{2}\Omega_{n}^{2}/(4k_{F}^{2})\). The last term can be absorbed into \(\delta\) and we neglect it below. Substituting this \(\alpha^{2}\) into the integrand for \(I_{p}\) in (15) and integrating over \(p\), we obtain
\[I_{p}=k_{F}\left(\sqrt{\delta-it_{n}}+\sqrt{\delta+it_{n}}\right) \tag{18}\]
where \(t_{n}=m\Omega_{n}/k_{F}^{2}\). For \(I_{k}\), the analysis requires more care as we will need both the leading and the subleading terms in \(\delta\) and \(t_{n}\). Expanding the numerator in the integrand for \(I_{k}\) in (15) to order \(\delta\) and \(t_{n}\) and integrating over \(k\), we obtain
\[I_{k}=\frac{1}{16k_{F}^{4}}\left[\left(\frac{1}{(\delta-it_{n})^{5/2}}+\frac{1 }{(\delta+it_{n})^{5/2}}\right)-\frac{2}{3}\left(\frac{1}{(\delta-it_{n})^{3/2 }}+\frac{1}{(\delta+it_{n})^{3/2}}\right)\right] \tag{19}\]
Substituting \(I_{p}\) and \(I_{k}\) into (13) and using \(dq=2k_{F}d\delta\), we obtain the contribution to \(\delta\chi_{dia}\) from \(|q|\approx 2k_{F}\) in the form
\[\delta\chi_{dia}^{2k_{F}}(T)=\frac{3}{2}U^{2}N_{F}^{2}\chi_{dia}^{0}\frac{T}{8 E_{F}}\sum_{n=-\infty}^{\infty}\int_{-\infty}^{\infty}d\delta S(\delta,t_{n}) \tag{20}\]
where
\[S(\delta,t_{n})=\frac{\delta^{2}-t_{n}^{2}}{(\delta^{2}+t_{n}^{2})^{2}}-\frac {2}{3}\frac{\delta^{2}-t_{n}^{2}}{(\delta^{2}+t_{n}^{2})^{3/2}}. \tag{21}\]
In (21) we neglected terms that are odd in \(\delta\) and vanish after integration over \(\delta\) in symmetric limits in (20).
The frequency sum in (20) contains terms with \(n\neq 0\), for which \(t_{n}\) is finite, and the term with \(n=0\) (the thermal contribution), for which \(t_{n}=0\). For the latter, \(S(\delta,0)\) contains non-integrable singularities: the first term scales as \(1/\delta^{2}\) and the second term scales as \(1/|\delta|\). The first singularity can be resolved simply by noting that, since we took the Fermi functions to be step functions, there must be a lower cutoff of order \(T/E_{F}\) in the integral over \(\delta\), hence the integral gives \(1/T\). Combining with the overall factor of \(T\) in (20), we find that the corresponding contribution to \(\delta\chi_{dia}^{2k_{F}}\) is \(T\)-independent. For the second term, the contribution from the lower cutoff is \((4/3)T\log(T/E_{F})+O(T)\).
We show below that the \(T\log T\) term is a parasitic one, and it cancels out with the analogous contribution from the sum over the terms with non-zero Matsubara frequency.
We now move to terms in (110) with \(n\neq 0\). For the first term in (110), the integral over \(\delta\) vanishes at any non-zero \(n\) because \(\int_{-\infty}^{\infty}dx(x^{2}-1)/(x^{2}+1)^{2}=0\). For the second term, integrating over \(\delta\) and neglecting the contribution from high energies (which has to be properly regularized), we obtain
\[\int d\delta S(\delta,t_{n})=\frac{4}{3}\left(\log\left(\pi|n|T/E_{F}\right)+2\right) \tag{111}\]
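The vanishing of the first term of \(S\) for \(n\neq 0\), quoted above, follows from \(\int_{-\infty}^{\infty}dx\,(x^{2}-1)/(x^{2}+1)^{2}=0\); a one-line check (ours) is:

```python
import sympy as sp

x = sp.symbols('x', real=True)
# First term of S(delta, t_n) for n != 0: its integral over delta vanishes.
assert sp.simplify(sp.integrate((x**2 - 1)/(x**2 + 1)**2, (x, -sp.oo, sp.oo))) == 0
print("first term of S gives no contribution for n != 0")
```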
For the summation over \(n\), we use the reasoning in [39] and sum up to \(n_{max}=\Lambda/(2\pi T)-1/2\), where \(\Lambda\sim E_{F}\). Using
\[\sum_{1}^{n_{max}}\log n=(n_{max}+1/2)\log\left((n_{max}+1/2)/e\right)+\frac{ 1}{2}\log 2\pi \tag{112}\]
we find
\[\frac{8}{3}T\sum_{1}^{n_{max}}\left(2+\log\left(n\pi T/E_{F}\right)\right)=- \frac{8}{3}T-\frac{4}{3}T\log T+\cdots \tag{113}\]
where dots stand for non-universal terms.
Combining the contributions from \(n=0\) and finite \(n\), we see that the \(T\log T\) term cancels out. Keeping the first term in (113) and substituting into (110), we obtain
\[\delta\chi_{dia}^{2k_{F}}(T)=-U^{2}N_{F}^{2}\chi_{dia}^{0}\frac{T}{2E_{F}} \tag{114}\]
Taken at face value, this result would imply that there exists a universal, linear in \(T\) contribution to \(\delta\chi_{dia}(T)\) from \(q\approx 2k_{F}\), which is opposite in sign and 4 times larger in magnitude than the contribution from \(q\approx 0\). We caution, however, that we neglected terms in \(\delta\chi_{dia}^{2k_{F}}(T)\), which come from energies of order \(E_{F}\). These terms are also formally linear in \(T\), and there may be another universal contribution to \(\delta\chi_{dia}^{2k_{F}}(T)\propto T\) from the ratio of the scales of order \(E_{F}\), e.g., from the numerical prefactor in the lower cutoff that we set at \(\delta_{min}\sim T/E_{F}\). Trying to obtain such a term would involve substantially more complicated analysis than the one we presented here. In this respect, the key result of the analysis in this section is the proof of the cancellation of the \(T\log T\) terms.
|
2303.13532 | Enhanced Iterated local search for the technician routing and scheduling
problem | Most public facilities in the European countries, including France, Germany,
and the UK, were built during the reconstruction projects between 1950 and
1980. Owing to the deteriorating state of such vital infrastructure has become
relatively expensive in the recent decades. A significant part of the
maintenance operation costs is spent on the technical staff. Therefore, the
optimal use of the available workforce is essential to optimize the operation
costs. This includes planning technical interventions, workload balancing,
productivity improvement, etc. In this paper, we focus on the routing of
technicians and scheduling of their tasks. We address for this purpose a
variant of the workforce scheduling problem called the technician routing and
scheduling problem (TRSP). This problem has applications in different fields,
such as transportation infrastructure (rail and road networks),
telecommunications, and sewage facilities. To solve the TRSP, we propose an
enhanced iterated local search (eILS) approach. The enhancement of the ILS
firstly includes an intensification procedure that incorporates a set of local
search operators and removal-repair heuristics crafted for the TRSP. Next, four
different mechanisms are used in the perturbation phase. Finally, an elite set
of solutions is used to extensively explore the neighborhood of local optima as
well as to enhance diversification during search space exploration. To measure
the performance of the proposed method, experiments were conducted based on
benchmark instances from the literature, and the results obtained were compared
with those of an existing method. Our method achieved very good results, since
it reached the best overall gap, which is three times lower than that of the
literature. Furthermore, eILS improved the best-known solution for $34$
instances among a total of $56$ while maintaining reasonable computational
times. | Ala-Eddine Yahiaoui, Sohaib Afifi, Hamid Afifi | 2023-03-12T23:44:49Z | http://arxiv.org/abs/2303.13532v1 | # Enhanced Iterated local search for the technician routing and scheduling problem
###### Abstract
Most public facilities in the European countries, including France, Germany, and the UK, were built during the reconstruction projects between 1950 and 1980. Owing to their deteriorating state, the maintenance of such vital infrastructure has become relatively expensive in recent decades. A significant part of the maintenance operation costs is spent on the technical staff. Therefore, the optimal use of the available workforce is essential to optimize the operation costs. This includes planning technical interventions, workload balancing, productivity improvement, etc. In this paper, we focus on the routing of technicians and scheduling of their tasks. We address for this purpose a variant of the workforce scheduling problem called the technician routing and scheduling problem (TRSP). This problem has applications in different fields, such as transportation infrastructure (rail and road networks), telecommunications, and sewage facilities. To solve the TRSP, we propose an enhanced iterated local search (eILS) approach. The enhancement of the ILS firstly includes an intensification procedure that incorporates a set of local search operators and removal-repair heuristics crafted for the TRSP. Next, four different mechanisms are used in the perturbation phase. Finally, an _elite set_ of solutions is used to extensively explore the neighborhood of local optima as well as to enhance diversification during search space exploration. To measure the performance of the proposed method, experiments were conducted based on benchmark instances from the literature, and the results obtained were compared with those of an existing method. Our method achieved very good results, since it reached the best overall gap, which is three times lower than that of the literature. Furthermore, eILS improved the best-known solution
for 34 instances among a total of 56 while maintaining reasonable computational times.
keywords: Maintenance, technician routing and scheduling, iterated local search, elite solutions, diversification, intensification. +
Footnote †: journal: Journal of Computational and Graphical Analysis
## 1 Introduction
Workforce scheduling is a relevant research topic in transportation and logistics, since it can be applied in many fields [6], such as technician routing and scheduling, manpower allocation, security personnel routing and rostering, and home care services. Interest in this research area is also driven by the importance of ensuring an efficient and satisfying client service policy after a product delivery, which substantially contributes to maintaining market share [15]. The workforce scheduling problem focuses on the elaboration of models and solution methods for planning in-field personnel activities, including their mobilization between different locations. Moreover, the problem consists in the elaboration of workload allocation and routing of technician crews, as well as the scheduling of their operations at the level of task locations, which include industrial facilities, patient homes, telecommunication infrastructure, etc. In addition, many objectives and challenges may be considered, such as increasing productivity, reducing transportation costs, increasing the number of fulfilled tasks, reducing outsourcing costs, reducing overtime, balancing technician workloads, etc. Furthermore, to have a reliable and satisfactory organization of the workforce in the field, several requirements and constraints have to be met: in addition to the classical vehicle routing problem constraints (capacity and time windows) and work regulations (breaks and workload), other aspects can be taken into consideration, such as skill types and competency levels required by each task, precedence constraints between several tasks for the same customer, priorities, limited crews of technicians, and sometimes the use of specific tools and spare parts.
In this paper, we address a variant of the technician routing and scheduling problem (TRSP) presented by Pillac et al.[24]. Given a crew of technicians and a set of tasks to fulfill at their respective locations, the goal is to assign subsets of tasks to individual technicians and construct the routes for each technician in such a way that the total duration of the routes is minimized. Several types of constraints must be respected by each route. First,
given the multi-depot structure of the TRSP, each technician is associated with a home depot where they must start and finish the allocated route. A second type of constraints is the compatibility between technicians and tasks: each task requires proficiency in a specific type of skill from the assigned technician, whereas a given technician may not necessarily be proficient in all those skills. Together, these two constraints give rise to the site-dependent constraint. Another type of constraints is resource requirements, where each task requires a certain amount of resources of different types. Two general resource classes are considered: tools and spare parts. While the former are renewable, the latter are non-renewable. Moreover, each technician starts the journey from his home depot with a set of tools and an initial inventory of spare parts. However, in the case where a technician does not have enough tools or spare parts to continue the journey, the inventory can be replenished by visiting a central depot once at some point in the route, where an infinite stock of tools and spare parts is available. Finally, as the TRSP is an extension of the VRPTW, a time window is associated with each task. In addition, each home depot has opening and closing times.
The main contribution of this paper is the proposal of an enhanced iterated local search (eILS) for the TRSP. It incorporates several procedures that are used during the intensification and perturbation phases. First, the intensification phase combines a set of removal-repair heuristics and local search operators. Second, several perturbation procedures are incorporated into the eILS. They differ in whether they are based on remove-repair operators or local search-based deterioration. In both cases, the criteria used may also differ, whether based on travel costs or duration. Hence, four perturbation mechanisms are introduced. On the other hand, an _Elite set_ of solutions is used to enhance the intensification and diversity management of the eILS. The extensive intensification approach is inspired by the proximate-optimality principle, and is achieved by allowing each solution to be the starting point of several ILS phases. Diversity management, in turn, is achieved by maintaining a relatively diverse population of solutions and discarding duplicate solutions that are too close to existing ones. At the lower level, we propose a new local search operator called the _SwapSequence_ operator. It interchanges two sequences of \(k\) and \(k^{\prime}\) visits between two different routes. A new removal operator is also proposed. It is derived from the related removal operator introduced in [27], where instead of removing a set of individual tasks, it removes sequences of tasks. A key feature of the eILS is the implementation of constant-time move evaluations and feasibility tests. This includes time window feasibility
checks, renewable and nonrenewable resource availability tests, technician skills compatibility tests and duration evaluation. Moreover, several speed-up techniques have been introduced in the eILS to achieve computational efficiency.
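To fix ideas, the overall loop described above can be summarized by the following schematic sketch. This is our own illustrative Python, not the authors' implementation: the operators, the parameters `elite_size` and `min_dist`, and the one-dimensional toy problem are placeholders standing in for TRSP-specific components.

```python
import random

def eILS(init, local_search, remove_repair, perturbations, cost, distance,
         max_iter=200, elite_size=5, min_dist=1e-6):
    """Schematic eILS loop: elite-seeded intensification, perturbation, diversity filter."""
    elite, best = [init], init
    for _ in range(max_iter):
        start = random.choice(elite)                    # elite solutions seed new ILS phases
        start = random.choice(perturbations)(start)     # one of several perturbation mechanisms
        cand = local_search(remove_repair(start))       # intensification phase
        if cost(cand) < cost(best):
            best = cand
        if all(distance(cand, s) >= min_dist for s in elite):   # diversity management
            elite = sorted(elite + [cand], key=cost)[:elite_size]
    return best

# Toy usage on a one-dimensional surrogate "solution" (a real TRSP solution
# would be a set of routes; all operators below are trivial stand-ins).
def local_search(x):
    for _ in range(20):                                 # repeated improving moves
        x -= 0.4*(x - 3.0)                              # cost below is (x - 3)^2
    return x

random.seed(0)
best = eILS(init=10.0,
            local_search=local_search,
            remove_repair=lambda x: x + random.uniform(-0.5, 0.5),
            perturbations=[lambda x: x + random.uniform(-2.0, 2.0),
                           lambda x: x*random.uniform(0.5, 1.5)],
            cost=lambda x: (x - 3.0)**2,
            distance=lambda a, b: abs(a - b))
print(round(best, 3))                                   # close to 3.0
```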
The remainder of this paper is organized as follows. A brief state of the art for the general workforce scheduling problem is provided in Section 2. In Section 3, we describe the TRSP by introducing the necessary notation as well as an illustrative example. Our metaheuristic approach proposed for the TRSP is presented in Section 4. The computational tests and detailed results are presented and discussed in Section 5. Finally, we present some conclusions drawn from this study and discuss relevant perspectives and extensions for the TRSP.
## 2 Related work
In this paper, we study a variant of the workforce scheduling problem where a set of technicians need to fulfill tasks at different locations. This general class of problems is called the workforce scheduling and routing problem (WSRP) [6]. A key characteristic of this class of problems is that moving from one location to another takes a significant amount of time, therefore, minimizing travel time will substantially contribute to cost reduction and improved productivity. Applications of these problems in real life can be found in several sectors, such as home health-care services and infrastructure maintenance operations.
Most WSRP variants found in the literature are extensions of the VRP with time windows (VRPTW). In this class of problems, each point of interest or customer is associated with a time window that specifies when the vehicle should start the service [10]. The main objective function of the VRPTW is cost minimization. Other variants first minimize the number of vehicles before optimizing travel costs. Another objective function that is less frequently used in the literature is the minimization of the duration [11]. In addition to the basic VRPTW, WSRP problems may also consider additional characteristics and constraints, such as multiple depots, multi-trips, site-dependent considerations, etc. In home health care, for example, Li et al. [19] presented a variant of the technician routing and scheduling problem with workload balance and outpatient service performed by doctors. Bredstrom & Ronnqvist (2007) [4] introduced a new variant of the VRPTW, with additional synchronization constraints between pairs of caregivers on a selected subset of
patients. Another variant of the VRPTW with profits and synchronization constraints has also been studied in [36], with an application in the context of fire-fighting. Several studies in recent years have focused on the integration of uncertainties into the workforce scheduling problem. Chen et al. (2016) [7] considered uncertain service times and proposed a branch-and-cut approach to solve the problem. Shi et al. (2019) [28] proposed a robust optimization model with uncertain service and travel times, and developed a metaheuristic combined with Monte Carlo simulation to solve it.
One of the first applications of the VRP in the field of workforce scheduling was reported by Weigel and Cao (1999) [33]. The authors proposed the use of the VRPTW to model technician dispatching and home delivery problems faced by a well-known retailer. To solve this problem, they proposed a tabu search algorithm that combines intra- and inter-route improvement procedures after the initial assignment of requests to technicians. Xu and Chiu (2001) [35] addressed the field technician scheduling problem inspired by the telecommunication industry. In the studied problem, the objective is to maximize preferences when assigning tasks to technicians, as well as to minimize the work duration. The authors proposed several approaches to solve the problem, namely, a greedy randomized adaptive search procedure (GRASP), upper bounds, relaxation schemes and an extended mathematical model. Tang et al. (2007) [30] modeled a maintenance-scheduling problem on a horizon of several days as several multiple-tour maximum collection problems with time-dependent rewards. The rewards decrease over time to favor scheduling tasks earlier. The authors proposed solving the problem using a tabu search heuristic. Bostel et al. (2008) [1] addressed a field force planning and routing problem over a multi-period time horizon using a rolling horizon approach, with an application in the field of water treatment and distribution. A memetic algorithm and a column generation-based heuristic are proposed to deal with the static version. An adapted procedure is proposed to address the dynamic version of the problem. An exact method based on the column generation approach was proposed for the same problem in Tricoire et al. (2013) [31].
The TRSP gained more attention after the French Operations Research Society (ROADEF) dedicated the yearly challenge to addressing the technician and intervention scheduling problem proposed by a well-known telecommunication company [12]. In this problem, each task is associated with a priority level and requires a certain level of proficiency in a set of skills such that it can be performed by a technician. Technicians are grouped into teams
and dispatched to fulfill tasks without the consideration of travel times. The objective of the problem is to execute tasks as early as possible depending on their priority levels. Cordeau et al. (2010) [9] developed an ALNS method, whereas Hashimoto et al. (2011) [14] proposed a GRASP method. Although the problem does not consider any routing decisions, it has been the origin of several variants of the WSRP. Kovacs et al. (2012) [17] proposed the service technician routing and scheduling problem (STRSP), which generalizes the variant proposed in [12] by taking the optimization of travel times into consideration. Moreover, in the case where it is not possible to execute all the tasks, an outsourcing cost is associated with each unfulfilled task. Two variants are investigated: the first is with team building and the other is without team building. The authors proposed a generic ALNS approach for both variants, which was validated on new benchmark instances derived from the ones proposed in [12]. Later, Xie et al. (2017) [34] tackled the variant without team building of [17] and proposed an iterated local search algorithm that succeeded in finding several new best-known solutions for the problem. Mathlouthi et al. (2018) [22] introduced a new variant of the TRSP. Its main features are the consideration of multiple time windows per task, the possibility for technicians to take breaks during the day, and the possibility of picking up some special types of spare parts that are available only at a subset of central depots. The authors proposed a mathematical formulation that was tested on small-size instances. To tackle large-size instances, the same authors proposed in Mathlouthi et al. (2018) [21] a tabu search heuristic enhanced by an adaptive memory mechanism.
Zamorano and Stolletz (2017) [37] proposed a new variant of the TRSP that combines team building and multi-period features. This variant was inspired by real-life problems faced by an external maintenance provider specializing in forklifts. The authors proposed a mixed-integer program and a branch-and-price algorithm, considering different sub-problem formulations during the column generation phase. Tests are carried out on a set of artificial instances and real-world data. Pekel (2020) [23] addressed the same problem presented in [37]. The author proposed an improved particle swarm optimization (IPSO) algorithm and compared its results with those of a branch-and-cut algorithm. Guastaroba et al. (2020) [13] presented another variant of the WSRP with multiple periods and team building. To solve this problem, they proposed a mixed-integer program and two meta-heuristic methods. The first method is a math-heuristic approach based on ALNS, whereas the second is a three-phase decomposition algorithm.
## 3 Problem description
We consider a set of technicians/vehicles \(\mathcal{K}=\{0,\ldots,K-1\}\), where \(|\mathcal{K}|=K\), and a set of tasks \(\mathcal{R}=\{0,\ldots,N-1\}\), where \(|\mathcal{R}|=N\). In the following, the terms technician and vehicle are used interchangeably. Each technician \(k\in\mathcal{K}\) starts from its home depot \(o_{k}\in O=\{1+k|k\in\mathcal{K}\}\) and ends at the same depot. A central depot "0" is open to technicians to replenish their inventory of spare parts and necessary tools. We define \(\delta_{0}\) as the replenishment time at the central depot. Each task \(i\) is associated with a location \(u_{i}\in U=\{K+1+i|i\in\mathcal{R}\}\), a service time duration \(\delta_{i}\) and a time window \([e_{i},l_{i}]\) defining the earliest and latest service starting times. In addition, with each arc \((i,j)\), \(i,j\in V=O\cup U\cup\{0\}\), we associate a travel time \(t_{ij}\), which is the same for all vehicles. We also consider several types of tools \(T=\{1,\ldots,|T|\}\), spare parts \(P=\{1,\ldots,|P|\}\) and skills \(Q=\{1,\ldots,|Q|\}\). Each task \(i\in\mathcal{R}\) requires \(d_{ip}\) units of spare parts of type \(p\in P\), and the tool \(t\in T\) if the Boolean \(b_{it}\) is true. We also set the constant \(a_{iq}\) to \(1\) if task \(i\) requires skill \(q\in Q\). For each vehicle \(k\in\mathcal{K}\), we denote the initial inventory of spare parts of type \(p\in P\) by \(v_{p}^{k}\), whereas a Boolean \(w_{t}^{k}\) is set to \(1\) if tool \(t\in T\) is in the vehicle when it leaves the depot. We also set a constant \(y_{q}^{k}\) to \(1\) if the technician possesses skill \(q\in Q\). Because skills are intrinsic to each technician, we can define a compatibility list of technicians for each task. Hence, we denote by \(K_{i}\) the set of technicians compatible with task \(i\in\mathcal{R}\). Finally, we associate with each home depot \(o_{k}\), \(k\in\mathcal{K}\), an opening and a closing time window \([e_{k},l_{k}]\).
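To make the notation concrete, the following C++ sketch shows one possible way to organize the instance data introduced above; all type and field names are illustrative and do not come from the authors' implementation.

```cpp
#include <vector>

// Minimal sketch of the TRSP instance data described above (names are illustrative).
struct Task {
    int id;                               // index in R
    int location;                         // node u_i
    double service_time;                  // delta_i
    double earliest, latest;              // time window [e_i, l_i]
    std::vector<int> spare_parts_needed;  // d_ip for each spare-part type p in P
    std::vector<bool> tools_needed;       // b_it for each tool type t in T
    std::vector<bool> skills_needed;      // a_iq for each skill q in Q
};

struct Technician {
    int id;                               // index in K
    int home_depot;                       // node o_k
    double depot_open, depot_close;       // [e_k, l_k]
    std::vector<int> spare_parts;         // initial inventory v_p^k
    std::vector<bool> tools;              // w_t^k
    std::vector<bool> skills;             // y_q^k
};

struct Instance {
    std::vector<Task> tasks;
    std::vector<Technician> technicians;
    std::vector<std::vector<double>> travel_time;  // t_ij over all nodes in V
    double replenishment_time;                     // delta_0 at the central depot (node 0)

    // A technician is compatible with a task iff it possesses every required skill,
    // which is how the compatibility lists K_i can be derived.
    bool compatible(const Technician& k, const Task& i) const {
        for (std::size_t q = 0; q < i.skills_needed.size(); ++q)
            if (i.skills_needed[q] && !k.skills[q]) return false;
        return true;
    }
};
```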
## 4 Enhanced iterated local search
We provide in this section a detailed description of the eILS.
### Low-level heuristics
In this section, we provide a detailed description of the low-level heuristics/operators used within the different components of the eILS.
#### 4.1.1 Remove-Repair based perturbation
Given an input solution and a perturbation parameter \(D_{max}\), this procedure removes a random number of tasks between \(1\) and \(D_{max}\), and then applies the best insertion algorithm to repair the solution.
_Best insertion algorithm_. This heuristic iteratively inserts tasks in the solution at their best positions according to one of two evaluation criteria: the duration or the travel costs. At each iteration, the list of unscheduled tasks is computed, and all feasible insertions in compatible vehicles are computed for each task. If no feasible insertion for a given task is found in a compatible vehicle, then the best insertion algorithm looks for feasible insertions while considering a pass by the central depot for replenishment. The task with the best insertion is selected and scheduled at its respective position. This process is iterated until all tasks are scheduled or no feasible insertions are found.
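As an illustration, the following C++ sketch outlines the greedy loop behind the best insertion algorithm. The feasibility and cost callbacks are placeholders for the constant-time checks of Section 4.7, the fallback pass by the central depot is omitted for brevity, and the function names are ours rather than the authors'.

```cpp
#include <functional>
#include <limits>
#include <vector>

using Route = std::vector<int>;  // a route is a sequence of task ids

// Sketch of the best insertion loop: at each iteration, the cheapest feasible
// insertion over all unscheduled tasks, routes and positions is performed.
void bestInsertion(std::vector<Route>& routes, std::vector<int>& unscheduled,
                   const std::function<bool(const Route&, int, int)>& isFeasible,
                   const std::function<double(const Route&, int, int)>& insertionCost) {
    while (!unscheduled.empty()) {
        double bestCost = std::numeric_limits<double>::max();
        int bestU = -1, bestR = -1, bestPos = -1;
        for (int u = 0; u < (int)unscheduled.size(); ++u)
            for (int r = 0; r < (int)routes.size(); ++r)
                for (int pos = 0; pos <= (int)routes[r].size(); ++pos) {
                    if (!isFeasible(routes[r], unscheduled[u], pos)) continue;
                    double c = insertionCost(routes[r], unscheduled[u], pos);
                    if (c < bestCost) { bestCost = c; bestU = u; bestR = r; bestPos = pos; }
                }
        if (bestU < 0) break;  // no feasible insertion left for any task
        routes[bestR].insert(routes[bestR].begin() + bestPos, unscheduled[bestU]);
        unscheduled.erase(unscheduled.begin() + bestU);
    }
}
```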
_Removal operators_. Given a solution, removal operators remove a random number of tasks between 1 and \(D_{max}\).
* _Random Removal_ : randomly selects the tasks and removes them from their respective routes.
* _Worst Removal_ : iteratively selects the task yielding the maximum cost reduction. Two versions are available depending on the move evaluation: based on duration or on travel costs.
* _Sequence related removal_ : This removal heuristic is inspired by the _related_ removal operator introduced in Shaw et al. [27] as well as the SISR operator in [8]. The basic idea of this operator is to remove sub-sequences of visits from distinct routes so that the best insertion algorithm re-inserts them and, hopefully, constructs promising sub-sequences that improve the objective value of the solution. This approach proved to be efficient, especially on problems where tasks have tight time windows. In the first step, a set of tasks is selected from different routes using the _related_ removal operator of Shaw et al. [27]. This operator takes into consideration the spatial and temporal relatedness of those tasks. We call these tasks _seeds_. In the second step, a sub-sequence of visits occurring after each _seed_ is removed together with the _seed_. Fig. 1 illustrates the impact of the _sequence-related_ removal operator. The number of removed tasks is initialized to 3, and varies according to the perturbation parameter (See Section 4.2).
It is noteworthy to mention that, after a task is removed from a given route, a test is systematically performed to verify whether the visit to the central depot is still relevant, i.e., whether the initial inventory of tools and spare parts satisfies the requirements of the remaining tasks in the route.
Based on these operators, Algorithm 1 presents the skeleton of a remove-repair based perturbation procedure.
```
input : Solution \(S\), perturbation parameter \(D_{max}\)
1 \(unscheduled\gets getUnscheduledTasks(S)\)
2 \(r\leftarrow\mathcal{U}(1,3)\)
3 if \((r=1)\) then \(tmpUnscheduled\gets RandomRemoval(S,D_{max})\)
4 else if \((r=2)\) then \(tmpUnscheduled\gets WorstRemoval(S,D_{max})\)
5 else \(tmpUnscheduled\gets SeqRelatedRemoval(S,D_{max})\)
6 \((S,unscheduled)\gets BestInsertion(S,unscheduled)\)
7 \((S,tmpUnscheduled)\gets BestInsertion(S,tmpUnscheduled)\)
8 \(unscheduled\gets unscheduled\cup tmpUnscheduled\)
9 return \(S\)
```
**Algorithm 1** Removal/repair perturbation procedure
#### 4.1.2 Local search operators
The local search procedure is implemented as a variable neighborhood descent search (VNDS) procedure. This component is composed of two sets of local search operators: inter-route search operators and intra-route search operators.
The inter-route search operators set \(\mathcal{N}^{e}\) comprises three (03) operators.
Figure 1: Sequence-related removal process
* _2-Opt*_ : this operator explores the possibilities of exchanging two arcs \((i,j)\) and \((k,l)\) located in two distinct routes with arcs \((i,l)\) and \((k,j)\). Because in the TRSP each vehicle should start and end at the same depot, exchanging arcs between the last visits and the depots should be taken into consideration during move evaluation.
* _Swap-relocate_ : this operator explores the relocation of a task or an arc to another route, either by placing it between two consecutive visits or by interchanging it with a task in the second route. Moreover, we consider the possibility of reversing the arc before it is relocated. This gives rise to six (06) different movements. \(Swap-relocate(1,0)\) is a simple relocation of a task from one route to another. \(Swap-relocate(1,1)\) is a swap of two tasks visited in two distinct routes. \(Swap-relocate(2,0)\) is a relocation of an arc whereas \(Swap-relocate(2,1)\) is an interchange of an arc with a task from another route. Finally, \(Swap-relocate(2,0)^{r}\) and \(Swap-relocate(2,1)^{r}\) reverse the arc before performing the relocation or the interchange.
* _Swap-sequence_ : this operator is similar to _Swap-relocate_, except that it does not consider the reversal of sub-sequences. Several combinations are possible: _Swap-sequence(2,2)_, _Swap-sequence(3,k)_ and _Swap-sequence(4,k)_, where the length \(k\) of the second subsequence is less than or equal to that of the first subsequence.
Regarding the intra-route search operators set \(\mathcal{N}^{a}\), four (04) operators are considered:
* _Exchange\({}^{1}\)_ : this operator explores the possibilities of exchanging the position of two tasks \(i\) and \(j\) located in the same route.
* _Shift\({}^{1}\)_ : this operator tries to move a given task \(i\) forward and backward to another position in the same route.
* _R-Opt_ : this operator tries to move a sequence of two or three visits forward and backward in the same route.
* _2-Opt_ : this operator explores the possibility to improve a given route by replacing the two arcs \((i,j)\) and \((k,l)\) by the arcs \((i,k)\) and \((j,l)\); this implies the reversal of a sub-sequence between visits \(j\) and \(k\), \(j\) and \(k\) included.
### Iterative remove-repair heuristic
We also propose in this paper a fast heuristic called the iterative removal/repair procedure (IRRP). This fast heuristic is used in several parts of the eILS. Algorithm 2 outlines its general structure. The main difference between the IRRP and a classic ILS is that it does not contain an intensification phase. The algorithm starts from an arbitrary solution (empty, partial or complete) and iteratively performs a removal/repair perturbation phase (See Section 4.1.1). Once a new solution is constructed, it first undergoes an acceptance test to verify whether the incumbent solution is going to be updated. Subsequently, if an improvement in the objective function is achieved, \(S_{best}\) is updated. The value of \(D_{max}\) is also updated at the end of each iteration.
```
input : Solution \(S\).
1 \(S_{Best}\gets S\)
2 \(S_{Incumb}\gets S\)
3 \(init(D_{max})\)
4 while \((!StopCondition)\) do
5  \(S_{Tmp}\gets removalRepairPerturbation(S_{Incumb},D_{max})\) (See Section 4.1.1)
6  if \((AcceptSolution(S_{Tmp}))\) then \(S_{Incumb}\gets S_{Tmp}\)
7  if \((f(S_{Incumb})<f(S_{Best}))\) then \(S_{Best}\gets S_{Incumb}\)
8  \(Update(D_{max})\)
9 end while
10 return \(S_{Best}\)
```
**Algorithm 2** Iterative removal/repair procedure
### Iterated local search
The ILS is a metaheuristic scheme introduced by Lourenco et al. (2003) [20]. This approach is known for its high potential, ease of implementation and small number of parameters to tune [18]. A typical ILS algorithm is generally composed of three components: generation of an initial solution, a perturbation phase, and a local search procedure. The perturbation and local search procedures are iteratively applied to construct a new solution at each iteration. Instead of starting each time from scratch or from the same base solution, the algorithm uses the solution of the previous iteration as a starting point.
The role of the perturbation phase is to prevent the metaheuristic from being trapped in local optima, whereas the local search aims at finding new local optimal solutions. The series of local optima produced by this process can be considered as a single chain of solutions followed by the ILS. Algorithm 3 depicts the general scheme of the ILS. \(D_{max}\) is called the perturbation degree and it is provided as a parameter to the perturbation procedure. It is initialized with a small value, then incremented after each iteration without improvement, and reset to its initial value after each improvement. The stopping condition used in the ILS is \(N\) iterations without improvement. The best solution is updated after each improvement of the total duration (line 7). It is noteworthy to mention that the fitness of a partial solution is computed as the sum of its total duration and the number of unscheduled tasks multiplied by a penalty. In our case, we set the penalty to \(10^{3}\).
```
input : Initial solution \(S\)
1 \(S\gets Intensification(S)\) (See Section 4.3.1)
2 \(S_{ILS}\gets S\)
3 \(D_{max}\gets D_{0}\)
4 while \((!StopCondition)\) do
5  \(S\gets Perturbation(S,D_{max})\) (See Section 4.3.2)
6  \(S\gets Intensification(S)\) (See Section 4.3.1)
7  if \((f(S)<f(S_{ILS}))\) then \(S_{ILS}\gets S\)
8  \(Update(D_{max})\)
9 end while
10 return \(S_{ILS}\)
```
**Algorithm 3** Iterated local search algorithm
An important issue with the ILS is that, by moving from one local optimum to another after each iteration, the exploration of the surrounding neighborhood of each solution is limited, which may cause the ILS to miss some good improving solutions. This aspect is pointed out by the proximate-optimality principle (POP) [2], which suggests that good solutions are close to each other. To address this drawback, we further enhance the ILS by storing a set of elite solutions that is supplied with the local optima found so far and used later as base solutions for the ILS (See Section 4.4).
#### 4.3.1 Intensification phase
A key feature of our approach is its reliance on an intensification phase that combines IRRP with a set of local search operators.
The procedure is a version of the IRRP (see Section 4.2) that starts from a complete or a partial solution. Intensification is achieved by fixing \(D_{max}\) to a relatively small value (see Section 5.2) and performing no more than \(N\) iterations. In this version of the IRRP, the solution provided in the input is maintained as the incumbent solution during the whole process (Algorithm 2, line 6), and it is only updated if the objective value is improved. A small and fixed value of \(D_{max}\) (Algorithm 2, line 8) allows the IRRP to extensively explore the neighborhood of the current solution. During the removal/repair perturbation phase, the aim is to improve the total duration of the incumbent solution. It is noteworthy to mention that, in the IRRP used during intensification, we consider one removal operator rather than three (see Section 4.1.1), namely the _random removal_ operator.
Algorithm 4 shows a pseudo code for the intensification procedure. The algorithm sequentially executes the operators one after another and iterates as long as there is at least one of the operators that succeeds in improving the current solution. After preliminary experimentation, \(applySwap-sequence(S)\) only calls _Swap-sequence(3,k)_, whereas \(applySwap-relocate(S)\) explores all the combinations described earlier (see Section 4.1.2).
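A minimal sketch of this operator loop is given below; each operator is modeled as a callable that returns true when it improves the solution in place, and the names are ours rather than the authors'.

```cpp
#include <functional>
#include <vector>

// Sketch of the intensification loop described above: operators are applied
// sequentially and the loop continues as long as at least one of them improves
// the current solution.
template <typename Solution>
void intensify(Solution& s,
               const std::vector<std::function<bool(Solution&)>>& operators) {
    bool improved = true;
    while (improved) {
        improved = false;
        for (const auto& op : operators)
            if (op(s)) improved = true;   // op(s) returns true if s was improved in place
    }
}
```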
#### 4.3.2 Perturbation phase
Two perturbation approaches have been commonly used in the literature. The first approach is based on removal/repair procedures (see Section 4.1.1), whereas the second one is based on local search operators. Hereafter, we discuss the implementation of these perturbation mechanisms in our method.
This perturbation procedure is based on the _Swap-sequence_ operator described earlier (see Section 4.1.2). A similar approach can be found in [34]. It applies a series of feasible but non-improving moves that deteriorate the fitness of the solution given as an input. Moreover, this procedure is designed such that the order in which the perturbation moves are performed is different from that used during the intensification phase. This is achieved in a similar way as in Brandao (2020) [3]. First, a data structure (an array) is used to store the number of times each task is involved in local search moves performed during the intensification phase so far. Then, at the beginning of the perturbation phase, the array of counters is sorted in non-increasing order. For each task in this data structure, non-improving moves involving
the sub-sequence that starts from this task are computed, and a random move is selected and performed. This process iterates over the list of tasks in a cyclic fashion, until the maximum number of moves is reached. The maximum number of moves is determined as a function of the perturbation degree parameter (see Section 4).
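The following sketch illustrates one possible implementation of this counter-guided ordering. The callable that applies a random feasible, non-improving move on the sub-sequence starting at a given task is a placeholder; the move generation itself is not shown.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Sketch of the local-search-based perturbation ordering described above:
// tasks most involved in intensification moves are perturbed first, cycling
// through the sorted list until maxMoves non-improving moves have been applied.
template <typename Solution, typename MoveFn>
void localSearchPerturbation(Solution& s, const std::vector<int>& moveCounter,
                             int maxMoves, MoveFn applyRandomNonImprovingMove) {
    if (moveCounter.empty()) return;
    std::vector<int> order(moveCounter.size());
    std::iota(order.begin(), order.end(), 0);
    std::stable_sort(order.begin(), order.end(),
                     [&](int a, int b) { return moveCounter[a] > moveCounter[b]; });
    int performed = 0, attempts = 0;
    const int maxAttempts = (int)order.size() * std::max(1, maxMoves);  // safeguard
    for (std::size_t i = 0; performed < maxMoves && attempts < maxAttempts;
         i = (i + 1) % order.size(), ++attempts)
        if (applyRandomNonImprovingMove(s, order[i])) ++performed;
}
```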
### Elite solutions
As explained in the previous sections, the ILS moves systematically from one local optimum to another after each iteration, without necessarily exploring the whole neighborhood of each local optimum. To tackle this issue, we propose to store a set of good and relatively diversified solutions, called the _elite set_, in a list of maximum size \(N_{Pop}\). The basic idea of this approach is that, instead of starting the ILS from an empty solution, we use the ILS to continuously improve the solutions in the _elite set_. Basically, the _elite set_ undergoes a series of updates that add new solutions of good quality and sufficient diversity while simultaneously discarding unpromising ones. This approach shares several characteristics with evolutionary algorithms ([25], [32]), where a population of solutions evolves over several generations.
The evaluation of the solutions present in the _elite set_ is of crucial importance, because it allows the systematic discarding of unpromising solutions and maintains a high level of diversity. To achieve this purpose, we consider a so-called _biased fitness_, which takes into consideration, in addition to the fitness of the solution, its contribution to the diversity of the _elite set_ [25]. The contribution of each solution to the diversity of the _elite set_ is computed as the average distance between the current solution and the \(n_{Close}\) closest solutions in the _elite set_. This parameter is fixed to \(20\%\) of the size of the population. Solutions are then ordered according to a weighted sum of their fitness and diversity contributions; the best \(N_{Pop}\) solutions are kept, and the others are discarded.
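The sketch below illustrates one way to compute such a biased fitness. The distance measure between solutions and the weight \(w\) given to the diversity term are implementation choices the text leaves open, and the rank-based combination follows the spirit of [25]; it is our assumption, not the authors' code.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Sketch of the biased-fitness evaluation described above. fitness[i] is the
// (penalized) duration of solution i, dist[i][j] a distance between solutions
// (e.g., a broken-pairs distance), and w the weight of the diversity rank.
std::vector<double> biasedFitness(const std::vector<double>& fitness,
                                  const std::vector<std::vector<double>>& dist,
                                  double w) {
    const int n = (int)fitness.size();
    const int nClose = std::max(1, n / 5);               // 20% of the elite-set size
    std::vector<double> divContribution(n, 0.0);
    for (int i = 0; i < n; ++i) {
        std::vector<double> d;
        for (int j = 0; j < n; ++j) if (j != i) d.push_back(dist[i][j]);
        int m = std::min(nClose, (int)d.size());
        std::partial_sort(d.begin(), d.begin() + m, d.end());
        for (int k = 0; k < m; ++k) divContribution[i] += d[k];
        divContribution[i] /= std::max(1, m);            // avg distance to the closest solutions
    }
    // Rank solutions by fitness (lower is better) and by diversity (higher is better),
    // then combine the two ranks; lower biased fitness means a more valuable solution.
    auto ranks = [n](const std::vector<double>& v, bool ascending) {
        std::vector<int> idx(n), r(n);
        std::iota(idx.begin(), idx.end(), 0);
        std::sort(idx.begin(), idx.end(),
                  [&](int a, int b) { return ascending ? v[a] < v[b] : v[a] > v[b]; });
        for (int pos = 0; pos < n; ++pos) r[idx[pos]] = pos;
        return r;
    };
    std::vector<int> fitRank = ranks(fitness, true), divRank = ranks(divContribution, false);
    std::vector<double> biased(n);
    for (int i = 0; i < n; ++i) biased[i] = fitRank[i] + w * divRank[i];
    return biased;   // keep the N_pop solutions with the lowest biased fitness
}
```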
### Generation of Initial Solutions
This procedure is a version of the IRRP that starts from an empty solution. The incumbent solution is always updated with the solution newly constructed in the previous iteration (Algorithm 2, line 6). \(D_{max}\) is set to its initial value \(D_{0}=3\). The value of \(D_{max}\) is incremented by 1 after each iteration without improvement (Algorithm 2, line 8), and reset to its initial value once a new best solution is found. The stopping condition of the IRRP is \(2N+K\) iterations, where \(N\) is the number of tasks and \(K\) is the number of vehicles. The evaluation criterion used in the IRRP is the total duration (see Section 4.7).
### General flow
The algorithm starts with the initialization of the _elite set_. \(K\) solutions are constructed from scratch using the IRRP heuristic (lines 2-6) (see Section 4.5). After computing the _biased fitness_ of each solution (see Section 4.4), \(N_{Pop}\) solutions are stored whereas the others are discarded (line 7).
As described in Section 4.3.2, a set \(\mathcal{P}\) of four different versions of ILS is considered.
* \(ILS_{1}\) : the perturbation is performed in a remove-repair fashion, using the duration as the move evaluation criterion.
* \(ILS_{2}\) : the perturbation is performed in a remove-repair fashion, using the travel cost as the move evaluation criterion.
* \(ILS_{3}\) : the perturbation is based on a local search procedure and move evaluation is based on the duration.
* \(ILS_{4}\) : the perturbation is based on a local search procedure and move evaluation is based on the travel cost.
Two phases are considered in the eILS algorithm. During phase one, only \(ILS_{1}\) is used to generate new solutions. After \(CV_{1}\) iterations without improving the best solution, the eILS passes to phase 2, where \(ILS_{2},ILS_{3}\), and \(ILS_{4}\) are sequentially called in this order, each one being executed during \(CV_{2}\) iterations. Once a new improving solution is found, the eILS switches systematically back to phase 1. This logic is implemented by the procedure \(selectILS(i,lstImpr,CV_{1},CV_{2})\) (line 12) in Algorithm 5. After experimentation, the values of \(CV_{1}\) and \(CV_{2}\) are set to 45 and 30, respectively. Regarding the general flow, the algorithm fetches at the beginning of each iteration a solution \(S\) from the _elite set_ using a binary tournament, then applies to it a major perturbation operation with a perturbation degree equal to \(N/2\) (line 10), and provides it to a variant of the ILS, which is chosen (line 12) based on the current iteration \(i\), the iteration of the last improvement \(lstImpr\), as well as \(CV_{1}\) and \(CV_{2}\). The best solution found by the ILS is then added to the _elite set_ and triggers an update operation, during which at most \(N_{Pop}\) solutions are retained in the _elite set_, whereas duplicates and unpromising solutions are removed (line 12). This process is iterated \(N_{Ils}\) times. At the end of the eILS, the best solution in terms of objective value present in the _elite set_ is returned (line 14).
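The phase-switching rule can be summarized by a small helper in the spirit of \(selectILS\); the cyclic ordering of the three phase-2 variants below reflects our reading of the text rather than the authors' exact code.

```cpp
// Sketch of the ILS-variant selection described above. i is the current eILS
// iteration and lstImpr the iteration of the last improvement; CV1 = 45 and
// CV2 = 30 in the paper. Returned index: 0 -> ILS1, 1 -> ILS2, 2 -> ILS3, 3 -> ILS4.
int selectILS(int i, int lstImpr, int CV1, int CV2) {
    int sinceImpr = i - lstImpr;
    if (sinceImpr < CV1) return 0;       // phase 1: only ILS1 is used
    int phase2Iter = sinceImpr - CV1;    // iterations already spent in phase 2
    return 1 + (phase2Iter / CV2) % 3;   // cycle ILS2 -> ILS3 -> ILS4, CV2 iterations each
}
```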
### Move evaluation and feasibility check
The TRSP is a rich vehicle routing problem [5], with several characteristics and constraints, such as time windows, renewable and non-renewable resources, multiple depots, and site-dependent considerations. The difficulty of the problem is further increased when dealing with duration minimization rather than travel cost minimization. Hence, constant-time cost evaluation and feasibility checks, for both repair and local search moves, are crucial for the overall performance of our method.
Before proceeding further, let us consider \(\sigma\) as a sequence of arbitrary visits of nodes, which can be task locations, home depots, or the central depot. We denote the node at the \(i^{th}\) position of \(\sigma\) by \(\sigma^{i}\).
We also define the concatenation operator of two or more sub-sequences \(\sigma_{1}\) and \(\sigma_{2}\) as :
\[\sigma=\sigma_{1}\oplus\sigma_{2} \tag{1}\]
Let \(\Delta(\sigma)\), \(\Lambda^{r}(\sigma)\), \(\Lambda^{n}(\sigma)\) and \(\Psi(\sigma)\) be the duration of sequence \(\sigma\), the accumulated renewable resources of sequence \(\sigma\), the accumulated non-renewable resources of sequence \(\sigma\) and the accumulated skills needed by the tasks in sequence \(\sigma\), respectively. We propose in the following the formulas used to perform feasibility checks and move evaluations in constant time.
_Renewable resources._ For each task at position \(i\) in the sequence \(\sigma\), we record the number of times a given type of tool \(t\in T\) is required by the tasks from the start of \(\sigma\) to position \(i\), or until the central depot in the case where it is visited before position \(i\).
The following equations hold.
\[\Lambda_{t}^{r}(\sigma^{i})=\left\{\begin{array}{ll}\Lambda_{t}^{r}(\sigma^ {i-1})+b_{\sigma^{i},t}&if\ \sigma^{k}\neq 0\ \forall k\leq i-1\\ \Lambda_{t}^{r}(\sigma^{i-1})&otherwise\end{array}\right.\]
\[\Lambda^{r}_{t}(\sigma)=\Lambda^{r}_{t}(\sigma^{|\sigma|}).\]
\[\Lambda^{r}_{t}(\sigma)=\left\{\begin{array}{ll}\Lambda^{r}_{t}(\sigma_{1})+ \Lambda^{r}_{t}(\sigma_{2})&if\ 0\notin\sigma_{1}\\ \Lambda^{r}_{t}(\sigma_{1})&otherwise\end{array}\right.\]
Let \(k\in\mathcal{K}\) be the vehicle performing sequence \(\sigma\). \(\sigma\) is unfeasible _if_\(\exists t\in\{1,\ldots,|T|\}\) where \(w^{k}_{t}=0\) and \(\Lambda^{r}_{t}(\sigma)>0\).
_Non-renewable resources_. For each task at position \(i\) in the sequence \(\sigma\), we record the number of times a given type of spare part \(p\in P\) is required by the tasks from the start of \(\sigma\) to position \(i\), or until the central depot in the case where it is visited before position \(i\).
The following equations hold.
\[\Lambda^{n}_{p}(\sigma^{i})=\left\{\begin{array}{ll}\Lambda^{n}_{p}(\sigma^ {i-1})+d_{\sigma^{i},p}&if\ \sigma^{k}\neq 0\ \forall k\leq i-1\\ \Lambda^{n}_{p}(\sigma^{i-1})&otherwise\end{array}\right.\]
\[\Lambda^{n}_{p}(\sigma)=\Lambda^{n}_{p}(\sigma^{|\sigma|}).\]
\[\Lambda^{n}_{p}(\sigma)=\left\{\begin{array}{ll}\Lambda^{n}_{p}(\sigma_{1}) +\Lambda^{n}_{p}(\sigma_{2})&if\ 0\notin\sigma_{1}\\ \Lambda^{n}_{p}(\sigma_{1})&otherwise\end{array}\right.\]
Let \(k\in\mathcal{K}\) be the vehicle performing sequence \(\sigma\). \(\sigma\) is unfeasible _if_ \(\exists p\in\{1,\ldots,|P|\}\) where \(\Lambda^{n}_{p}(\sigma)>v^{k}_{p}\).
_Skills_. For each task in a sequence \(\sigma\), we record the number of times a given skill is required by the tasks since the start of the sequence. For each type of skill \(q\in Q\), the following equations hold. \(\Psi_{q}(\sigma^{i})=\Psi_{q}(\sigma^{i-1})+a_{\sigma^{i},q}\). \(\Psi_{q}(\sigma)=\Psi_{q}(\sigma^{|\sigma|})\). \(\Psi_{q}(\sigma)=\Psi_{q}(\sigma_{1})\) + \(\Psi_{q}(\sigma_{2})\). Let \(k\in\mathcal{K}\) be the technician performing sequence \(\sigma\). \(\sigma\) is unfeasible _if_\(\exists q\in\{1,\ldots,|Q|\}\) where \(y^{k}_{q}=0\) and \(\Psi_{q}(\sigma)>0\).
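The three resource checks above can be implemented directly from the accumulation and concatenation formulas, as in the following sketch; field and function names are ours.

```cpp
#include <vector>

// Accumulated resource requirements of a (sub)sequence, as defined above.
// Tools and spare parts are only accumulated up to the first visit to the central depot.
struct SeqResources {
    std::vector<int> tools;        // Lambda^r_t(sigma), one entry per tool type t
    std::vector<int> parts;        // Lambda^n_p(sigma), one entry per spare-part type p
    std::vector<int> skills;       // Psi_q(sigma), one entry per skill q
    bool visitsDepot = false;      // whether node 0 appears in the sequence
};

// Concatenation rule sigma = sigma1 (+) sigma2: skills always accumulate, while
// tools and spare parts of sigma2 only count if sigma1 does not visit the depot.
SeqResources concat(const SeqResources& a, const SeqResources& b) {
    SeqResources r = a;
    r.visitsDepot = a.visitsDepot || b.visitsDepot;
    for (std::size_t q = 0; q < r.skills.size(); ++q) r.skills[q] += b.skills[q];
    if (!a.visitsDepot) {
        for (std::size_t t = 0; t < r.tools.size(); ++t) r.tools[t] += b.tools[t];
        for (std::size_t p = 0; p < r.parts.size(); ++p) r.parts[p] += b.parts[p];
    }
    return r;
}

// Resource feasibility for a technician: w = tools on board, v = initial spare-part
// inventory, y = skills, following the three conditions stated above.
bool resourceFeasible(const SeqResources& s, const std::vector<bool>& w,
                      const std::vector<int>& v, const std::vector<bool>& y) {
    for (std::size_t t = 0; t < w.size(); ++t) if (!w[t] && s.tools[t] > 0) return false;
    for (std::size_t p = 0; p < v.size(); ++p) if (s.parts[p] > v[p]) return false;
    for (std::size_t q = 0; q < y.size(); ++q) if (!y[q] && s.skills[q] > 0) return false;
    return true;
}
```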
_Time window feasibility_. To perform time window feasibility checks in constant time, we adopted the approach proposed by Kindervater and Savelsbergh (2018) [16], where the authors proposed to compute the _Forward Time Slack_ \(FTS_{i}\) at the \(i^{th}\) position of \(\sigma\), indicating how much delay is possible at this position while maintaining the time-window feasibility of the subsequent visits. Let \(WT_{i}\) and \(h^{\sigma}_{i}\) be the waiting time and the service starting time at the \(i^{th}\) position of \(\sigma\), respectively. We define the total waiting time \(TWT_{ij}\) between \(\sigma_{i}\) and \(\sigma_{j}\), \(i\leq j\), as follows: \(TWT^{\sigma}_{ij}=\sum_{k=i+1}^{j}WT_{k}\).
The forward time slack at the \(i^{th}\) position of \(\sigma\) is defined as: \(FTS^{\sigma}_{i}=\min_{i\leq k\leq|\sigma|}\{TWT^{\sigma}_{ik}+l_{\sigma_{k}}-h^{\sigma}_{k}\}\), where \(h^{\sigma}_{k}\) is the service starting time of the visit at position \(k\) of \(\sigma\). For convenience, we denote the FTS at the starting position as \(FTS^{\sigma}\). The insertion of a task \(r\in\mathcal{R}\) in the \(i^{th}\) position of \(\sigma\) is feasible if: \(s_{r}<l_{r}\quad and\quad Shift_{i}<FTS^{\sigma}_{i}\), where \(s_{r}\) is calculated as follows: \(s_{r}=max\{h^{\sigma}_{i-1}+\delta_{\sigma_{i-1}}+t_{\sigma_{i-1},r},e_{r}\}\) and \(Shift_{i}=\tilde{h}^{\sigma}_{i}-h^{\sigma}_{i}\), where \(\tilde{h}^{\sigma}_{i}=max\{s_{r}+\delta_{r}+t_{r,\sigma_{i}},e_{\sigma_{i}}\}\).
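Since \(TWT^{\sigma}_{ik}=WT_{i+1}+TWT^{\sigma}_{i+1,k}\) for \(k>i\), the forward time slack satisfies the backward recursion \(FTS^{\sigma}_{i}=\min\{l_{\sigma_{i}}-h^{\sigma}_{i},\,WT_{i+1}+FTS^{\sigma}_{i+1}\}\), which the following sketch implements; variable names are ours.

```cpp
#include <algorithm>
#include <vector>

// Sketch of the forward time slack computation: h[i] is the service start time,
// l[i] the latest allowed start, and WT[i] the waiting time at position i of the
// sequence. All FTS values are computed in one backward pass.
std::vector<double> forwardTimeSlack(const std::vector<double>& h,
                                     const std::vector<double>& l,
                                     const std::vector<double>& WT) {
    const int n = (int)h.size();
    std::vector<double> FTS(n);
    if (n == 0) return FTS;
    FTS[n - 1] = l[n - 1] - h[n - 1];
    for (int i = n - 2; i >= 0; --i)
        FTS[i] = std::min(l[i] - h[i], WT[i + 1] + FTS[i + 1]);
    return FTS;
}
```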
_Movement evaluation_. The objective function of the TRSP is to minimize the total duration of the routes. This also includes the possibility of delaying the departure of vehicles from the depots to minimize the total waiting time. We proceed in a similar manner as in Savelsbergh (1992) [26]. Let us first define the maximum delay of the service starting time at the \(i^{th}\) position of \(\sigma\) that neither violates time windows at subsequent visits nor causes any delay at the arrival depot as the _Passive Time Slack_ (PTS) of \(\sigma\), denoted by \(PTS^{\sigma}_{i}\). It is calculated as follows: \(PTS^{\sigma}_{i}=\min\{FTS^{\sigma}_{i},TWT^{\sigma}_{i|\sigma|}\}\). For convenience, we denote the PTS at the starting position as \(PTS^{\sigma}\).
The duration of sequence \(\sigma\) is computed as
\(\Delta(\sigma)=h^{\sigma}_{|\sigma|}+\delta_{\sigma_{|\sigma|}}-h^{\sigma}_{1 }-PTS^{\sigma}\).
To compute the duration of the concatenated sequences, we compute the new PTS and the earliest arrival at the final position of the new sequence. For this purpose, we first define the _allowed backward shift_ of a given sequence \(\sigma\), at a given position \(i\), as the maximum gain in duration at the final depot yielded by shifting the service starting time at position \(i\) of \(\sigma\) backward, assuming of course that the service can start at an earlier time. The _allowed backward shift_ is denoted by \(BS^{\sigma}_{i}\) and computed as follows: \(BS^{\sigma}_{i}=\min\{h^{\sigma}_{i}-e_{\sigma_{i}},BS^{\sigma}_{i+1}\}\). For convenience, we denote the BS at the first position of \(\sigma\) by \(BS^{\sigma}\).
Without loss of generality, we consider the concatenation of three subsequences into a single sequence: \(\sigma=\sigma_{1}\oplus\sigma_{2}\oplus\sigma_{3}\). Let \(Shift^{\sigma_{2}}\) and \(Shift^{\sigma_{3}}\) be, respectively, the shifts at the first elements of \(\sigma_{2}\) and \(\sigma_{3}\) after concatenation. We also denote by \(WT^{\sigma_{2}}\) and \(WT^{\sigma_{3}}\) the new waiting times at the first elements of \(\sigma_{2}\) and \(\sigma_{3}\) after concatenation.
Depending on the value of \(Shift^{\sigma_{3}}\), either positive or negative, we propose the following formulas to compute the duration of \(\sigma\).
If \(Shift^{\sigma_{3}}\geq 0\) :
* \(h^{\sigma}_{|\sigma|}=h^{\sigma_{3}}_{|\sigma_{3}|}+max(0,Shift^{\sigma_{3}}-PTS^{ \sigma_{3}})\)
* \(PTS^{\sigma}=min\{FTS^{\sigma_{1}},min\{TWT^{\sigma_{1}}+WT^{\sigma_{2}}+max\{0, FTS^{\sigma_{2}}-Shift^{\sigma_{2}}\},TWT^{\sigma_{1}}+WT^{\sigma_{2}}+max\{0,TWT^{ \sigma_{2}}-Shift^{\sigma_{2}}\}+WT^{\sigma_{3}}+max(0,PTS^{\sigma_{3}}-Shift^{ \sigma_{3}})\}\}\)
If \(Shift^{\sigma_{3}}<0:\)
* \(h^{\sigma}_{|\sigma|}=h^{\sigma_{3}}_{|\sigma_{3}|}-min(-Shift^{\sigma_{3}},BS^ {\sigma_{3}})\)
* \(PTS^{\sigma}=min\{FTS^{\sigma_{1}},min\{TWT^{\sigma_{1}}+WT^{\sigma_{2}}+max\{0, FTS^{\sigma_{2}}-Shift^{\sigma_{2}}\},TWT^{\sigma_{1}}+WT^{\sigma_{2}}+max\{0,TWT^{ \sigma_{2}}-Shift^{\sigma_{2}}\}+WT^{\sigma_{3}}+max(0,PTS^{\sigma_{3}}-Shift^ {\sigma_{3}}-BS^{\sigma_{3}})\}\}\)
### Speed-up techniques
Local search operators and the best insertion algorithm are the most time-consuming components of the proposed scheme. We propose in the following two speed-up techniques used to reduce the computational burden.
#### 4.8.1 Parallel best insertion algorithm
An efficient way to implement the best insertion algorithm is to consider each route separately. The basic idea is to compute the best feasible insertion of the unscheduled tasks for each route. These feasible insertions, one per route, are then stored in a heap structure that we call HEAP. The insertion process is performed as follows. It starts by selecting the best move (if any) from HEAP. If the task has not yet been inserted, the insertion is performed and the task is marked as fulfilled and removed from the list of unscheduled tasks. The algorithm then computes the feasible insertions of the remaining unscheduled tasks while considering only the last modified route. If feasible moves are found, the best one is selected and pushed to HEAP. This process is iterated until HEAP becomes empty, i.e., no route has a feasible insertion left in HEAP.
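A possible realization of this scheme with a binary heap is sketched below. The per-route best-move computation is abstracted behind a callback, and a negative cost is used to signal that a route has no feasible insertion; both are conventions of this sketch, not of the paper.

```cpp
#include <queue>
#include <vector>

// Candidate insertion: cheapest feasible move currently known for one route.
struct Move { double cost; int route, task, pos; };
struct WorseCost { bool operator()(const Move& a, const Move& b) const { return a.cost > b.cost; } };

// Sketch of the heap-based parallel best insertion described above.
// bestMoveForRoute(route, r, scheduled) returns the cheapest feasible insertion of an
// unscheduled task into route r (cost < 0 if none); insertTask performs the insertion.
template <typename Routes, typename BestMoveFn, typename InsertFn>
void parallelBestInsertion(Routes& routes, std::vector<bool>& scheduled,
                           BestMoveFn bestMoveForRoute, InsertFn insertTask) {
    std::priority_queue<Move, std::vector<Move>, WorseCost> heap;   // min-heap on cost
    for (int r = 0; r < (int)routes.size(); ++r) {                  // one candidate per route
        Move m = bestMoveForRoute(routes[r], r, scheduled);
        if (m.cost >= 0) heap.push(m);
    }
    while (!heap.empty()) {
        Move m = heap.top(); heap.pop();
        if (!scheduled[m.task]) {          // skip stale entries for already-inserted tasks
            insertTask(routes[m.route], m.task, m.pos);
            scheduled[m.task] = true;
        }
        // Recompute a candidate only for the route whose entry was just consumed.
        Move next = bestMoveForRoute(routes[m.route], m.route, scheduled);
        if (next.cost >= 0) heap.push(next);
    }
}
```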
#### 4.8.2 Nearest predecessors
Many irrelevant move evaluations are performed during local search calls. Experimentation showed that most improvements are achieved by local search moves involving the nearest neighbors. To take advantage of this observation, we propose to compute a list of the nearest predecessors for each task. Given an arbitrary task \(i\) and a predecessor \(j\), and assuming that arc \((j,i)\) is feasible, the distance is computed as follows:
\[dist(j,i)=max\{max(0,e_{i}-l_{j}),t_{ji}\}. \tag{2}\]
The predecessors of \(i\) are sorted in non-decreasing order of distance, and only the first \(\chi\) predecessors are considered during local search moves. The value of \(\chi\) was fixed to 30 after experimentation.
The nearest-predecessor lists are mainly used by the inter-route local search operators. For example, the \(2-opt*\) operator, in which arcs \((i,j)\) and \((k,l)\) are interchanged, starts by selecting a task \(j\) and then picks a task \(l\) among the \(\chi\) nearest predecessors, where \(l\) must be scheduled on a route different from that of task \(j\). Move evaluations in \(Swap-Relocate\) and \(Swap-Sequence\) are carried out in the same fashion.
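The construction of the nearest-predecessor lists follows directly from Eq. (2), as sketched below; the arc-feasibility filter mentioned above is omitted for brevity and the names are ours.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Sketch of the nearest-predecessor lists: e and l are the task time windows,
// t the travel-time matrix between task locations, and chi the list size
// (30 in our experiments). pred[i] holds the chi closest predecessors of task i.
std::vector<std::vector<int>> nearestPredecessors(const std::vector<double>& e,
                                                  const std::vector<double>& l,
                                                  const std::vector<std::vector<double>>& t,
                                                  int chi) {
    const int n = (int)e.size();
    std::vector<std::vector<int>> pred(n);
    for (int i = 0; i < n; ++i) {
        std::vector<std::pair<double, int>> cand;
        for (int j = 0; j < n; ++j) {
            if (j == i) continue;
            double d = std::max(std::max(0.0, e[i] - l[j]), t[j][i]);  // dist(j, i) from Eq. (2)
            cand.push_back({d, j});
        }
        std::sort(cand.begin(), cand.end());   // non-decreasing distance
        for (int k = 0; k < chi && k < (int)cand.size(); ++k)
            pred[i].push_back(cand[k].second);
    }
    return pred;
}
```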
## 5 Computational results
We present in this section the computational tests carried out to assess the performance of the proposed eILS. Our algorithm is coded in C++ and runs on a PC with an Intel Core i7 2.6GHz processor and 16 GB RAM. The results of the eILS are compared with those of the parallel ALNS (pALNS) algorithm found in [24].
### Benchmark instances
Benchmark instances for the TRSP are derived from the 56 instances of the Solomon benchmark for the VRPTW [29]. All instances have 100 tasks, which are either randomly distributed (R), clustered (C), or mixed (RC). Each class can be divided into two sub-classes: (1) instances with short time windows and (2) instances with large time windows (i.e., C1, C2, R1, R2, RC1, RC2). For each instance, 25 home depots are added, each associated with a technician and an initial inventory of tools and spare parts. Each technician is associated with a set of skills (five types of skills are considered) and each task is associated with requirements in terms of skills, tools, and spare parts. The depot present in the original VRPTW instances is considered as the central depot used for replenishment. As indicated in the problem definition, the central depot has unlimited inventory, and each technician can carry as many tools and spare parts as needed for their journey without any capacity limitations.
### Parameter settings
In this section, we investigate the impact of several parameters of the eILS on the overall performance. Three parameters were tuned: the number of tasks to remove \(D_{max}\) at each iteration of the IRRP during the intensification phase, the size of the _elite set_ \(N_{pop}\), and the number of iterations \(N_{ils}\) of the eILS. The tested values for each parameter are listed in Table 1.
Twelve instances were arbitrarily selected for this purpose, and each instance was executed 10 times for each combination of parameter values, that is, \(144*10\) runs for each instance. Figures 2, 3 and 4 show the aggregated overall gaps for each combination of the three parameters, whereas Figures 5, 6 and 7 show the variation of the overall computational times according to the values of the same parameters.
Based on this experiment, the selected values of each parameter are listed in Table 2.
\begin{table}
\begin{tabular}{l l c} \hline Parameter & Description & Range \\ \hline \(d_{max}\) & Perturbation parameter in \(IRRP\) & \([5,10,15]\) \\ \hline \(N_{ils}\) & Number of Iterations & \([450,500,550,600]\) \\ \hline \(N_{pop}\) & Size of elite set & \([10,15,20]\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameter settings
Figure 2: Impact of \(D_{max}\) in IRRP on overall gap. Figure 3: Impact of the number of iterations on overall gap
Figure 4: Impact of the population size on overall gap
Figure 5: Impact of the number of iterations on overall computational time
### Sensitivity analysis
In this section, we present a sensitivity analysis of several components of the eILS. We start by investigating the contribution of each perturbation mechanism, and then focus on two components of the eILS: the related-removal operator and the _Swap-Sequence_ local search operator.
#### 5.3.1 Perturbation mechanisms
We focus in this section on the contribution of each of the four perturbation mechanisms (see Section 4.6). We ran our algorithm 10 times while considering several configurations of the eILS. These configurations are the following:
* \(Conf_{1}=ILS1\)
* \(Conf_{2}=ILS1+ILS2\)
* \(Conf_{3}=ILS1+ILS2+ILS3\)
* \(Conf_{4}=ILS1+ILS2+ILS3+ILS4=\) eILS
We provide the following performance indicators for the methods:
* \(CPU\) : the average computational times of the 10 runs per sub-class. In this section, we take as a reference the computational times of \(Conf_{1}\).
* \(GAP\) : the gap between the best objective value of a given method and the best-known solution per sub-class, calculated as follows: \[GAP=\frac{C_{best}^{M}-C_{best}}{C_{best}}\times 100\] (3) where \(C_{best}\) is the best-known solution and \(C_{best}^{M}\) is the best objective value of the 10 runs of either pALNS or eILS.
\begin{table}
\begin{tabular}{c c c c} \hline Parameters & \(d_{max}\) & \(N_{ils}\) & \(N_{pop}\) \\ \hline Value & 10 & 600 & 10 \\ \hline \end{tabular}
\end{table}
Table 2: Parameter settings for the eILS
* \(DEV\) : the deviation of the average results from the best objective value per sub-class. \[DEV=\frac{AVG^{M}-C_{best}^{M}}{C_{best}^{M}}\times 100\] (4) where \(AVG^{M}\) is the average objective value of the ten runs given by the eILS or pALNS.
Table 3 provides performance measures for the configurations presented in this section. It clearly shows the contribution of each ILS version (perturbation mechanism) to the global performance of the eILS. Adding \(ILS2\) to \(ILS1\) (\(Conf_{2}\)) has clearly improved the overall gap (row 2) from 0.243% to 0.21% and the deviation from 0.707% to 0.673%. Interestingly, these substantial improvements have been achieved while maintaining similar computational times. Adding \(ILS3\) in \(Conf_{3}\) has also improved the overall gap from 0.21% to 0.198%, although a slight deterioration has been observed in the deviation from the objective value (row 3) and a substantial increase in computational times by a factor of 1.14. Finally, the full scheme of the eILS has achieved the best global performance in terms of the overall gap, with an associated value of 0.19%. The overall deviation has slightly deteriorated from 0.673% to 0.684%. The computational times have slightly increased by a factor of 1.02 compared to \(Conf_{1}\).
#### 5.3.2 Components sensitivity analysis
In this section, we investigate the contribution of some of the eILS components to the overall performance. We focus our attention on two components, namely, the _related-sequence_ removal operator, referred to as \(SeqRem\), and the _swap-sequence_ local search operator, referred to as \(SwapSeq\). A similar but more complex version of the _related-sequence_ removal operator can be found in [8].
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Parameters & \(Conf_{1}\) & \(Conf_{2}\) & \(Conf_{3}\) & \(Conf_{4}\) \\ \hline \(GAP\)(\%) & 0.243 & 0.21 & 0.198 & 0.19 \\ \hline \(DEV\)(\%) & 0.707 & 0.673 & 0.7 & 0.684 \\ \hline \(CPU\) & 1 & 1 & 1.14 & 1.02 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Sensitivity analysis of perturbation mechanisms
Table 4 shows a comparison of three versions of the eILS. The first version does not include the swap-sequence operator (column 1), whereas the second version does not include the related-sequence removal operator (column 2). The last column of Table 4 shows the results of the eILS including both components. The results clearly show the contribution of both components to the overall results of the eILS. The absence of the _related-sequence_ removal operator (column 2) substantially deteriorates the quality of the solutions obtained by the eILS. More precisely, the percentage gap (\(GAP\)) increases from 0.179% to 0.192%. However, the computational times substantially decrease, by a factor of 0.88 compared to the full eILS.
The same behavior is observed when the _swap-sequence_ local search operator is discarded. The percentage gap goes from 0.179% to 0.210%, whereas the percentage deviation reaches 0.810% compared to 0.629% for the eILS. However, the computational times substantially decrease, by a factor of 0.84 compared to the whole scheme of the eILS.
### Computational results
We conduct experiments to assess the performance of our method. We compare our method with the pALNS presented in [24]. The pALNS was implemented using Java 7 and Gurobi 4.60 on an Ubuntu 11.10 64-bit machine, with an Intel i7 860 processor (4\(\times\)2.8GHz) and 6GB of RAM, using K = 8 subprocesses. To guarantee a fair comparison between the two methods, we adopt the same protocol used in [24], that is, we perform ten random runs of the eILS on each instance tested, and we report the best objective value, the average objective value, and the average computational times.
Table 5 shows a comparison between the two methods.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Parameters & no \(SwapSeq\) & no \(SeqRem\) & eILS \\ \hline \(GAP\)(\%) & 0.210 & 0.192 & 0.179 \\ \hline \(DEV\) (\%) & 0.810 & 0.607 & 0.629 \\ \hline \(CPU\) & 0.84 & 0.88 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Sensitivity analysis for some components of the eILS
We recorded the deviation from the average (\(DEV\)) computed using Eq. (4), the computational times (\(CPU\)) and the gap to the best solution (\(GAP\)) computed using Eq. (3). The computational times and deviation from the average of the pALNS can be found in [24]. Because pALNS is a parallel approach, the authors in [24] reported two computational times: the computational time of the parallel ALNS \(cpu_{1}\) and the computational time of the post-optimization phase (route recombination) \(cpu_{2}\). Hence, we estimate the computational times of a sequential version of pALNS as \(CPU_{pALNS}=8\times cpu_{1}+cpu_{2}\).
The results in Table 5 show no clear dominance between the two methods. However, we notice that for the sub-classes with tight time windows (R1 and RC1), pALNS outperforms the eILS, since it achieves percentage gaps of 0.094% and 0.104%, respectively, against 0.351% and 0.609%. This is mainly explained by the use of a powerful post-optimization approach based on a set covering formulation in [24]. Nevertheless, the eILS improves the best-known solutions of several instances of these classes. In contrast, for subclasses C1, C2, R2, and RC2, which are characterized by relatively large time windows, the eILS outperforms pALNS in terms of percentage gaps and deviation from the average. More precisely, the eILS reaches percentage gaps of 0.054%, 0%, 0.013%, and 0.062% on these subclasses, against percentage gaps of 0.060%, 0.522%, 1.512%, and 1.527% for pALNS.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \multirow{2}{*}{Class} & \multicolumn{3}{c|}{pALNS} & \multicolumn{3}{c}{eILS} \\ \cline{2-7} & \(CPU\) (\(s\)) & \(GAP\) (\%) & \(DEV\) (\%) & \(CPU\) (\(s\)) & \(GAP\) (\%) & \(DEV\) (\%) \\ \hline \(C1\) & 580.9 & 0.060 & 0.23 & 329.97 & 0.054 & 0.291 \\ \hline \(C2\) & 246 & 0.474 & 0.42 & 414.88 & 0 & 0.112 \\ \hline \(R1\) & 731.4 & 0.094 & 0.82 & 343.87 & 0.351 & 0.886 \\ \hline \(R2\) & 290.1 & 1.5 & 1.46 & 382.03 & 0.065 & 0.818 \\ \hline \(RC1\) & 409 & 0.104 & 0.68 & 327.41 & 0.609 & 0.985 \\ \hline \(RC2\) & 238.8 & 1.466 & 1.43 & 369.17 & 0 & 1.111 \\ \hline \(Mean\) & 434.9 & 0.617 & 0.86 & 360.54 & 0.184 & 0.713 \\ \hline \end{tabular}
\end{table}
Table 5: Summary of results of eILS and pALNS
It is noteworthy to mention that the computational times of the eILS are much higher on the subclasses with large time windows. This is mainly owing to the extensive use of local search operators, especially because these types of instances leave room for improving solution costs, whereas for instances with tight time windows, the contribution of local search operators is limited to inter-route operators. As a consequence, the eILS succeeds in improving almost all the instances of C2, R2 and RC2, achieving a total of 34 new best-known solutions (see Appendix A).
Globally, although the eILS fails to improve the best-known solution for a number of instances in classes C1, R1 and RC1, it achieves a better overall gap to best, equal to 0.184%, which is more than three times lower than that of pALNS (0.617%). Regarding computational times, the eILS has an overall CPU time of \(360.54s\), against \(434.9s\) for pALNS.
## 6 Conclusion and perspective
In this paper, we addressed a variant of the workforce scheduling problem called the TRSP. This variant incorporates several constraints found in rich routing problems, such as _time windows_, _multi-depot_, _site-dependent_ and _capacity constraints_. We proposed an enhanced version of the ILS metaheuristic, the eILS, which combines a stack of local search operators, removal heuristics, a best insertion algorithm, and an intensification/diversification mechanism based on an _elite set_ of solutions. The performance of the proposed method was compared with that of a math-heuristic approach from the literature. The eILS achieved excellent results by improving the best-known solution for many benchmark instances.
Several promising research perspectives were identified during the study of the TRSP. This study highlights the need to design exact approaches for the TRSP. Mathematical models, either linear or non-linear, seem not to be suitable and often fail to solve even small instances. This is mainly because of the nature of the objective function, where the duration is minimized instead of the travel costs. The most suitable approach for finding optimal solutions for the TRSP is the branch-and-price method [11]. Another promising direction is the integration of synchronization constraints into the TRSP. This would allow the problem to cover relevant cases where a given task requires the intervention of multiple technicians, either simultaneously or with precedence relations, to perform the maintenance operation. Finally, because the objective function aims at the minimization of the total duration, a promising research direction consists in the elaboration of methods that consider workload balancing.
## Acknowledgment
This work was carried out within the framework of the ELSAT2020 project. The ELSAT2020 project is co-financed by the Hauts-de-France Region and the European Regional Development Fund (ERDF) of the EU.
## CRediT authorship contribution statement
**Ala-Eddine Yahiaoui:** Conceptualization, Methodology, Software, Validation, Writing - original draft, Writing - review & editing. **Sohaib Afifi and Hamid Allaoui:** Review & editing.
|
2304.06826 | Collaboration and topic switches in science | Collaboration is a key driver of science and innovation. Mainly motivated by
the need to leverage different capacities and expertise to solve a scientific
problem, collaboration is also an excellent source of information about the
future behavior of scholars. In particular, it allows us to infer the
likelihood that scientists choose future research directions via the
intertwined mechanisms of selection and social influence. Here we thoroughly
investigate the interplay between collaboration and topic switches. We find
that the probability for a scholar to start working on a new topic increases
with the number of previous collaborators, with a pattern showing that the
effects of individual collaborators are not independent. The higher the
productivity and the impact of authors, the more likely their coworkers will
start working on new topics. The average number of coauthors per paper is also
inversely related to the topic switch probability, suggesting a dilution of
this effect as the number of collaborators increases. | Sara Venturini, Satyaki Sikdar, Francesco Rinaldi, Francesco Tudisco, Santo Fortunato | 2023-04-13T21:30:27Z | http://arxiv.org/abs/2304.06826v1 | # Collaboration and topic switches in science
###### Abstract
Collaboration is a key driver of science and innovation. Mainly motivated by the need to leverage different capacities and expertise to solve a scientific problem, collaboration is also an excellent source of information about the future behavior of scholars. In particular, it allows us to infer the likelihood that scientists choose future research directions via the intertwined mechanisms of selection and social influence. Here we thoroughly investigate the interplay between collaboration and topic switches. We find that the probability for a scholar to start working on a new topic increases with the number of previous collaborators, with a pattern showing that the effects of individual collaborators are not independent. The higher the productivity and the impact of authors, the more likely their coworkers will start working on new topics. The average number of coauthors per paper is also inversely related to the topic switch probability, suggesting a dilution of this effect as the number of collaborators increases.
science of science -- collaboration -- homophily -- topic switches
Footnote †: Corresponding author: [email protected]
Modern science has become increasingly collaborative over the past decades [1]. Large teams have become almost necessary to tackle complex problems in various disciplines, requiring a large pool of knowledge and skills. On the other hand, small teams may introduce novel paradigms [2].
A powerful representation of the collaborative nature of science is given by a collaboration network, in which nodes are authors, and two nodes are connected if they have coauthored at least one paper. With the growing availability of bibliometric data, collaboration networks have been extensively studied, and their structural properties are now well known [3; 4; 5; 6].
Collaboration networks are concrete manifestations of _homophily_ between scholars. People working on the same topic or problem may decide to team up and leverage their respective skills to increase their chances of discovering new results. This is an example of _selection_, in that similar individuals end up interacting with each other.
On the other hand, collaboration could also induce _social influence_, in that scholars might affect the future behavior of their coauthors. Coauthors often expose us to new tools, methods, and theories, even when the latter is not being used for the specific project carried out by the team. The link between diffusion of knowledge and collaboration has been highlighted and explored for some time. For instance, it is known that knowledge flow occurs with a greater probability between scholars who have collaborated in the past [7] and those who are in close proximity in the network [8].
In particular, once scholars discover new research topics, they may decide to work on them in the future. Switches between research interests have become increasingly frequent over time [9] and have been quantitatively investigated [10; 11]. The decision to switch may actually be induced by the coauthors in a social contagion process [12; 13; 14; 15; 16] where scholar \(a\), who spreads the new topic, influences scholar \(b\) to adopt it. For this reason, epidemic models have been applied to describe the diffusion of ideas [17; 18; 19]. In these models, an _infected_ individual \(a\) exposes a _susceptible_ individual \(b\) to a disease with a certain probability of getting infected and continuing the spread. In the case of a topic, the infection spreads if \(b\) works on the new topic.
Here we present an extensive empirical analysis of the relationship between topic switches of scientists and their collaboration patterns. We distinguish active authors, _i.e._, those who have papers on the new topic, from inactive authors who have never published in that area. For simplicity, we focus only on the first-order neighborhoods in the collaboration network. We find that the probability for an inactive scholar to switch topic grows with the productivity and impact of their active coauthors. The larger the average number of inactive coauthors of active scientists, the smaller the effect. Also, the topic-switch probability for an inactive scholar grows with the number of their active coauthors, with a profile suggesting that the contributions of each coauthor are not independent.
## Results
We use the scientific publication dataset OpenAlex [20]. We present the results for twenty topics belonging to three disciplines: Physics, Computer Science, and Biology & Medicine. See Methods A for details.
Our approach is inspired by the pioneering work by Kossinets and Watts on social network evolution [21]. In it, the authors estimated _triadic closure_ of two individuals
\(a\) and \(b\), _i.e._, the probability that \(a\) and \(b\) become acquainted as a function of the number of common friends. They took two snapshots of the network at consecutive time ranges: in the earlier snapshot, one keeps track of all pairs of disconnected people, and in the latter, one counts how many of those pairs become connected. A similar approach has been adopted to compute _membership closure_, _i.e._, the probability that an individual starts participating in an activity having been connected to \(k\) others who participate in it [22]. We now describe how we adapt this framework to measure how collaborations induce topic switches.
Given a scientific topic \(t\), reference year \(T_{0}\), and window size \(T\), we construct two consecutive non-overlapping time ranges spanning years \([T_{0}-T,T_{0})\) and \([T_{0},T_{0}+T)\) respectively. We call the first range the _interaction window_ (IW), where we track author interactions in the collaboration network, and the latter range, the _activation window_ (AW), where we count topic switches. We then identify the set of _active_ authors \(A\) who published papers \(P\) on topic \(t\) during the IW. For example, in Fig. 1A, \(A=\{a_{0},a_{1},a_{4},a_{5}\}\). We construct the collaboration network \(G\) by considering all papers \(P^{\prime}\) written by authors \(a\in A\) after \(a\) becomes active. Note that \(P^{\prime}\) includes papers outside of \(P\), like the ones drawn in gray in Fig. 1A. We classify the non-active authors in \(G\) as _inactive_ authors who are the candidates for topic switches in the AW. They turn active when they publish their first paper on topic \(t\). In Fig. 1B, authors \(a_{2}\), \(a_{3}\), and \(a_{6}\) are inactive, with \(a_{2}\) and \(a_{6}\) becoming active in the AW. Furthermore, we rank each active author \(a\in A\) based on two metrics of scientific prominence: _productivity_ and _impact_, described in the Methods C, and calculated at the end of the IW to capture the current perception of \(a\)'s scholarly output. Finally, for each metric, we identify and mark the authors who rank in the top and the bottom 10%.
Given this general setup, we conduct two complementary experiments that we describe in detail in Sections A and B. In Experiment I, we measure membership closure among inactive authors to quantitatively assess how past collaborations with active authors manifest in topic switches. In Experiment II, we instead focus on the active authors, quantifying the propensity of their inactive coauthors to start working on their topic of expertise.
### Experiment I
Here we investigate membership closure among inactive authors. Specifically, we will answer the following questions:
* How is the probability of topic switches related to \(k\), the number of contacts with active authors?
* Does this probability depend on the relative prominence of the active authors?
To compute the measure, we first must define what constitutes contact with an active author in the IW. We consider two definitions, as described below.
Figure 1: Schematic setup for our analysis. (A) Stream of papers across interaction (IW) and activation (AW) windows. Papers tagged with the focal topic \(t\) are marked in red. (B) Author collaboration graph at the end of IW. Authors \(a_{i}\) and \(a_{j}\) are linked by an edge of weight \(k\) if \(a_{i}\) coauthored \(k\) papers with \(a_{j}\) within the IW. The authors active in the focal topic by the end of IW are marked in red. (C) Focus: inactive authors. Inactive author \(a_{6}\) has four active contacts from three sources \(\{a_{0}\), \(a_{1}\), \(a_{5}\}\) derived from the collaboration graph in (B). (D) Focus: active authors. Active author \(a_{0}\) has four coauthors \(\{a_{1}\), \(a_{2}\), \(a_{3}\), \(a_{6}\}\), of whom \(a_{1}\) is already active, and \(a_{6}\) also collaborated with \(a_{1}\) in the IW. This leaves the subset of exclusive inactive coauthors \(\{a_{2},a_{3}\}\). Within this subset, only \(a_{2}\) becomes active in the AW, resulting in \(a_{0}\)’s source activation probability of \(\frac{1}{2}=0.50\). Additionally, \(a_{2}\) writes their first paper with \(a_{0}\) in the AW.
1. The number of active coauthors, with the same coauthor counted as many times as the number of collaborations. In the collaboration network, this corresponds to the weighted degree when considering only active coauthors.
2. The number of papers written with active coauthors.
For example, in Fig. 1C author \(a_{6}\) has four contacts based on the first definition (two from \(a_{5}\) and one each from \(a_{0}\) and \(a_{1}\)), and two if we use the second (from the second and the fourth papers in the IW). We report the findings based on the first definition in the main text. Results from the second definition do not alter the main conclusions and can be found in SI Figs. S1 and S2.
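Under the same illustrative assumptions as the earlier sketch, the two contact definitions reduce to the following:

```python
# Sketch of the two contact definitions for an inactive author `x`, reusing the
# hypothetical `edges`, `active`, and IW paper structures introduced above.

def contacts_weighted_degree(x, edges, active):
    # Definition 1: sum of edge weights between x and active coauthors.
    return sum(w for pair, w in edges.items()
               if x in pair and (set(pair) - {x}) <= active)

def contacts_joint_papers(x, iw_papers, active):
    # Definition 2: number of IW papers x wrote with at least one active coauthor.
    return sum(1 for p in iw_papers
               if x in p['authors']
               and any(a in active for a in p['authors'] if a != x))
```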
To address the first question, we compute the cumulative _target activation probability_\(C(k)\), _i.e._, the fraction of inactive authors who become active in the AW as a function of the number of contacts \(k\) (see Methods - E). In Fig. 2, we plot \(C(k)\) (in purple) for each of the twenty topics under investigation. Error bars derive from averaging over different time windows for each field (see Methods - D). As expected, we see an increasing trend. In particular, the jump from \(k=0\) to \(k=1\) is remarkable, showing that the probability of _spontaneous_ activation in the absence of previous contacts (\(k=0\)) is much lower
Figure 2: Experiment I. Cumulative target activation probability (in purple) for inactive authors in the AW with shaded 95% confidence intervals. For each \(k\), the \(y\)-value indicates the fraction of inactive authors with at least \(k\) active contacts in the IW who became active in the AW. The green solid line with shaded errors represents the baseline described in the text, corresponding to independent effects from the coauthors. The heatmap below the \(x\)-axis shows the mean difference between the observed and baseline curves for each \(k\) value. It is gray if the 95% confidence interval contains 0, denoting the \(k\)-values where the points are statistically indistinguishable at \(p\)-value 0.05. Positive and negative deviations from the baseline are in red and blue, respectively.
than that of activation through collaboration (\(k\geq 1\)). We observe that the higher the number of contacts, the larger the probability. Most of the growth occurs for low values of \(k\).
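The cumulative quantity itself is a simple fraction over the pool of inactive authors with at least \(k\) contacts; a sketch, again with hypothetical inputs:

```python
# Sketch of the cumulative target activation probability C(k): among inactive authors
# with at least k active contacts in the IW, the fraction that become active in the AW.
# `contacts` maps each inactive author to their contact count; `switched` is the set
# of inactive authors who became active in the AW.

def cumulative_activation(contacts, switched, k_max=10):
    C = {}
    for k in range(k_max + 1):
        pool = [a for a, c in contacts.items() if c >= k]
        C[k] = sum(a in switched for a in pool) / len(pool) if pool else float('nan')
    return C
```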
To put these numbers in context, we consider a simple baseline \(C_{\text{base}}(k)\) (see Methods F) where we assume each contact has a constant, independent probability of producing a topic switch. Within each topic, we compute the difference (see Methods D) between the curves for each value of \(k\) over all reference years and plot them below the \(x\)-axis. Except for the topics of Cluster Analysis, Parallel Computing, and Peptide Sequence, the observed curves deviate from the baseline. This provides empirical evidence that the baseline cannot capture the nuances in the observed data. A positive deviation for the majority of the topics indicates a compounding effect. Fluid Dynamics and Statistical Physics are exceptions, as they undershoot the baseline. This may be because they are broad interdisciplinary fields unlike the others, and having collaborators in different fields may lessen their effect.
Next, we explore the second research question, checking if the contact source's prominence affects activation chances. Recall that in every IW for a topic, we select active authors in the top 10% and the bottom 10% based on productivity and impact. This separates the most prominent active authors from the least prominent. To mitigate confounding effects, we only consider the subset of inactive authors who are neighbors with strictly one of the two sets of active authors.
In Fig. 3, we assess the significance of the difference between the cumulative target activation probabilities for inactive authors in contact with active authors in the two bins. Each row corresponds to a topic, and the color of each square indicates whether the difference is positive (red), negative (blue), or non-significant (grey). The two columns correspond to prominent authors selected based on productivity (left) and impact (right). For productivity, all differences are significant and positive, meaning that contacts with highly productive active authors lead to higher target activation probabilities. For impact, there are a handful of exceptions. Overall, having prominent contacts increases the target activation probability.
### Experiment II
Here we focus on the active authors and their collaborators. For every active author \(a\), we consider the subset of their inactive coauthors who have _exclusively_ collaborated with \(a\) in the IW. We call this set the exclusive inactive coauthors of \(a\). For example, in Fig. 1D, active author \(a_{0}\) has four coauthors \(\{a_{1},a_{2},a_{3},a_{6}\}\), of whom only \(a_{2}\) and \(a_{3}\) exclusively collaborate with \(a_{0}\) in the IW. We do this because effects due to active authors different from \(a\) would be difficult to disentangle and could confound the analysis and the conclusions. The relevant measure here is the _source activation probability_\(P_{s}^{a}\), _i.e._, the fraction of exclusive inactive coauthors who become active in the AW (see Methods G). The fraction controls for the collaboration neighborhood sizes which could vary widely for different scholars. In Fig. 1D, \(P_{s}^{a}\) for \(a_{0}\) is \(\frac{1}{2}=0.5\), as only \(a_{2}\) becomes active in the AW.
For a given set of active authors, we obtain \(C_{s}\), the complementary cumulative probability distribution of their source activation probabilities (see Methods -G). We select the pools of the most and least prominent authors as described in Experiment I. The relative effects of the two groups are estimated by comparing the _cumulative source activations_, _i.e._, points on the respective cumulative distributions at a specific threshold \(f^{*}\). Results are reported in Fig. 4A for a threshold \(f^{*}=0.10\). Our conclusions also hold when considering a threshold \(f^{*}=0.20\), which can be found in SI Fig. S3.
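A sketch of both quantities, assuming a hypothetical mapping from active authors to their exclusive inactive coauthors:

```python
# Sketch of the source activation probability P_s^a and the cumulative source
# activation C_s(f*). `exclusive_inactive` maps each active author to the set of
# their exclusive inactive coauthors in the IW; `switched` is as in earlier sketches.

def source_activation(exclusive_inactive, switched):
    P_s = {}
    for a, coauthors in exclusive_inactive.items():
        if coauthors:
            P_s[a] = sum(c in switched for c in coauthors) / len(coauthors)
    return P_s

def cumulative_source_activation(P_s, f_star=0.10):
    # Fraction of active authors whose source activation probability is >= f_star.
    return sum(p >= f_star for p in P_s.values()) / len(P_s) if P_s else float('nan')
```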
In Fig. 4, each row corresponds to a topic. The green and purple ranges represent the 95% confidence intervals of the mean difference between the cumulative source activations for the two pools of authors for productivity and impact, respectively. For productivity, the difference is significant for all topics but one (Superconductivity).
Figure 3: Heatmaps showing the mean difference between the cumulative target activation probabilities of the inactive authors in the AW who had exclusive contacts with the top 10% and bottom 10% of active authors, respectively, selected according to productivity (left) and impact (right) in the IW. The cells are gray if the 95% confidence interval contains 0. The majority of red cells indicate that the cumulative target activation probabilities for contacts with the top 10% are higher than those with the bottom 10%.
The differences are somewhat less pronounced for impact, but are still significant in most cases.
To further corroborate this finding, we specialize the analysis by checking how many exclusive coauthors of \(a\) also published their first paper on topic \(t\) in the AW with \(a\). This is a way to assess the _chaperoning propensity_ of active authors [23], and we define the measure in Methods - H. In Fig. 4B, we report the 95% confidence intervals of the average difference between the chaperoning propensities for the most prominent and the least prominent active authors for threshold \(f^{*}=0.10\). Similar to Fig. 4A, we find that the more productive/impactful an active author is, the more likely their coauthors will start working with them on a new topic. Results for \(f^{*}=0.20\), which confirm this trend, can be found in SI Fig. S3.
While our analysis clearly shows that prominence is a factor, one may wonder if the number of coauthors also plays a role. We posit that, on average, the more collaborators one has, the more tenuous the contact with any of them will be, resulting in lower source activation probabilities. From each group of most prominent authors, we, therefore, pick the top and the bottom 20% based on the average number of coauthors on papers published with exclusive inactive coauthors. By construction, this excludes any paper written on the focal topic.
In Fig. 5, we perform the same analysis as in Fig. 4A for the two pools of authors described above. We observe that the confidence intervals of the differences lie to the left of zero, _i.e._, are negative. For productivity, all values are significant. For impact, there are only two topics (Chemotherapy and Radiation Therapy) that are not significant. Overall, inactive coauthors of prominent authors with more collaborators have a lower probability of switching topics. This is consistent with the intuition that the interactions with each coauthor are less frequent/strong in that case and, consequently, less effective at inducing topic switches.
## Discussion
Collaboration allows scholars to deepen existing knowledge and be exposed to new ideas. In this paper, we assessed if and how collaboration patterns affect the probability of switching research topics. We determined that the probability for a scholar to start working on a new topic depends on earlier contacts with people already active in that topic. This effect is proportional to the number of contacts, with more contacts resulting in higher probabilities. In most topics, this behavior is distinct from a simple baseline assuming independent effects from the contacts, which likely indicates effects of non-dyadic interactions that prompt further investigation.
Similarly, we measured the probability that inactive coauthors of an active author end up publishing on the new topic, which singles out the effect of the association with that author in the activation process. We stress that, by design, previous interactions between inactive and active authors are limited to works dealing with topics different from the focal topic. Therefore, our analysis suggests that an active author may expose an inactive one to a new topic, even when their interactions do not directly concern that topic. This underlines the social character of scientific interactions, where discussions may deviate from the context that mainly motivates them.
We also checked whether the activation probability depends on some specific features of the active authors. We
Figure 4: Experiment II results for \(f^{*}=0.10\). (A) The mean and 95% confidence interval of the means of the difference between the cumulative source activations of active authors in the top 10% and bottom 10% based on productivity (green) and impact (pink). (B) The mean and 95% confidence interval of the means of the difference between the chaperoning propensities of active authors in the top 10% and bottom 10% based on productivity (green) and impact (pink). A positive difference indicates that the effect is stronger for the top 10% active authors.
found that the more prolific and impactful authors have higher chances of inducing coauthors to switch topics and become coauthors in their first paper on the new topic.
Furthermore, we showed that the larger the number of coauthors of an active author, the lower the chance of a topic switch. This is consistent with a _dilution_ of the influence, resulting from the inability to interact strongly with collaborators when their number is large. To the best of our knowledge, we are disclosing this effect for the first time.
A natural explanation of our findings is that topic switches result from a social contagion process, much like the adoption of new products [14, 24], or the spreading of political propaganda [16]. However, we cannot discount selection effects in observational studies like ours [25]. Having large numbers of active coauthors on a topic may be associated with strong latent homophily between the authors, which may facilitate the future adoption of the topic even without interventions from the active authors.
Our work uses OpenAlex, a valuable open-access bibliometric database. We rely on their author disambiguation and topic classification algorithms to conduct the analyses. These processes are inherently noisy and can introduce implicit biases. In addition, there appears to be incomplete citation coverage which might partly explain why the results for impact are less robust than those for productivity. Future releases of OpenAlex might mitigate these problems. To counter these issues, we repeated our analysis on multiple topics from three distinct scientific disciplines. While the size of the effects varies with the topic, our main conclusions hold across all topics, with very few exceptions.
In conclusion, our work offers a platform for further investigations on the mechanisms driving homophily in science. A thorough understanding of these mechanisms requires effective integration of all factors that may play a role. Besides productivity and impact, topic switches may be affected by the institutional affiliations of those involved. On the one hand, it is plausible that people in the same institution have more chances to interact and affect each other's behavior. On the other hand, collaborations with people from renowned institutions are expected to weigh more in the process. Another discriminating factor could be the number of citations to the collaborator's papers. The higher the number of citations, the closer the association between collaborators. We could also include the scientific affinity between coauthors through the similarity of their papers. Modern neural language models [26, 27] make it possible to embed papers and, consequently, authors in high-dimensional vector spaces, where the distance between two authors is a good proxy for the similarity of their outputs.
## Methods
### Data
We analyze papers from the February 2023 snapshot of the bibliometric dataset OpenAlex: the successor to Microsoft Academic Graph (MAG). We restrict our analysis to papers published between 1990 and 2022 and having at most thirty authors. Papers are tagged with _concepts_ (topics) by a classifier trained on the MAG. We use concept tags to construct snapshots for three fields: Physics, Computer Science (CS), and Biology and Medicine (BioMed). Physics contains 19.7M papers, while CS and BioMed have 27.6M and 43.52M papers, respectively. From these domains, we select seven, six, and seven topics, respectively. We publish the code and associated data on GitHub.
Within each topic, we consider reference years between 1995 and 2018, where the respective interaction and activation windows contain at least 3000 papers. This
Figure 5: Dilution effect results for \(f^{*}=0.10\). The mean and 95% confidence interval of the mean of the difference between the cumulative source activations of active authors in the top 20% and bottom 20% bins, based on the average number of coauthors, among the top 10% active authors in productivity (green) and impact (pink). A negative difference across the topics indicates a _dilution_ effect, wherein coauthors of prominent active scholars with fewer collaborators are more likely to switch topics.
threshold ensures a critical mass of papers and authors to conduct the analyses. Each topic we selected has at least 10 reference years satisfying the constraint. The statistical tests in the manuscript are aggregated over the different reference years. More information is available in SI Tables S1 to S3.
### Overlap coefficient
We use the overlap coefficient to measure the degree of overlap between the different sets of authors picked based on productivity and impact.
\[\text{Overlap}(A,B)=\frac{|A\cap B|}{\min(|A|,|B|)}.\]
In our case, the two sets are the same size, so a score of 10% implies that both sets share 10% of the elements.
### Author ranking metrics
Let \(P\) be the set of papers published on topic \(t\) authored by the set of active authors \(A\) during the interaction window IW. Let \(a\) be an active author whose set of papers written during the IW is \(P_{a}\). We define the following metrics to rank active authors and select the top and bottom 10%.
_Productivity:_ the count of papers \(a\) has authored on topic \(t\) during the IW. More formally, it is the cardinality of the set \(P\cap P_{a}\).
_Impact:_ the average citation count of \(P_{a}\) from the papers in \(P\). We argue that restricting incoming citations to those from \(P\) gives a good proxy for the impact that \(a\) has made on that topic. The average number of citations is a better indicator of excellence than the total citation count [28]. Also, considering the average instead of the sum lowers its correlation with productivity, here measured by the overlap coefficient of Methods B.
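Both metrics reduce to simple set operations; a sketch, with a hypothetical citation mapping:

```python
# Sketch of the two prominence metrics for an active author, computed at the end of
# the IW. `topic_papers` is the set P, `author_papers` the set P_a, and `citations`
# a hypothetical mapping from a paper to the set of papers citing it.

def productivity(topic_papers, author_papers):
    # Number of papers the author wrote on the topic during the IW: |P & P_a|.
    return len(topic_papers & author_papers)

def impact(topic_papers, author_papers, citations):
    # Average number of citations the author's IW papers receive from papers in P.
    if not author_papers:
        return 0.0
    counts = [len(citations.get(p, set()) & topic_papers) for p in author_papers]
    return sum(counts) / len(counts)
```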
For any \(0\leq f\leq 1\), we compute the fraction \(C_{s}(f)\) of all active authors whose source activation probability is greater than or equal to \(f\). \(C_{s}(f)\) is the complementary cumulative probability distribution of the source activation probability \(P_{s}^{a}\). As expected, \(C_{s}(f)\) quickly decreases to 0 with increasing \(f\). Because the curves corresponding to two sets of active authors are effectively indistinguishable at the tail, we compare a pair of points at some threshold \(f^{*}\). We call \(C_{s}(f^{*})\) the _cumulative source activation_.
The choice of the threshold \(f^{*}\) is important. Setting it to 0 or 1 would return the same probability for both sets of authors. It should not also be too small for numerical reasons. For example, if there are only five inactive coauthors, the smallest non-zero fraction cannot be smaller than \(1/5=0.20\). Choosing too high a value instead would lead to weaker statistics. So, we fix the value at 0.10 for the results in the main text (Figs. 4 and 5) and report the results for 0.20 in SI Figs. S3 and S4.
### Chaperoning propensity
Let \(m_{a}\) be the number of exclusive inactive coauthors of an active author \(a\) who become active in the AW, which is the same as the numerator of Eq. (4). Let \(i_{a}\) be the number of those authors who write their first paper on topic \(t\) with \(a\) in the AW. The _chaperoning probability_ of \(a\) is defined as
\[P_{c}^{a}=\frac{i_{a}}{m_{a}}. \tag{5}\]
We define the _chaperoning propensity_\(P_{c}(f)\) corresponding to a specific threshold \(f\in[0,1]\) as the fraction of all active authors with \(P_{c}^{a}\geq f\). We use the aforementioned values of 0.10 (Figs. 4 and 5) and 0.20 (SI Figs. S3 and S4) for the threshold \(f\).
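A sketch of the corresponding computation, with hypothetical count mappings for \(i_{a}\) and \(m_{a}\):

```python
# Sketch of the chaperoning propensity built on Eq. (5): m_counts[a] is the number of
# a's exclusive inactive coauthors who become active in the AW, and i_counts[a] how
# many of those wrote their first paper on the topic together with a.

def chaperoning_propensity(i_counts, m_counts, f=0.10):
    P_c = {a: i_counts[a] / m_counts[a] for a in m_counts if m_counts[a] > 0}
    return sum(p >= f for p in P_c.values()) / len(P_c) if P_c else float('nan')
```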
###### Acknowledgements.
This project was supported by grants from the National Science Foundation (#1927418) and the Air Force Office of Scientific Research (#FA9550-19-1-0354). This research was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute.
|
2303.03721 | Precision theoretical determination of electric-dipole matrix elements
in atomic cesium | We compute the reduced electric-dipole matrix elements
$\langle{nS_{1/2}}||D||{n'P_J}\rangle$ with $n=6,7$ and $n'=6,7,\ldots,12$ in
cesium using the most complete to date ab initio relativistic coupled-cluster
method which includes singles, doubles, perturbative core triples, and valence
triples. Our results agree with previous calculations at the linearized single
double level but also show large contributions from nonlinear singles and
doubles as well as valence triples. We also calculate the normalized ratio
$\xi_{n,n'}\equiv(1/\sqrt{2})\langle{nS_{1/2}}||D||{n'P_{1/2}}\rangle/\langle{nS_{1/2}}||D||{n'P_{3/2}}\rangle$
which is important for experimental determination of matrix elements. The
ratios $\xi_{6,n}$ display large deviations from the nonrelativistic limit
which we associate with Cooper-like minima. Several appendices are provided
where we document the procedure for constructing finite basis sets and our
implementation of the random phase approximation and Brueckner-orbitals method. | H. B. Tran Tan, A. Derevianko | 2023-03-07T08:07:08Z | http://arxiv.org/abs/2303.03721v2 | # Precision theoretical determination of electric-dipole matrix elements in atomic cesium
###### Abstract
We compute the reduced electric-dipole matrix elements \(\langle nS_{1/2}||D||n^{\prime}P_{J}\rangle\) with \(n=6,7\) and \(n^{\prime}=6,7,\ldots,12\) in cesium using the most complete to date _ab initio_ relativistic coupled-cluster method which includes singles, doubles, perturbative core triples, and valence triples. Our results agree with previous calculations at the linearized single double level but also show large contributions from nonlinear singles and doubles as well as valence triples. We also calculate the normalized ratio \(\xi_{n,n^{\prime}}\equiv(1/\sqrt{2})\langle nS_{1/2}||D||n^{\prime}P_{1/2} \rangle/\langle nS_{1/2}||D||n^{\prime}P_{3/2}\rangle\) which is important for experimental determination of matrix elements. The ratios \(\xi_{6,n}\) display large deviations from the nonrelativistic limit which we associate with Cooper-like minima. Several appendices are provided where we document the procedure for constructing finite basis sets and our implementation of the random phase approximation and Brueckner-orbitals method.
## I Introduction
Gauging the accuracy of theoretical determinations of atomic parity-violating (APV) amplitudes [1; 2; 3; 4; 5; 6; 7; 8; 9] generically requires experimental knowledge of three key atomic properties: (i) electric-dipole matrix elements, (ii) magnetic-dipole hyperfine constants, and (iii) atomic energies. While the accuracy of the calculations can be evaluated based on the internal consistency of various many-body approximations and the convergence patterns with respect to the increasing complexity of these approximations, the theory-experiment comparison for known atomic properties remains the key. Indeed, the exact calculations for many-electron atomic systems cannot be carried out in principle, thus leaving the possibility of unaccounted systematic effects. Even for one-electron systems, since the theory is formulated as a perturbation theory in the fine structure constant \(\alpha\), the electron-to-nucleus mass ratio, etc., there are always some unaccounted higher-order contributions. Then, only a sufficiently accurate experiment can provide an "exact" answer.
In \({}^{133}\)Cs, where the most accurate to date APV experiment [10] has been carried out and new experiments [11; 12] are planned, the current goal for _ab initio_ relativistic many-body calculations stands at \(0.1\%\). As we survey the available experimental data, it becomes clear that the weakest link in the theory-experiment comparison is the experimental \(E1\) matrix elements (energies are known with high spectroscopic accuracy and the \({}^{133}\)Cs ground state hyperfine splitting is fixed to an exact number by the definition of the unit of time, the second). The accuracy of available experimental \(E1\) matrix elements is reviewed below, but it is no better than \(0.1\%\). Although Ref. [13] reported \(0.05\%\) accuracies for the \(\langle 7S_{1/2}||D||7P_{J}\rangle\) matrix elements, their determination is indirect and relies on theoretical input. More accurate is a direct determination of the ratio of matrix elements. Rafac and Tanner [14] directly measured the ratio of the cesium \(D\)-line transition strengths, \(\left|\left\langle 6p^{\,2}P_{3/2}||D||6s^{\,2}S_{1/2}\right\rangle\right|^{2}/\left|\left\langle 6p^{\,2}P_{1/2}||D||6s^{\,2}S_{1/2}\right\rangle\right|^{2}=1.9809(9)\), which translates into a \(0.05\%\)-accurate measurement of the ratio of \(E1\) reduced matrix elements. The absorption spectroscopy measurement of the ratio (in contrast to matrix elements) mitigates certain systematic effects, such as dependence on laser power, beam size, collection efficiencies, and detection sensitivities. Another ratio, \(\left\langle 7s^{\,2}S_{1/2}||D||6p^{\,2}P_{3/2}\right\rangle/\left\langle 7s^{\,2}S_{1/2}||D||6p^{\,2}P_{1/2}\right\rangle=1.5272(17)\), was recently measured using a two-color, two-photon excitation technique [15].
Motivated by these experimental developments, the primary goal of this paper is to examine the behavior of the _normalized ratio of reduced dipole matrix elements_ connecting the initial \(nS_{1/2}\) state to the two fine-structure components \(n^{\prime}P_{J}\),
\[\xi_{n,n^{\prime}}\equiv\frac{1}{\sqrt{2}}\frac{\langle nS_{1/2}||D||n^{ \prime}P_{3/2}\rangle}{\langle nS_{1/2}||D||n^{\prime}P_{1/2}\rangle}\,. \tag{1}\]
We will examine the behavior of the ratio \(\xi_{n,n^{\prime}}\) as a function of the final-state principal quantum number \(n^{\prime}\) while fixing the initial state. We have chosen the renormalization factor of \(1/\sqrt{2}\) so that in the nonrelativistic limit \(\xi_{n,n^{\prime}}\to 1\). Generically, one would expect the relativistic correction to be \(\sim(\alpha Z)^{2}\), which evaluates to \(0.16\) for Cs. However, we find that the normalized ratio (1) can substantially deviate from \(1\), signaling a complete breakdown of such an expectation. Moreover, we find that the ratio (1) substantially depends on many-body effects included in the calculations. Such significant deviations are known in photoionization processes for alkali-metal atoms (see, e.g., Ref. [16] and references therein), where the ground state \(nS_{1/2}\) can be ionized into either the \(P_{1/2}\) or \(P_{3/2}\) channel. Due to a phase shift between the \(\varepsilon P_{3/2}\) and \(\varepsilon P_{1/2}\) continuum wave functions of the outgoing electron with energy \(\varepsilon\), a situation may arise in which the \(nS_{1/2}\to\varepsilon P_{1/2}\) transition amplitude vanishes, while the \(nS_{1/2}\to\varepsilon P_{3/2}\) amplitude does not. In this case, the ratio (1) based on discrete-to-continuum matrix elements becomes infinite. This is the origin of the Cooper minimum [17; 18] in photoionization cross-sections. While in our case of bound-bound transitions the Cooper minimum does not occur _per se_, similar logic applies to explaining large deviations of \(\xi_{n,n^{\prime}}\) from 1.
To explore the sensitivity of the ratio (1) to correlation corrections, we use a variety of relativistic many-body methods ranging from the random-phase approximation (RPA) and Brueckner-orbital (BO) methods to the more sophisticated coupled-cluster (CC) techniques.
The secondary goal of this paper is to compile electric-dipole (\(E1\)) matrix elements \(\langle nS_{1/2}||D||n^{\prime}P_{J}\rangle\) with \(n=6,7\) and \(n^{\prime}=6-12\). Our relativistic CC calculations are complete through the fifth order of many-body perturbation theory [19; 20] and include a large class of diagrams summed to all orders. As such, these are the most complete _ab initio_ relativistic many-body calculations of \(E1\) matrix elements in Cs to date. To this end, we extend our earlier coupled-cluster CCSDvT calculations [5] to the next level of computational complexity. The CCSDvT method includes single and double excitations of electrons from the Cs Xe-like core and single, double, and triple excitations of the valence electron. Here we amend the CCSDvT method with a perturbative treatment of core triples: we will use the CCSDpTvT designation for this method, with pT emphasizing the perturbative treatment of core triple excitations.
Our compilation of \(E1\) matrix elements is anticipated to be useful in a variety of applications ranging from determining atomic polarizabilities, light shifts, and magic wavelengths for laser cooling, trapping, and atom manipulation in atomic clocks [21; 22; 23; 24; 25; 26; 27; 28], to evaluating the long-range interaction coefficients \(C_{6}\) and \(C_{8}\) needed in ultra-cold collision physics [29; 30], and finally to suppressing decoherence in quantum simulation, quantum information processing, and quantum sensing [31; 32]. In addition, our matrix elements can lead to more accurate theoretical determination of the \(6S_{1/2}\to 7S_{1/2}\) parity-violating amplitude \(E_{PV}\) and transition polarizabilities. The vector transition polarizability is needed to extract \(E_{PV}\) from experimental results [10], whereas the theoretical value for \(E_{PV}\) facilitates the inference of more fundamental quantities such as the electroweak Weinberg angle [33; 34; 35] and, thereby, improves precision probes of the low-energy electroweak sector of the standard model of elementary particles.
Experimentally, the \(\langle 6S_{1/2}||D||6P_{J}\rangle\) matrix elements are the most accurately known, through a variety of techniques, including time-resolved fluorescence [36; 37], absorption [38], ground-state polarizability [39; 40], and photo-association spectroscopy [41; 29; 42]. The relative uncertainties in these experiments are \(\sim 0.1\%\). Direct absorption measurements of \(\langle 6S_{1/2}||D||7P_{J}\rangle\) yielded results differing at the \(\lesssim 1\%\) level [43; 11; 44], while a more recent experiment comparing the absorption coefficient of the \(6S_{1/2}\to 7P_{J}\) transitions to that of the more precisely known \(6S_{1/2}\to 6P_{1/2}\) line obtained \(\langle 6S_{1/2}||D||7P_{1/2}\rangle\) and \(\langle 6S_{1/2}||D||7P_{3/2}\rangle\) with uncertainties of 0.1% and 0.16%, respectively [45]. The \(\langle 7S_{1/2}||D||7P_{J}\rangle\) matrix elements were determined [46] by combining a measurement of the DC Stark shift of the \(7S_{1/2}\) state [47] with the theoretical value for the ratio of matrix elements \(\langle 7S_{1/2}||D||7P_{3/2}\rangle/\langle 7S_{1/2}||D||6P_{1/2}\rangle\) with uncertainties of 0.15%. An updated measurement of the \(7S_{1/2}\) DC Stark shift [13] reduced the uncertainties in \(\langle 7S_{1/2}||D||7P_{J}\rangle\) to 0.05%. The \(\langle 7S_{1/2}||D||6P_{J}\rangle\) matrix elements were determined from the measured lifetime of \(7S_{1/2}\) [48] and the theoretical value for the ratio \(\langle 7S_{1/2}||D||6P_{3/2}\rangle/\langle 7S_{1/2}||D||6P_{1/2}\rangle\) [49; 50; 46], with uncertainties \(\sim 0.5\%\). More recently, the \(7S_{1/2}\) lifetime and the ratio \(\langle 7S_{1/2}||D||6P_{3/2}\rangle/\langle 7S_{1/2}||D||6P_{1/2}\rangle\) were remeasured with better accuracy, resulting in an improved 0.1% uncertainty in the \(\langle 7S_{1/2}||D||6P_{J}\rangle\) matrix elements [15]. To the best of our knowledge, high-precision experimental results for \(E1\) matrix elements involving states with higher principal quantum numbers are lacking.
There have been many theoretical determinations of \(E1\) matrix elements in Cs (see comparisons in the later sections of this paper). Here we review the ones that are closely related to our CC methodology. In the context of atomic parity violation [5], the \(\langle nS_{1/2}||D||n^{\prime}P_{1/2}\rangle\) matrix elements with \(n=6,7\) and \(n^{\prime}=6,7,8,9\) were calculated using a coupled-cluster approach which included nonlinear singles, doubles, and valence triples (CCSDvT) with an accuracy at the level of 0.2%. A broader study [27] of several Cs atomic properties including lifetimes, matrix elements, polarizabilities, and magic wavelengths, presented a comprehensive list of \(nS_{1/2}\to n^{\prime}P_{1/2,3/2}\) matrix elements with \(n=6-14\) and \(n^{\prime}=6-12\). Although \(E1\) matrix elements between states with lower principal quantum numbers had estimated uncertainties around a few percent, the uncertainties in those involving higher principal quantum numbers were \(\sim 20\%\). It is worth pointing out that the CC method used in Ref. [27] is the linearized version of the CC method limited to singles, doubles, and partial triples (SDpT), to be contrasted with our more complete CCSDpTvT method employed in the present work. The CCSDvT method of Ref. [5] was also less complete as it did not include a treatment of core triple excitations.
This paper is organized as follows. In Sec. II, we present a summary of the methods employed in our computations of \(E1\) matrix elements in Cs. Numerical results are tabulated and discussed in Sec. III. The paper presents several appendices where we document our methods of solving the many-body problem, such as the construction of Dirac-Hartree-Fock basis sets (Appendix A) and Brueckner orbitals (Appendix B), as well as the basis-set implementation of the random-phase approximation (Appendix C). These techniques were used in several earlier papers by our group, and documenting them not only facilitates a reproduction of our results, but can also be useful for the community. Unless specified otherwise, atomic units, \(|e|=m_{e}=\hbar=1\), are used.
## II Theory
In this section, we discuss several _ab initio_ relativistic many-body methods we use to compute the \(E1\) matrix elements. These include the lowest order Dirac-Hartree-Fock (DHF) method, the random-phase approximation (RPA), the Brueckner-orbital (BO) method, the combined RPA(BO) method, and several levels of approximation within the CC method. We also present details of computing "minor" corrections: Breit, QED, semiempirical, and numerical basis-extrapolation corrections.
### Basics
We begin by considering the Dirac Hamiltonian of the atomic electrons propagating in the nuclear potential \(\sum_{i}V_{\text{nuc}}(r_{i})\). Here, \(i\) ranges over all the \(N=55\) electrons of the Cs atom. The full electronic Hamiltonian \(H\) may be decomposed into
\[\begin{split} H&=\sum_{i}h_{0}(i)+V_{c}\,,\\ h_{0}(i)&=c\mathbf{\alpha}_{i}\cdot\mathbf{p}_{i}+m_{e}c^{2}\beta_{i}\\ &+V_{\text{nuc}}(r_{i})+h_{W}(r_{i})+U(r_{i})\,,\\ V_{c}&=\frac{1}{2}\sum_{i\neq j}\frac{e^{2}}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}-\sum_{i}U(r_{i})\,,\end{split} \tag{2}\]
where \(U(r_{i})\) is chosen to be the conventional frozen-core \(V^{N-1}\) DHF potential as it dramatically reduces the number of many-body perturbation theory (MBPT) diagrams. For brevity, we suppressed the positive-energy projection operators for the two-electron interactions (no-pair approximation). See the textbook [51] for further details.
As usual, we assume that the energies \(\varepsilon_{i}\) and orbitals \(\psi_{i}\) of the single-electron DHF Hamiltonian \(h_{0}\) are known. In Appendix A, we discuss the construction of the DHF \(B\)-spline basis sets used in our numerical work. These basis sets approximate the spectrum of \(h_{0}\) and are numerically complete. With the complete spectrum of \(h_{0}\) determined, the many-body eigenstates \(\Psi\) of \(H\) are then expanded over antisymmetrized products of the one-particle orbitals \(\psi_{i}\). In MBPT, one obtains these eigenstates by treating the residual \(e^{-}e^{-}\) interaction \(V_{c}\) as a perturbation. Second quantization and diagrammatic techniques considerably streamline the MBPT derivations. To this end, we first express Eq. (2) in terms of \(a_{i}^{\dagger}\) and \(a_{i}\), the creation and annihilation operators associated with the one-particle eigenstate \(\psi_{i}\) of \(h_{0}\). We will follow the indexing convention that core orbitals are labeled by the letters at the beginning of alphabet \(a,b,c,\dots\), while valence electron orbitals are denoted by \(v,w,\dots\), and the indices \(i,j,k,\dots\) refer to an arbitrary orbital, core or excited (including valence states). The letters \(m,n,p,\dots\) are reserved for those orbitals unoccupied in the core (these could be valence orbitals).
In the second quantization formalism, the DHF Hamiltonian \(H_{0}\) and the residual \(e^{-}e^{-}\) interaction read
\[\begin{split} H_{0}&=\sum_{i}\varepsilon_{i}N[a_{i} ^{\dagger}a_{i}]\,,\\ V_{c}&=\frac{1}{2}\sum_{ijkl}g_{ijkl}N[a_{i}^{ \dagger}a_{j}^{\dagger}a_{k}a_{l}]\,,\end{split} \tag{3}\]
where \(N[\cdots]\) denotes normal ordering of operator products and the Coulomb matrix elements are
\[g_{ijkl}\equiv\int\frac{d^{3}r_{l}d^{3}r_{2}}{|\mathbf{r}_{1}-\mathbf{r}_{2}| }\psi_{i}^{\dagger}(\mathbf{r}_{1})\psi_{j}^{\dagger}(\mathbf{r}_{2})\psi_{k} (\mathbf{r}_{1})\psi_{l}(\mathbf{r}_{2})\,. \tag{4}\]
The zero-order wave function may be expressed as \(|\Psi_{v}^{(0)}\rangle=a_{v}^{\dagger}|0_{c}\rangle\), where \(|0_{c}\rangle\) represents the filled Fermi sea of the atomic core (quasivacuum state). We are interested in a matrix element of a one-electron operator \(Z=\sum_{ij}z_{ij}a_{i}^{\dagger}a_{j}\) between two valence many-body states \(|\Psi_{w}\rangle\) and \(|\Psi_{v}\rangle\), \(Z_{wv}\). The first-order contribution to the \(Z_{wv}\) is given by
\[Z_{wv}^{(1)}=\langle\Psi_{w}^{(0)}|Z|\Psi_{v}^{(0)}\rangle=z_{wv}+\delta_{wv} \sum_{a}z_{aa}\,. \tag{5}\]
For the \(E1\) matrix elements, the sum over core orbitals vanishes due to selection rules, and \(Z_{wv}^{(1)}\) reduces to the DHF value of the matrix element \(z_{wv}\).
The second-order MBPT correction to matrix elements reads
\[Z_{wv}^{(2)}=\sum_{an}\frac{z_{an}\tilde{g}_{wnva}}{\varepsilon_{a}-\varepsilon _{n}-\omega}+\sum_{an}\frac{\tilde{g}_{wavn}z_{na}}{\varepsilon_{a}-\varepsilon _{n}+\omega}\,, \tag{6}\]
where \(\omega\equiv\varepsilon_{w}-\varepsilon_{v}\) and \(\tilde{g}_{ijkl}\equiv g_{ijkl}-g_{ijlk}\).
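To illustrate how such a sum is evaluated over a finite basis set (see the discussion of pseudospectra below), here is a minimal Python sketch of Eq. (6). The arrays of matrix elements, Coulomb integrals, and energies are hypothetical placeholders with names of our choosing, and the angular reduction performed analytically in the actual codes is not shown.

```python
# Minimal finite-basis sketch of Eq. (6); z, gt, and eps are hypothetical NumPy arrays
# (or tuple-keyed mappings) holding z_{ij}, the antisymmetrized Coulomb integrals
# g~_{ijkl} = g_{ijkl} - g_{ijlk}, and the DHF energies; `core` and `excited` are
# lists of orbital indices for the pseudospectrum.

def z2_correction(w, v, z, gt, eps, core, excited):
    omega = eps[w] - eps[v]
    total = 0.0
    for a in core:
        for n in excited:
            total += z[a, n] * gt[w, n, v, a] / (eps[a] - eps[n] - omega)
            total += gt[w, a, v, n] * z[n, a] / (eps[a] - eps[n] + omega)
    return total
```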
One may separate the third-order correction \(Z_{wv}^{(3)}\) into different classes of diagrams [52]
\[Z_{wv}^{(3)}=Z_{wv}^{3,\text{RPA}}+Z_{wv}^{3,\text{BO}}+Z_{wv}^{\text{SR}}+Z_ {wv}^{\text{Norm}}\,. \tag{7}\]
The RPA and BO terms are discussed in Secs. II.3 and II.2. These corrections typically dominate the third-order contributions. Expressions for the structural radiation (SR) and normalization (Norm) terms can be found in Ref. [52]. We do not, however, include the SR and Norm diagrams in our MBPT calculations. Their contributions, as well as higher-order ones, are more systematically accounted for in the CC approach described in Sec. II.4.
Fourth-order diagrams \(Z_{wv}^{(4)}\) have been computed in Refs. [53; 54]. These are subsumed in the CCSDvT method and we do not compute them explicitly. We are not aware of any work tabulating the fifth-order MBPT contributions. Due to the exploding number of diagrams with increasing MBPT order, such contributions are more elegantly accounted for using all-order diagrammatic summation techniques.
Diagrammatic techniques enable summing certain classes of diagrams to all orders. For example, the RPA
method, discussed in Sec. II.3, incorporates the second-order \(Z_{wv}^{(2)}\), third-order \(Z_{wv}^{3,\text{RPA}}\), and all higher-order diagrams of the similar topological structure. Similar considerations apply to the BO method (see Sec. II.2). The CCSDvT (and, by extension, the more sophisticated CCSDpTvT) method sum even larger classes of MBPT diagrams to all orders. The CCSDvT method is complete through the fifth order of conventional MBPT [19; 20]; it starts missing certain diagrams in the sixth order.
Finally, as a matter of practical implementation, the MBPT expressions, like Eq. (6), involve summations over the core and the excited orbitals. Each orbital \(\psi_{i}\) is characterized by a principal quantum number \(n_{i}\), orbital angular momentum \(\ell_{i}\), a total angular momentum \(j_{i}\), and its projection \(m_{i}\). The sums over the magnetic quantum numbers \(m_{i}\) are carried out analytically using the rules of Racah algebra. Although the sums over \(j_{i}\) are infinite, they are restricted by angular momentum selection rules which reduce the number of surviving terms. Moreover, the sums over total angular momenta converge well and in practice, it suffices to sum over a few lowest values of \(j_{i}\). The sums over the principal quantum numbers \(n_{i}\) involve, on the other hand, summing over the infinite discrete spectrum and integrating over the continuum. In the finite-basis-set method, employed in our work, these infinite summations are replaced by summations over a finite-size pseudospectrum [55; 56; 57; 58].
The basis orbitals in the pseudospectrum are obtained by placing the atom in a sufficiently large cavity and imposing boundary conditions at the cavity wall and at the origin (see Ref. [58] for further details on dual-kinetic-basis \(B\)-spline sets used in our paper). For each value of \(j_{i}\), one then finds a discrete set of \(2M\) orbitals, \(M\) from the Dirac sea and the remaining \(M\) with energies above the Dirac sea threshold (conventionally referred to as "negative" and "positive" energy parts of the spectrum in analogy with free-fermion solutions). This enables a straightforward implementation of the positive-energy spectrum projection operators in the no-pair approximation.
If the size of the cavity is large enough, typically about \(40a_{0}/Z_{\text{eff}}\) where \(a_{0}\) is the Bohr radius and \(Z_{\text{eff}}\) is the effective charge of the core felt by the valence electrons, the low-lying basis-set orbitals map with a good accuracy to the discrete orbitals of the exact DHF spectrum obtained with the conventional finite-differencing techniques. Higher-energy orbitals do not closely match their physical counterparts due to confinement and discretization (see Sec. III and Appendix A). Nevertheless, since the pseudospectrum is numerically complete, in the sense that any function satisfying the boundary conditions imposed by the cavity can be expanded in terms of the basis functions, it can be used instead of the real spectrum to evaluate correlation corrections to states confined to the cavity. Theoretically, in the limit where the cavity size and the number of basis functions, \(M\), go to infinity, one recovers the physical problem. The increasing computational cost associated with increasing \(M\) means, however, that in practice, finite but reasonably large values of cavity radius and basis-set size are chosen and numerical errors due to these finite values are estimated by extrapolating to the infinite basis (see Sec. II.5.5 for more details). From now on, all single-particle DHF orbitals \(\psi_{i}\) are understood to be members of a finite basis set. Details on our construction of the \(B\)-spline basis set are presented in Appendix A.
### Brueckner-orbital method
Qualitatively, the BO correction accounts for a process where the valence electron charge polarizes the atomic core, inducing a dipole and higher-rank multipolar moments in the core. The valence electron is then attracted by the induced redistribution of charges in the core, reducing the size of the valence electron's orbit. This process is included in a generic model-potential formulation by adding a relevant self-energy operator \(\Sigma(\mathbf{r})\) to the valence electron Hamiltonian
\[\Sigma^{\text{m.p.}}(\mathbf{r})=-\alpha_{c}/(2r^{4})\,, \tag{8}\]
with \(\alpha_{c}\) being the electric-dipole polarizability of the core.
Note that since \(\Sigma^{\text{m.p.}}(\mathbf{r})\) diverges at small distances, higher multipole contributions are needed for states with low orbital angular momenta and may be more systematically accounted for in a more involved many-body formulation of the self-energy operator. For example, to second order, the matrix element of \(\Sigma\) between arbitrary orbitals \(i\) and \(j\) is given by [59]
\[\Sigma^{(2)}_{ij}(\varepsilon_{0})=\sum_{amn}\frac{g_{aimn}\tilde{g}_{mnaj}}{ \varepsilon_{a0}-\varepsilon_{mn}}+\sum_{abm}\frac{\tilde{g}_{miab}g_{abmj}}{ \varepsilon_{m0}-\varepsilon_{ab}}\,, \tag{9}\]
where \(g\) and \(\tilde{g}\) are the Coulomb matrix elements as defined in Eq.(4) and after Eq. (6). Here we use the shorthand notation \(\varepsilon_{i_{1},i_{2}}\equiv\varepsilon_{i_{1}}+\varepsilon_{i_{2}}\), with \(\varepsilon_{0}\) being some reference energy (see Appendix B for details). We employ Eq. (9) in our calculations. In particular, the diagonal matrix elements \(\Sigma_{vv}(\varepsilon_{v})\) are simply the second-order MBPT correction to the energy of valence state \(v\). The multipolar expansion of \(\Sigma^{(2)}(\varepsilon_{v})\) in the limit of the valence electron being far away from the core recovers the model potential expression (8).
The Brueckner orbitals \(u\) and corresponding energies are determined by solving the eigenvalue equation with both the DHF Hamiltonian \(h_{0}\) and the self-energy operator \(\Sigma\) included:
\[\left(h_{0}+\Sigma^{(2)}(\varepsilon_{0})\right)u=\varepsilon^{\text{BO}}u\,. \tag{10}\]
Our numerical approach to solving this eigenvalue equation is discussed in Appendix B; we solve the matrix eigenvalue problem using the DHF finite basis set. With the BO orbitals determined, the matrix element is simply \(\langle u_{w}|z|u_{v}\rangle\), which includes the DHF value, third-order \(Z_{wv}^{(3),\text{BO}}\) contribution, and higher-order corrections.
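As an illustration (not our production code), Eq. (10) reduces in the finite basis to an ordinary matrix eigenvalue problem; a minimal sketch within a single partial wave, with all names being our own assumptions:

```python
# Minimal sketch of solving Eq. (10) in the finite DHF basis: within one partial wave,
# h0 is diagonal with the DHF energies `eps_dhf`, and `sigma` is the hypothetical matrix
# of Sigma^(2)_{ij}(eps_0) evaluated in the same basis.
import numpy as np

def brueckner_orbitals(eps_dhf, sigma):
    h = np.diag(eps_dhf) + sigma
    # Sigma need not be exactly symmetric, so use a general eigensolver.
    e_bo, coeffs = np.linalg.eig(h)
    order = np.argsort(e_bo.real)
    # Columns of `coeffs` give the expansion of the BO orbitals over the DHF basis.
    return e_bo[order].real, coeffs[:, order]
```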
### The random-phase approximation
Detailed introductions to the formalism of the RPA can be found in Refs. [60, 61]. The RPA is a linear-response theory realized in the self-consistent mean-field (DHF in our case) framework. Qualitatively, it accounts for the screening of the externally applied electric field (e.g., a driving laser field oscillating at the transition frequency \(\omega\)) by the core electrons. The RPA formalism is an all-order method and offers a distinct advantage of being gauge independent in computations of transition amplitudes.
The third-order RPA term in Eq. (7) is structurally similar to \(Z_{wv}^{(2)}\) and can be grouped with it. It may be shown that topologically similar diagrams exist in higher-order MBPT corrections [62]. When all these diagrams are included, one obtains the RPA corrections similar in form to the second-order Eq. (6). In the RPA, one first computes the "core-to-excited" matrix elements \(z_{an}^{\rm RPA}\) and \(z_{na}^{\rm RPA}\) (RPA vertices) [52]
\[z_{an}^{\rm RPA} =z_{an}+\sum_{bm}\frac{z_{bm}^{\rm RPA}\tilde{g}_{nmnb}}{\varepsilon _{b}-\varepsilon_{m}-\omega}+\sum_{bm}\frac{\tilde{g}_{abnm}z_{mb}^{\rm RPA}}{ \varepsilon_{b}-\varepsilon_{m}+\omega}\,, \tag{11}\] \[z_{na}^{\rm RPA} =z_{na}+\sum_{bm}\frac{z_{bm}^{\rm RPA}\tilde{g}_{nmnb}}{ \varepsilon_{b}-\varepsilon_{m}-\omega}+\sum_{bm}\frac{\tilde{g}_{nbnm}z_{mb}^{ \rm RPA}}{\varepsilon_{b}-\varepsilon_{m}+\omega}\,. \tag{12}\]
Once the RPA vertices are obtained, the RPA matrix element between two valence states is given by
\[Z_{wv}^{\rm RPA}=\sum_{an}\frac{z_{an}^{\rm RPA}\tilde{g}_{wnva}}{\varepsilon_{a}-\varepsilon_{n}-\omega}+\sum_{an}\frac{\tilde{g}_{wavn}z_{na}^{\rm RPA}}{\varepsilon_{a}-\varepsilon_{n}+\omega}\,. \tag{13}\]
Our numerical finite-basis-set implementation of the RPA is described in Appendix C.
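Schematically, the self-consistent solution of Eqs. (11) and (12) amounts to a simple fixed-point iteration. The sketch below, written under our own naming assumptions, leaves the evaluation of the right-hand sides abstract, since it depends on the stored Coulomb integrals and the angular reduction:

```python
# Schematic sketch of the iterative solution of the RPA vertex equations: starting
# from the bare DHF vertices, the right-hand sides of Eqs. (11) and (12) are
# reevaluated until self-consistency. `update_vertices` stands for one such evaluation.

def solve_rpa_vertices(z_bare, update_vertices, tol=1e-10, max_iter=200):
    z_rpa = dict(z_bare)                    # initial guess: bare matrix elements
    for _ in range(max_iter):
        z_new = update_vertices(z_rpa)      # evaluate the RHS of Eqs. (11) and (12)
        err = max(abs(z_new[k] - z_rpa[k]) for k in z_new)
        z_rpa = z_new
        if err < tol:
            break
    return z_rpa
```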
An iterative solution of Eqs. (11) and (12) recovers the conventional MBPT diagrams order-by-order, but starts missing contributions in the third order. Among the missing third-order diagrams, the dominant correlation correction is usually \(Z_{wv}^{(3),\rm BO}\), coming from Brueckner orbitals (see Sec. II.2). To include the important BO correction in the RPA framework, we will also use a basis of Brueckner orbitals (instead of the DHF orbitals) in solving the RPA equations; we will denote such results as RPA(BO). The conventional RPA results using the DHF basis will be denoted RPA(DHF).
### The coupled-cluster method
The task of accounting for higher-order MBPT corrections can be systematically carried out by means of the CC method [63, 64], which we discuss in this section. Ultimately, we will employ the CCSDvT and the CCSDpTvT methods which are known to be complete through the fourth order of MBPT for energies and through the fifth order for matrix elements [19, 20].
We begin by going back to the second-quantized form of the full electronic Hamiltonian \(H\), Eq. (3),
\[H =H_{0}+G\] \[=\sum_{i}\varepsilon_{i}N[a_{i}^{\dagger}a_{i}]+\frac{1}{2}\sum_{ ijkl}g_{ijkl}N[a_{i}^{\dagger}a_{j}^{\dagger}a_{l}a_{k}]\,. \tag{14}\]
It may be shown that the exact many-body eigenstate \(|\Psi_{v}\rangle\) of \(H\) may be represented as
\[|\Psi_{v}\rangle =N[\exp(K)]\,|\Psi_{v}^{(0)}\rangle\] \[=\left(1+K+\frac{1}{2!}N[K^{2}]+\ldots\right)\,|\Psi_{v}^{(0)} \rangle\,, \tag{15}\]
where \(|\Psi_{v}^{(0)}\rangle\) is the lowest-order DHF state and the cluster operator \(K\) is expressed in terms of connected diagrams of the wave operator [65]
\[K =S_{c}+D_{c}+T_{c}+S_{v}+D_{v}+T_{v}+\ldots\] \[=\sum_{ma}\rho_{ma}a_{m}^{\dagger}a_{a}+\frac{1}{2!}\sum_{mnab}\rho_{mnab}a_{m}^{\dagger}a_{n}^{\dagger}a_{b}a_{a}\] \[+\frac{1}{3!}\sum_{mnrabc}\rho_{mnrabc}a_{m}^{\dagger}a_{n}^{\dagger}a_{r}^{\dagger}a_{c}a_{b}a_{a}\] \[+\sum_{m\neq v}\rho_{mv}a_{m}^{\dagger}a_{v}+\frac{1}{2!}\sum_{mna}\rho_{mnva}a_{m}^{\dagger}a_{n}^{\dagger}a_{a}a_{v}\] \[+\frac{1}{3!}\sum_{mnrab}\rho_{mnrvab}a_{m}^{\dagger}a_{n}^{\dagger}a_{r}^{\dagger}a_{b}a_{a}a_{v}+\ldots\,. \tag{16}\]
Here \(S_{v}\), \(D_{v}\), and \(T_{v}\) (\(S_{c}\), \(D_{c}\), and \(T_{c}\)) are the valence (core) singles, doubles, and triples, expressed in terms of the creation and annihilation operators \(a_{i}^{\dagger}\) and \(a_{i}\). By substituting Eqs. (15) and (16) into the Bloch equation specialized for univalent systems [54], we obtain a set of coupled algebraic equations for the cluster amplitudes \(\rho\). We solve the CC equations numerically using finite basis sets, obtaining, as a result, the cluster amplitudes \(\rho\) and the correlation corrections to the valence electron energies \(\delta E_{v}\).
The explicit form of these equations depends on the level of approximation at which one chooses to operate. For example, one may truncate the expansion (16) at doubles and the expansion (15) at the term linear in \(K\). The resulting linear singles-doubles approximation is conventionally labeled "SD". If one chooses to retain only singles and doubles but all nonlinear terms in Eq. (16), one obtains the nonlinear singles-doubles approximation, labeled "CCSD". Contributions from core triples may be partially accounted for by considering their perturbative effects on core singles and doubles, corresponding to the "CCSDpT" method. In this work, we will employ both the "CCSDvT" and "CCSDpTvT" methods, which include the valence triples, corresponding to the term \(T_{v}\) in Eq. (16), on top of the core CCSD and CCSDpT. The topological structure and explicit form of the Bloch equations in these approximations may be found in Refs. [20, 66].
Once the cluster amplitudes \(\rho\) and thus the many-body wave functions for two valence states \(v\) and \(w\) have been obtained, one may evaluate the \(E1\) matrix element between \(w\) and \(v\) using
\[D_{wv}=\frac{\langle\Psi_{w}|\sum_{ij}d_{ij}a_{i}^{\dagger}a_{j}|\Psi_{v}\rangle }{\sqrt{\langle\Psi_{w}|\Psi_{w}\rangle\langle\Psi_{v}|\Psi_{v}\rangle}}\,, \tag{16}\]
where \(d_{ij}\equiv\langle i|d|j\rangle\) are the single-electron \(E1\) matrix elements. The corresponding expressions for different contributions to \(D_{wv}\) are given in Refs. [50] and [19]. Note that these expressions include only linear single, linear double, and linear triple contributions to \(D_{wv}\). Additional modifications to \(D_{wv}\) due to the nonlinear single and double terms in the CC wave functions are accounted for by the "dressing" of lines and vertices [67]. See Sec. II.5.2 for more details.
### Other corrections
#### ii.5.1 Semiempirical scaling
Since our most complete CCSDpTvT method is still an approximation, we miss certain correlation effects (due to our perturbative treatment of core triples and omission of core and valence quadruple and higher-rank excitations). This is the cause of the difference between the computed and experimental energies. To partially account for the missing contributions in calculations of matrix elements, we additionally correct the CCSDpTvT wave functions using a semiempirical procedure suggested in Ref. [3].
This approach is based on the observation that there exists a nearly linear correlation between the variations of correlation energies and matrix elements in different approximations. This linear dependence is due to the effect of self-energy correction, which gives rise to one of the dominant chains of diagrams present in both matrix elements and energies. For example, for triple excitations, the corrections \(S_{v}[T_{v}]\) and \(\delta E_{v}[T_{v}]\) (in the notations of Ref. [19]) arise from the same diagram and the modification of singles due to triples (\(S_{v}[T_{v}]\)) propagates into the calculation of the matrix element.
More specifically, a dominant contribution to the majority of matrix elements comes from the BO-like term involving valence singles (following the notation of Ref. [50])
\[Z_{wv}^{(c)}=\sum_{m}z_{wm}\rho_{mv}+z_{mv}\rho_{mw}\,. \tag{17}\]
One may connect the CC \(Z_{wv}^{(c)}\) diagram to a BO-basis matrix element \(\langle u_{w}|z|u_{v}\rangle\) via \(u_{v}=\sum_{m}\rho_{mv}\psi_{m}\), with \(\rho_{mv}\) being the expansion coefficients over DHF basis set \(\{\psi_{m}\}\) (see Sec. II.2). Missing corrections to \(Z_{wv}^{(c)}\) due to higher-rank CC excitations may be partially accounted for by improving the values of the valence single coefficients \(\rho_{mv}\). This is achieved by noting that the correlation energy and single amplitudes are closely related. Indeed, the self-energy operator \(\Sigma\) defined in Sec. II.2 is connected to the valence singles via
\[(\varepsilon_{v}-\varepsilon_{m}+\delta E_{v}^{\rm CC})\rho_{mv}=\Sigma_{mv}\,, \tag{18}\]
where \(\delta E_{v}^{\rm CC}\) is the correlation energy computed at the given level of CC approximation (and approaches the true correlation energy in the complete, yet practically unattainable for Cs, treatment). Notice that the role of \(\delta E_{v}^{\rm CC}\) on the left-hand side of Eq. (18) is suppressed as typically \(|\delta E_{v}^{\rm CC}|\ll|\varepsilon_{v}-\varepsilon_{m}|\). More importantly, the diagonal matrix element of \(\Sigma\) is the correlation correction to the energy of valence state \(v\), \(\delta E_{v}^{\rm CC}=\Sigma_{vv}\). As a result, contributions from higher-order diagrams to the right-hand side of Eq. (18) are similar to those to the correlation energy.
This observation suggests rescaling the valence single coefficients as [50]
\[\rho_{mv}^{\prime}=\rho_{mv}\frac{\delta E_{v}^{\rm expt}}{\delta E_{v}^{\rm CC }}\,, \tag{19}\]
where \(\delta E_{v}^{\rm expt}\) and \(\delta E_{v}^{\rm CC}\) are the experimental and computed correlation energies at a given level of CC approximation, respectively. Note that a consistent definition of the experimental correlation energies requires removing the Breit, QED, and basis extrapolation corrections (see Sec. II.5.5 below) from the experimental energy, i.e.,
\[\delta E_{v}^{\rm expt} = E_{v}^{\rm expt}-E_{v}^{\rm DHF}-\delta E_{v}^{\rm Breit} \tag{20}\] \[- \delta E_{v}^{\rm QED}-\delta E_{v}^{\rm extrapol}\,.\]
We have removed the basis extrapolation correction \(\delta E_{v}^{\rm extrapol}\) from the experimental energy because the extrapolation correction to matrix elements is computed separately.
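In practice, the rescaling is a one-line operation once the correlation energies are known; a minimal sketch with hypothetical numerical inputs (atomic units):

```python
# Minimal sketch of the semiempirical rescaling of Eqs. (19) and (20): the valence
# single amplitudes are scaled by the ratio of the "experimental" to the computed
# correlation energy. All inputs are hypothetical placeholders.

def rescale_valence_singles(rho_mv, dE_cc, E_expt, E_dhf, dE_breit, dE_qed, dE_extrapol):
    dE_expt = E_expt - E_dhf - dE_breit - dE_qed - dE_extrapol   # Eq. (20)
    scale = dE_expt / dE_cc                                      # Eq. (19)
    return {m: rho * scale for m, rho in rho_mv.items()}
```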
It is worth emphasizing, however, that the linear scaling of matrix elements with correlation energy is only approximate and can be used in the semiempirical fits only to a certain accuracy. For example, as will be discussed in Sec. III, scaling at the singles and doubles (SD) level generally does not necessarily produce a result compatible with that obtained using a more complete method, say CCSDvT or CCSDpTvT, partially because these methods include additionally a direct valence triples correction to matrix elements (systematic shifts in the language of experimental physics). Similarly, the self-energy corrections do not affect the dressing of matrix elements (see Sec. II.5.2). Nor can it capture the distinctively different QED corrections to the energies and matrix elements. We refer the reader to Ref. [20] for further justification and discussion of caveats of the semiempirical scaling in the CC method context.
#### ii.5.2 Dressing of matrix elements
Once one has obtained the CC amplitudes by solving the CC equations [and rescaling the single amplitudes as per Eq. (19)], one may proceed to computing the matrix element \(D_{wv}\) by substituting Eq. (14)
into Eq. (16). Notice that the CC wavefunction (14) includes an exponential of the cluster operator \(K\), \(|\Psi_{v}\rangle=\left(1+K+\frac{1}{2!}N[K^{2}]+\ldots\right)\,a_{v}^{\dagger}|0_{v}\rangle\). Dressing of matrix elements refers to the inclusion of nonlinear terms in the above expansion into the computations of matrix elements. In general, there is an infinite number of such contributions even if the cluster operator \(K\) is truncated at a certain number of excitations. A procedure [67] for partially accounting for nonlinear contributions to matrix elements proceeds by expanding the product \(C^{\dagger}C\) of the core cluster amplitude \(C=S_{c}+D_{c}+\ldots\) into a sum of \(n\)-body insertions. Among these, the one- and two-body terms give the dominant contributions. The former generates diagrams with attachments to free particle and hole lines while the latter generates diagrams with two free particle (hole) lines being coupled. Summing these diagrams to all orders gives the particle and hole line dressing as well as the two-particle and two-hole RPA-like dressing. The summations over the resulting infinite series of diagrams are implemented by solving iteratively a set of equations for the expansion coefficients of the line and RPA-like dressing amplitudes. For more details, see Ref. [67].
#### ii.5.3 Breit corrections
The Breit interaction corrections to the \(E1\) matrix elements and energies are computed using the MBPT formalism and numerical approaches documented in Ref. [68]. Briefly, we generate two basis sets, one using the conventional \(V^{N-1}\) DHF potential and the other, the \(V^{N-1}\) Breit-DHF potential. The Breit-DHF potential, in addition to the DHF potential, includes the one-body part of the Breit interaction between the atomic electrons in a mean-field fashion. The generation of the DHF basis sets is discussed in Appendix A; we use identical basis-set parameters for both the DHF and Breit-DHF sets. We then carry out the RPA(BO) calculations using these two distinct basis sets (see Sec. II.3). For the Breit-DHF basis set, we additionally include the two-body (residual) Breit interaction on an equal footing with the residual Coulomb interaction. The Breit correction then is simply the difference between the two RPA(BO) results. Our numerical results are consistent with Breit corrections to \(E1\) matrix elements listed in Table 3 of Ref. [5].
#### ii.5.4 QED corrections
The QED corrections to \(E1\) matrix elements were calculated using the radiative potential method, as developed in Refs. [69; 70]. In that approach, an approximate local potential, which accounts for the dominant vacuum polarization and electron self-energy effects, is included in the atomic Hamiltonian. The potential is included in the DHF equation and gives rise to the core relaxation contribution, which is particularly important for states with \(\ell>0\)[70; 71; 72]. The corrections for \(\langle 6,7S||D||6,7P_{J}\rangle\) were published recently in Ref. [72]. The authors of Ref. [72] have provided us with their results for the QED corrections to both energies and \(E1\) matrix elements. Note that the so-called vertex corrections to \(E1\) matrix elements were not included in the calculations of Ref. [72]. These corrections are expected to be small, due to the "low-energy theorem" [69], and account for up to a quarter of the total QED corrections in Cs. As a result, the uncertainty associated with the use of the radiative potential for evaluating QED corrections to \(E1\) amplitudes is taken to be 25%.
#### ii.5.5 Basis extrapolation correction
We perform our calculations using a basis comprising single-particle atomic orbitals with a finite number of orbital angular momenta and a finite number of \(B\)-spline basis-set functions for each partial wave. The basis functions are also confined within a cavity of finite, albeit large, radius. Although the finiteness of the basis greatly facilitates the efficiency of numerical procedures, it inevitably introduces some numerical errors into the final results compared to the ideal case where the cavity size, the angular momenta of the orbitals, and the number of splines per partial wave tend to infinity. For a particular atomic property \(f\) (\(f\) can be the removal energy or the electric-dipole matrix element), the finite-basis corrections to \(f\) may be estimated by approximating \(f\) with a function of the maximum orbital angular momentum \(\ell_{\rm max}\), the number of splines per partial wave \(M\), and the cavity radius \(R_{\rm cav}\), and then extrapolating \(f\left(\ell_{\rm max},M,R_{\rm cav}\right)\) to the case where all three parameters approach infinity.
We determine the dependence on \(\ell_{\rm max}\) by computing \(f\) in the relatively computationally inexpensive SD approximation with varying \(\ell_{\rm max}\) while keeping \(M\) and \(R_{\rm cav}\) fixed. We then form the quantities \(g(\ell_{\rm max})\equiv f\left(\ell_{\rm max}\right)-f\left(\ell_{\rm max}-1\right)\), which represent how much \(f\) varies as \(\ell_{\rm max}\) increases by one unit. The function \(g(l)\) is estimated by fitting to \(g(l)=l^{-4}(a+b/l+c/l^{2})\) with fitting parameters \(a\), \(b\), and \(c\). The correction \(\delta f_{\ell_{\rm max}}=f(\infty)-f(\ell_{\rm max})\) is then approximated by \(\sum_{l=\ell_{\rm max}+1}^{\infty}g(l)\). Similarly, the dependence on \(M\) (or \(R_{\rm cav}\)) is determined by computing \(f\) in the SD approximation with varying \(M\) (or \(R_{\rm cav}\)) while keeping \(\ell_{\rm max}\) and \(R_{\rm cav}\) (or \(M\)) fixed. The difference \(h(M)\equiv f(M)-f(M-\Delta M)\) [or \(j(R_{\rm cav})=f(R_{\rm cav})-f(R_{\rm cav}-\Delta R_{\rm cav})\)] is formed and fitted to \(h(M)=M^{-4}(a+b/M+c/M^{2})\) [or \(j(R)=R^{-4}(a+b/R+c/R^{2})\)]. The corrections \(\delta f_{M}=f(\infty)-f(M)\) and \(\delta f_{R_{\rm cav}}=f(\infty)-f(R_{\rm cav})\) are approximated by \(h(M+\Delta M)+h(M+2\Delta M)+\ldots\) and \(j(R_{\rm cav}+\Delta R_{\rm cav})+j(R_{\rm cav}+2\Delta R_{\rm cav})+\ldots\), respectively. The total basis extrapolation correction is the sum of the three individual corrections, i.e.,
\[\delta f_{\rm basis}=\delta f_{\ell_{\rm max}}+\delta f_{M}+\delta f_{R_{\rm cav }}\,. \tag{21}\]
We point out that \(\delta f_{\ell_{\rm max}}\) is often at least an order of
magnitude larger than \(\delta f_{M}\) and \(\delta f_{R_{\rm cav}}\) for our basis sets.
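For concreteness, the partial-wave tail estimate described above amounts to a small least-squares fit followed by a tail sum. The sketch below is only illustrative: the input values \(f(\ell_{\rm max})\) are hypothetical, and the infinite tail sum is truncated at a large finite \(l\).

```python
import numpy as np

def partial_wave_tail(l_values, f_values, l_tail_max=200):
    """Estimate delta_f = f(infinity) - f(l_max) from values f(l) computed at a
    few l_max, using the fit g(l) = l**-4 * (a + b/l + c/l**2)."""
    l = np.asarray(l_values, dtype=float)
    f = np.asarray(f_values, dtype=float)
    g = f[1:] - f[:-1]                  # g(l) = f(l) - f(l-1), defined for l_values[1:]
    lg = l[1:]
    # The model is linear in (a, b, c), so an ordinary least-squares fit suffices.
    A = np.column_stack([lg**-4, lg**-5, lg**-6])
    a, b, c = np.linalg.lstsq(A, g, rcond=None)[0]
    tail_l = np.arange(l[-1] + 1, l_tail_max + 1)
    return np.sum(tail_l**-4 * (a + b / tail_l + c / tail_l**2))

# Hypothetical SD values of some property at l_max = 4, 5, 6, 7:
print(partial_wave_tail([4, 5, 6, 7], [1.000, 1.010, 1.014, 1.016]))
```

The same fit-and-sum procedure applies to the \(M\) and \(R_{\rm cav}\) dependences with the obvious substitutions.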
## III Numerical results and discussions
In the previous section, we have presented the theoretical basis for several methods employed in our estimates of the \(E1\) matrix elements \(\langle nS_{1/2}||D||n^{\prime}P_{J}\rangle\) with \(n=6,7\) and \(n^{\prime}=6-12\) in Cs. Numerical results for energies are presented in Tables 1-3, those for \(E1\) matrix elements in Tables 4-7, and those for the normalized ratios \(\xi_{n,n^{\prime}}\) in Tables 8 and 9. In these tables, the final results of our computations are taken as the CCSDpTvT values with all the additional corrections (scaling, dressing, Breit, QED, and basis extrapolation) added.
In our calculations, we employed a dual-kinetic-balance \(B\)-spline basis set which numerically approximates a complete set of single-particle atomic orbitals. In order to accurately approximate orbitals with high principal quantum numbers, we use a large basis set containing \(M=60\) basis functions for each partial wave. The basis functions are generated in a cavity of radius \(R_{\rm cav}=250\) a.u., which ensures that high-\(n\) orbitals, whose maxima lie far away from the origin, are not disturbed by the cavity. We test the suitability of our one-electron basis functions by comparing their corresponding energies, hyperfine structure constants, and \(E1\) matrix elements with those obtained using the finite-difference solutions of the free, i.e., without cavity, DHF equations (see Appendix A). All differences are \(\lesssim 0.01\%\). We note that the basis set used in Ref. [27] yielded single-electron \(E1\) matrix elements for high-\(n^{\prime}\) states differing from the DHF values at the level of 1%. Since Ref. [27] estimated the final uncertainties in these matrix elements at the level of 20%, the unphysical nature of the basis employed is irrelevant. For the purpose of our work, however, ensuring that the high-\(n^{\prime}\) basis functions faithfully represent their physical counterparts is essential for controlling numerical accuracy.
Basis functions with \(\ell_{\rm max}\leq 7\) partial waves are used for the RPA, while in the BO and RPA(BO) approaches, only partial waves with \(\ell_{\rm max}\leq 5\) are included due to the higher computational costs. In the CC approaches, basis functions with \(\ell_{\rm max}\leq 5\) are used for single and double excitations, while for triples, we employ a more limited set of functions with \(\ell_{\rm max}\leq 4\). Additionally, excitations from core subshells \([4s,\ldots,5p]\) are included in the calculations for triples, while excitations from core subshells \([1s,\ldots,3s]\) are neglected. For each partial wave, only 52 out of 60 splines are included. Basis set extrapolation corrections to infinitely large \(\ell_{\rm max}\), \(M\), and \(R_{\rm cav}\) are added separately. To estimate these corrections, SD calculations with \(\ell_{\rm max}=4,5,6,7\), \(M=40,60,80,100\), and \(R_{\rm cav}=100,150,200,250\) a.u. are performed as discussed in Sec. II.5.5.
We carried out computations on a nonuniform grid defined as \(\ln(r[i]/r_{0}+1)+a_{g}r[i]=(i-1)h\) with 500 points. With \(r_{0}=6.96\times 10^{-6}\) a.u., \(a_{g}=0.50528\), and \(h=2.8801\times 10^{-1}\) a.u., there are 11 points inside the \({}^{133}\)Cs nucleus. The nuclear charge distribution is approximated by a Fermi distribution \(\rho_{\rm nuc}(r)=\rho_{0}/(1+\exp[(r-c)/a])\), where \(\rho_{0}\) is a normalization constant. For \({}^{133}\)Cs, we used \(c=5.6748\,\)fm and \(a=0.52338\,\)fm.
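To make the grid definition concrete, one may invert the relation \(\ln(r[i]/r_{0}+1)+a_{g}r[i]=(i-1)h\) point by point with a standard root finder. The sketch below uses the parameters quoted above; the choice of brentq and of the bracketing interval is an assumption made here for illustration, not a description of our actual code, and the Fermi density is given without its normalization constant \(\rho_{0}\).

```python
import numpy as np
from scipy.optimize import brentq

r0, a_g, h, npts = 6.96e-6, 0.50528, 2.8801e-1, 500

def radial_grid():
    """Invert ln(r/r0 + 1) + a_g*r = (i - 1)*h for each grid index i = 1..npts."""
    r = np.zeros(npts)                 # i = 1 gives r = 0 exactly
    for i in range(2, npts + 1):
        target = (i - 1) * h
        r[i - 1] = brentq(lambda x: np.log(x / r0 + 1.0) + a_g * x - target,
                          1e-12, 1.0e4)
    return r

def fermi_density(r, c=5.6748, a=0.52338):
    """Unnormalized Fermi charge distribution; r, c, a in fm."""
    return 1.0 / (1.0 + np.exp((r - c) / a))

grid = radial_grid()
print(grid[:3], grid[-1])   # the last point lies near the 250 a.u. cavity radius
```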
Our results for the removal energies are presented in Tables 1, 2, and 3. It may be observed that our calculations consistently underestimate the removal energies. The theory-experiment agreement improves with increasing principal quantum numbers, as expected, since orbitals with higher \(n\) do not penetrate the atomic core as strongly as those with lower \(n\). Such a qualitative argument becomes more explicit by considering the expectation values (first-order corrections) of the model-potential self-energy operator (8), \(\Sigma^{\rm m.p.}({\bf r})\propto 1/r^{4}\). The uncertainties in the final results are taken as quadrature sums of those in the CC approximation and the Breit, QED, and basis extrapolation corrections. We estimate the systematic uncertainties in the CC approximation as the difference between the CCSDpTvT and CCSDpT values, representing higher-order terms that are missed by the CCSDpTvT approximation. The relative uncertainties in the QED corrections are estimated at the level of 25% [72]. We take a conservative estimate of the uncertainties in the Breit and basis extrapolation corrections at 50%.
Our results for the reduced \(E1\) matrix elements
\begin{table}
\begin{tabular}{l r r} \hline \hline & \(6S_{1/2}\) & \(7S_{1/2}\) \\ \hline DHF & 27954 & 12112 \\ BO & 31804 & 13023 \\ SD & 31844 & 12944 \\ CCSD & 31459 & 12884 \\ CCSDpT & 31486 & 12889 \\ CCSDvT & 31305 & 12852 \\ CCSDpTvT & 31332 & 12858 \\ \hline \multicolumn{3}{c}{Other corrections} \\ \hline Breit & 2.6 & 0.3 \\ QED & \(-21.5\) & \(-5.0\) \\ Basis extrapolation & 12.6 & 2.7 \\ \hline Final result & 31326(154) & 12856(31) \\ Uncertainty (\%) & 0.49 & 0.24 \\ \hline Experiment [73] & 31406 & 12871 \\ \hline Difference (\%) & \(-0.26\) & \(-0.12\) \\ Difference (\(\sigma\)) & \(-0.52\) & \(-0.48\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Removal energies (in cm\({}^{-1}\)) of \(6S_{1/2}\) and \(7S_{1/2}\) states in Cs in various approximations: (i) Dirac-Hartree-Fock (DHF), (ii) Brueckner orbitals (BO), (iii) linearized coupled-cluster approximation with singles and doubles (SD), (iv) coupled-cluster approximation with singles and doubles (CCSD), (v) coupled-cluster approximation with singles and doubles and perturbative treatment of core triples (CCSDpT), (vi) coupled-cluster approximation with singles and doubles and full treatment of valence triples (CCSDvT), and (vii) the most sophisticated coupled-cluster approximation with singles and doubles, perturbative treatment of core triples, and full treatment of valence triples (CCSDpTvT). The final results are obtained by adding CCSDpTvT and “Other corrections” entries.
\(\langle nS_{1/2}||D||n^{\prime}P_{J}\rangle\) are compiled in Tables 4, 5, 6, and 7. The uncertainties in the final results are taken as quadrature sums of those in the scaling, Breit, QED, and basis extrapolation corrections. We assume that the uncertainty in the scaling correction is half its value, representing higher-order terms that are missed by the CCSDpTvT approximation. We assume that the uncertainties in matrix element dressings are already accounted for in the scaling uncertainties. Indeed, at any level of the CC approximation, the dressing corrections account for a large class of the most important diagrams arising from nonlinear CC contributions to matrix elements. As a result, it is expected that missing contributions to matrix elements come from neglecting higher-order diagrams in computing the CC amplitudes themselves, i.e., terms (partially) accounted for by the semiempirical scaling. Again, the relative uncertainties in the QED corrections are estimated at the level of 25% [72] and we assume a conservative estimate of the uncertainties in the Breit and basis extrapolation corrections at 50%. We note that since the QED, Breit, and basis extrapolation corrections are generally smaller than the semiempirical scaling ones, the uncertainties in the latter make up most of the overall uncertainty budget. The relative roles of these "other corrections" to the uncertainties of our results may be understood further by examining their contributions to the matrix elements themselves.
The higher-order terms that are missed by the CCSDpTvT approximation, represented by the scaling corrections, are quite small, as may be expected if one considers the convergence patterns of the matrix elements with increasing complexity of CC approximations. Indeed, Figs. 1 and 2 show the diminishing of contributions from higher-order diagrams: although nonlinear
\begin{table}
\begin{tabular}{l r r r r r r r} \hline \hline & \(6P_{1/2}\) & \(7P_{1/2}\) & \(8P_{1/2}\) & \(9P_{1/2}\) & \(10P_{1/2}\) & \(11P_{1/2}\) & \(12P_{1/2}\) \\ \hline DHF & 18791 & 9222.6 & 5513.3 & 3671.4 & 2621.1 & 1965.3 & 1528.2 \\ BO & 20290 & 9681.2 & 5718.4 & 3781.4 & 2687.1 & 2008.3 & 1558.4 \\ SD & 20413 & 9686.7 & 5716.4 & 3779.1 & 2685.3 & 2006.6 & 1556.4 \\ CCSD & 20230 & 9641.9 & 5698.0 & 3769.6 & 2679.7 & 2003.1 & 1554.0 \\ CCSDpT & 20238 & 9644.5 & 5699.1 & 3770.3 & 2680.1 & 2003.3 & 1554.2 \\ CCSDvT & 20187 & 9630.7 & 5693.2 & 3767.2 & 2678.3 & 2002.2 & 1553.4 \\ CCSDpTvT & 20195 & 9633.2 & 5694.4 & 3767.8 & 2678.7 & 2002.4 & 1553.6 \\ \hline \multicolumn{7}{c}{Other corrections} \\ \multicolumn{7}{c}{} & \(-\)7.1 & \(-\)2.5 & \(-\)1.1 & \(-\)0.6 & \(-\)0.4 & \(-\)0.2 & \(-\)0.2 \\ QED & 1.1 & 0.4 & 0.2 & 0.1 & 0.1 & 0.0 & 0.0 \\ Basis extrapolation & 3.5 & 1.0 & 0.5 & 0.2 & 0.1 & 0.1 & 0.1 \\ \hline Final result & 20193(43) & 9632.1(11.4) & 5694.0(4.7) & 3767.5(2.5) & 2678.5(1.4) & 2002.3(0.9) & 1553.5(0.6) \\ Uncertainty (\%) & 0.21 & 0.12 & 0.08 & 0.07 & 0.05 & 0.05 & 0.04 \\ \hline Experiment [73] & 20228 & 9641.1 & 5697.6 & 3769.5 & 2679.7 & 2003.0 & 1554.0 \\ \hline Difference (\%) & \(-\)0.18 & \(-\)0.09 & \(-\)0.06 & \(-\)0.05 & \(-\)0.04 & \(-\)0.03 & \(-\)0.03 \\ Difference (\(\sigma\)) & \(-\)0.82 & \(-\)0.79 & \(-\)0.76 & \(-\)0.79 & \(-\)0.85 & \(-\)0.77 & \(-\)0.82 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Removal energies (in cm\({}^{-1}\)) of \(nP_{1/2}\) states for \(n=6-12\) in Cs in various approximations. See Table 1 caption for explanation of entries.
\begin{table}
\begin{tabular}{l r r r r r r r} \hline \hline & \(6P_{3/2}\) & \(7P_{3/2}\) & \(8P_{3/2}\) & \(9P_{3/2}\) & \(10P_{3/2}\) & \(11P_{3/2}\) & \(12P_{3/2}\) \\ \hline DHF & 18389 & 9079.2 & 5445.9 & 3634.4 & 2598.7 & 1950.7 & 1518.2 \\ BO & 19733 & 9495.6 & 5633.4 & 3735.4 & 2659.4 & 1990.2 & 1545.7 \\ SD & 19835 & 9500.5 & 5631.8 & 3733.4 & 2657.8 & 1988.8 & 1544.3 \\ CCSD & 19669 & 9458.7 & 5614.3 & 3724.5 & 2652.6 & 1985.5 & 1542.0 \\ CCSDpT & 19676 & 9461.0 & 5615.4 & 3725.0 & 2652.9 & 1985.7 & 1542.1 \\ CCSDvT & 19632 & 9448.5 & 5610.0 & 3722.2 & 2651.2 & 1984.6 & 1541.4 \\ CCSDpTvT & 19639 & 9450.8 & 5611.0 & 3722.7 & 2651.6 & 1984.9 & 1541.6 \\ \hline \multicolumn{7}{c}{Other corrections} \\ \multicolumn{7}{c}{} & \(-\)0.8 & \(-\)0.4 & \(-\)0.2 & \(-\)0.1 & \(-\)0.1 & 0.0 & 0.0 \\ QED & 0.1 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ Basis extrapolation & 3.5 & 1.1 & 0.5 & 0.3 & 0.1 & 0.1 & 0.1 \\ \hline Final result & 19642(37) & 9451.5(10.2) & 5611.3(4.4) & 3722.9(2.3) & 2651.6(1.3) & 1985.0(0.8) & 1541.7(0.5) \\ Uncertainty (\%) & 0.19 & 0.11 & 0.08 & 0.06 & 0.05 & 0.04 & 0.03 \\ \hline Experiment [73] & 19674 & 9460.1 & 5615.0 & 3724.8 & 2652.8 & 1985.6 & 1542.1 \\ \hline Difference (\%) & \(-\)0.16 & \(-\)0.09 & \(-\)0.07 & \(-\)0.05 & \(-\)0.05 & \(-\)0.03 & \(-\)0.03 \\ Difference (\(\sigma\)) & \(-\)0.87 & \(-\)0.84 & \(-\)0.84 & \(-\)0.82 & \(-\)0.92 & \(-\)0.75 & \(-\)0.80 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Removal energies (in cm\({}^{-1}\)) of \(nP_{3/2}\) states for \(n=6-12\) in Cs in various approximations. See Table 1 caption for explanation of entries.
core singles and doubles and valence triples contribute significantly, core triples and dressing effects are generally small. More specifically, for \(\langle 6S_{1/2}||D||nP_{1/2}\rangle\), core triples account for \(\lesssim 1\%\) and dressings \(\lesssim 2\%\) of the final results. For \(\langle 6S_{1/2}||D||nP_{3/2}\rangle\), their contributions are \(\lesssim 0.1\%\) and \(\lesssim 0.5\%\), respectively. For \(\langle 7S_{1/2}||D||nP_{1/2}\rangle\), the core triples contribution is \(\lesssim 0.2\%\) and dressings are \(\lesssim 0.1\%\). For \(\langle 7S_{1/2}||D||nP_{3/2}\rangle\), both contributions are \(\lesssim 0.1\%\). Scaling accounts for up to \(7\%\) of the final result in \(\langle 6S_{1/2}||D||nP_{1/2}\rangle\), up to \(2\%\) in \(\langle 6S_{1/2}||D||nP_{3/2}\rangle\), and up to \(0.6\%\) in \(\langle 7S_{1/2}||D||nP_{J}\rangle\). Note also that although not shown in Tables 4-6, we also computed the scaling corrections to the CCSDvT matrix elements and, reassuringly, found that the CCSDpTvT scaling corrections are generally smaller than the CCSDvT scaling corrections, further confirming the convergence of our results with increasing complexity of the CC approximations.
The Breit, QED, and basis extrapolation corrections to \(\langle nS_{1/2}||D||n^{\prime}P_{J}\rangle\) generally become more and more important as \(n^{\prime}\) increases. For \(\langle 6S_{1/2}||D||nP_{1/2}\rangle\), they grow from a few hundredths of a percent for \(n=6\) to a few percent for \(n=12\), while for \(\langle 6S_{1/2}||D||nP_{3/2}\rangle\) and \(\langle 7S_{1/2}||D||nP_{J}\rangle\), the growth is less dramatic, from a few hundredths of a percent for \(n=6\) to a few tenths of a percent for \(n=12\). We also mention in passing that the relative roles of the Breit and QED corrections in \(\langle nS_{1/2}||D||n^{\prime}P_{1/2}\rangle\) are noticeably more pronounced than those in \(\langle nS_{1/2}||D||n^{\prime}P_{3/2}\rangle\). The qualitative reason is the more relativistic character of the \(p_{1/2}\) orbitals as compared to the \(p_{3/2}\) orbitals.
Although correlation effects on removal energies become less and less important with increasing principal quantum number, the same cannot be said for all matrix elements. Indeed, Tables 4 and 5 show very large correlation corrections to \(\langle 6S_{1/2}||D||nP_{J}\rangle\) for \(n\geq 9\). Electron correlation, however, appears to have minimal effects on \(\langle 7S_{1/2}||D||nP_{J}\rangle\), as may be observed from Tables 6 and 7. This may be qualitatively understood by noting that computing \(\langle nS_{1/2}||D||n^{\prime}P_{J}\rangle\) involves integrating products of wave functions which oscillate up to some point on the radial grid. Larger \(n\) generally means more oscillations happening further away from the origin. As a result, if \(n\) and \(n^{\prime}\) are very different, \(|nS_{1/2}\rangle\) and \(|n^{\prime}P_{J}\rangle\) have disparate numbers of oscillations that happen at different places, so their product oscillates over the whole integration range, yielding contributions that cancel rather than add. This cancellation means that the integral depends delicately on the exact details of the wave functions, and small correlation corrections to the wave functions themselves could result in large corrections to the matrix elements. Other related features appear in Tables 4 and 5: the RPA(DHF) approximation is particularly inadequate for \(\langle 6S_{1/2}||D||10P_{1/2}\rangle\) and \(\langle 6S_{1/2}||D||11P_{1/2}\rangle\), and the BO approximation performs poorly for all higher \(n\). These artifacts are results of cancellations between the DHF and RPA contributions to the matrix elements, which become evident in detailed analyses of different contributions to the final CCSDpTvT results.
Using the values of \(\langle nS_{1/2}||D||n^{\prime}P_{J}\rangle\), we computed the normalized ratio of reduced \(E1\) matrix elements \(\xi_{nn^{\prime}}\) connecting the \(nS_{1/2}\) state to the two \(n^{\prime}P_{J}\) fine-structure states [see Eq. (1)]. The \(\xi_{nn^{\prime}}\) results are collected in Tables 8 and 9. The uncertainties in the final results for \(\xi_{nn^{\prime}}\) are also taken to be half the semiempirical scaling corrections. Note that we do not estimate the uncertainty for \(\xi_{nn^{\prime}}\) by adding the uncertainties for \(\langle nS_{1/2}||D||n^{\prime}P_{1/2}\rangle\) and \(\langle nS_{1/2}||D||n^{\prime}P_{3/2}\rangle\) in quadrature since they are not necessarily independent, given that the two matrix elements involve the same \(nS_{1/2}\) state.
From Table 9, one observes that the ratio \(\xi_{7,n}\) increases relatively slowly with increasing \(n\), and that it remains quite close to the nonrelativistic value of unity. Table 8 for \(\xi_{6,n}\), on the other hand, tells a very different story. The ratio \(\xi_{6,n}\) grows rapidly with increasing \(n\), reaching \(\xi_{6,12}\approx 5.4\). This peculiarity may be understood by investigating the behaviors of the \(\langle 6S_{1/2}||D||nP_{1/2}\rangle\) and \(\langle 6S_{1/2}||D||nP_{3/2}\rangle\) matrix elements themselves. From Tables 4 and 5, it appears that \(\langle 6S_{1/2}||D||nP_{1/2}\rangle\) is approaching zero as \(n\) increases while \(\langle 6S_{1/2}||D||nP_{3/2}\rangle\) remains finite. This situation is similar to that of Cooper minima [17; 18], wherein the photoionization matrix element from the atomic ground state to the continuum \(\varepsilon P_{1/2}\) state vanishes at a smaller continuum energy \(\varepsilon\) than that to the continuum \(\varepsilon P_{3/2}\) state.
The previous comments on the various contributions to the matrix elements also apply to the ratio \(\xi_{n,n^{\prime}}\). In particular, the disparity in the Breit and QED corrections to the two \(nP_{J}\) fine-structure components discussed above immediately translates into the ratios \(\xi_{n,n^{\prime}}\), whose relative Breit and QED corrections are similar to those of \(\langle nS_{1/2}||D||n^{\prime}P_{1/2}\rangle\). The spuriously large values for \(\xi_{6,10}\) and \(\xi_{6,11}\) in the RPA(DHF) approximation are due to the poor results from using the RPA(DHF) to estimate \(\langle 6S_{1/2}||D||10P_{1/2}\rangle\) and \(\langle 6S_{1/2}||D||11P_{1/2}\rangle\).
In Figs. 3-10, our computed values for the reduced \(E1\) matrix elements are compared against existing experimental results as well as previous calculations. The convergence patterns for \(\xi_{6,n}\) and \(\xi_{7,n}\) with increasing complexity of the coupled-cluster approximation are shown in Figs. 11 and 12. In Figs. 13-15 our values for the normalized ratios \(\xi_{6,6}\), \(\xi_{6,7}\), and \(\xi_{7,6}\) are compared against existing experimental results and previous calculations. The experimental weighted averages and uncertainties are computed using
\[\bar{x} =\frac{\sum_{i}x_{i}/\sigma_{i}^{2}}{\sum_{i}1/\sigma_{i}^{2}}\,, \tag{22a}\] \[\bar{\sigma} =1/\sqrt{\sum_{i}1/\sigma_{i}^{2}}\,, \tag{22b}\]
where \(x_{i}\) and \(\sigma_{i}\) are the central value and uncertainty of each measurement.
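Eqs. (22) are the standard inverse-variance weighting; a minimal helper, evaluated here on made-up measurement values for illustration only, reads:

```python
import numpy as np

def weighted_average(x, sigma):
    """Inverse-variance weighted mean and its uncertainty, Eqs. (22a)-(22b)."""
    x = np.asarray(x, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    return np.sum(w * x) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# Hypothetical measurements (not actual Cs data):
print(weighted_average([4.501, 4.510, 4.497], [0.003, 0.004, 0.005]))
```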
When comparing our values with previous theoretical results, it is worth bearing in mind that the computations in Refs. [3], [46], and [27] were performed at the SD
and SDpT level, with semiempirical scaling included. A comparison of our SD results, both bare and with semiempirical scaling (not shown in Tables 4-9), with the values quoted in these earlier works shows excellent agreement. As a result, the differences between our results and earlier ones represent an improvement due to our accounting for higher-order terms in the CC approximation, most prominently nonlinear singles and doubles and valence triples. The improvement is noticeable in all cases and is significant for \(\langle 6S_{1/2}||D||nP_{1/2}\rangle\) with \(n\geq 9\). This also shows that the semiempirical scaling approach is only approximate and can only partially recover contributions from higher-order diagrams, as noted in Sec. II.5.1. Indeed, although not shown in Tables 4-9, we also computed the scaled \(E1\) matrix elements at the SD, CCSD, and CCSDvT levels. As shown in Fig. 16, the scaled SD and scaled CCSD results are generally incompatible with the more complete scaled CCSDvT and scaled CCSDpTvT values.
Our results for \(\langle 6,7S_{1/2}||D||6,7P_{J}\rangle\) agree well with those of Ref. [74], which were obtained using the atomic many-body perturbation theory in the screened Coulomb interaction (AMPSCI), more colloquially known as the all-order Feynman technique. We remind the reader that AMPSCI involves summing to all orders perturbative series with respect to the screened Coulomb interaction, in contrast with the CC method, wherein the perturbative series are with respect to electron correlation. The Feynman technique thus misses certain diagrams with singles, doubles, and triples, but, on the other hand, includes some diagrams with quadruples not present in our CCSDpTvT calculations. We note that although earlier Feynman-technique values of Ref. [49] for \(\langle 6S_{1/2}||D||6,7P_{3/2}\rangle\) and \(\langle 7S_{1/2}||D||6P_{J}\rangle\) disagree with ours, they also disagree with the more recent results of Ref. [74].
Overall, our results agree well with or are close to experimental data, except for \(\langle 6S_{1/2}||D||12P_{1/2}\rangle\) (16% or 2.5\(\sigma\) away), \(\langle 6S_{1/2}||D||7P_{3/2}\rangle\) (2.7\(\sigma\) away), \(\langle 7S_{1/2}||D||7P_{1/2}\rangle\) (4.0\(\sigma\) away), and \(\langle 7S_{1/2}||D||7P_{3/2}\rangle\) (3.8\(\sigma\) away). We point out, however, that even for these disagreements the theory-experiment agreement remains acceptable in percentage terms, with the exception of \(\langle 6S_{1/2}||D||12P_{1/2}\rangle\), which proves difficult because strong cancellations make its value very small.
In relation to the determination of the APV amplitude in Cs, the relevant \(E1\) matrix elements are those between \(6,7S_{1/2}\) and \(nP_{1/2}\) states. From Tables 4 and 6, we observe that the main contributions, coming from \(\langle 6S_{1/2}||D||6P_{1/2}\rangle\), \(\langle 7S_{1/2}||D||6P_{1/2}\rangle\), and \(\langle 7S_{1/2}||D||7P_{1/2}\rangle\), have uncertainties \(\sim 0.1\%\). While other \(E1\) matrix elements involving \(P_{1/2}\) states with higher principal quantum numbers have larger uncertainties, their values are at least an order of magnitude smaller than those of the three main terms. As a result, the effective uncertainties arising from these "tail" terms are all sub-0.1%. It is worth noting also that the largest uncertainty of 5.2% in \(\langle 6S_{1/2}||D||12P_{1/2}\rangle\) is only half the uncertainty of the "tail" terms estimated in Ref. [5]. As a result, although we do not claim that a determination of \(E_{PV}(^{133}\)Cs) using the \(E1\) matrix elements quoted in this work will have a \(\sim 0.1\%\) uncertainty, such a level of accuracy is clearly reachable. Achieving this goal will be the subject of our future work based on a parity-mixed (PM) CC approach [9], where the artificial separation of contributions to \(E_{PV}\) into "main" and "tail" terms is circumvented. The results of the current paper will serve as gauges for the accuracy of the PM-CC approach. We note in passing that a new evaluation of \(E_{PV}\) aiming at a 0.1% uncertainty must also account for the contribution from neutrino vacuum polarization, which was recently estimated to be at the level of \(\sim 1\%\)[75].
We end this section with a few words on the computational cost associated with the different approximations employed. The DHF, RPA, and BO calculations carry negligible cost. The SD computations take around 1/4 core-hour for \(S_{1/2}\) and \(P_{1/2}\) states and around 1 core-hour for \(P_{3/2}\) states. CCSD computations cost around 2.5 core-hours for \(S_{1/2}\) and \(P_{1/2}\) states and around 5 core-hours for \(P_{3/2}\) states. Calculations involving valence triple excitations are quite expensive: on our computer server with 160 cores, \(S_{1/2}\) and \(P_{1/2}\) states take around 8 real-time hours per state and \(P_{3/2}\) states take around 22 real-time hours per state. The inclusion of perturbative core triples does not drastically increase the computational cost compared to CCSD and CCSDvT.
## Acknowledgements
We thank B. M. Roberts, C. J. Fairhall, and J. S. M. Ginges for providing QED corrections and useful discussions. This work was supported in part by the U.S. National Science Foundation Grants No. PHY-1912465 and No. PHY-2207546, by the Sara Louise Hartman Endowed Professorship in Physics, and by the Center for Fundamental Physics at Northwestern University.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{1}{c}{6\(S_{1/2}\rightarrow\)} & \multicolumn{1}{c}{6\(P_{1/2}\)} & \multicolumn{1}{c}{7\(P_{1/2}\)} & \multicolumn{1}{c}{8\(P_{1/2}\)} & \multicolumn{1}{c}{9\(P_{1/2}\)} & \multicolumn{1}{c}{10\(P_{1/2}\)} & \multicolumn{1}{c}{11\(P_{1/2}\)} & \multicolumn{1}{c}{12\(P_{1/2}\)} \\ \hline DHF & 5.2777 & 3.7174\([-1]\) & 1.3262\([-1]\) & 7.1742\([-2]\) & 4.6735\([-2]\) & 3.3731\([-2]\) & 2.5952\([-2]\) \\ RPA(DHF) & 4.9744 & 2.3872\([-1]\) & 0.4983\([-1]\) & 1.3121\([-2]\) & 0.2197\([-2]\) & 0.1679\([-2]\) & 0.3118\([-2]\) \\ BO & 4.7250 & 4.4414\([-1]\) & 1.8142\([-1]\) & 10.601 \([-2]\) & 7.2457\([-2]\) & 5.3975\([-2]\) & 4.2430\([-2]\) \\ RPA(BO) & 4.3909 & 3.0269\([-1]\) & 0.9402\([-1]\) & 4.4344\([-2]\) & 2.5708\([-2]\) & 1.6871\([-2]\) & 1.2019\([-2]\) \\ SD & 4.4806 & 2.9655\([-1]\) & 0.9060\([-1]\) & 4.2257\([-2]\) & 2.4291\([-2]\) & 1.5841\([-2]\) & 1.1240\([-2]\) \\ CCSD & 4.5535 & 3.0274\([-1]\) & 0.9285\([-1]\) & 4.3478\([-2]\) & 2.5079\([-2]\) & 1.6395\([-2]\) & 1.1645\([-2]\) \\ CCSDpT & 4.5480 & 3.0299\([-1]\) & 0.9301\([-1]\) & 4.3587\([-2]\) & 2.5157\([-2]\) & 1.6455\([-2]\) & 1.1693\([-2]\) \\ CCSDvT & 4.5098 & 2.7138\([-1]\) & 0.7314\([-1]\) & 2.9477\([-2]\) & 1.4421\([-2]\) & 0.7912\([-2]\) & 0.4678\([-2]\) \\ CCSDpTvT & 4.5042 & 2.7163\([-1]\) & 0.7330\([-1]\) & 2.9583\([-2]\) & 1.4498\([-2]\) & 0.7971\([-2]\) & 0.4725\([-2]\) \\ \hline \multicolumn{1}{c}{Other corrections} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ Scaling & \(-0.0101\) & 0.0280\([-1]\) & 0.0155\([-1]\) & 0.0858\([-2]\) & 0.0547\([-2]\) & 0.0580\([-2]\) & 0.0361\([-2]\) \\ Dressing & 0.0017 & 0.0065\([-1]\) & 0.0040\([-1]\) & 0.0277\([-2]\) & 0.0210\([-2]\) & 0.0167\([-2]\) & 0.0137\([-2]\) \\ Breit & \(-0.0010\) & 0.0189\([-1]\) & 0.0170\([-1]\) & 0.0712\([-2]\) & 0.0523\([-2]\) & 0.0407\([-2]\) & 0.0328\([-2]\) \\ QED & 0.0035 & \(-0.0225\)\([-1]\) & \(-0.0131\)\([-1]\) & \(-0.0882\)\([-2]\) & \(-0.0652\)\([-2]\) & \(-0.0509\)\([-2]\) & \(-0.0413\)\([-2]\) \\ Basis extrapolation & \(-0.0017\) & 0.0050\([-1]\) & 0.0032\([-1]\) & 0.0221\([-2]\) & 0.0165\([-2]\) & 0.0130\([-2]\) & 0.0106\([-2]\) \\ \hline Final result & 4.4966(52) & 2.752(18)\([-1]\) & 0.753(10)\([-1]\) & 3.077(61)\([-2]\) & 1.529(42)\([-2]\) & 0.875(38)\([-2]\) & 0.524(27)\([-2]\) \\ Uncertainty (\%) & 0.12 & 0.65 & 1.3 & 2.0 & 2.7 & 4.4 & 5.2 \\ \hline Other results & 4.5052(54)12 & 2.776(75)\([-1]\)23 & 4.29(8)\([-2]\)24 & 2.48(50)\([-2]\)25 & 1.62(39)\([-2]\)26 & 1.15(32)\([-2]\)27 \\ & 4.535\([-77]\)3 & 2.79(19)\([-1]\)3 & 0.92(10)\([-1]\)28 & 4.29(68)\([-2]\)29 & 1.62(39)\([-2]\)29 & 1.15(32)\([-2]\)29 \\ & 4.535\([-2]\) & 2.79\([-1]\)3 & 0.81\([-1]\)3 & & & & & \\ & 4.510\([-2]\) & 2.80\([-1]\)4 & 0.78\([-1]\)4 & & & & & \\ \hline Experiments & 4.5012(26) & 7.7810(45)\([-1]\)27 & 0.723(44)\([-1]\)28 & 3.23(37)\([-2]\)29 & 1.62(8)\([-2]\)29 & 0.957(46)\([-2]\)29 & 0.627(30)\([-2]\)29 \\ & 4.5010(35)2 & 2.789(16)\([-1]\)29 & & & & & \\ & 4.5097(45)2 & 2.757(20)\([-1]\)29 & & & & & \\ & 4.5064(47)12 [
## Appendix A Details of constructing the \(B\)-spline finite basis set
The \(B\)-spline basis set is one example of the finite basis sets, the workhorse of numerous atomic structure and quantum chemistry codes. The \(B\)-spline basis set was popularized by the Notre Dame group [51, 55, 80] and since then has found numerous applications in high-precision relativistic atomic-structure calculations, especially those based on many-body perturbation theory (MBPT). The power of the finite basis sets lies in the ability to carry out summations over intermediate single-particle orbitals. Such summations are ubiquitous in numerical implementations of MBPT formalism. Since an exact atomic single-particle spectrum consists of a numerable yet infinite set of bound states and an innumerable set of states in the continuum, the combined set contains an infinite number of eigenfunctions and is simply impractical in numerical implementations. A finite basis set is a numerical approximation to the exact eigenspectrum, replacing it with a numerically complete yet finite-sized set.
The procedure for constructing a finite basis set is as follows. First, we confine the atom to a spherical cavity of radius \(R_{\text{max}}\). Then the exact single-particle spectrum becomes countable as the continuum is discretized, yet the confined atomic spectrum still contains an infinite number of eigenstates. To make the basis finite, the orbitals are expanded over a numerically complete set of support polynomials, the B-splines in our case [51, 55, 80]. Finally, the single-particle Dirac Hamiltonian is diagonalized in this finite-sized Hilbert space, producing the desired eigenspectrum that is now finite.
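The final diagonalization step can be phrased as a generalized symmetric eigenvalue problem, since the B-splines are not mutually orthogonal. The sketch below is a schematic illustration of this step with toy matrices standing in for the Dirac Hamiltonian and overlap in the spline representation; it is not the actual DKB implementation.

```python
import numpy as np
from scipy.linalg import eigh

def finite_basis_spectrum(H, S):
    """Solve H c = eps S c for a finite spline basis: H and S are the
    Hamiltonian and overlap matrices in the B-spline representation.
    Returns the pseudo-spectrum eps and expansion coefficients (columns of C)."""
    eps, C = eigh(H, S)
    return eps, C

# Toy 3x3 example (not an atomic Hamiltonian):
H = np.array([[1.0, 0.1, 0.0], [0.1, 2.0, 0.1], [0.0, 0.1, 3.0]])
S = np.eye(3)
print(finite_basis_spectrum(H, S)[0])
```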
One of the technical drawbacks of the original Notre
Figure 1: Convergence patterns for the \(\langle 6S_{1/2}||D||nP_{1/2}\rangle\) matrix elements with increasing complexity of the coupled-cluster method. The pattern for \(n\geq 8\) is similar to that for \(n=7\). For all \(6\leq n\leq 12\), the convergence pattern for \(\langle 6S_{1/2}||D||nP_{3/2}\rangle\) is similar to that of \(\langle 6S_{1/2}||D||nP_{1/2}\rangle\).
\begin{table}
\begin{tabular}{l c c c c c c c} \(6S_{1/2}\rightarrow\) & \(6P_{3/2}\) & \(7P_{3/2}\) & \(8P_{3/2}\) & \(9P_{3/2}\) & \(10P_{3/2}\) & \(11P_{3/2}\) & \(12P_{3/2}\) \\ \hline DHF & 7.4264 & 6.9474[\(-\)1] & 2.8323[\(-\)1] & 1.6582[\(-\)1] & 11.359 [\(-\)2] & 8.4797[\(-\)2] & 6.6791[\(-\)2] \\ RPA(DHF) & 7.0131 & 5.0875[\(-\)1] & 1.6648[\(-\)1] & 0.8280[\(-\)1] & 5.0362[\(-\)2] & 3.4443[\(-\)2] & 2.5408[\(-\)2] \\ BO & 6.6251 & 8.0698[\(-\)1] & 3.6037[\(-\)1] & 2.2047[\(-\)1] & 15.480 [\(-\)2] & 11.731 [\(-\)2] & 9.3300[\(-\)2] \\ RPA(BO) & 6.1740 & 6.0914[\(-\)1] & 2.3686[\(-\)1] & 1.3289[\(-\)1] & 8.8212[\(-\)2] & 6.4364[\(-\)2] & 4.9843[\(-\)2] \\ SD & 6.3026 & 6.0083[\(-\)1] & 2.3174[\(-\)1] & 1.2960[\(-\)1] & 8.5908[\(-\)2] & 6.2645[\(-\)2] & 4.8510[\(-\)2] \\ CCSD & 6.4045 & 6.1071[\(-\)1] & 2.3565[\(-\)1] & 1.3185[\(-\)1] & 8.7425[\(-\)2] & 6.3754[\(-\)2] & 4.9358[\(-\)2] \\ CCSDpT & 6.3966 & 6.1098[\(-\)1] & 2.3580[\(-\)1] & 1.3195[\(-\)1] & 8.7495[\(-\)2] & 6.3807[\(-\)2] & 4.9400[\(-\)2] \\ CCSDvT & 6.3476 & 5.6727[\(-\)1] & 2.0802[\(-\)1] & 1.1212[\(-\)1] & 7.2360[\(-\)2] & 5.1742[\(-\)2] & 3.9480[\(-\)2] \\ CCSDpTvT & 6.3394 & 5.6740[\(-\)1] & 2.0814[\(-\)1] & 1.1220[\(-\)1] & 7.2419[\(-\)2] & 5.1786[\(-\)2] & 3.9516[\(-\)2] \\ \hline \multicolumn{8}{c}{Other corrections} \\ Scaling & \(-0.0146\) & \(0.0355[-\)1] & 0.0184[\(-\)1] & 0.013[\(-\)1] & 0.0686[\(-\)2] & 0.0871[\(-\)2] & 0.0702[\(-\)2] \\ Dressing & 0.0023 & 0.0096[\(-\)1] & 0.0059[\(-\)1] & 0.0042[\(-\)1] & 0.0319[\(-\)2] & 0.0254[\(-\)2] & 0.0209[\(-\)2] \\ Breit & \(-0.0011\) & 0.0051[\(-\)1] & 0.0029[\(-\)1] & 0.0019[\(-\)1] & 0.0138[\(-\)2] & 0.0107[\(-\)2] & 0.0086[\(-\)2] \\ QED & 0.0052 & \(-0.0251[-\)1] & \(-0.0152[-\)1] & \(-0.0104[-\)1] & \(-0.0776[-\)2] & \(-0.0609[-\)2] & \(-0.0495[\(-\)2] \\ Basis extrapolation & \(-0.0024\) & 0.0052[\(-\)1] & 0.0035[\(-\)1] & 0.0025[\(-\)1] & 0.0188[\(-\)2] & 0.0149[\(-\)2] & 0.0122[\(-\)2] \\ \hline Final result & 6.3288(75) & 5.704(19)[\(-\)1] & 2.097(10)[\(-\)1] & 1.1332(72)[\(-\)1] & 7.297(41)[\(-\)2] & 5.256(47)[\(-\)2] & 4.014(38)[\(-\)2] \\ Uncertainty (\%) & 0.12 & 0.34 & 0.49 & 0.63 & 0.56 & 0.89 & 0.95 \\ \hline Other resuts & 6.3402(79)1 & 5.741(89)[\(-\)1]2 & 2.32(14)[\(-\)1]2 & 1.297(96)[\(-\)1]2 & 8.60(71)[\(-\)2]2 & 6.27(56)[\(-\)2]2 & 4.86(46)[\(-\)2]2 \\ & 6.382 & 5.761[\(-\)1]2 & 2.18[\(-\)1]2 & & & & & \\ & 6.3474 & 5.76[\(-\)1]2 & 2.14[\(-\)1]2 & & & & & \\ & 6.325 & 5.83[\(-\)1]2 & & & & & \\ \hline Experiment & 6.3350(6)2 & 5.7417(57)[\(-\)1]2 & 2.11(8)[\(-\)1]2 & 1.15(7)[\(-\)1]2 & 7.22(34)[\(-\)2]2 & 5.29(46)[\(-\)2]2 & 3.98(19)[\(-\)2]2 \\ & 6.3349(48)2 & 5.750(7)[\(-\)1]2 & & & & & \\ & 6.3325(6)2 & 5.7580(44)[\(-\)1]2 & & & & & \\ \hline Difference (\%) & \(-0.10\) & \(-0.93\) & \(-0.62\) & \(-1.5\) & \(1.1\) & \(-0.65\) & \(0.85\) \\
Dame \(B\)-spline implementation [55] is the occurrence of the so-called spurious states, which do not map into the physical states of the Hamiltonian. This drawback was rectified with the introduction of the dual-kinetic balance (DKB) boundary conditions [57] for \(B\)-spline sets. The original work [57] focused on the hydrogenlike systems and then the DKB construction was extended to DHF potentials for multielectron atoms [58]. In our calculations, we use the DKB \(B\)-spline basis sets described in Ref. [58].
In this paper, we carry out computations for uncharacteristically large principal quantum numbers (up to \(n=12\)). In the basis-set construction described above, even if the cavity radius is large, \(R_{\rm max}\gg a_{0}\), only the lower-energy orbitals map into the single-particle states of an unconfined atom, with higher-energy orbitals no longer fitting into the cavity. Then the mapping of basis-set orbitals to "physical" orbitals corresponding to an unconfined atom becomes spoiled. We now discuss our strategy for selecting \(R_{\rm max}\).
To ensure the correct mapping of the basis-set orbitals to physical ones, we carry out a supporting finite-difference calculation. The starting point of our calculation is the frozen-core DHF method. The finite-difference method is based on a numerical integration of the DHF equation on a sufficiently large grid to fully accommodate the desired atomic DHF orbitals [51]. In other words, the finite-difference method provides the reference results for an unconfined atom. The \(B\)-spline basis-set construction method solves the same DHF problem but in a cavity. We vary the cavity radius \(R_{\rm max}\) and compare the energies and other supporting quantities, such as the transition amplitudes, of our target basis-set orbitals with the finite-difference results.
We present such a comparison for the DHF eigenenergies in Fig. 17. Here the \(B\)-spline basis set contains \(M=60\) basis functions per partial wave (\(B\)-splines of order \(k=9\)) generated in a cavity of radius \(R_{\rm max}=250\) a.u. We plot the fractional difference between the basis-set
\begin{table}
\begin{tabular}{l c c c c c c c} \(7S_{1/2}\to\) & \(6P_{1/2}\) & \(7P_{1/2}\) & \(8P_{1/2}\) & \(9P_{1/2}\) & \(10P_{1/2}\) & \(11P_{1/2}\) & \(12P_{1/2}\) \\ \hline DHF & 4.4131 & 1.1009[1] & \(9.2117[-1]\) & \(3.3720[-1]\) & \(1.8332[-1]\) & \(1.1944[-1]\) & \(8.6149[-2]\) \\ RPA(DHF) & 4.4494 & 1.0921[1] & \(8.6912[-1]\) & \(3.0076[-1]\) & \(1.5576[-1]\) & \(0.9758[-1]\) & \(6.8225[-2]\) \\ BO & 4.1945 & 1.0263[1] & \(10.080[-1]\) & \(3.9870[-1]\) & \(2.2756[-1]\) & \(1.5320[-1]\) & \(11.303[-2]\) \\ RPA(BO) & 4.2232 & 1.0175[1] & \(9.5622[-1]\) & \(3.6260[-1]\) & \(2.0034[-1]\) & \(1.3166[-1]\) & \(9.5406[-2]\) \\ SD & 4.1952 & 1.0253[1] & \(9.2901[-1]\) & \(3.4658[-1]\) & \(1.8956[-1]\) & \(1.2374[-1]\) & \(8.9261[-2]\) \\ CCSD & 4.2502 & 1.0298[1] & \(9.4069[-1]\) & \(3.5176[-1]\) & \(1.9263[-1]\) & \(1.2582[-1]\) & \(9.0782[-2]\) \\ CCSDpT & 4.2497 & 1.0292[1] & \(9.4155[-1]\) & \(3.5228[-1]\) & \(1.9299[-1]\) & \(1.2608[-1]\) & \(9.0988[-2]\) \\ CCSDvT & 4.2527 & 1.0308[1] & \(9.2122[-1]\) & \(3.3906[-1]\) & \(1.8342[-1]\) & \(1.1870[-1]\) & \(8.5035[-2]\) \\ CCSDpTvT & 4.2522 & 1.0302[1] & \(9.2210[-1]\) & \(3.3960[-1]\) & \(1.8379[-1]\) & \(1.1897[-1]\) & \(8.5246[-2]\) \\ \hline Scaling & \(-0.0081\) & \(-0.0011[1]\) & \(0.0383[-1]\) & \(0.0150[-1]\) & \(0.0078[-1]\) & \(0.0090[-1]\) & \(0.0471[-2]\) \\ Dressing & \(0.0000\) & \(0.0000[1]\) & \(0.0033[-1]\) & \(0.0023[-1]\) & \(0.0017[-1]\) & \(0.0014[-1]\) & \(0.0111[-2]\) \\ Breit & \(0.0049\) & \(-0.0003[1]\) & \(0.0342[-1]\) & \(0.0195[-1]\) & \(0.0131[-1]\) & \(0.0096[-1]\) & \(0.0754[-2]\) \\ QED & \(-0.0045\) & \(0.0007[1]\) & \(-0.0423[-1]\) & \(-0.0244[-1]\) & \(-0.0165[-1]\) & \(-0.0122[-1]\) & \(-0.0957[-2]\) \\ Basis extrapolation & \(0.0001\) & \(-0.0003[1]\) & \(0.0083[-1]\) & \(0.0051[-1]\) & \(0.0035[-1]\) & \(0.0026[-1]\) & \(0.0207[-2]\) \\ \hline Final result & 4.2446(49) & 1.0292(6)[1] & \(9.263(28)[-1]\) & \(3.414(14)[-1]\) & \(1.8475(88)[-1]\) & \(1.2002(74)[-1]\) & \(8.583(52)[-2]\) \\ Uncertainty (\%) & 0.1 & 0.060 & 0.30 & 0.41 & 0.48 & 0.62 & 0.60 \\ \hline Other results & 4.239(18)12 & 1.0297(23)[1]13 & & \(\\ & 4.243(12)23 & 1.0310(40)13 & 9.14(27)[ENDFOOTNOTE] & \([-1]\)24 [ENDFOOTNOTE] & 3.49(10)2 & 1.908(60)[ENDFOOTNOTE] & 1.908(60)[ENDFOOTNOTE] & 1.247(44)[1]2 & 9.00(35)[ENDFOOTNOTE] & \([-2]\)25 \\ & 4.2434 & & & & & \\ & 4.2364 & & & & & \\ & 4.2536 & & & & & & \\ \hline Experiment & 4.249(4)2 & & & & & & \\ & 4.233(22)3 & & & & & & \\ \hline Difference (\%) & \(-0.09\) & \(-0.30\) & & & & & \\ Difference (\(\sigma\)) & \(-0.62\) & \(-4.0\) & & & & & \\ \end{tabular}
\end{table}
Table 6: Reduced electric-dipole matrix elements \(\langle 7S_{1/2}||D||n^{\prime}P_{1/2}\rangle\) (in atomic units a.u.) for \(n^{\prime}=6-12\) in Cs with various approximations. See Table 1 caption for explanation of entries. The notation \(x[y]\) stands for \(x\times 10^{y}\).
and finite-difference eigenenergies and find the agreement to be better than \(0.015\%\). As expected, the difference between the basis set and "physical" orbital energies worsens with increasing principle quantum numbers, i.e., with the increasing spatial extent of the orbitals. By using the same basis set, we also investigated the electric- and magnetic-dipole matrix elements between atomic orbitals. The basis-set values for the matrix elements differ from their finite-difference counterparts by up to \(0.1\%\) for orbitals involving large \(n\) values. We notice that a larger cavity radius, such as \(R_{\rm max}=500\,\)a.u., does not necessarily lead to a better numerical accuracy, because the resulting larger grid step size results in a poorer \(B\)-spline grid coverage. To fix this, one should increase \(M\), the number of B-splines used in the basis-set generation, as well as the \(B\)-spline order \(k\). Increasing \(M\), however, leads to larger basis sets and, thereby, a polynomial \(M^{\gamma}\) increase in computational time (the power \(\gamma\) depends on a specific many-body scheme, with the steepest scaling for our most sophisticated CC calculations). The same observation applies for \(k\). Our basis set, as described in Sec. III of the main text, is a compromise between numerical accuracy and computational time. To further improve our numerical accuracy in basis-set-based many-body calculations, we replace the lowest-order DHF values of matrix elements with those computed using finite-difference orbitals.
## Appendix B Constructing a finite basis set of Brueckner orbitals
An introduction to the Brueckner orbital (BO) method was given in Sec. II.2 of the main text. The second-order expression for the self-energy operator \(\Sigma\) was given by Eq. (9). The goal of this Appendix is to show that one may generate a BO basis set from a DHF finite basis set, described in Appendix A, by using basis rotation. Such a generated BO set retains all the numerically useful
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \(7S_{1/2}\rightarrow\) & \(6P_{3/2}\) & \(7P_{3/2}\) & \(8P_{3/2}\) & \(9P_{3/2}\) & \(10P_{3/2}\) & \(11P_{3/2}\) & \(12P_{3/2}\) \\ \hline DHF & 6.6710 & 1.5345[1] & 1.6049 & 6.4748[\(-1\)] & 3.7372[\(-1\)] & 2.5345[\(-1\)] & 1.8800[\(-1\)] \\ RPA(DHF) & 6.7122 & 1.5227[1] & 1.5340 & 5.9752[\(-1\)] & 3.3579[\(-1\)] & 2.2328[\(-1\)] & 1.6321[\(-1\)] \\ BO & 6.4422 & 1.4234[1] & 1.7498 & 7.5067[\(-1\)] & 4.4817[\(-1\)] & 3.1033[\(-1\)] & 2.3333[\(-1\)] \\ RPA(BO) & 6.4699 & 1.4118[1] & 1.6799 & 7.0160[\(-1\)] & 4.1100[\(-1\)] & 2.8083[\(-1\)] & 2.0914[\(-1\)] \\ SD & 6.4235 & 1.4237[1] & 1.6439 & 6.7941[\(-1\)] & 3.9582[\(-1\)] & 2.6957[\(-1\)] & 2.0036[\(-1\)] \\ CCSD & 6.4975 & 1.4299[1] & 1.6584 & 6.8588[\(-1\)] & 3.9967[\(-1\)] & 2.7220[\(-1\)] & 2.0228[\(-1\)] \\ CCSDpT & 6.4972 & 1.4290[1] & 1.6595 & 6.8662[\(-1\)] & 4.0018[\(-1\)] & 2.7258[\(-1\)] & 2.0258[\(-1\)] \\ CCSDvT & 6.4973 & 1.4318[1] & 1.6325 & 6.6860[\(-1\)] & 3.8704[\(-1\)] & 2.6239[\(-1\)] & 1.9435[\(-1\)] \\ CCSDpTvT & 6.4969 & 1.4309[1] & 1.6336 & 6.6935[\(-1\)] & 3.8755[\(-1\)] & 2.6277[\(-1\)] & 1.9465[\(-1\)] \\ \hline \multicolumn{8}{c}{Other corrections} \\ Scaling & \(-0.0106\) & \(-0.0018[1]\) & 0.0043 & 0.0234[\(-1\)] & 0.0094[\(-1\)] & 0.0139[\(-1\)] & 0.0108[\(-1\)] \\ Dressing & 0.0001 & 0.0001[1] & 0.0005 & 0.0032[\(-1\)] & 0.0024[\(-1\)] & 0.0019[\(-1\)] & 0.0016[\(-1\)] \\ Breit & 0.0015 & \(-0.0001[1]\) & 0.0010 & 0.0056[\(-1\)] & 0.0038[\(-1\)] & 0.0028[\(-1\)] & 0.0022[\(-1\)] \\ QED & \(-0.0054\) & 0.0010[1] & \(-0.0047\) & \(-0.0284[\)] & \(-0.0195[\)] & \(-0.0146[\)] & \(-0.0115[\)] \\ Basis extrapolation & \(-0.0001\) & \(-0.0004[1]\) & 0.0008 & 0.0055[\(-1\)] & 0.0039[\(-1\)] & 0.0029[\(-1\)] & 0.0023[\(-1\)] \\ \hline Final result & 6.4824(55) & 1.4297(10)[1] & 1.6355(25) & 6.703(14)[\(-1\)] & 3.8755(73)[\(-1\)] & 2.6346(81)[\(-1\)] & 1.9519(63)[\(-1\)] \\ Uncertainty (\%) & 0.085 & 0.067 & 0.16 & 0.21 & 0.19 & 0.31 & 0.32 \\ \hline Other results & 6.474(23)1 & 1.4303(33)[1]1 & 6.480(19)2 & 1.4323(61)[1]1 & 1.620(35)2 & 6.80(14)[\(-1\)]2 & 3.962(88)[\(-1\)]2 & 2.698(65)[\(-1\)]2 & 2.698(65)[\(-1\)]2 & 2.006(73)[\(-1\)]2 \\ & 6.4792 & 1.4323[1]3 & \\ & 6.4703 & 1.4293[1]3 & \\ & 6.5073 & 1.4295[1]3 & \\ \hline Experiment & 6.489(5)2 & 1.4344(7)[1]4 & \\ & 6.479(31)2 & 1.4320(20)[1]2 & \\ Weighted average & 6.488(5) & 1.4341(7)[1] & & & & \\ \hline Difference (\%) & \(-0.10\) & \(-0.31\) & & & \\ Difference (\(\sigma\)) & \(-0.85\) & \(-3.8\) & & & \\ \hline \end{tabular}
\end{table}
Table 7: Reduced electric-dipole matrix elements \(\langle 7S_{1/2}||D||n^{\prime}P_{3/2}\rangle\) (in atomic units a.u.) for \(n^{\prime}=6-12\) in Cs with various approximations. See Table 1 caption for explanation of entries. The notation \(x[y]\) stands for \(x\times 10^{y}\).
\begin{table}
\begin{tabular}{l r r r r r r r} \hline \hline & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline DHF & 0.99499 & 1.3215 & 1.5101 & 1.6344 & 1.7186 & 1.7776 & 1.8198 \\ RPA(DHF) & 0.99691 & 1.5070 & 2.3624 & 4.4622 & 16.209 & 14.506 & 5.7621 \\ BO & 0.99146 & 1.2848 & 1.4046 & 1.4706 & 1.5107 & 1.5368 & 1.5549 \\ RPA(BO) & 0.99426 & 1.4230 & 1.7814 & 2.1191 & 2.4263 & 2.6977 & 2.9324 \\ SD & 0.99465 & 1.4326 & 1.8087 & 2.1687 & 2.5008 & 2.7963 & 3.0518 \\ CCSD & 0.99455 & 1.4264 & 1.7946 & 2.1443 & 2.4650 & 2.7497 & 2.9971 \\ CCSDpT & 0.99452 & 1.4259 & 1.7927 & 2.1406 & 2.4593 & 2.7419 & 2.9873 \\ CCSDvT & 0.99526 & 1.4781 & 2.0111 & 2.6896 & 3.5480 & 4.6243 & 5.9676 \\ CCSDpTvT & 0.99521 & 1.4771 & 2.0079 & 2.6819 & 3.5321 & 4.5939 & 5.9137 \\ \hline \multicolumn{7}{c}{Other corrections} \\ Scaling & \(-0.00006\) & \(-0.0059\) & \(-0.0242\) & \(-0.0454\) & \(-0.0962\) & \(-0.2396\) & \(-0.3221\) \\ Dressing & \(-0.00001\) & \(-0.0010\) & \(-0.0050\) & \(-0.0141\) & \(-0.0325\) & \(-0.0628\) & \(-0.1184\) \\ Breit & \(0.00005\) & \(-0.0087\) & \(-0.0251\) & \(-0.0551\) & \(-0.1066\) & \(-0.1831\) & \(-0.3124\) \\ QED & \(0.00004\) & \(0.0055\) & 0.0198 & 0.0501 & 0.1058 & 0.1927 & 0.3467 \\ Basis extrapolation & \(0.00000\) & \(-0.0013\) & \(-0.0051\) & \(-0.0131\) & \(-0.0280\) & \(-0.0519\) & \(-0.0949\) \\ \hline Final result & 0.99523(4) & 1.4656(55) & 1.968(18) & 2.604(38) & 3.375(78) & 4.25(16) & 5.41(25) \\ Uncertainty (\%) & 0.0040 & 0.37 & 0.93 & 1.5 & 2.3 & 3.8 & 4.5 \\ \hline Other results & 0.9951(1)1 & 1.464(25)[1]2 & & & & & \\ & 0.9952 & 1.4791 & 2.1382 & & & & & \\ & 0.9954 & 1.4551 & & & & & \\ & 0.9955 [FOOTNOTE:4]Footnote 4: Refrac & & & & & \\ & 0.9952 [FOOTNOTE:4]Footnote 4: Refrac & & & & & \\ & 0.9950 [FOOTNOTE:4]Footnote 4: Refrac & & & & & \\ & 0.9950 [FOOTNOTE:4]Footnote 4: Refrac & & & & & \\ & 0.9951 [FOOTNOTE:4]Footnote 4: Refrac & & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.9951 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & & \\ & 0.995 [FOOTNOTE:4]Footnote 4: Refrac & [14] & & & & & \\ & 0.
properties of the original DHF basis set, but has the extra advantage of producing important third-order correlation corrections unobtainable with a DHF basis set in an RPA calculation, as discussed in Sec. II.3.
By construction, the DHF basis orbitals \(v_{k}\) satisfy the eigenvalue equation
\[h_{0}v_{k}=\varepsilon_{k}^{\rm DHF}v_{k}\,. \tag{B1}\]
The DHF basis is numerically complete, orthonormal, and of finite size \(M\). Notice that we attached the label DHF
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline DHF & 1.0689 & 0.98561 & 1.2319 & 1.3577 & 1.4415 & 1.5003 & 1.5429 \\ RPA(DHF) & 1.0667 & 0.98593 & 1.2481 & 1.4048 & 1.5244 & 1.6180 & 1.6916 \\ BO & 1.0860 & 0.98069 & 1.2274 & 1.3313 & 1.3926 & 1.4324 & 1.4597 \\ RPA(BO) & 1.0833 & 0.98112 & 1.2423 & 1.3682 & 1.4506 & 1.5083 & 1.5501 \\ SD & 1.0827 & 0.98188 & 1.2512 & 1.3861 & 1.4765 & 1.5405 & 1.5872 \\ CCSD & 1.0810 & 0.98188 & 1.2466 & 1.3788 & 1.4671 & 1.5298 & 1.5756 \\ CCSDpT & 1.0811 & 0.98179 & 1.2463 & 1.3782 & 1.4662 & 1.5287 & 1.5743 \\ CCSDvT & 1.0803 & 0.98218 & 1.2531 & 1.3944 & 1.4921 & 1.5631 & 1.6161 \\ CCSDpTvT & 1.0804 & 0.98214 & 1.2527 & 1.3937 & 1.4910 & 1.5618 & 1.6146 \\ \hline \multicolumn{8}{c}{Other corrections} \\ Scaling & 0.0003 & \(-0.00019\) & \(-0.0019\) & \(-0.0013\) & \(-0.0027\) & \(-0.0035\) & 0.0000 \\ Dressing & 0.0000 & 0.00007 & \(-0.0001\) & \(-0.0003\) & \(-0.0005\) & \(-0.0007\) & \(-0.0008\) \\ Breit & \(-0.0010\) & 0.00022 & \(-0.0038\) & \(-0.0068\) & \(-0.0090\) & \(-0.1144\) & \(-0.0123\) \\ QED & 0.0002 & 0.00002 & 0.0021 & 0.0040 & 0.0058 & 0.0057 & 0.0084 \\ Basis extrapolation & 0.0000 & 0.00001 & \(-0.0005\) & \(-0.0009\) & \(-0.0013\) & \(-0.0013\) & \(-0.0020\) \\ \hline Final result & 1.079(5) & 0.9823(1) & 1.2485(22) & 1.3885(36) & 1.4833(50) & 1.5523(60) & 1.6080(66) \\ Uncertainty & 0.049 & 0.014 & 0.18 & 0.26 & 0.33 & 0.39 & 0.41 \\ \hline Other results & 1.0800(11)1 & 0.9822(1)1 & 0.98232 & 1.25312 & 1.37952 & 1.46832 & 1.52992 & 1.5761 \\ & 1.0803 & 0.9823 & 0.9824 & \\ & 1.0805 & 0.9825 & & \\ & 1.0805 & 0.9825 & & \\ & 1.0805 & 0.9826 & & \\ & 1.0805 & 0.9826 & & \\ & 1.0805 & 0.9827 & & \\ & 1.0805 & 0.9827 & & \\ & 1.0804 & 0.9824 & & \\ & 1.0824 & & \\ & 1.0825 & 0.9824 & & \\ & 1.0825 & 0.9825 & & \\ & 1.0804 & 0.9824 & & \\ & 1.0804 & & \\ & 1.0825 & & \\ & 1.0805 & & \\ & 1.
to the energies; in Eq. (9) that label was suppressed. We would now like to find solutions to the BO eigenvalue equation with the self-energy operator \(\Sigma\) included
\[\left(h_{0}+\Sigma\right)u=\varepsilon u\,. \tag{B2}\]
Since the DHF set \(\{v_{k}\}\) is numerically complete, we can expand the solution \(u\) in terms of the DHF basis orbitals as \(u=\sum_{k}c_{k}v_{k}\). By plugging this expansion into Eq. (B2) and using the orthonormality of \(\{v_{k}\}\), we arrive at
\[\sum_{k}\left(\varepsilon_{k}^{\text{DHF}}\delta_{mk}+\Sigma_{mk}\right)c_{k}=\varepsilon^{\text{BO}}\sum_{k}c_{k}\delta_{mk}\,, \tag{B3}\]
which may be cast in matrix form as
\[M^{\text{BO}}\mathbf{c}=\varepsilon^{\text{BO}}\mathbf{c}\,, \tag{B4}\]
where
\[M^{\text{BO}}=\begin{pmatrix}\Sigma_{11}+\varepsilon_{1}^{\text{DHF}}&\Sigma_{12}&\cdots&\Sigma_{1M}\\ \Sigma_{21}&\Sigma_{22}+\varepsilon_{2}^{\text{DHF}}&\cdots&\Sigma_{2M}\\ \vdots&\vdots&\ddots&\vdots\\ \Sigma_{M1}&\Sigma_{M2}&\cdots&\Sigma_{MM}+\varepsilon_{M}^{\text{DHF}}\end{pmatrix}\,. \tag{B5}\]
By solving this equation we find the \(M\) BO eigenvalues \(\varepsilon^{\text{BO}}\) and the corresponding eigenvectors of expansion coefficients \(\mathbf{c}\). Using these expansion coefficients we can assemble the desired Brueckner orbitals as \(u=\sum_{k}c_{k}v_{k}\).
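In code, this basis rotation is a single symmetric diagonalization followed by a change of basis. The sketch below is a schematic illustration using a hypothetical self-energy matrix and toy orbitals; it is not a transcription of our production routines.

```python
import numpy as np

def brueckner_basis(eps_dhf, Sigma, dhf_orbitals):
    """Diagonalize M^BO = diag(eps_dhf) + Sigma and rotate the DHF orbitals
    into Brueckner orbitals u_j = sum_k c_kj v_k.
    dhf_orbitals has shape (M, n_grid): one DHF orbital per row."""
    M_bo = np.diag(eps_dhf) + Sigma
    eps_bo, C = np.linalg.eigh(M_bo)       # columns of C are the eigenvectors c
    bo_orbitals = C.T @ dhf_orbitals        # u_j(r) = sum_k C[k, j] v_k(r)
    return eps_bo, bo_orbitals

# Toy example with 3 'orbitals' on a 5-point grid (all numbers are made up):
eps = np.array([-0.5, -0.2, 0.1])
Sigma = np.array([[-0.020, 0.010, 0.000],
                  [ 0.010, -0.010, 0.005],
                  [ 0.000,  0.005, -0.005]])
v = np.random.default_rng(0).standard_normal((3, 5))
print(brueckner_basis(eps, Sigma, v)[0])
```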
The numerical implementation of this method may be significantly sped up by using angular reduction as follows.
Figure 12: Convergence patterns for the normalized ratio \(\xi_{7,n}\equiv(1/\sqrt{2})\langle 7S_{1/2}||D||nP_{3/2}\rangle/\langle 7S_{1/2}||D||nP_{1/2}\rangle\) with increasing complexity of the coupled-cluster method. The pattern for \(n\geq 9\) is similar to that for \(n=8\).
Figure 16: Comparison between the semiempirically scaled (sc.) \(E1\) matrix elements at different levels of the coupled-cluster approximation. The error bars, representing the scaling uncertainties, are taken as half the difference between the scaled and unscaled values, i.e., \(\left|\text{SD - SD(sc.)}\right|/2\) and so on.
Figure 14: Comparison between our computed value (vertical line+uncertainty band) for the normalized ratio \(\xi_{6,7}\equiv(1/\sqrt{2})\langle 6S_{1/2}||D||7P_{3/2}\rangle/\langle 6S_{1/2}||D||7P_{1/2}\rangle\) with existing experimental (\(\bullet\)) and theoretical (\(\diamond\)) results. The experimental results are ordered from the top down with decreasing uncertainties. The weighted average and uncertainty are computed using Eqs. (22).
Figure 13: Comparison between our computed value (vertical line+uncertainty band) for the normalized ratio \(\xi_{6,6}\equiv(1/\sqrt{2})\langle 6S_{1/2}||D||6P_{3/2}\rangle/\langle 6S_{1/2}||D||6P_{1/2}\rangle\) with existing experimental (\(\bullet\)) and theoretical (\(\diamond\)) results. The experimental results are ordered from the top down with decreasing uncertainties. The weighted average and uncertainty are computed using Eqs. (22).
Figure 17: Fractional differences \(|(\varepsilon^{\text{set}}-\varepsilon^{\text{f.d.}})/\varepsilon^{\text{f.d.}}|\) between the DKB \(B\)-spline basis set and finite-difference (f.d.) Dirac-Hartree-Fock eigenenergies \(\varepsilon_{i}\) for several angular symmetries as functions of the principal quantum number. Basis-set parameters: number of splines for a fixed angular symmetry is \(M=60\), cavity radius \(R_{\text{max}}=250\,\text{a.u.}\) and \(B\)-spline order \(k=9\).
Figure 15: Comparison between our computed value (vertical line+uncertainty band) for the normalized ratio \(\xi_{7,6}\equiv(1/\sqrt{2})\langle 7S_{1/2}||D||6P_{3/2}\rangle/\langle 7S_{1/2}||D||6P_{1/2}\rangle\) with existing experimental (\(\bullet\)) and theoretical (\(\diamond\)) results.
We begin by writing
\[g_{ijkl} =\sum_{L}J_{L}(ijkl)X_{L}(ijkl)\,, \tag{101a}\] \[\tilde{g}_{ijkl} =\sum_{L}J_{L}(ijkl)Z_{L}(ijkl)\,. \tag{101b}\]
where
\[J_{L}(ijkl) \equiv\sum_{M}(-1)^{j_{i}-m_{i}+j_{j}-m_{j}}\] \[\times\begin{pmatrix}j_{i}&L&j_{k}\\ -m_{i}&-M&m_{k}\end{pmatrix}\begin{pmatrix}j_{j}&L&j_{l}\\ -m_{j}&M&m_{l}\end{pmatrix}\,. \tag{102}\]
The quantity \(X_{L}(ijkl)\) is expressed in terms of the reduced matrix element of the normalized spherical harmonic \(C_{L}(\hat{\mathbf{r}})\) and the Slater integral \(R_{L}(ijkl)\) as
\[X_{L}(ijkl)=(-1)^{L}\langle\kappa_{i}||C_{L}||\kappa_{k}\rangle\langle\kappa_{ j}||C_{L}||\kappa_{l}\rangle R_{L}(ijkl)\,, \tag{103}\]
where \(\kappa\) is the relativistic angular quantum number that uniquely encodes both the total angular momentum \(j\) and the orbital angular momentum \(\ell\), \(\kappa=(\ell-j)\left(2j+1\right)\). The quantity \(Z_{L}(ijkl)\) may be expressed in terms of \(X_{L}(ijkl)\) via the recoupling formula
\[Z_{L}(ijkl) =X_{L}(ijkl)\] \[+\sum_{L^{\prime}}[L]\left\{\begin{array}{ccc}j_{k}&j_{i}&L\\ j_{l}&j_{j}&L^{\prime}\end{array}\right\}X_{L^{\prime}}(ijlk)\,, \tag{104}\]
where \([L]\equiv 2L+1\) and \(\left\{\begin{array}{ccc}j_{k}&j_{i}&L\\ j_{l}&j_{j}&L^{\prime}\end{array}\right\}\) is the \(6j\)-symbol.
Using the angular decompositions (101), we may write the matrix elements of the second-order self-energy operator (9) as
\[\Sigma_{ij} =\delta_{\kappa_{i}\kappa_{j}}\delta_{m_{i}m_{j}}\] \[\times\left(\sum_{amn,L}\frac{(-1)^{j_{m}+j_{n}+j_{a}+j_{i}}}{[L,j_{i}]}\frac{X_{L}(aimn)Z_{L}(mnaj)}{\varepsilon_{a0}-\varepsilon_{mn}}\right.\] \[+\left.\sum_{abm,L}\frac{(-1)^{j_{m}+j_{j}+j_{a}+j_{b}}}{[L,j_{i}]} \frac{Z_{L}\left(miab\right)X_{L}\left(abmj\right)}{\varepsilon_{m0}- \varepsilon_{ab}}\right)\,. \tag{105}\]
where \([L,j_{i}]\equiv[L][j_{i}]=(2L+1)(2j_{i}+1)\). Notice the angular selection rules enforced by the \(\delta\) symbols, reflecting the fact that the self-energy operator is a scalar. The matrix \(M^{\text{BO}}\) may then be rearranged into a block-diagonal form, with each block corresponding to a different \(\kappa\) value. Solving the eigenvalue equation (100) is thus equivalent to diagonalizing these blocks individually.
To speed up the computations of the self-energy matrix elements (105) further, we introduce a kernel (here and below the angular symmetry of the block is fixed)
\[K(r,r^{\prime})=\left(\begin{array}{cc}K_{PP}\left(r,r^{\prime}\right)&K_{ PQ}\left(r,r^{\prime}\right)\\ K_{QP}\left(r,r^{\prime}\right)&K_{QQ}\left(r,r^{\prime}\right)\end{array} \right)\,, \tag{106}\]
so that the self-energy matrix elements can be assembled from the large (\(P\)) and small (\(Q\)) components of the Dirac bispinors as
\[\Sigma_{ij}=\int drdr^{\prime}\left(P_{i}\left(r\right),Q_{i}\left(r\right) \right)K(r,r^{\prime})\begin{pmatrix}P_{j}\left(r^{\prime}\right)\\ Q_{j}\left(r^{\prime}\right)\end{pmatrix}. \tag{107}\]
A straightforward but somewhat tedious derivation results in the following expressions for the kernels (\(X\) and \(Y\) stand either for \(P\) or \(Q\))
\[K_{XY}\left(r,r^{\prime}\right) =\frac{1}{[L,j_{i}]}\left(\sum_{amn,L}\frac{K_{XY}^{(amn,L)} \left(r,r^{\prime}\right)}{\varepsilon_{a0}-\varepsilon_{mn}}\right.\] \[+\left.\sum_{abm,L}\frac{K_{XY}^{(abm,L)}\left(r,r^{\prime} \right)}{\varepsilon_{m0}-\varepsilon_{ab}}\right)\,. \tag{108}\]
Remember that the angular symmetry here is fixed, \(\kappa_{i}=\kappa_{j}\). The two subkernels appearing in the sums are, explicitly,
\[K_{XY}^{(amn,L)}\left(r,r^{\prime}\right) =H_{L}\left(mnai\right)X_{n}\left(r\right)v_{L}\left(am;r\right)\] \[\times\left\{H_{L}(mnaj)Y_{n}\left(r^{\prime}\right)v_{L}\left(am; r^{\prime}\right)\right.\] \[+\left[L\right]\sum_{L^{\prime}}H_{L^{\prime}}(nmaj)\left\{ \begin{array}{ccc}j_{n}&j_{j}&L\\ j_{m}&j_{a}&L^{\prime}\end{array}\right\}\] \[\times\left.Y_{m}\left(r^{\prime}\right)v_{L^{\prime}}\left(an;r^ {\prime}\right)\right\}\,, \tag{109}\]
and
\[K_{XY}^{(abm,L)}\left(r,r^{\prime}\right) =H_{L}\left(mjab\right)Y_{b}\left(r^{\prime}\right)v_{L}\left(am;r^ {\prime}\right)\] \[\times\left\{H_{L}\left(miab\right)X_{b}\left(r\right)v_{L}\left( am;r\right)\right.\] \[+\left[L\right]\sum_{L^{\prime}}H_{L^{\prime}}(imab)\left\{ \begin{array}{ccc}j_{i}&j_{b}&L\\ j_{m}&j_{a}&L^{\prime}\end{array}\right\}\] \[\times\left.X_{a}\left(r\right)v_{L^{\prime}}\left(bm;r\right) \right\}\,, \tag{110}\]
where
\[H_{L}(abcd)=(-1)^{L}\langle\kappa_{a}||C_{L}||\kappa_{c}\rangle\langle\kappa_{b }||C_{L}||\kappa_{d}\rangle\,, \tag{111}\]
and \(v_{L}\left(bm;r\right)\) is the screening potential,
\[v_{k}\left(ij;r\right)=\int dr^{\prime}\frac{r_{<}^{k}}{r_{>}^{k+1}}\left[P_{i} (r^{\prime})P_{j}(r^{\prime})+Q_{i}(r^{\prime})Q_{j}(r^{\prime})\right]\,,\]
with the conventional definitions \(r_{<}=\min(r,r^{\prime})\) and \(r_{>}=\max(r,r^{\prime})\).
To reiterate, we start by generating the DHF finite basis set on a certain radial grid, as described in the main text. Then, for a fixed symmetry \(\kappa\), we tabulate the four elements of the kernel (108) on the grid with the help of formulas (109) and (110). All the evaluations are carried out with the DHF finite basis set. This is the most time-consuming part of the calculations. Then, with the tabulated kernel, we compute the matrix elements (107) of the self-energy matrix and solve the eigenvalue equation (100). This provides us with the desired BO spectrum
and corresponding eigenvectors of expansion coefficients of the BO orbitals over the original DHF basis. We normalize these eigenvectors to guarantee that the BO basis set is orthonormal. Finally, with these expansion coefficients, we assemble the BO basis-set functions.
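For orientation, the assembly of the matrix elements (107) from the tabulated kernel amounts to a weighted double sum over the radial grid. The following minimal sketch is our own illustration (the function and array names are ours), assuming the four kernel blocks have already been tabulated on an \(N\)-point grid with quadrature weights \(w\):

```python
import numpy as np

def sigma_from_kernel(P, Q, K_PP, K_PQ, K_QP, K_QQ, w):
    """Evaluate Sigma_ij of Eq. (107) by radial quadrature.

    P, Q : (M, N) large and small components of the M basis orbitals on the grid
    K_XY : (N, N) tabulated kernel blocks of Eq. (106)
    w    : (N,) radial quadrature weights
    """
    Pw, Qw = P * w, Q * w                  # absorb the quadrature weights once
    return (Pw @ K_PP @ Pw.T + Pw @ K_PQ @ Qw.T
            + Qw @ K_QP @ Pw.T + Qw @ K_QQ @ Qw.T)
```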
Finally, we turn to the question of how to choose the reference energy \(\varepsilon_{0}\). Since we are diagonalizing the BO Hamiltonian \(H_{\mathrm{DHF}}+\Sigma\) for each individual angular symmetry \(\kappa\) (\(s_{1/2},p_{1/2},p_{3/2},\ldots\)), we can pick different values of \(\varepsilon_{0}\) for different \(\kappa\). However, within each \(\kappa\) block, \(\varepsilon_{0}\) is fixed. Because we are interested in the low-energy valence states, in the calculations reported in this paper, we fix \(\varepsilon_{0}\) to the lowest valence electron DHF energy for a given \(\kappa\), e.g., for Cs \(s_{1/2}\) states we pick \(\varepsilon_{0}=\varepsilon_{6s}\) and for \(p_{1/2}\) states we pick \(\varepsilon_{0}=\varepsilon_{6p_{1/2}}\), and so on.
## Appendix C Finite-basis-set implementation of the random phase approximation
An introduction to the random phase approximation (RPA) can be found in Sec. II.3. The focus of this appendix is to describe an efficient numerical finite-basis-set implementation of the RPA method. As a starting point, we reproduce formula (11) from the main text. We are interested in computing matrix elements of a one-electron operator \(Z=\sum_{k}z_{k}\), where the sum goes over all the electrons. The RPA-dressed matrix elements (vertices) are
\[Z_{ma}^{\mathrm{RPA}} =z_{ma}\] \[+\sum_{bn}\left(\frac{Z_{bn}^{\mathrm{RPA}}\tilde{g}_{mnab}}{ \varepsilon_{b}-\varepsilon_{n}-\omega}+\frac{Z_{nb}^{\mathrm{RPA}}\tilde{g}_{ mban}}{\varepsilon_{b}-\varepsilon_{n}+\omega}\right)\,, \tag{14a}\] \[Z_{am}^{\mathrm{RPA}} =z_{am}\] \[+\sum_{bn}\left(\frac{Z_{bn}^{\mathrm{RPA}}\tilde{g}_{anmb}}{ \varepsilon_{b}-\varepsilon_{n}-\omega}+\frac{Z_{nb}^{\mathrm{RPA}}\tilde{g}_{ abmn}}{\varepsilon_{b}-\varepsilon_{n}+\omega}\right)\,, \tag{14b}\]
where \(\omega\) is the frequency of the perturbation driving the transition, which, in our case, is the \(w\to v\) transition: \(\omega\equiv\varepsilon_{w}-\varepsilon_{v}\). Notice that these RPA-dressed matrix elements are defined between the core (\(a\)) and the excited (\(m\)) orbitals. The matrix elements between the two valence orbitals are given by the second-order expression in terms of the above RPA-dressed matrix elements,
\[Z_{wv}^{\mathrm{RPA}} =z_{wv}\] \[+\sum_{an}\frac{Z_{am}^{\mathrm{RPA}}\tilde{g}_{wmva}}{\varepsilon _{a}-\varepsilon_{m}-\omega}+\sum_{am}\frac{\tilde{g}_{wvma}Z_{ma}^{\mathrm{ RPA}}}{\varepsilon_{a}-\varepsilon_{m}+\omega}\,. \tag{14c}\]
Clearly, we need to first find the RPA-dressed vertices \(Z_{ma}^{\mathrm{RPA}}\) and \(Z_{am}^{\mathrm{RPA}}\). Usually, the set of equations (14) is solved iteratively (see e.g., Ref. [52]), with subsequent iterations recovering higher and higher orders of MBPT. In practical applications, however, sometimes the convergence is poor. Here we present a method to determine the RPA-dressed vertices in one shot, avoiding the iterations altogether. Our method also offers computational advantages when calculating matrix elements for multiple transitions.
We start by defining the following auxiliary quantities
\[\chi_{ma} \equiv\frac{Z_{ma}^{\mathrm{RPA}}}{\varepsilon_{a}-\varepsilon_{m }+\omega}\,, \tag{15a}\] \[\eta_{ma}^{*} \equiv\frac{Z_{am}^{\mathrm{RPA}}}{\varepsilon_{a}-\varepsilon_{m }-\omega}\,, \tag{15b}\]
with which Eqs. (14) for the RPA-dressed vertices can be recast into the form
\[z_{ma} =-\left(\varepsilon_{m}-\varepsilon_{a}-\omega\right)\chi_{ma}\] \[-\sum_{bn}\left(\chi_{nb}\tilde{g}_{bmna}+\eta_{nb}^{*}\tilde{g} _{nmba}\right)\,, \tag{16a}\] \[z_{am} =-\left(\varepsilon_{m}-\varepsilon_{a}+\omega\right)\eta_{ma}^{*}\] \[+\sum_{bn}\left(\eta_{nb}^{*}\tilde{g}_{nabm}+\chi_{nb}\tilde{g} _{bann}\right)\,. \tag{16b}\]
This system of equations for \(\chi_{ma}\) and \(\eta_{ma}^{*}\) is linear. It is inhomogeneous with the driving term \((-z_{ma},-z_{am})\). We can find the solution of this inhomogeneous set of equations by first solving the eigenvalue problem
\[\omega\chi_{ma} =(\varepsilon_{m}-\varepsilon_{a})\chi_{ma}\] \[+\sum_{nb}\left(\tilde{g}_{bmna}\chi_{nb}+\tilde{g}_{nmba}\eta_{nb}^{*}\right)\,, \tag{17a}\] \[\omega\eta_{ma}^{*} =(\varepsilon_{m}-\varepsilon_{a})\eta_{ma}^{*}\] \[+\sum_{nb}\left(\tilde{g}_{nabm}\eta_{nb}^{*}+\tilde{g}_{bann} \chi_{nb}\right)\,, \tag{17b}\]
to obtain the eigenpair \(\left\{\omega_{\mu},\chi_{ma}^{\mu},\left(\eta_{ma}^{\mu}\right)^{*}\right\}\). The eigenfrequencies \(\omega_{\mu}\) can be interpreted as frequencies of particle-hole excitations of the atomic closed-shell core.
There are two relevant properties of the eigensystem (17): symmetry and orthonormality. First, by examining Eqs. (17), one concludes that for every eigenfrequency \(\omega_{\mu}\) there is an eigenfrequency of opposite sign \(-\omega_{\mu}\). Second, the two corresponding eigenvectors are related: if the triple \(\left\{\omega_{\mu},\left(\chi_{ma}^{\mu},\left(\eta_{ma}^{\mu}\right)^{*} \right)\right\}\) belongs to the eigensystem, so does its negative-frequency counterpart \(\left\{-\omega_{\mu},\left(\left(\eta_{ma}^{\mu}\right)^{*},\chi_{ma}^{\mu} \right)\right\}\). Further, the eigenvectors satisfy the orthonormality condition
\[\sum_{ma}\left[\chi_{ma}^{\lambda}\left(\chi_{ma}^{\mu}\right)^{*}-\left(\eta_{ ma}^{\lambda}\right)^{*}\eta_{ma}^{\mu}\right]=\mathrm{sign}\left(\omega_{\mu}\right) \delta_{\lambda\mu}\,. \tag{18}\]
Once the eigenvalue problem, Eqs. (17), is solved and we obtain a set of eigenvalues \(\omega^{\mu}\) and eigenvectors \(\left(\chi_{ma}^{\mu},\left(\eta_{ma}^{\mu}\right)^{*}\right)\), we search for a solution of the inhomogeneous equations as an expansion over the complete set of eigenvectors
\[\left(\begin{array}{c}\chi_{ma}\\ \eta_{ma}^{*}\end{array}\right)=\sum_{\mu}c_{\mu}\left(\begin{array}{c}\chi_ {ma}^{\mu}\\ \left(\eta_{ma}^{\mu}\right)^{*}\end{array}\right)\,. \tag{19}\]
Substituting this expansion into Eqs. (101), one obtains
\[\sum_{\mu}\left(\omega-\omega_{\mu}\right)c_{\mu}\left(\begin{array}{c}\chi^{ \mu}_{ma}\\ -\left(\eta^{\mu}_{ma}\right)^{*}\end{array}\right)=\left(\begin{array}{c}z_{ ma}\\ z_{am}\end{array}\right). \tag{102}\]
Multiplying from the right by \(\left(\left(\chi^{\nu}_{ma}\right)^{*},\eta^{\nu}_{ma}\right)\) and using the orthogonality relation (100), one finds the expansion coefficients
\[c_{\mu}=\frac{\text{sign}\left(\omega^{\mu}\right)}{\omega-\omega_{\mu}}\sum_{ ma}\left[\left(\chi^{\mu}_{ma}\right)^{*}z_{ma}+\eta^{\mu}_{ma}z_{am}\right]. \tag{103}\]
Finally, returning to the definitions of \(\chi_{ma}\) and \(\eta_{ma}\) and introducing
\[S_{\mu}=\sum_{ma}\left[\left(\chi^{\mu}_{ma}\right)^{*}z_{ma}+\eta^{\mu}_{ma}z_ {am}\right]\,, \tag{104}\]
we arrive at the desired RPA-dressed vertices
\[Z^{\text{RPA}}_{ma} =\left(\varepsilon_{a}-\varepsilon_{m}+\omega\right)\sum_{\mu} \frac{\text{sign}\left(\omega^{\mu}\right)}{\omega-\omega_{\mu}}S_{\mu}\chi^{ \mu}_{ma}\,, \tag{105}\] \[Z^{\text{RPA}}_{am} =\left(\varepsilon_{a}-\varepsilon_{m}-\omega\right)\sum_{\mu} \frac{\text{sign}\left(\omega^{\mu}\right)}{\omega-\omega_{\mu}}S_{\mu}\left( \eta^{\mu}_{ma}\right)^{*}. \tag{106}\]
The final step is the angular reduction of the above expressions. Without loss of generality, we assume that the one-electron operator \(Z\) is an irreducible tensor operator of rank \(J\). We also fix its spherical tensor component \(M\). We remind the reader that RPA describes particle-hole excitations of a closed-shell core. Such excitations by an operator \(Z\) necessarily carry the total angular momentum \(J\) and its projection \(M\). The parity of the particle-hole excitation must be the same as that of the operator \(Z\). Thereby, we can introduce the following parametrization using the conventional Clebsch-Gordan coefficients,
\[X_{m(a\rightarrow\kappa_{m})} =\sum_{m_{m}m_{a}}\left(-1\right)^{j_{a}-m_{a}}C^{JM}_{j_{m}m_{m}j _{a}-m_{a}}\chi_{ma}\,, \tag{107a}\] \[Y_{m(a\rightarrow\kappa_{m})} =\sum_{m_{m}m_{a}}\left(-1\right)^{j_{a}-m_{a}+J-M}C^{J-M}_{j_{m }m_{m}j_{a}-m_{a}}\eta^{*}_{ma}\,, \tag{107b}\]
where \(\left(a\rightarrow\kappa_{m}\right)\) denotes an excitation channel, e.g., \(1s_{1/2}\to p_{3/2}\) for the electric-dipole operator (\(J=1\) and odd parity). Notice the additional phase factors and negative magnetic quantum numbers for core orbitals; see Ref. [51] for justification. The reduced coefficients \(X_{m(a\rightarrow\kappa_{m})}\) and \(Y_{m(a\rightarrow\kappa_{m})}\) no longer depend on the magnetic quantum numbers.
Carrying out the summation over the magnetic quantum numbers in the eigenvalue equations (101), we arrive at their reduced form for the coefficients \(X_{m(a\rightarrow\kappa_{m})}\) and \(Y_{m(a\rightarrow\kappa_{m})}\),
\[\omega X_{m(a\rightarrow\kappa_{m})} =(\varepsilon_{m}-\varepsilon_{a})X_{m(a\rightarrow\kappa_{m})}\] \[+\sum_{nb}\frac{\left(-1\right)^{J+j_{n}-j_{b}}}{\left[J\right]}Z _{J}(bmna)X_{n(b\rightarrow\kappa_{n})}\] \[+\sum_{nb}\frac{1}{\left[J\right]}Z_{J}(nmba)Y_{n(b\rightarrow \kappa_{n})}\,, \tag{107a}\] \[-\omega Y_{m(a\rightarrow\kappa_{m})} =(\varepsilon_{m}-\varepsilon_{a})Y_{m(a\rightarrow\kappa_{m})}\] \[+\sum_{nb}\frac{\left(-1\right)^{J+j_{n}-j_{b}}}{\left[J\right]} Z_{J}(bmna)Y_{n(b\rightarrow\kappa_{n})}\] \[+\sum_{nb}\frac{1}{\left[J\right]}Z_{J}(nmba)X_{n(b\rightarrow \kappa_{n})}\,. \tag{107b}\]
It is worth remembering that the normalization condition in the \(X\)-\(Y\) space differs from the conventional normalization. In the \(\chi\)-\(\eta\) variables it reads [see Eq. (100)]
\[\sum_{ma}\left(\left|\chi^{\mu}_{ma}\right|^{2}-\left|\eta^{\mu}_{ma}\right|^{2 }\right)=\text{sign}\left(\omega^{\mu}\right)\,, \tag{108}\]
which translates into
\[\sum_{ma}\left(\left|X^{\mu}_{m(a\rightarrow\kappa_{m})}\right|^{2}-\left|Y^{ \mu}_{m(a\rightarrow\kappa_{m})}\right|^{2}\right)=\text{sign}\left(\omega^{ \mu}\right). \tag{109}\]
Additionally, the symmetry property of the eigensystem now reads as follows: for every pair \(\left\{\omega_{\mu},\left(X^{\mu}_{m(a\rightarrow\kappa_{m})},Y^{\mu}_{m(a \rightarrow\kappa_{m})}\right)\right\}\) there is a negative eigenfrequency counterpart \(\left\{-\omega_{\mu},\left(Y^{\mu}_{m(a\rightarrow\kappa_{m})},X^{\mu}_{m(a \rightarrow\kappa_{m})}\right)\right\}.\)
Furthermore, using the Wigner-Eckart theorem, we arrive at the RPA-dressed reduced matrix elements,
\[\langle m||Z^{\text{RPA}}||a\rangle =\left(\varepsilon_{a}-\varepsilon_{m}+\omega\right)\] \[\times\sum_{\mu}\frac{\text{sign}\left(\omega^{\mu}\right)R_{\mu }}{\omega-\omega_{\mu}}X^{\mu}_{m(a\rightarrow\kappa_{m})}\,, \tag{110a}\] \[\langle a||Z^{\text{RPA}}||m\rangle =\left(\varepsilon_{a}-\varepsilon_{m}-\omega\right)\left(-1 \right)^{j_{m}-j_{a}+J}\] \[\times\sum_{\mu}\frac{\text{sign}\left(\omega^{\mu}\right)R_{\mu }}{\omega-\omega_{\mu}}Y^{\mu}_{m(a\rightarrow\kappa_{m})}\,, \tag{110b}\]
where the "residuals" \(R_{\mu}\) are defined as
\[R_{\mu} \equiv\sum_{nb}\left(X^{\mu}_{n(b\rightarrow\kappa_{n})}\langle n|| z||b\rangle\right.\] \[+\left.(-1)^{j_{n}-j_{b}+J}\,Y^{\mu}_{n(b\rightarrow\kappa_{n})} \langle b||z||n\rangle\right)\,. \tag{111}\]
This concludes the derivation of our method. Some further simplifications are possible, such as a reduction to positive-frequency summations, and we leave these straightforward steps to the reader. Beyond offering a one-shot solution to the RPA equations, our approach is beneficial in evaluating matrix elements for multiple transitions. Indeed, if one stores the eigenvectors \(\left(X^{\mu}_{n(b\rightarrow\kappa_{n})},Y^{\mu}_{n(b\rightarrow\kappa_{n})}\right)\), the
eigenvalues \(\omega_{\mu}\), and the residuals \(R_{\mu}\), then the dressed matrix elements are easily assembled for any given driving frequency \(\omega\).
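To make the bookkeeping explicit, here is a minimal sketch (our own, with a flattened index running over the \((m,a)\) particle--hole pairs and hypothetical array names) of how the stored eigen-data yield the dressed vertices \(Z^{\mathrm{RPA}}_{ma}\) and \(Z^{\mathrm{RPA}}_{am}\) for an arbitrary driving frequency:

```python
import numpy as np

def rpa_dressed_vertices(omega, omega_mu, chi, eta, z_ma, z_am, de):
    """Assemble Z^RPA_ma and Z^RPA_am from the stored RPA eigen-data.

    omega      : driving frequency of the transition
    omega_mu   : (Nmu,) particle-hole eigenfrequencies
    chi, eta   : (Nmu, Np) eigenvector components chi^mu_ma and eta^mu_ma
    z_ma, z_am : (Np,) bare vertices z_ma and z_am for the particle-hole pairs
    de         : (Np,) orbital energy differences eps_a - eps_m
    """
    s_mu = chi.conj() @ z_ma + eta @ z_am             # the sums S_mu
    weight = np.sign(omega_mu) * s_mu / (omega - omega_mu)
    z_rpa_ma = (de + omega) * (weight @ chi)
    z_rpa_am = (de - omega) * (weight @ eta.conj())
    return z_rpa_ma, z_rpa_am
```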
Finally, a numerical evaluation of the derived expressions requires single-particle orbital basis sets. In the main text, we use both the DHF and BO finite basis sets. These were described in Appendices A and B, respectively.
|
2310.14365 | The classical topological invariants of homogeneous spaces | We study the homogeneous spaces of a simply connected, compact, simple Lie
group $G$ through the lens of K-theory. Our methods apply equally well to the
case where $G$ is in one of the four infinite families of classical groups, or
one of the five exceptional groups. The main examples we study in detail are
the four symmetric spaces FII, EIII, EVI, EVIII in Cartan's list of symmetric
spaces. These are, respectively, homogeneous spaces for $F_4$, $E_6$, $E_7$,
$E_8$ with dimensions $16$, $32$, $64$, $128$. They are the four Rosenfeld
projective planes. | John Jones, Dmitriy Rumynin, Adam R. Thomas | 2023-10-22T17:18:20Z | http://arxiv.org/abs/2310.14365v1 | # The classical topological invariants of homogeneous spaces
###### Abstract.
We study the homogeneous spaces of a simply connected, compact, simple Lie group \(G\) through the lens of K-theory. Our methods apply equally well to the case where \(G\) is in one of the four infinite families of classical groups, or one of the five exceptional groups. The main examples we study in detail are the four symmetric spaces FII, EIII, EVI, EVIII in Cartan's list of symmetric spaces. These are, respectively, homogeneous spaces for \(F_{4}\), \(E_{6}\), \(E_{7}\), \(E_{8}\) with dimensions 16, 32, 64, 128. They are the four Rosenfeld projective planes.
## Introduction
Let \(G\) be a compact simply connected simple Lie group. A symmetric space for \(G\) is a homogeneous space of the form \(G/K\) where \(K\) is the fixed point subgroup of an involution of \(G\). Cartan classified the symmetric spaces into seven infinite families, the symmetric spaces for the classical groups, together with twelve symmetric spaces for the exceptional groups. A more conceptual, modern, classification is given in [11] in terms of Freudenthal magic squares.
The classical symmetric spaces are very well understood. For example, the papers of Borel and Hirzebruch [1, 2, 3], Bott and Samelson [14, 2] give a wealth of useful information concerning the cohomology of the classical symmetric spaces, their homotopy groups, and the characteristic classes of the natural bundles over them. In comparison the exceptional symmetric spaces are not at all well understood. The paper by Piccinni [15], in particular Table A, gives an account of what is known and the relevant references.
Here is an example which shows quite how complicated this can be. The simply connected compact Lie group \(E_{8}\) has a 128 dimensional symmetric space but there does not seem to be an explicit presentation of its rational cohomology ring in the literature. To adapt a slogan from [1], we understand compact simply connected simple Lie groups only as far as we understand \(E_{8}\).
Before going into more detail we fix our conventions used throughout the paper.
1. Representations will always be finite-dimensional complex representations, unless explicitly stated otherwise, and K-theory means complex K-theory.
2. A Lie group is automatically compact and connected. All subgroups of Lie groups are assumed to be Lie groups in their own right.
3. We use the term _exceptional Lie group_ for a simply connected simple Lie group of exceptional type.
4. Let \(G\) be a Lie group. We make a choice of a maximal torus \(T\) in \(G\). Then we use the term _fundamental representation_ of \(G\) for an irreducible representation of \(G\) whose highest weight is a fundamental weight for \(T\). If \(G\) is simply connected then there are \(l\) fundamental representations \(\rho_{1},\ldots,\rho_{l}\) of \(G\), where \(l\) is the rank of \(G\), and the representation ring of \(G\) is the polynomial ring \[R(G)=\mathbb{Z}[\rho_{1},\ldots,\rho_{l}].\]
5. Let \(G\) be a simply connected Lie group. Then we use the term _minimal representation_ for a representation of \(G\) that is faithful and irreducible of the minimal dimension.
If \(K\subset G\) is a closed subgroup of \(G\) then the restriction homomorphism is a ring homomorphism \(R(G)\to R(K)\) of representation rings. Our primary strategy is to extract topological information about \(G/K\) from this ring homomorphism. We use K-theory to study topological invariants and to express these invariants in terms of representation theory. In doing so, we take advantage of the usual natural operations, \(\lambda\)-operations \(\lambda^{k}\) and Adams operations \(\psi^{k}\), acting on these rings. Then we use the powerful methods of representation theory, both conceptual and computational, to get our results. This leads to systematic methods that apply equally well to any simply connected Lie group. Our main objective is to apply these methods to understand more about the exceptional symmetric spaces.
There is a significant computational aspect to this approach, with its emphasis on the exceptional symmetric spaces, since some big numbers occur in the representation theory of the exceptional Lie groups. For example, the representation ring of \(E_{8}\) is generated by the eight fundamental representations, with dimensions
\[3875,6696000,6899079264,146325270,2450240,30380,248,147250.\]
The examples we study in detail in this paper are the four Rosenfeld projective planes, FII, EIII, EVI, EVIII in Cartan's list, with dimensions 16, 32, 64, and 128. We plan to study the other exceptional symmetric spaces, in detail, in a follow up to this paper.
The following three results give us a good starting point for understanding topological invariants of a Lie group \(G\), its homogeneous spaces \(G/K\), and its classifying space \(BG\), in terms of K-theory and representation theory.
1. Let \(G\) be a Lie group with \(\pi_{1}(G)\) a free abelian group. Then a theorem due to Luke Hodgkin [10] gives a simple and conceptual description of the K-theory of \(G\) as a functor of the representation ring \(R(G)\): precisely \[\mathrm{K}^{*}(G)=\mathrm{Tor}^{*}_{R(G)}(\mathbb{Z},\mathbb{Z}).\]
2. Suppose that \(K\) is a closed subgroup of \(G\) with the same rank as \(G\). Then again we get a simple and conceptual description of the \(K\)-theory of \(G/K\) using a theorem of Harsh Pittie [11]: precisely \[\mathrm{K}^{*}(G/K)=R(K)\otimes_{R(G)}\mathbb{Z}.\]
3. If \(G\) is any Lie group, the well-known Atiyah-Segal completion theorem [1] gives the K-theory of \(BG\) as a functor of the representation ring of \(G\): \[\mathrm{K}^{*}(BG)=\hat{R}(G)\] where \(\hat{R}(G)\) is the completion of \(R(G)\) at the augmentation ideal.
### Contents
We now give a summary of this paper and try to make the conceptual relations between the various, and quite diverse, sections clear.
**Section 1**.: We start with precise formulations of the theorems of Hodgkin and Pittie and outline the proofs.
**Section 2**.: Next, let \(G\) be a Lie group with free abelian fundamental group. We turn to the study of the Adams operations in \(\mathrm{K}^{*}(G)\). Here is an example which shows why this is important. Representations of \(K\) give vector bundles over \(G/K\) and it is natural to use characteristic classes of these bundles to study the cohomology of \(G/K\). For example, given a topological space \(X\), the Chern character gives an isomorphism of \(\mathbb{Z}/2\)-graded rings
\[\mathrm{ch}:\mathrm{K}^{*}(X)\otimes\mathbb{Q}\to H^{*}(X;\mathbb{Q}).\]
Here \(H^{*}\) means the direct product of the \(H^{k}\). However this does not determine the individual groups \(H^{k}(X;\mathbb{Q})\). To do this we use Adams operations.
According to Hodgkin's theorem the \(\mathbb{Z}/2\)-graded ring \(\operatorname{K}^{*}(G)\) is generated by elements in \(\operatorname{K}^{1}(G)\) and we must explain what we mean by Adams operations in the \(\mathbb{Z}/2\)-graded ring \(\operatorname{K}^{*}(G)\). We use the (standard) terminology \(\Psi\)-ring for a ring \(R\) equipped with a sequence of ring endomorphisms \(\psi^{k}\), \(k\geq 1\) such that \(\psi^{1}=1\) and \(\psi^{k}\psi^{l}=\psi^{kl}\).
The representation ring of a Lie group is an example of a \(\Psi\)-ring. Following Bousfield [Bousfield] we describe the notion of a \(\mathbb{Z}/2\)-graded \(\Psi\)-ring. We then show how the \(\Psi\)-ring structure of \(R(G)\) determines the \(\mathbb{Z}/2\)-graded \(\Psi\)-ring structure of \(\operatorname{K}^{*}(G)\).
**Section 3**.: In [10] Serre proves that a Lie group of rank \(l\) has the rational homotopy type of a product of \(l\) odd-dimensional spheres. The type of \(G\) is this list of odd integers. The main theme in this section is to use a functorial version of the type of \(G\).
Let \(V(G)\) be the indecomposable quotient of the ring \(R(G)\). In other words,
\[V(G)=R(G)/(\mathbb{Z}\oplus I^{2}(G))\]
where \(I(G)\) is the augmentation ideal of \(R(G)\). It is a free abelian group with rank equal to the rank of \(G\). It also has Adams operations. More precisely, it is a \(\Psi\)-module in the usual sense. We show how to calculate the type of \(G\) from the \(\Psi\)-module \(V(G)\).
Next let \(\pi(G)\otimes\mathbb{Q}\) be the direct sum of the nonzero rational homotopy groups of \(G\). We show that there is a natural isomorphism
\[\pi(G)\otimes\mathbb{Q}\to V^{*}(G)\otimes\mathbb{Q}.\]
where \(V^{*}(G)\) is the dual of \(V(G)\). Using Adams operations we can refine this to give a natural description of the individual rational homotopy groups.
The importance of this result is that it is functorial. It gives a way of using representation theory to compute the homomorphism of rational homotopy groups defined by a homomorphism of Lie groups \(h:H\to G\). This turns out to be a very efficient way of doing such calculations.
**Section 4**.: The previous section shows that the \(\Psi\)-module \(V(G)\) is a very useful invariant of \(G\) and in this section we study it in much more detail. Let \(G\) be a simply connected simple Lie group excluding the groups \(\operatorname{Spin}(n)\). Then in Theorem 4.3.1 we show that the \(\Psi\)-module \(V(G)\otimes\mathbb{Q}\) is generated by the element of \(V(G)\otimes\mathbb{Q}\) defined by a minimal representation of \(G\). It follows from Section 3 that the \(\mathbb{Z}/2\)-graded \(\Psi\)-ring \(\operatorname{K}^{*}(G)\otimes\mathbb{Q}\) is generated by a single element. This in turn leads to the following result. Let \(H\) be a Lie group and \(f,g:H\to G\) be two homomorphisms. Let \(u\) be a minimal representation of \(G\). Then the two homomorphisms
\[f_{*},g_{*}:\pi_{*}(H)\otimes\mathbb{Q}\to\pi_{*}(G)\otimes\mathbb{Q}\]
are equal if and only if
\[f^{*}(u)=g^{*}(u)\in V(H)\otimes\mathbb{Q}.\]
In our examples we do not need to know the \(\Psi\)-module structure of \(V(\operatorname{Spin}(n))\). The details are quite intricate so we avoid following this unnecessary, but interesting, diversion.
**Section 5**.: Now let \(G\) be a simply connected simple Lie group, again excluding the groups \(\operatorname{Spin}(n)\). Let \(U\) be the vector bundle over \(BG\) defined by a minimal representation of \(G\). In this section we apply the results of Section 4 to show that the rational cohomology of \(BG\) is generated by the classes \(\operatorname{ch}_{q}(U)\) with \(q\geq 1\). Here \(\operatorname{ch}_{q}(U)\in H^{2q}(BG;\mathbb{Q})\) is the degree \(2q\) component of the Chern character of \(U\). This is obvious for the classical groups. But it tells us something very useful in the case of the exceptional groups. More generally, let \(\rho\) be a representation of \(G\) and let \(E_{\rho}\) be the vector bundle over \(BG\) defined by \(\rho\). We give a necessary and sufficient condition on \(\rho\) which ensures that the elements \(\operatorname{ch}_{q}(E_{\rho})\) with \(q\geq 1\) generate the rational cohomology of \(BG\).
Now let \(K\) be a closed connected subgroup of \(G\), with the same rank as \(G\). Then we use this result to distill a classical theorem of Borel into the following presentation of \(H^{*}(G/K;\mathbb{Q})\). Let
\(\pi:BK\to BG\) be the map defined by the inclusion of \(K\) in \(G\). Then we can choose \(BK\) in such a way that \(\pi\) is a fibre bundle with fibre \(G/K\). Let \(j:G/K\to BK\) be the inclusion of a fibre. The homomorphism \(j^{*}:H^{*}(BK;\mathbb{Q})\to H^{*}(G/K;\mathbb{Q})\) is surjective since \(K\) and \(G\) have the same rank. Then
\[j^{*}(\operatorname{ch}_{q}(\pi^{*}(U))=0\quad\text{for }q\geq 1\]
since \(j^{*}\pi^{*}(U)\) is a trivial bundle. We show that the kernel of \(j^{*}\) is the ideal \(I\) generated by \(\pi^{*}(\operatorname{ch}_{q}(U))\) for \(q\geq 1\). Therefore \(j^{*}\) defines an isomorphism
\[H^{*}(BK;\mathbb{Q})/I\to H^{*}(G/K;\mathbb{Q}).\]
We can, of course, replace the \(\operatorname{ch}_{q}\) by the Chern classes \(c_{q}\).
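As a toy illustration of this presentation (an example we add here purely for orientation; it is far simpler than the cases treated later), take \(G=\mathrm{SU}(2)\) and \(K=T\) a maximal torus, so that \(G/K=S^{2}\). Then \(H^{*}(BT;\mathbb{Q})=\mathbb{Q}[x]\) with \(x\) of degree 2, the minimal representation \(U=\mathbb{C}^{2}\) restricts to \(t\oplus t^{-1}\), and
\[\pi^{*}(\operatorname{ch}(U))=e^{x}+e^{-x}=2+x^{2}+\tfrac{1}{12}x^{4}+\cdots,\]
so the ideal \(I\) generated by the components \(\operatorname{ch}_{q}\) with \(q\geq 1\) is \((x^{2})\), and \(j^{*}\) induces the expected isomorphism \(\mathbb{Q}[x]/(x^{2})\to H^{*}(S^{2};\mathbb{Q})\).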
**Section 6**.: This section contains a key idea. We can make a very clean transition from K-theory and rational homotopy groups to rational cohomology by using the theory of _rationally elliptic spaces_ due to Felix, Halperin, and Thomas [15]. Any homogeneous space is a rationally elliptic space. This section is short but important. The theory tells us that once we know the rational homotopy groups of a rationally elliptic space we can read off a lot of structural information about its rational cohomology. For example, in the case where the Euler characteristic of \(\pi_{*}(X)\otimes\mathbb{Q}\) is zero, it gives a general formula for the Poincare series and Euler characteristic of \(H^{*}(X;\mathbb{Q})\).
**Section 7**.: We now focus on the Rosenfeld projective planes. These are the four homogeneous spaces \(G/K\) where \(G\) and \(K\) are given in the following table.
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\(G\) & \(K\) & dim. & (a) & (b) & (c) \\
\hline
\(F_{4}\) & Spin(9) & 16 & FII & \(P^{2}(\mathbb{O})\) & \(R_{4}\) \\
\hline
\(E_{6}\) & Spin\((10)\times_{C_{4}}\operatorname{U}(1)\) & 32 & EIII & \(P^{2}(\mathbb{O}\otimes\mathbb{C})\) & \(R_{5}\) \\
\hline
\(E_{7}\) & Spin\((12)\times_{C_{2}}\operatorname{Sp}(1)\) & 64 & EVI & \(P^{2}(\mathbb{O}\otimes\mathbb{H})\) & \(R_{6}\) \\
\hline
\(E_{8}\) & Spin\((16)/C_{2}\) & 128 & EVIII & \(P^{2}(\mathbb{O}\otimes\mathbb{O})\) & \(R_{7}\) \\
\hline
\end{tabular}

The last three columns are: (a) the label in Cartan's list of symmetric spaces, (b) the Rosenfeld notation, (c) the notation we use. In [15, Table 2] they appear as the exceptional symmetric spaces of Grassmannian type. It is usual to complete this to a list of seven, by setting \(R_{1}=P^{2}(\mathbb{R})\), \(R_{2}=P^{2}(\mathbb{C})\), and \(R_{3}=P^{2}(\mathbb{H})\).
In this section we explain how to calculate the rational homotopy groups of these Rosenfeld projective planes and set out what the theory of rationally elliptic spaces tells us about the rational cohomology these symmetric spaces.
**Section 8**.: Here we provide explicit presentations of the rational cohomology of \(R_{5},R_{6},R_{7}\). We prove the presentation is correct in the case of \(H^{*}(R_{7};\mathbb{Q})\), using the results in Section 5. This is the most complicated and seems to be the first explicit presentation of \(H^{*}(R_{7};\mathbb{Q})\). We do not go into the details of the proofs for \(H^{*}(R_{5};\mathbb{Q})\) and \(H^{*}(R_{6};\mathbb{Q})\) but note that they agree with the results of [14] and [16].
**Section 9**.: In this section we explain how we do the computations in representation theory required to complete the proofs of Theorem 4.3.1 in Section 4 and Lemma 7.0.2 in Section 7.
### Acknowledgments
This research was funded in part by the EPSRC, EP/W000466/1 (Thomas). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
## 1. The K theory of Lie groups and symmetric spaces
### The \(\mathrm{K}\)-theory of Lie groups
Recall that for any connected space \(X\)
\[\mathrm{K}^{1}(X)=[X,\mathrm{U}].\]
Here \([,]\) means homotopy classes of maps and \(\mathrm{U}=\mathrm{U}(\infty)\) is the direct limit of the unitary groups \(\mathrm{U}(n)\) where \(\mathrm{U}(n)\) is embedded as \(\mathrm{U}(n)\oplus 1\) in \(\mathrm{U}(n+1)\).
Now suppose \(G\) is a Lie group and \(\rho:G\to\mathrm{U}(n)\) is a representation of \(G\). We can compose \(\rho\) with the stabilisation homomorphism \(\mathrm{U}(n)\to\mathrm{U}\) to get the element
\[\beta(\rho)\in\mathrm{K}^{1}(G).\]
This gives a homomorphism of abelian groups
\[\beta:R(G)\to\mathrm{K}^{1}(G).\]
In [10] Hodgkin proves that the kernel of this homomorphism is
\[\mathbb{Z}\oplus I^{2}(G)\subseteq R(G).\]
As in the introduction write
\[V(G)=R(G)/(\mathbb{Z}\oplus I^{2}(G))\]
for the indecomposable quotient of the representation ring \(R(G)\). It is a free abelian group of rank equal to the rank of \(G\). We will often identify \(V(G)\) with a subgroup of \(\mathrm{K}^{1}(G)\) using \(\beta\). We can now state Hodgkin's theorem [10, Theorem 1].
**Theorem 1.1.1**.: _Let \(G\) be a Lie group with \(\pi_{1}(G)\) free abelian. Then_
\[\mathrm{K}^{*}(G)=\Lambda^{*}(V(G))\]
_is the \(\mathbb{Z}/2\)-graded exterior algebra over \(\mathbb{Z}\) generated by \(V(G)\subseteq\mathrm{K}^{1}(G)\)._
The first step in Hodgkin's proof is to show that \(\mathrm{K}^{*}(G)\) is torsion free. It follows from this that \(\mathrm{K}^{*}(G)\) is a \(\mathbb{Z}/2\)-graded Hopf algebra over \(\mathbb{Z}\), and this allows him to use facts about Hopf algebras to complete the proof. In [1], Atiyah gives a different way of completing the proof, assuming that \(\mathrm{K}^{*}(G)\) is torsion free, by using the Chern character.
This result can be reformulated to say that
\[\mathrm{K}^{*}(G)=\mathrm{Tor}^{*}_{R(G)}(\mathbb{Z},\mathbb{Z}),\]
as \(\mathbb{Z}/2\)-graded rings.
### The \(\mathrm{K}\) theory of symmetric spaces
Let \(G\) be a Lie group and let \(K\) be closed subgroup of \(G\). In [1] Atiyah and Hirzebruch describe a natural homomorphism
\[\alpha:R(K)\to\mathrm{K}^{0}(G/K)\]
This homomorphism associates to a representation \(\rho:K\to\mathrm{U}(n)\) the homogeneous vector bundle \(V_{\rho}=G\times_{\rho}\mathbb{C}^{n}\). It is clear that if the representation \(\rho\) extends to \(G\) then \(V_{\rho}\) is isomorphic to the product bundle \(G/K\times\mathbb{C}^{n}\). In this way we get a homomorphism
\[\alpha:R(K)\otimes_{R(G)}\mathbb{Z}\to\mathrm{K}^{0}(G/K).\]
Atiyah and Hirzebruch conjectured that if \(G\) and \(K\) have the same rank then this ring homomorphism is surjective. In [12] (see also [11]), Harsh Pittie proves the following result.
**Theorem 1.2.1**.: _Let \(G\) be a Lie group such that \(\pi_{1}(G)\) is a free abelian group. Let \(K\subseteq G\) be a closed connected subgroup of \(G\) with the same rank as \(G\). Then_
\[\alpha:R(K)\otimes_{R(G)}\mathbb{Z}\to\mathrm{K}^{0}(G/K)\]
_is a ring isomorphism and_
\[\mathrm{K}^{1}(G/K)=0.\]
Pittie does not explicitly state that \(\mathrm{K}^{1}(G/K)=0\) but his method obviously proves it, as we explain in the next section.
### On the proofs
The above results are both consequences of the Kunneth theorem, due to Luke Hodgkin [10], in Atiyah-Segal equivariant K-theory. Let \(G\) be a Lie group, such that \(\pi_{1}(G)\) is free abelian. Let \(X\) and \(Y\) be compact \(G\)-spaces. There is a spectral sequence with \(E_{2}\) page
\[E_{2}^{p,q}=\mathrm{Tor}_{R(G)}^{p,q}(\mathrm{K}_{G}^{*}(X),\mathrm{K}_{G}^{*} (Y))\Longrightarrow\mathrm{K}_{G}^{*}(X\times Y).\]
The differentials have the form
\[d_{r}:E_{r}^{p,q}\to E_{r}^{p-r,q+r-1}.\]
Here \(p\in\mathbb{Z}\) and \(p\geq 0\) and \(q\in\mathbb{Z}/2\) since \(\mathrm{K}_{G}^{*}\) is \(\mathbb{Z}/2\)-graded.
For example the \(E_{2}\) page has two rows
\[E_{2}^{p,0} =\mathrm{Tor}_{R(G)}^{p}(\mathrm{K}_{G}^{0}(X),\mathrm{K}_{G}^{0} (Y))\oplus\mathrm{Tor}_{R(G)}^{p}(\mathrm{K}_{G}^{1}(X),\mathrm{K}_{G}^{1}(Y))\] \[E_{2}^{p,1} =\mathrm{Tor}_{R(G)}^{p}(\mathrm{K}_{G}^{0}(X),\mathrm{K}_{G}^{1 }(Y))\oplus\mathrm{Tor}_{R(G)}^{p}(\mathrm{K}_{G}^{1}(X),\mathrm{K}_{G}^{0}(Y)).\]
Hodgkin was only able to prove the convergence of the spectral sequence in special cases. This was also studied by Snaith [14]. The proof that it converges when \(\pi_{1}(G)\) is a free abelian group was completed by McLeod in [12].
For example, when \(X=G/K\) and \(Y=G\) we can use the isomorphisms
\[\mathrm{K}_{G}^{*}(G/K)=R(K),\quad\mathrm{K}_{G}^{*}(G)=\mathbb{Z},\quad \mathrm{K}_{G}^{*}(G/K\times G)=\mathrm{K}^{*}(G/K)\]
to get a spectral sequence
\[E_{2}^{p,q}=\mathrm{Tor}_{R(G)}^{p,q}(R(K),\mathbb{Z})\Longrightarrow\mathrm{K }^{*}(G/K).\]
Pittie proves Theorem 1.2.1 by showing that, with the given hypotheses, this spectral sequence collapses at the \(E_{2}\) page. In [13] Steinberg proves, with the given hypotheses, that \(R(K)\) is a free module over \(R(G)\). This clearly proves both statements in Theorem 1.2.1.
A proof of Theorem 1.1.1 is given by taking \(X=Y=G\) to get a spectral sequence
\[E_{2}^{p,q}=\mathrm{Tor}_{R(G)}^{p,q}(\mathbb{Z},\mathbb{Z})\Longrightarrow \mathrm{K}^{*}(G).\]
The homomorphism \(\beta:R(G)\to\mathrm{K}^{1}(G)\) shows that every element in \(E_{2}^{1,*}\) is an infinite cycle. Since \(E_{2}^{*,*}\) is multiplicatively generated by \(E_{2}^{1,*}\), this proves that the spectral sequence collapses at the \(E_{2}\) page.
## 2. Adams operations in \(\mathrm{K}^{*}(G)\)
Hodgkin's theorem tells us that if \(G\) is a Lie group with \(\pi_{1}(G)\) free abelian, the \(\mathbb{Z}/2\)-graded ring \(\mathrm{K}^{*}(G)\) is generated by elements in \(\mathrm{K}^{1}(G)\). In this section we explain what we mean by Adams operations in a \(\mathbb{Z}/2\)-graded ring, and extend Hodgkin's theorem by working out the Adams operations in \(\mathrm{K}^{*}(G)\).
### \(\Psi\)-modules
By a \(\Psi\)-module, we mean an abelian group \(A\) equipped with a sequence of endomorphisms \(\psi^{k}\), \(k\geq 1\) such that
\[\psi^{1}=1,\quad\psi^{k}\psi^{l}=\psi^{kl}.\]
Such operations are usually referred to as Adams operations.
We need some notation. Let \(A\) be a \(\Psi\)-module. An element \(a\in A\) has _weight_ \(r\) if \(\psi^{k}a=k^{r}a\) for all \(k\). Let \(\mathbb{Z}(r)\) be the \(\Psi\)-module freely generated (as an abelian group) by a single element of weight \(r\). In this notation
\[\tilde{\mathrm{K}}^{0}(S^{2r})=\mathbb{Z}(r)\]
as \(\Psi\)-modules (where \(\tilde{\mathrm{K}}^{0}(X)\) denotes the usual reduced K-theory of \(X\)). We also use the notation
\[\mathbb{Z}(r_{1},\ldots,r_{l})=\mathbb{Z}(r_{1})\oplus\cdots\oplus\mathbb{Z}( r_{l})\]
\[\mathbb{Q}(r_{1},\dots,r_{l})=\mathbb{Z}(r_{1},\dots,r_{l})\otimes\mathbb{Q}.\]
Here is an example which is important for us. We take for granted the usual theory of \(\Lambda\)-rings (see [1]). Let \(R\) be a \(\Lambda\)-ring with augmentation ideal \(I\) and suppose \(I^{2}=0\). Then as a ring
\[R=\mathbb{Z}\oplus I\]
with product
\[(n+x)(m+y)=nm+ny+mx\]
where \(n,m\in\mathbb{Z}\), and \(x,y\in I\). It follows that if \(x,y\in I\)
\[\lambda^{k}(x+y)=\lambda^{k}(x)+\lambda^{k}(y),\quad\lambda^{k}(xy)=\lambda^{ k}(x)\lambda^{k}(y)=0.\]
Furthermore if \(x\in I\)
\[\lambda^{k}(\lambda^{l}(x))=\lambda^{kl}(x).\]
Finally for \(n\in\mathbb{Z}\) and \(x\in I\)
\[\lambda^{k}(n+x)=\sum_{i=0}^{k}\binom{n}{k-i}\lambda^{i}(x).\]
So the whole theory of \(\Lambda\)-operations in \(R\) boils down to the fact that the \(\lambda^{k}\)-operations define a \(\Psi\)-module structure on \(I\). The inductive formula for the Adams operations shows that if \(x\in I\) then
\[\psi^{k}(x)=(-1)^{k+1}k\lambda^{k}(x).\]
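To spell this out (a routine verification we include for completeness): Newton's formula reads
\[\psi^{k}(x)-\lambda^{1}(x)\,\psi^{k-1}(x)+\lambda^{2}(x)\,\psi^{k-2}(x)-\cdots+(-1)^{k-1}\lambda^{k-1}(x)\,\psi^{1}(x)+(-1)^{k}k\,\lambda^{k}(x)=0,\]
and for \(x\in I\) every product \(\lambda^{i}(x)\psi^{k-i}(x)\) with \(1\leq i\leq k-1\) lies in \(I^{2}=0\), so only the extreme terms survive and \(\psi^{k}(x)=(-1)^{k+1}k\lambda^{k}(x)\).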
If \(n\in\mathbb{Z}\) and \(x\in I\) then \(\psi^{k}(n+x)=n+\psi^{k}(x)\), and so, in general, it is not equal to \((-1)^{k+1}k\lambda^{k}(n+x)\).
If we start with the \(\Lambda\)-ring \(R(G)\) where \(G\) is a Lie group we can form the \(\Lambda\)-ring
\[S(G)=R(G)/I^{2}(G).\]
Then the \(\Lambda\)-ring structure is determined by the \(\Psi\)-module structure of the augmentation ideal \(I(G)/I^{2}(G)\) of \(S(G)\). We now have two quotient homomorphisms
\[R(G)\to S(G)\to V(G)\]
where, as in the introduction, \(V(G)=R(G)/(\mathbb{Z}\oplus I^{2}(G))\). Now \(\mathbb{Z}\oplus I^{2}(G)\) is a sub-\(\Psi\)-module of \(R(G)\), so \(V(G)\) inherits a \(\Psi\)-module structure. The first quotient is a map of \(\Lambda\)-rings and the second is a map of \(\Psi\)-modules. It is straightforward to check that the composition
\[I(G)/I^{2}(G)\to S(G)\to V(G)\]
is an isomorphism of \(\Psi\)-modules.
### \(\mathbb{Z}/2\)-graded \(\Psi\)-rings
By definition, a \(\Psi\)-ring is a ring equipped with ring endomorphisms \(\psi^{k}\) such that \(\psi^{1}=1\) and \(\psi^{k}\psi^{l}=\psi^{kl}\). Any \(\Lambda\)-ring becomes a \(\Psi\)-ring where the \(\psi^{k}\) are the Adams operations associated to the \(\lambda\)-operations.
In [1] Bousfield gives a thorough account of the general theory of \(\mathbb{Z}/2\)-graded \(\Lambda\)-rings. Here we give a self-contained account of the basic theory of \(\mathbb{Z}/2\)-graded \(\Psi\)-rings, which is adequate for our applications. This is a pared down version of the theory, but as Bousfield points out, if the ring is torsion free the Adams operations determine the \(\lambda\)-operations.
A \(\mathbb{Z}/2\)_-graded \(\Psi\)-ring_ is a \(\mathbb{Z}/2\)-graded ring \(A^{*}\), which is strictly graded commutative, and is equipped with \(\Psi\)-module structures \(\psi^{k}\) on \(A^{0}\), and \(\phi^{k}\) on \(A^{1}\) with the following properties.
1. The operations \(\psi^{k}\) make \(A^{0}\) into a \(\Psi\)-ring.
2. For \(x\in A^{0}\) and \(y\in A^{1}\), \(\phi^{k}(xy)=\psi^{k}(x)\phi^{k}(y)\).
3. For \(x,y\in A^{1},\psi^{k}(xy)=k\phi^{k}(x)\phi^{k}(y)\).
The ring \(\mathrm{K}^{0}(X)\) is a \(\Lambda\)-ring and the corresponding Adams operations make \(\mathrm{K}^{0}(X)\) into a \(\Psi\)-ring. We define operations \(\phi^{k}\) on \(\mathrm{K}^{1}(X)\) which give a \(\Psi\)-module structure on \(\mathrm{K}^{1}(X)\).
The ring \(\mathrm{K}^{0}(\Sigma X)\) is a \(\Lambda\)-ring with augmentation ideal \(\tilde{\mathrm{K}}^{0}(\Sigma X)\). If \(x,y\in\tilde{\mathrm{K}}^{0}(\Sigma X)\) then \(xy=0\). So it follows that the operations \(\lambda^{k}\) define a \(\Psi\)-module structure on \(\tilde{\mathrm{K}}^{0}(\Sigma X)\). Now we use the standard natural isomorphism
\[\mathrm{K}^{1}(X)\to\tilde{\mathrm{K}}^{0}(\Sigma X)\]
to transfer these operations on \(\tilde{\mathrm{K}}^{0}(\Sigma X)\) to \(\mathrm{K}^{1}(X)\). Define
\[\phi^{k}:\mathrm{K}^{1}(X)\to\mathrm{K}^{1}(X)\]
to be the endomorphism, of abelian groups, defined by
\[(-1)^{k+1}\lambda^{k}:\tilde{\mathrm{K}}^{0}(\Sigma X)\to\tilde{\mathrm{K}}^{ 0}(\Sigma X).\]
This defines a \(\Psi\)-module structure on \(\mathrm{K}^{1}(X)\). The sign convention is the one needed to get the correct signs in the following lemma.
**Lemma 2.2.1**.: _The operations \(\psi^{k}\) on \(\mathrm{K}^{0}(X)\) and \(\phi^{k}\) on \(\mathrm{K}^{1}(X)\) make \(\mathrm{K}^{*}(X)\) into a \(\mathbb{Z}/2\)-graded \(\Psi\)-ring._
Proof.: The proof is an exercise in the theory of products in K-theory, see [1, Chapter 2, Section 2.6]. The basic product is the external tensor product
\[\mathrm{K}^{0}(X)\otimes\mathrm{K}^{0}(Y)\to\mathrm{K}^{0}(X\times Y).\]
For this particular exercise it is best to use the reduced version
\[\tilde{\mathrm{K}}^{0}(X)\otimes\tilde{\mathrm{K}}^{0}(Y)\to\tilde{\mathrm{K}} ^{0}(X\wedge Y).\]
We assume that \(X,Y\) are connected, and we have chosen base points \(x_{0}\in X\), \(y_{0}\in Y\). By definition
\[X\wedge Y=X\times Y/(X\times y_{0}\cup x_{0}\times Y).\]
Next take \(X=Y\) and use the reduced diagonal \(\Delta:X\to X\wedge X\) to get the internal product
\[\tilde{\mathrm{K}}^{0}(X)\otimes\tilde{\mathrm{K}}^{0}(X)\to\tilde{\mathrm{K}} ^{0}(X)\]
Finally replace \(X\) by \(S^{p}\wedge X\) and \(Y\) by \(S^{q}\wedge X\) and use the appropriate suspension of \(\Delta\) to get the product
\[\tilde{\mathrm{K}}^{0}(S^{p}\wedge X)\otimes\tilde{\mathrm{K}}^{0}(S^{q} \wedge X)\to\tilde{\mathrm{K}}^{0}(S^{p}\wedge X\wedge S^{q}\wedge X)\to \tilde{\mathrm{K}}^{0}(S^{p+q}\wedge X).\]
We have to interchange \(X\) and \(S^{q}\), which introduces the sign \((-1)^{q}\), and use the standard identification of \(S^{p}\wedge S^{q}\) with \(S^{p+q}\).
Now use Bott periodicity to identify \(\tilde{\mathrm{K}}^{0}(S^{p}\wedge X)\) with \(\tilde{\mathrm{K}}^{0}(X)\) if \(p\) is even, or with \(\mathrm{K}^{1}(X)\) if \(p\) is odd. We end up with products
\[\mathrm{K}^{1}(X)\otimes\tilde{\mathrm{K}}^{0}(X)\to\tilde{\mathrm{K}}^{0}(X),\quad\mathrm{K}^{1}(X)\otimes\mathrm{K}^{1}(X)\to\tilde{\mathrm{K}}^{0}(X).\]
The verification of the formulas in the above definition follow from the fact that the Adams operations are ring homomorphisms of the product in \(\mathrm{K}^{0}(X)\), the definition of the operations \(\phi^{k}\) in \(\mathrm{K}^{1}(X)\) and the relation between the Adams operations and Bott periodicity.
### The \(\mathbb{Z}/2\)-graded \(\Psi\)-ring \(\mathrm{K}^{*}(G)\)
As usual, we assume that \(G\) is a Lie group with \(\pi_{1}(G)\) free abelian.
**Theorem 2.3.1**.: _The \(\mathbb{Z}/2\)-graded \(\Psi\)-ring \(\mathrm{K}^{*}(G)\) is isomorphic to \(\Lambda(V(G))\), the \(\mathbb{Z}/2\)-graded exterior algebra over \(\mathbb{Z}\), generated by \(V(G)\subseteq\mathrm{K}^{1}(G)\) equipped with the unique \(\mathbb{Z}/2\)-graded \(\Psi\)-ring structure determined by the \(\Psi\)-module structure on \(V(G)\)._
The proof of Theorem 2.3.1 follows directly from the following lemma. Recall that both \(R(G)\) and \(\mathrm{K}^{1}(G)\) are \(\Psi\)-modules.
**Lemma 2.3.2**.: _The homomorphism \(\beta:R(G)\to\mathrm{K}^{1}(G)=\tilde{\mathrm{K}}^{0}(\Sigma G)\) is a homomorphism of \(\Psi\)-modules._
Proof.: The isomorphism \(\mathrm{K}^{1}(X)\to\tilde{\mathrm{K}}^{0}(\Sigma X)\) is defined by the so-called clutching construction in \(\mathrm{K}\)-theory. Given a space \(X\) we take two copies \(C_{\pm}(X)\) of the cone on \(X\) and glue them together along \(X\) to get
\[\Sigma X=C_{+}(X)\cup_{X}C_{-}(X).\]
Suppose we have a map \(f:X\to U(n)\). This give us an isomorphism \(\hat{f}:X\times\mathbb{C}^{n}\to X\times\mathbb{C}^{n}\) and we can form the vector bundle
\[V(f)=C_{+}(X)\times\mathbb{C}^{n}\cup_{\hat{f}}C_{-}(X)\times\mathbb{C}^{n}\]
over \(\Sigma X\). If we replace \(f\) by \(\lambda^{k}(f):X\to\mathrm{U}(\lambda^{k}(\mathbb{C}^{n}))\) it is clear that
\[\lambda^{k}(V(f))=V(\lambda^{k}(f)).\]
It is straightforward to follow through the arguments of [1, Chapter 3, Section 3.1] in this special case to convert this observation into a proof of the lemma.
In the case of a Lie group \(G\) the \(\alpha,\beta\) constructions of Section 1 are related in the following way. The join \(G*G\) has a standard free action of \(G\) and \((G*G)/G\) is homotopy equivalent to \(\Sigma G\), the suspension of \(G\). This gives us a map \(\alpha:R(G)\to\mathrm{K}^{0}((G*G)/G)=\mathrm{K}^{0}(\Sigma G)\). This map is clearly a map of \(\Lambda\)-rings. On the other hand we have the map \(\beta:R(G)\to\mathrm{K}^{1}(G)\). Now we can compare these two maps by using the isomorphism \(\sigma:\mathrm{K}^{1}(G)\to K^{0}(\Sigma G)\) defined by the clutching construction. It is not difficult to show that the following diagram commutes
\[\begin{CD}R(G)@>{\alpha}>{}>\mathrm{K}^{0}(\Sigma G)\\ @V{\beta}V{}V@V{}V{p}V\\ \mathrm{K}^{1}(G)@>{}>{\sigma}>\tilde{\mathrm{K}}^{0}(\Sigma G)\end{CD}\]
where \(p:\mathrm{K}^{0}(\Sigma G)\to\tilde{\mathrm{K}}^{0}(\Sigma G)\) is the projection. This gives a different proof of Lemma 2.3.2 in this special case.
## 3. The type of a Lie group
Let \(G\) be a Lie group of rank \(l\). Serre proves in [1] that \(G\) is rationally homotopy equivalent to a product of odd dimensional spheres, \(S^{2r_{1}-1}\times\cdots\times S^{2r_{l}-1}\), where
1. \(1\leq r_{1}\leq r_{2}\leq\cdots\leq r_{l}\),
2. \(\dim G=(2r_{1}-1)+\cdots+(2r_{l}-1)\).
The _type_ of \(G\) is the sequence
\[(2r_{1}-1,\ldots,2r_{l}-1).\]
Using this result, it is straightforward to calculate the \(\mathbb{Z}/2\)-graded \(\Psi\)-ring \(\mathrm{K}^{*}(G)\otimes\mathbb{Q}\) and the \(\Psi\)-module \(V(G)\otimes\mathbb{Q}\).
**Theorem 3.0.1**.: _Let \(G\) be a Lie group of rank \(l\) with \(\pi_{1}(G)\) free abelian. Let \((2r_{1}-1,\ldots,2r_{l}-1)\) be the type of \(G\). There are isomorphisms_
1. \[\mathrm{K}^{*}(G)\otimes\mathbb{Q}\to\Lambda_{\mathbb{Q}}(a_{1},\ldots,a_{l}),\] _where the weight of_ \(a_{i}\) _is_ \(r_{i}\)_, and_
2. \[V(G)\otimes\mathbb{Q}\to\mathbb{Q}(r_{1},\ldots,r_{l}).\]
Proof.: We know that \(\mathrm{K}^{1}(S^{2r-1})=\mathbb{Z}(a)\) where \(a\) has weight \(r\). The result now follows directly from the Kunneth formula for \(\mathrm{K}\)-theory.
This tells us how to compute the type of \(G\) from the representation ring of \(G\): we must compute the eigenvalues of the Adams operations on \(V(G)\).
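Here is a small worked example, included purely for illustration. Take \(G=\mathrm{SU}(3)\), with fundamental representations \(\rho_{1}\) (the standard representation) and \(\rho_{2}=\lambda^{2}\rho_{1}\). Writing \(x,y\in V(G)\) for the classes of \(\rho_{1},\rho_{2}\), the relations \(\psi^{2}(\rho_{1})=\rho_{1}^{2}-2\rho_{2}\) and \(\psi^{2}(\rho_{2})=\rho_{2}^{2}-2\rho_{1}\) (since \(\lambda^{2}\rho_{2}\cong\rho_{1}\)) give, modulo decomposables, \(\psi^{2}(x)=6x-2y\) and \(\psi^{2}(y)=-2x+6y\). The eigenvalues of this matrix are \(2^{2}\) and \(2^{3}\), so the type of \(\mathrm{SU}(3)\) is \((3,5)\), in accordance with the rational equivalence of \(\mathrm{SU}(3)\) with \(S^{3}\times S^{5}\). The numerical step is trivial, but it illustrates the general recipe (a sketch of ours; the matrix is the one just computed by hand):

```python
import numpy as np

# psi^2 on V(SU(3)) tensored with Q, in the basis {[rho_1], [rho_2]} derived above
psi2 = np.array([[6.0, -2.0],
                 [-2.0, 6.0]])
weights = np.log2(np.linalg.eigvalsh(psi2))   # eigenvalues 2^r recover the weights r
print(sorted(2 * w - 1 for w in weights))     # -> [3.0, 5.0], the type of SU(3)
```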
### \(V(G)\) and the rational homotopy groups of \(G\)
If \(X\) is a space and \(f:S^{n}\to X\) is a map then we have the induced homomorphism
\[f^{*}:\mathrm{K}^{*}(X)\to\mathrm{K}^{*}(S^{n}).\]
This clearly defines a homomorphism
\[d:\pi_{n}(X)\to\mathrm{Hom}_{\Psi}(\mathrm{K}^{*}(X),\mathrm{K}^{*}(S^{n})),\]
which is the Adams \(d\)-invariant.
We now examine the \(d\)-invariant
\[d:\pi_{2r-1}(G)\otimes\mathbb{Q}\to\mathrm{Hom}_{\Psi}(\mathrm{K}^{1}(G), \mathbb{Q}(r)).\]
If \(f:S^{2r-1}\to G\) then \(f^{*}:\mathrm{K}^{1}(G)\to\mathrm{K}^{1}(S^{2r-1})\) vanishes on non-trivial products so it is uniquely determined by its restriction to the generators \(V(G)\subseteq\mathrm{K}^{1}(G)\). So we abuse notation and regard \(d(f)\) as an element
\[d(f)\in\mathrm{Hom}_{\Psi}(V(G),\mathbb{Q}(r)).\]
Finally, we rename \(\mathrm{Hom}_{\Psi}(V(G),\mathbb{Q}(r))\) as \(V_{r}^{*}(G)\otimes\mathbb{Q}\) since it is the weight \(r\) subspace of the \(\Psi\)-module \(V^{*}(G)\otimes\mathbb{Q}\) dual to the \(\Psi\)-module \(V(G)\).
**Theorem 3.1.1**.: _Let \(G\) be a Lie group such that \(\pi_{1}(G)\) is torsion free. The homomorphism_
\[d:\pi_{2r-1}(G)\otimes\mathbb{Q}\to V_{r}^{*}(G)\otimes\mathbb{Q}.\]
_is an isomorphism._
Proof.: If we replace \(G\) by an odd sphere \(S^{2r-1}\) this result is straightforward. The full result follows because \(G\) is rationally homotopy equivalent to a product of odd dimensional spheres.
Suppose \(h:H\to G\) is a homomorphism of Lie groups. Then Theorem 3.1.1 gives a general method of computing the homomorphism \(h_{*}:\pi_{2r-1}(H)\otimes\mathbb{Q}\to\pi_{2r-1}(G)\otimes\mathbb{Q}\) in terms of representation theory. Explicitly, the following diagram commutes
\[\begin{CD}\pi_{2r-1}(H)\otimes\mathbb{Q}@>{h_{*}}>{}>\pi_{2r-1}(G)\otimes \mathbb{Q}\\ @V{d}V{}V@V{}V{d}V\\ V_{r}^{*}(H)\otimes\mathbb{Q}@>{}>{h_{*}}>V_{r}^{*}(G)\otimes\mathbb{Q}\end{CD}\]
where the horizontal arrows are the linear maps induced by \(h\) and the vertical ones are the isomorphisms in Theorem 3.1.1.
### The rational homotopy groups of a homogeneous space
Let \(G\) be a Lie group. It follows from Serre's theorem that \(\pi_{p}(G)\otimes\mathbb{Q}=0\) if \(p\) is even. Now suppose \(K\) is a compact connected subgroup of \(G\). It follows that the exact sequence in rational homotopy groups of the principal \(K\)-bundle \(G\to G/K\) splits up into exact sequences
\[0\to\pi_{2r}(G/K)\otimes\mathbb{Q}\to\pi_{2r-1}(K)\otimes\mathbb{Q}\to\pi_{2r -1}(G)\otimes\mathbb{Q}\to\pi_{2r-1}(G/K)\otimes\mathbb{Q}\to 0\]
for \(r\geq 1\). We refer to these exact sequences as the _four-term exact sequences_. Using Theorem 3.1.1 we can rewrite these exact sequences as
\[0\to\pi_{2r}(G/K)\otimes\mathbb{Q}\to V(K)_{r}^{*}\otimes\mathbb{Q}\to V(G)_{ r}^{*}\otimes\mathbb{Q}\to\pi_{2r-1}(G/K)\otimes\mathbb{Q}\to 0.\]
This proves the following result.
**Theorem 3.2.1**.: _Let \(G\) be a Lie group with a subgroup \(K\). Let \(i:K\to G\) be the inclusion of \(K\) and let_
\[i_{*}:V^{*}(K)\otimes\mathbb{Q}\to V^{*}(G)\otimes\mathbb{Q}\]
_be the homomorphism of \(\Psi\)-modules induced by \(i\). Then there are natural isomorphisms_
\[\pi_{2r}(G/K)\otimes\mathbb{Q} \to\ker(i_{*}:V^{*}(K)_{r}\otimes\mathbb{Q}\to V^{*}(G)_{r} \otimes\mathbb{Q}),\] \[\pi_{2r-1}(G/K)\otimes\mathbb{Q} \to\operatorname{coker}(i_{*}:V^{*}(K)_{r}\otimes\mathbb{Q} \to V^{*}(G)_{r}\otimes\mathbb{Q}).\]
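As a simple check of the theorem (an illustration of our own), take \(G=\mathrm{SU}(2)\) and \(K=T\) a maximal torus, so \(G/K=S^{2}\). Here \(V^{*}(T)\otimes\mathbb{Q}=\mathbb{Q}(1)\) and \(V^{*}(G)\otimes\mathbb{Q}=\mathbb{Q}(2)\). In weight 1 the map \(i_{*}\) is \(\mathbb{Q}\to 0\), giving \(\pi_{2}(S^{2})\otimes\mathbb{Q}=\mathbb{Q}\) and \(\pi_{1}(S^{2})\otimes\mathbb{Q}=0\); in weight 2 it is \(0\to\mathbb{Q}\), giving \(\pi_{4}(S^{2})\otimes\mathbb{Q}=0\) and \(\pi_{3}(S^{2})\otimes\mathbb{Q}=\mathbb{Q}\), as expected.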
## 4. The \(\Psi\)-module structure of \(V(G)\)
In this section we study the \(\Psi\)-module \(V(G)\) in detail and give two applications to the \(K\)-theory and rational homotopy groups of \(G\). We use the notation and terminology for \(\Psi\)-modules described in Section 2. We say that a \(\Psi\)-module is cyclic if it is generated, as a \(\Psi\)-module, by one element.
### A basic lemma
**Lemma 4.1.1**.:
1. _The_ \(\Psi\)_-module_ \(\mathbb{Q}(r_{1},\ldots,r_{l})\) _is a cyclic_ \(\Psi\)_-module if and only if the_ \(r_{i}\) _are distinct._
2. _Suppose_ \(r_{1},\ldots,r_{l}\) _are distinct. Then_ \(u\) _is a generator of the_ \(\Psi\)_-module_ \(\mathbb{Q}(r_{1},\ldots,r_{l})\) _if and only if_ \(u=u_{1}+\cdots+u_{l}\) _where_ \(u_{i}\) _is a non-zero element of_ \(\mathbb{Q}(r_{1},\ldots,r_{l})\) _with weight_ \(r_{i}\)_._
Proof.: Let \(r\) be one of the \(r_{i}\) and let \(W_{r}\) be the weight \(r\) subspace of \(\mathbb{Q}(r_{1},\ldots,r_{l})\). So the dimension of \(W_{r}\) is the number of \(r_{i}\) equal to \(r\). Let \(p_{r}:\mathbb{Q}(r_{1},\ldots,r_{l})\to W_{r}\) be the standard projection onto \(W_{r}\). Suppose \(\mathbb{Q}(r_{1},\ldots,r_{l})\) is a cyclic \(\Psi\)-module. Then, since \(p_{r}\) is a map of \(\Psi\)-modules, \(W_{r}\) is also a cyclic \(\Psi\)-module, generated by some element \(w\). Since \(\psi^{k}w=k^{r}w\) for all \(k\), the \(\Psi\)-module generated by \(w\) is the line spanned by \(w\), and so the dimension of \(W_{r}\) must be one. It follows that the \(r_{1},\ldots,r_{l}\) must be distinct.
Recall that for a linear map \(A:V\to V\), a vector \(v\in V\) is called a cyclic vector for \(A\) if
\[v,A(v),\ldots,A^{l-1}(v)\]
is a basis for \(V\).
The eigenvalues of \(\psi^{k}\) are \(k^{r_{1}},\ldots,k^{r_{l}}\). If \(r_{1},\ldots,r_{l}\) are distinct then, for any fixed \(k\geq 2\), these eigenvalues are distinct and it is straightforward to check that there exists a cyclic vector for \(\psi^{k}\). If \(u\) is such a cyclic vector, it follows that
\[u,\psi^{k}u,\psi^{k^{2}}u,\ldots,\psi^{k^{l-1}}u\]
are a basis for \(\mathbb{Q}(r_{1},\ldots,r_{l})\) and so \(\mathbb{Q}(r_{1},\ldots,r_{l})\) is generated as a \(\Psi\)-module by \(u\).
This completes the proof of the first part of the lemma. The second part follows directly from the above characterisation of cyclic vectors.
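As a quick illustration of the cyclic-vector argument, the following numerical sketch (Python/NumPy, not part of the original text; the chosen weights are arbitrary) checks that for distinct weights a vector with all coordinates non-zero is cyclic for the diagonal action of \(\psi^{k}\).

```python
import numpy as np

r = [1, 3, 5, 7]        # assumed distinct weights (illustrative choice)
k = 2
A = np.diag([float(k)**ri for ri in r])   # psi^k acts as diag(k^{r_i})
u = np.ones(len(r))                        # all weight components non-zero
# Columns u, A u, A^2 u, ..., A^{l-1} u: a Vandermonde matrix in the k^{r_i}.
M = np.column_stack([np.linalg.matrix_power(A, j) @ u for j in range(len(r))])
print(np.linalg.det(M))   # non-zero, so u is a cyclic vector
```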
### Applications
For this section we assume that \(G\) is a simply connected simple Lie group that is not isomorphic to \(\operatorname{Spin}(4n)\).
**Theorem 4.2.1**.: _The \(\Psi\)-module \(V(G)\otimes\mathbb{Q}\) is cyclic._
Proof.: Theorem 3.0.1 shows that the \(\Psi\)-module \(V(G)\otimes\mathbb{Q}\) is \(\mathbb{Q}(r_{1},\ldots,r_{l})\) where the type of \(G\) is \((2r_{1}-1,\ldots,2r_{l}-1)\). The only simply connected simple Lie groups with a repeated integer in their type are the groups \(\operatorname{Spin}(4n)\).
**Corollary 4.2.2**.: _The \(\mathbb{Z}/2\)-graded \(\Psi\)-ring \(\operatorname{K}^{*}(G)\otimes\mathbb{Q}\) is generated by one element._
Proof.: This follows immediately from Theorem 4.2.1.
**Corollary 4.2.3**.: _Let \(H\) be a Lie group and suppose \(f,g:H\to G\) are homomorphisms of Lie groups. Let \(u\in R(G)\) be such that the element of \(V(G)\otimes\mathbb{Q}\) defined by \(u\) generates the \(\Psi\)-module \(V(G)\otimes\mathbb{Q}\). Then the two homomorphisms_
\[f_{*},g_{*}:\pi_{*}(H)\otimes\mathbb{Q}\to\pi_{*}(G)\otimes\mathbb{Q}\]
_are equal if and only if_
\[f^{*}(u)=g^{*}(u)\in V(H)\otimes\mathbb{Q}.\]
Proof.: Since \(V(G)\otimes\mathbb{Q}\) is generated as a \(\Psi\)-module by the element defined by \(u\), it follows that \(f^{*}(u)=g^{*}(u)\in V(H)\otimes\mathbb{Q}\) if and only if the homomorphisms \(f^{*},g^{*}:V(G)\otimes\mathbb{Q}\to V(H)\otimes\mathbb{Q}\) are equal. The result now follows directly from Theorem 3.1.1.
### A 'most convenient' generator of the cyclic \(\Psi\)-module \(V(G)\)
We now assume that \(G\) is a simply connected simple Lie group with the exception of the groups \(\operatorname{Spin}(n)\).
**Theorem 4.3.1**.: _The element of \(V(G)\otimes\mathbb{Q}\) defined by a minimal representation of \(G\) is a generator of the \(\Psi\)-module \(V(G)\otimes\mathbb{Q}\)._
This result for the classical groups \(\operatorname{SU}(n)\) and \(\operatorname{Sp}(n)\) is straightforward. Indeed the \(\Lambda\)-ring \(R(\operatorname{SU}(n))\) is generated by its usual representation on \(\mathbb{C}^{n}\). This is obviously a faithful representation. It follows that \(V(\operatorname{SU}(n))\otimes\mathbb{Q}\) is generated as a \(\Psi\)-module by the element defined by this representation. The same argument applies to \(R(\operatorname{Sp}(n))\). It is generated as a \(\Lambda\)-ring by its usual representation on \(\mathbb{H}^{n}=\mathbb{C}^{2n}\) and this is obviously faithful.
The case of \(G_{2}\) is also straightforward. The fundamental representations are \(v\), the \(7\)-dimensional representation on the imaginary octonions, and \(a\), the \(14\)-dimensional adjoint representation. So \(R(G_{2})\) is the polynomial ring \(\mathbb{Z}[v,a]\). The representation \(v\) is obviously faithful and it is easy to check that
\[\lambda^{2}v=v+a.\]
So the \(\Lambda\)-ring \(R(G_{2})\) is generated by \(v\). (As a check on dimensions, \(\dim\lambda^{2}v=\binom{7}{2}=21=7+14=\dim v+\dim a\).)
In Section 9, we prove this for \(F_{4},E_{6},E_{7},E_{8}\) by computation using Magma. Some computational input is inevitable, simply because of the \(8\) large numbers given by the dimensions of the fundamental representations of \(E_{8}\), but there is more to it than that, as explained in Section 9.
## 5. On the rational cohomology of \(BG\) and \(G/K\)
We now use the results of the previous section to throw some light on the rational cohomology of \(BG\) and the homogeneous spaces \(G/K\) where \(G\) is an exceptional Lie group.
### On the rational cohomology of \(BG\)
Let \(G\) be an exceptional Lie group of type
\[(2r_{1}-1,\ldots,2r_{l}-1)\]
where \(l\) is the rank of \(G\). Then the integers \(r_{1},\ldots,r_{l}\) are distinct. It is well known, see [1], that
\[H^{*}(BG;\mathbb{Q})=\mathbb{Q}[a_{2j}:j=r_{1},\ldots,r_{l}],\]
where \(a_{2j}\in H^{2j}(BG;\mathbb{Q})\). Of course these generators are not unique but we only need one generator in each of the dimensions \(2j\) for \(j=r_{1},\ldots,r_{l}\).
From now on, if \(\rho\) is a representation of \(G\) we will write \(\operatorname{ch}_{k}(\rho)\in H^{2k}(BG;\mathbb{Q})\) for the \(k\)-th component of the Chern character of the vector bundle over \(BG\) associated to \(\rho\). We extend this convention to any characteristic class and indeed to the associated bundle over \(X/G\), where \(G\) acts freely on \(X\).
We now ask what might constitute a good choice of generators? Our basic strategy is to use representation theory to do computations. So we are led to the following question. Is there a representation \(u\) of \(G\) such that the elements \(\operatorname{ch}_{k}(u)\) where \(k\geq 1\) generate the ring \(H^{*}(BG;\mathbb{Q})\)?
**Theorem 5.1.1**.: _If \(u\in R(G)\), then \(H^{*}(BG;\mathbb{Q})\) is generated as a \(\mathbb{Q}\)-algebra by_
\[\operatorname{ch}_{r_{1}}(u),\dots,\operatorname{ch}_{r_{l}}(u)\]
_if and only if the element of \(V(G)\otimes\mathbb{Q}\) defined by \(u\) generates \(V(G)\otimes\mathbb{Q}\) as a \(\Psi\)-module._
Proof.: If \(X\) is a topological space and \(x\in\operatorname{K}^{0}(X)\) then
\[\operatorname{ch}_{q}(\psi^{k}x)=k^{q}\operatorname{ch}_{q}(x).\]
So we make \(H^{\operatorname{ev}}(X;\mathbb{Q})\) into a \(\Psi\)-ring by defining Adams operations as follows. For \(y\in H^{2q}(X;\mathbb{Q})\) define \(\psi^{k}(y)\) by
\[\psi^{k}(y)=k^{q}y.\]
It follows that if \(X\) is homotopy equivalent to a CW complex with a finite number of cells in each dimension then
\[\operatorname{ch}:\operatorname{K}^{0}(X)\otimes\mathbb{Q}\to H^{ \operatorname{ev}}(X;\mathbb{Q})\]
is an isomorphism of \(\Psi\)-rings.
We use the ad-hoc notation \(W(G)\) for the indecomposable quotient of the \(\Psi\)-ring \(H^{\operatorname{ev}}(BG;\mathbb{Q})\). It is isomorphic to the \(\Psi\)-module \(\mathbb{Q}(r_{1},\dots,r_{l})\). We get a basis for this \(\Psi\)-module by choosing homogeneous generators \(a_{r_{1}},\dots,a_{r_{l}}\) for the graded ring \(H^{*}(BG;\mathbb{Q})\). We retain the notation
\[\operatorname{ch}:V(G)\otimes\mathbb{Q}\to W(G)\]
for the homomorphism of \(\Psi\)-modules defined by the Chern character. This homomorphism is an isomorphism of \(\Psi\)-modules, since \(\operatorname{ch}:\operatorname{K}^{0}(BG)\otimes\mathbb{Q}\to H^{ \operatorname{ev}}(BG;\mathbb{Q})\) is an isomorphism of \(\Psi\)-rings. Lemma 4.1.1 completes the proof.
Combining this theorem with Theorem 4.3.1 leads to the following result.
**Theorem 5.1.2**.: _Let \(G\) be an exceptional Lie group with type \((2r_{1}-1,\dots,2r_{l}-1)\). Let \(u\) be a minimal representation of \(G\). Then \(H^{*}(BG;\mathbb{Q})\) is generated as a \(\mathbb{Q}\)-algebra by_
\[\operatorname{ch}_{r_{1}}(u),\dots,\operatorname{ch}_{r_{l}}(u).\]
### On the rational cohomology of homogeneous spaces
Let \(G\) be a Lie group with free abelian fundamental group and let \(K\subseteq G\) be a closed subgroup. Let \(i:K\to G\) be the inclusion and \(\pi=Bi:BK\to BG\) the map of classifying spaces defined by \(i\). Then there is the usual fibration
\[G/K\xrightarrow{j}\;BK\xrightarrow{\pi}\;BG\]
The induced map of cohomology \(\pi^{*}:H^{*}(BG;\mathbb{Q})\to H^{*}(BK;\mathbb{Q})\) makes \(H^{*}(BK;\mathbb{Q})\) into a module over \(H^{*}(BG;\mathbb{Q})\). Then we have the following theorem, see [1, Section 26].
**Theorem 5.2.1**.: _Let \(G\) be an exceptional Lie group and let \(u\in R(G)\) be a minimal representation of \(G\). Let \(K\) be a closed connected subgroup of \(G\), \(i:K\to G\) be the inclusion of \(K\), and \(\pi=Bi\) be the map of classifying spaces defined by \(i\). Finally, let \(j:G/K\to BK\) be the inclusion of a fibre of \(\pi=Bi:BK\to BG\). Suppose the rank of \(K\) is equal to the rank of \(G\). Then the rational cohomology of \(G/K\) is given by_
\[H^{*}(G/K;\mathbb{Q})=H^{*}(BK;\mathbb{Q})/I\]
_where \(I\) is the ideal in \(H^{*}(BK;\mathbb{Q})\) generated by the elements_
\[\operatorname{ch}_{q}(i^{*}u),\quad q\geq 1.\]
Proof.: Consider the following commutative diagram.
\[\begin{CD}R(K)\otimes\mathbb{Q}@>{}>{}>{\rm K}^{*}(G/K)\otimes\mathbb{Q}@>{\rm ch}>{}>H^{*}(G/K;\mathbb{Q})\\ @| @A{j^{*}}A{}A @A{}A{j^{*}}A\\ R(K)\otimes\mathbb{Q}@>{}>{}>{\rm K}^{*}(BK)\otimes\mathbb{Q}@>{\rm ch}>{}>H^{*}(BK;\mathbb{Q})\end{CD}\]
By Theorem 1.2.1, \(R(K)\otimes\mathbb{Q}\to{\rm K}^{*}(G/K)\otimes\mathbb{Q}\) is surjective. Since the Chern character is a rational isomorphism it follows that the composite of the two homomorphisms in the top row is surjective. The commutativity of the diagram shows that the right hand vertical arrow is surjective. A standard application of the Leray-Hirsch theorem tells us that \(j^{*}:H^{*}(BK;\mathbb{Q})\to H^{*}(G/K;\mathbb{Q})\) is surjective with kernel the ideal in \(H^{*}(BK;\mathbb{Q})\) generated by the elements
\[\pi^{*}(x),\text{ for }x\in H^{q}(BG),q\geq 1.\]
Applying Theorem 5.1.2 completes the proof.
### An example
The Grassmann manifold \(G_{k}(\mathbb{C}^{n})\) is the homogeneous space
\[{\rm U}(n)/({\rm U}(k)\times{\rm U}(n-k)).\]
We assume that \(k\leq n-k\).
Write \(V_{m}\) for the universal vector bundle over \(B{\rm U}(m)\). This is the bundle defined by the usual action of \({\rm U}(m)\) on \(\mathbb{C}^{m}\). There are two obvious bundles \(V_{k}\) and \(V_{n-k}\) on \(B({\rm U}(k)\times{\rm U}(n-k))\) and \(H^{*}(B({\rm U}(k)\times{\rm U}(n-k));\mathbb{Q})\) is the polynomial algebra generated by
\[a_{2}={\rm ch}_{1}(V_{k}),\ldots,a_{2k}={\rm ch}_{k}(V_{k}),\]
\[b_{2}={\rm ch}_{1}(V_{n-k}),\ldots,b_{2(n-k)}={\rm ch}_{n-k}(V_{n-k}).\]
The restriction of \(V_{n}\) to \(B({\rm U}(k)\times{\rm U}(n-k))\) is \(V_{k}\oplus V_{n-k}\). So the relations given by Theorem 5.2.1 are simply
\[{\rm ch}_{i}(V_{k})+{\rm ch}_{i}(V_{n-k})=0\quad\text{for }1\leq i\leq n.\]
This gives a presentation of \(H^{*}(G_{k}(\mathbb{C}^{n});\mathbb{Q})\) with \(n\) generators and \(n\) relations.
We should decode these relations. The first \(k\) relations simply say that
\[a_{2i}+b_{2i}=0\quad\text{for }1\leq i\leq k\]
Now recall that \({\rm ch}_{k+1}(V_{k})\) is not necessarily zero. But it is a polynomial in \({\rm ch}_{1}(V_{k}),\ldots,{\rm ch}_{k}(V_{k})\). So \({\rm ch}_{k+1}(V_{k})+{\rm ch}_{k+1}(V_{n-k})=0\) tells us that \(b_{2k+2}\) is a polynomial in \(a_{2},\ldots,a_{2k}\). This carries on to show that \(b_{2k+2},\ldots,b_{2(n-k)}\) are also polynomials in \(a_{2},\ldots,a_{2k}\). These polynomials are given by the Newton identities for symmetric polynomials. For \(i=n-k+1,\ldots,n\) the equations
\[{\rm ch}_{i}(V_{k})+{\rm ch}_{i}(V_{n-k})=0\]
tell us that a particular polynomial in \(a_{2},\ldots,a_{2k}\) is equal to another particular polynomial in \(b_{2},\ldots,b_{2(n-k)}\). Now when we substitute the previous formulas for the \(b_{2i}\) in terms of the \(a_{2j}\) this shows that two polynomials in the \(a_{2i}\) are equal. This gives \(k\) relations in the \(a_{2i}\). We end up with a presentation with \(k\) generators and \(k\) relations.
This calculation does not use the Schubert cell decomposition of the Grassmann manifold.
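The following short sympy sketch (illustrative, not from the paper; the rank-2 case is chosen only for brevity) carries out this Newton-identity computation for \(k=2\), confirming that \(\operatorname{ch}_{3}\) of a rank-2 bundle equals \(\tfrac{1}{2}\operatorname{ch}_{1}\operatorname{ch}_{2}-\tfrac{1}{12}\operatorname{ch}_{1}^{3}\).

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')   # formal Chern roots of a rank-2 bundle
roots = [x1, x2]

def ch(m):
    # m-th Chern character component: (sum of m-th powers of the roots)/m!
    return sp.expand(sum(x**m for x in roots) / sp.factorial(m))

ch1, ch2, ch3 = ch(1), ch(2), ch(3)
# ch_3 of a rank-2 bundle is a polynomial in ch_1 and ch_2:
print(sp.simplify(ch3 - (ch1*ch2/2 - ch1**3/12)))   # 0
```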
## 6. Rationally elliptic spaces.
A simply connected topological space is _rationally elliptic_ if both \(H^{*}(X;\mathbb{Q})\) and \(\pi_{*}(X)\otimes\mathbb{Q}\) are finite dimensional. Any compact simply connected homogeneous space is rationally elliptic. The space \(S^{3}\lor S^{3}\) is a simple example of a space that is not rationally elliptic: its rational cohomology is finite dimensional but its rational homotopy is not. The space \(\mathbb{CP}^{\infty}\) is not rationally elliptic: its rational homotopy is finite dimensional but its rational cohomology is not. If \(X\) is rationally elliptic we write
\[\chi_{H}(X),\quad\chi_{\pi}(X)\]
for the Euler characteristic of \(H^{*}(X;\mathbb{Q})\) and \(\pi_{*}(X)\otimes\mathbb{Q}\) respectively.
### The main structural theorem on rationally elliptic spaces
A very thorough account of the theory of rationally elliptic spaces can be found in [10, Section 32] together with references to the original papers. The following theorem is a combination of Proposition 32.10 and Proposition 32.16 in [10].
**Theorem 6.1.1**.: _Suppose that \(X\) is a rationally elliptic space. Then_
\[\chi_{H}(X)\geq 0,\quad\chi_{\pi}(X)\leq 0.\]
_Furthermore, the following statements are equivalent:_
1. \(\chi_{H}(X)>0\)_,_
2. \(H^{*}(X;\mathbb{Q})\) _is concentrated in even degrees,_
3. \(H^{*}(X;\mathbb{Q})\) _is the quotient of a polynomial ring_ \(\mathbb{Q}[a_{1},\dots,a_{q}]\) _where the_ \(a_{i}\) _have even degree, by an ideal generated by a regular sequence of length_ \(q\)_,_
4. \(\dim\pi_{\text{even}}(X)\otimes\mathbb{Q}=\dim\pi_{\text{odd}}(X)\otimes \mathbb{Q}\)_._
It follows that the minimum number of generators, \(q\) in (iii), is equal to \(\dim\pi_{\text{even}}(X)\otimes\mathbb{Q}\). Furthermore, a minimal set of generators in a fixed degree \(2s\) is in one-to-one correspondence with a basis for \(\pi_{2s}(X)\otimes\mathbb{Q}\), and a minimal set of relations in degree \(2t\) is in one-to-one correspondence with a basis of \(\pi_{2t-1}(X)\otimes\mathbb{Q}\).
### The Poincare polynomial of a rationally elliptic space with positive Euler characteristic
Now suppose \(X\) is a rationally elliptic space with positive Euler characteristic. Let
\[q=\dim\pi_{\text{ev}}(X)\otimes\mathbb{Q}=\dim\pi_{\text{odd}}(X)\otimes \mathbb{Q}.\]
and let \((2n_{1},\dots,2n_{q})\), \((2m_{1}-1,\dots,2m_{q}-1)\) be the degrees of a basis of the rational homotopy groups of \(X\). Then the Poincare polynomial of \(X\) is
\[P(t)=\frac{(1-t^{2m_{1}})\cdots(1-t^{2m_{q}})}{(1-t^{2n_{1}})\cdots(1-t^{2n_{q}})}.\]
This follows from the Theorem 6.1.1 and the following standard algebraic fact. Let \(A\) be a graded commutative algebra over \(\mathbb{Q}\) of the form
\[A=\mathbb{Q}[a_{1},\dots,a_{q}]/(r_{1},\dots,r_{q})\]
where the \(a_{i}\) have even degree and the \(r_{i}\) form a regular sequence. Then the Poincare polynomial of \(A\) is
\[\frac{(1-t^{|r_{1}|})\cdots(1-t^{|r_{q}|})}{(1-t^{|a_{1}|})\cdots(1-t^{|a_{q}| })}.\]
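A minimal sympy sketch of this standard fact (the helper name `poincare_series` is ours, not the paper's):

```python
import sympy as sp

t = sp.symbols('t')

def poincare_series(gen_degs, rel_degs):
    # Poincare series of Q[a_1,...,a_q]/(r_1,...,r_q) for a regular sequence,
    # depending only on the degrees |a_i| and |r_i|.
    P = sp.Integer(1)
    for d in rel_degs:
        P *= 1 - t**d
    for d in gen_degs:
        P /= 1 - t**d
    return sp.cancel(P)

# Example: CP^3 = U(4)/(U(3) x U(1)): one generator in degree 2, one relation in degree 8.
print(sp.expand(poincare_series([2], [8])))   # 1 + t**2 + t**4 + t**6
```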
## 7. The rational homotopy groups of the Rosenfeld projective planes
The rational homotopy groups of the Rosenfeld projective planes were first calculated by Svjetlana Terzic in [12]. We give a proof following the main strategy of this paper, using Theorem 3.2.1 and a computation in representation theory.
**Theorem 7.0.1**.: _The non-zero rational homotopy groups of \(R_{5}\), \(R_{6}\) and \(R_{7}\) are given by_
1. \(\pi_{n}(R_{5})\otimes\mathbb{Q}=\mathbb{Q}\)_, if_ \(n=2,8,17,23\)_,_
2. \(\pi_{n}(R_{6})\otimes\mathbb{Q}=\mathbb{Q}\)_, if_ \(n=4,8,12,23,27,35\)_,_
3. \(\pi_{n}(R_{7})\otimes\mathbb{Q}=\mathbb{Q}\)_, if_ \(n=8,12,16,20,35,39,47,59\)_._
Recall from [1] that there are homomorphisms
\[h_{6}:\operatorname{Spin}(10)\to E_{6},\quad h_{7}:\operatorname{Spin}(12) \to E_{7},\quad h_{8}:\operatorname{Spin}(16)\to E_{8}.\]
The first two are injective and the kernel of \(h_{8}\) is a central subgroup of \(\operatorname{Spin}(16)\) of order \(2\) which does not contain the kernel of the universal covering \(\operatorname{Spin}(16)\to\operatorname{SO}(16)\).
**Lemma 7.0.2**.:
1. _The kernel of_ \((h_{6})_{*}:V^{*}(\operatorname{Spin}(10))\to V^{*}(E_{6})\) _has dimension_ \(1\)_._
2. _The kernel of_ \((h_{7})_{*}:V^{*}(\operatorname{Spin}(12))\to V^{*}(E_{7})\) _has dimension_ \(2\)_._
3. _The kernel of_ \((h_{8})_{*}:V^{*}(\operatorname{Spin}(16))\to V^{*}(E_{8})\) _has dimension_ \(4\)_._
The proof is computational and we postpone it until Section 9.
### The rational homotopy groups of \(R_{5}\)
Recall that \(R_{5}=E_{6}/(\operatorname{Spin}(10)\times_{C_{4}}S^{1})\). Let \(N_{5}\) be the homogeneous space \(E_{6}/\operatorname{Spin}(10)\), so \(N_{5}\) is an \(S^{1}\)-bundle over \(R_{5}\).
First we use the four-term exact sequences of Section 3.2 to compute the rational homotopy groups of \(N_{5}\). The type of \(\operatorname{Spin}(10)\) is
\[(3,7,9,11,15)\]
and the type of \(E_{6}\) is
\[(3,9,11,15,17,23)\]
The following table summarises the four term exact sequences for \(N_{5}\).
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(r\) & \(3\) & \(7\) & \(9\) & \(11\) & \(15\) & \(17\) & \(23\) \\ \hline \(\pi_{r+1}(N_{5})\) & & \(\mathbb{Q}\) & & & & & \\ \hline \(\pi_{r}(\operatorname{Spin}(10))\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & & \\ \hline \(\pi_{r}(E_{6})\) & \(\mathbb{Q}\) & & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) \\ \hline \(\pi_{r}(N_{5})\) & & & & & & \(\mathbb{Q}\) & \(\mathbb{Q}\) \\ \hline \end{tabular}
Our conventions are
1. homotopy groups are rational homotopy groups,
2. a blank entry in the table means the relevant group is \(0\), as does not appearing in the table.
The two middle rows are the homotopy groups of \(\operatorname{Spin}(10)\) and \(E_{6}\), so we must calculate the homomorphism
\[\pi_{r}(\operatorname{Spin}(10))\to\pi_{r}(E_{6})\]
for \(r=3,7,9,11,15\). When \(r=3\) this homomorphism has to be an isomorphism and for \(r=7\) it has to be zero, so it has a one dimensional kernel. Using Theorem 3.2.1 and Lemma 7.0.2 we know that the kernel of \(\pi_{*}(\operatorname{Spin}(10))\otimes\mathbb{Q}\to\pi_{*}(E_{6})\otimes\mathbb{Q}\) is one dimensional. It must follow that for \(r=9,11,15\) this homomorphism is injective and hence an isomorphism. This gives the complete table.
A simple argument with the homotopy exact sequence of the circle bundle \(N_{5}\to R_{5}\) completes the calculation of the rational homotopy groups of \(R_{5}\).
### The rational homotopy groups of \(R_{6}\)
Recall that \(R_{6}=E_{7}/(\operatorname{Spin}(12)\times_{C_{2}}S^{3})\). Let \(N_{6}\) be the homogeneous space \(E_{7}/\operatorname{Spin}(12)\), so \(N_{6}\) is an \(S^{3}\) bundle over \(R_{6}\).
As before we begin by computing the homotopy groups of \(N_{6}\) using the four-term exact sequences. The type of \(\operatorname{Spin}(12)\) is
\[(3,7,11,11,15,19)\]
and the type of \(E_{7}\) is
\[(3,11,15,19,23,27,35).\]
The following table, with the same conventions as the previous case, summarises the four term exact sequences of \(N_{6}\).

\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \(r\) & \(3\) & \(7\) & \(11\) & \(15\) & \(19\) & \(23\) & \(27\) & \(35\) \\ \hline \(\pi_{r+1}(N_{6})\) & & \(\mathbb{Q}\) & \(\mathbb{Q}\) & & & & & \\ \hline \(\pi_{r}(\operatorname{Spin}(12))\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\oplus\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & & & \\ \hline \(\pi_{r}(E_{7})\) & \(\mathbb{Q}\) & & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) \\ \hline \(\pi_{r}(N_{6})\) & & & & & & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) \\ \hline \end{tabular}
This time we must calculate the homomorphism
\[\pi_{r}(\operatorname{Spin}(12))\to\pi_{r}(E_{7}).\]
Elementary arguments with the four-term exact sequences show that when \(r=3\) this must be an isomorphism and when \(r=7,11\) it must have one-dimensional kernel. Once more we know from Theorem 3.2.1 and Lemma 7.0.2 that the kernel of \(\pi_{*}(\operatorname{Spin}(12))\otimes\mathbb{Q}\to\pi_{*}(E_{7})\otimes\mathbb{Q}\) is two-dimensional. Therefore the entries in the top row must be \(0\) from column \(15\) onwards. Finally the homotopy exact sequence of the \(S^{3}\) bundle \(N_{6}\to R_{6}\) completes the proof.
### The rational homotopy groups of \(R_{7}\)
First we must explain what the half-spin or semi-spin group \(\operatorname{HSpin}(2n)\) is. The centre of \(\operatorname{Spin}(4n)\) is \(C_{2}\times C_{2}\). So there are three non-trivial central subgroups of order two in \(\operatorname{Spin}(4n)\). If \(4n\neq 8\) the outer automorphism group of \(\operatorname{Spin}(4n)\) is cyclic of order two. The non-trivial outer automorphism interchanges two of these central subgroups, and this gives two different, but isomorphic, \(C_{2}\) quotients of \(\operatorname{Spin}(4n)\). These are the half-spin groups. It is usual to choose one to work with and call this \(\operatorname{HSpin}(4n)\). The third quotient is \(\operatorname{SO}(4n)\). In the case of \(\operatorname{Spin}(8)\) the outer automorphism group is the symmetric group \(\Sigma_{3}\) and this acts transitively on the three central subgroups of order \(2\). So all three \(C_{2}\) quotients of \(\operatorname{Spin}(8)\) are isomorphic to \(\operatorname{SO}(8)\).
Now \(R_{7}=E_{8}/\operatorname{HSpin}(16)\), the type of \(\operatorname{HSpin}(16)\) is
\[(3,7,11,15,15,19,23,27),\]
and the type of \(E_{8}\) is
\[(3,15,23,27,35,39,47,59)\]
The following table, with the usual conventions, summarises the conclusions of the calculations with the four-term exact sequences for \(R_{7}\).
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \(r\) & \(3\) & \(7\) & \(11\) & \(15\) & \(19\) & \(23\) & \(27\) & \(35\) & \(39\) & \(47\) & \(59\) \\ \hline \(\pi_{r+1}(R_{7})\) & & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & & & & & & \\ \hline \(\pi_{r}(\operatorname{HSpin}(16))\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\oplus\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & & & & \\ \hline \(\pi_{r}(E_{8})\) & \(\mathbb{Q}\) & & & \(\mathbb{Q}\) & & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) \\ \hline \(\pi_{r}(R_{7})\) & & & & & & & & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) & \(\mathbb{Q}\) \\ \hline \end{tabular}
This time we must compute
\[\pi_{r}(\operatorname{HSpin}(16))\to\pi_{r}(E_{8}).\]
The usual arguments with the four-term exact sequences show that it must be an isomorphism when \(r=3\) and have one dimensional kernel when \(r=7,11,15,19\). As in the other cases the proof is completed by Theorem 3.2.1 and Lemma 7.0.2.
### What does the theory of elliptic spaces tell us?
Applying the results in Section 6, we read off the following information about the rational cohomology of the Rosenfeld projective planes \(R_{5},R_{6},R_{7}\).
1. \[H^{*}(R_{5},\mathbb{Q})=\frac{\mathbb{Q}[a_{2},a_{8}]}{(r_{18},r_{24})}\]
\[P=\left(\frac{1-t^{18}}{1-t^{2}}\right)\left(\frac{1-t^{24}}{1-t^{8}}\right), \quad\chi=27.\]
2. \[H^{*}(R_{6},\mathbb{Q})=\frac{\mathbb{Q}[a_{4},a_{8},a_{12}]}{(r_{24},r_{28},r_{36})}\]
\[P=\left(\frac{1-t^{24}}{1-t^{8}}\right)\left(\frac{1-t^{28}}{1-t^{4}}\right) \left(\frac{1-t^{36}}{1-t^{12}}\right),\quad\chi=63.\]
3. \[H^{*}(R_{7},\mathbb{Q})=\frac{\mathbb{Q}[a_{8},a_{12},a_{16},a_{20}]}{(r_{36}, r_{40},r_{48},r_{60})}\]
\[P=\left(\frac{1-t^{36}}{1-t^{12}}\right)\left(\frac{1-t^{40}}{1-t^{8}}\right) \left(\frac{1-t^{48}}{1-t^{16}}\right)\left(\frac{1-t^{60}}{1-t^{20}}\right), \quad\chi=135.\]
In all three cases the subscripts on the generators and the relations are the degrees. Furthermore, in all three cases the relations form a regular sequence.
These results agree with [10, Table A] which also contains references to other proofs. It is worth repeating here that one of our main aims is to give a systematic account of these results. In the next section we show how to get explicit presentations for the rings \(H^{*}(R_{5};\mathbb{Q})\), \(H^{*}(R_{6};\mathbb{Q})\) and \(H^{*}(R_{7};\mathbb{Q})\) by using Theorem 5.2.1.
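The Euler characteristics quoted above can be recomputed from the degrees alone; the following sympy sketch (ours, not part of the paper) evaluates \(\chi_{H}=\lim_{t\to 1}P(t)\) in the three cases.

```python
import sympy as sp

t = sp.symbols('t')

def euler_char(gen_degs, rel_degs):
    # chi_H = lim_{t->1} of the Poincare polynomial built from the degrees
    P = sp.Integer(1)
    for d in rel_degs:
        P *= 1 - t**d
    for d in gen_degs:
        P /= 1 - t**d
    return sp.limit(P, t, 1)

print(euler_char([2, 8], [18, 24]))                   # R_5: 27
print(euler_char([4, 8, 12], [24, 28, 36]))           # R_6: 63
print(euler_char([8, 12, 16, 20], [36, 40, 48, 60]))  # R_7: 135
```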
## 8. The rational cohomology of \(R_{5},R_{6},R_{7}\)
### The general strategy
We have the following general situation: two Lie groups \(K\) and \(G\) of the same rank and a homomorphism
\[h:K\to G\]
with finite kernel. From this we get a quotient homomorphism
\[p:K\to\bar{K}=K/\ker(h)\]
and an embedding
\[i:\bar{K}\to G.\]
Let \(R=G/\bar{K}\) and let
\[j:R\to B\bar{K}\]
be the inclusion of \(R\) as a fibre of \(Bi:B\bar{K}\to BG\). This gives a diagram of spaces
Since the kernel of \(p\) is finite \((Bp)^{*}:H^{*}(B\bar{K};\mathbb{Q})\to H^{*}(BK;\mathbb{Q})\) is an isomorphism. We get a surjective homomorphism
\[\phi=j^{*}\circ((Bp)^{*})^{-1}:H^{*}(BK;\mathbb{Q})\to H^{*}(R;\mathbb{Q})\]
and an isomorphism
\[H^{*}(BK;\mathbb{Q})/I\to H^{*}(R;\mathbb{Q})\]
where \(I\) is the kernel of \(\phi\).
In our examples \(G\) is one of \(E_{6},E_{7},E_{8}\) and in these cases we can choose a minimal representation \(w\) of \(G\). From Theorem 5.2.1 it follows that
\[I=(\mathrm{ch}_{q}(h^{*}(w)):q\geq 1).\]
To convert this into an explicit presentation we must compute \(\mathrm{ch}(h^{*}(w))\in H^{*}(BK;\mathbb{Q})\).
We now explain how to calculate characteristic classes using this data; this is used in the detailed study of \(R_{5}\) in [11].
Let \(\rho\) be a (complex) representation of \(\bar{K}\). This gives a representation \(p^{*}(\rho)\) of \(K\) in which \(\ker(h)\) acts trivially. We continue to write \(\rho\) and \(p^{*}(\rho)\) for the vector bundles over \(B\bar{K}\) and \(BK\), respectively, associated to these representations. We write
\[E_{\rho}=j^{*}(\rho)\]
for the complex vector bundle over \(R\) associated to \(\rho\).
It follows from the definition of \(\phi:H^{*}(BK;\mathbb{Q})\to H^{*}(R;\mathbb{Q})\) above that
\[c_{k}(E_{\rho})=\phi(c_{k}(\rho)).\]
This allows us to calculate
\[c_{k}(E_{\rho})\in H^{2k}(R;\mathbb{Z})\mod\text{torsion}.\]
Of course we can use this method to calculate Pontryagin classes, the Chern character, or any other characteristic class.
### The rational cohomology of \(R_{5}\)
Adams shows in [1, Chapter 8] that there is a homomorphism
\[h:\operatorname{Spin}(10)\times\operatorname{U}(1)\to E_{6}\]
such that the kernel of \(h\) is a central subgroup \(C_{4}\). If we first choose one of the two minimal representations \(w\) of \(E_{6}\) then we can subsequently choose \(h\) so that
\[h^{*}(w)=\xi^{4}+\rho_{10}\otimes\xi^{2}+\delta_{10}^{-}\otimes\xi.\]
Here \(\rho_{10}\) is the \(10\)-dimensional vector representation of \(\operatorname{Spin}(10)\), \(\delta_{10}^{\pm}\) are the two spin representations of \(\operatorname{Spin}(10)\), and \(\xi\) is the standard representation of \(\operatorname{U}(1)\) on \(\mathbb{C}\). This relation between representations leads to the relations in the presentation of \(H^{*}(R_{5};\mathbb{Q})\).
By definition
\[R_{5}=E_{6}/(\operatorname{Spin}(10)\times_{C_{4}}\operatorname{U}(1)).\]
We get a surjective homomorphism
\[\phi:H^{*}(B(\operatorname{Spin}(10)\times\operatorname{U}(1));\mathbb{Q}) \to H^{*}(R_{5};\mathbb{Q})\]
whose kernel is generated by the \(\operatorname{ch}_{q}(h^{*}(w))\) with \(q\geq 1\).
Next we compute
\[\operatorname{ch}_{q}(h^{*}(w))\in H^{2q}(B(\operatorname{Spin}(10)\times \operatorname{U}(1));\mathbb{Q})\]
in terms of the Pontryagin classes \(p_{i}(\rho_{10})\in H^{4i}(B\operatorname{Spin}(10);\mathbb{Q})\), the Euler class \(e(\rho_{10})\in H^{10}(B\operatorname{Spin}(10);\mathbb{Q})\), and the first Chern class \(c_{1}(\xi)\) in \(H^{2}(B\operatorname{U}(1);\mathbb{Q})\). This leads to the following theorem.
**Theorem 8.2.1**.: _The cohomology ring \(H^{*}(R_{5};\mathbb{Q})\) is isomorphic to_
\[\frac{\mathbb{Q}[a_{2},a_{8}]}{(r_{18},r_{24})},\]
_where the generators are_
\[a_{2}=\phi(c_{1}(\xi)),\quad a_{8}=\phi(p_{2}(\rho_{10})),\]
_and the relations are_
\[r_{18} =-39936a_{2}^{9}+1728a_{2}^{5}a_{8}+a_{2}a_{8}^{2},\] \[r_{24} =-50429952a_{2}^{12}+3068928a_{2}^{8}a_{8}-11808a_{2}^{4}a_{8}^{2 }-a_{8}^{3}.\]
The integral cohomology of \(R_{5}\) has been calculated by Toda and Watanabe, in [14]. Although our presentation is different to the one in [14, Corollary C] the two presentations define the same ring over \(\mathbb{Q}\).
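As a small consistency check on this presentation (a sketch we add here, not part of the paper), one can verify that \(r_{18}\) and \(r_{24}\) are weighted-homogeneous of degrees 18 and 24 when \(a_{2}\) and \(a_{8}\) are given weights 2 and 8.

```python
import sympy as sp

a2, a8 = sp.symbols('a2 a8')
r18 = -39936*a2**9 + 1728*a2**5*a8 + a2*a8**2
r24 = -50429952*a2**12 + 3068928*a2**8*a8 - 11808*a2**4*a8**2 - a8**3

def weighted_degrees(poly):
    # weighted degrees of the monomials, with |a2| = 2 and |a8| = 8
    return {2*e2 + 8*e8 for (e2, e8) in sp.Poly(poly, a2, a8).monoms()}

print(weighted_degrees(r18), weighted_degrees(r24))   # {18} {24}
```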
### The rational cohomology of \(R_{6}\)
In this case Adams shows in [1, Chapter 8] that there is a homomorphism
\[h:\operatorname{Spin}(12)\times\operatorname{Sp}(1)\to E_{7}\]
such that the kernel of \(h\) is a central subgroup of order \(2\). Now let \(w\) be the minimal representation of \(E_{7}\). Adams also shows that we may choose \(h\) so that
\[h^{*}(w)=\rho_{12}\otimes\zeta+\delta_{12}^{+}.\]
Here \(\rho_{12}\) is the \(12\) dimensional vector representation of \(\operatorname{Spin}(12)\), \(\delta_{12}^{\pm}\) are the spin representations, and \(\zeta\) is the representation of \(\operatorname{Sp}(1)\) on \(\mathbb{H}=\mathbb{C}^{2}\). This relation between representations leads to the relations in the presentation of \(H^{*}(R_{6};\mathbb{Q})\).
By definition
\[R_{6}=E_{7}/(\operatorname{Spin}(12)\times_{C_{2}}\operatorname{Sp}(1)).\]
This gives a surjective homomorphism
\[\phi:H^{*}(B(\operatorname{Spin}(12)\times\operatorname{Sp}(1));\mathbb{Q}) \to H^{*}(R_{6};\mathbb{Q})\]
whose kernel is generated by the \(\operatorname{ch}_{q}(h^{*}(w))\) with \(q\geq 1\). So we compute
\[\operatorname{ch}_{q}(h^{*}(w))\in H^{2q}(B(\operatorname{Spin}(12)\times \operatorname{Sp}(1));\mathbb{Q})\]
in terms of the Pontryagin classes \(p_{i}(\rho_{12})\in H^{4i}(B\!\operatorname{Spin}(12);\mathbb{Q})\), the Euler class \(e(\rho_{12})\in H^{12}(B\!\operatorname{Spin}(12);\mathbb{Q})\), and the Chern class \(c_{2}(\zeta)\in H^{4}(B\!\operatorname{Sp}(1);\mathbb{Q})\).
**Theorem 8.3.1**.: _The cohomology ring \(H^{*}(R_{6};\mathbb{Q})\) is isomorphic to_
\[\frac{\mathbb{Q}[a_{4},a_{8},a_{12}]}{(r_{24},r_{28},r_{36})},\]
_where the generators are_
\[a_{4}=\phi(p_{1}(\rho_{12}))=2\phi(c_{2}(\zeta)),\quad a_{8}=\phi(p_{2}(\rho_{12})), \quad a_{12}=\phi(p_{3}(\rho_{12})),\]
_and the relations are_
\[r_{24} =38367a_{4}^{6}-131436a_{4}^{4}a_{8}+88272a_{4}^{2}a_{8}^{2}-1600a_{8}^{3}-273024a_{4}^{3}a_{12}+55296a_{4}a_{8}a_{12}-10368a_{12}^{2},\] \[r_{28} =63a_{4}^{7}+96a_{4}^{5}a_{8}+48a_{4}^{3}a_{8}^{2}+640a_{4}a_{8}^{3}-1686a_{4}^{4}a_{12}-4656a_{4}^{2}a_{8}a_{12}+160a_{8}^{2}a_{12}-1152a_{4}a_{12}^{2},\] \[r_{36} =19503a_{4}^{9}+41184a_{4}^{7}a_{8}-150816a_{4}^{5}a_{8}^{2}-156672a_{4}^{3}a_{8}^{3}+19200a_{4}a_{8}^{4}-127224a_{4}^{6}a_{12}-\] \[\quad 908448a_{4}^{4}a_{8}a_{12}+264576a_{4}^{2}a_{8}^{2}a_{12}+12800a_{8}^{3}a_{12}-1806336a_{4}^{3}a_{12}^{2}+36864a_{4}a_{8}a_{12}^{2}+18432a_{12}^{3}.\]
In [20], Nakagawa gives a presentation of \(H^{*}(R_{6};\mathbb{Q})\). Ours is different but as in the case of \(R_{5}\) it is not difficult to check that the two presentations give isomorphic algebras over \(\mathbb{Q}\).
### The rational cohomology of \(R_{7}\)
This time, see [1, Chapter 7], there is a homomorphism
\[h:\operatorname{Spin}(16)\to E_{8}\]
such that the kernel of \(h\) is generated by a central element of order \(2\) in \(\operatorname{Spin}(16)\) which is not in the kernel of the covering map \(\operatorname{Spin}(16)\to\operatorname{SO}(16)\). This gives an embedding
\[i:\operatorname{HSpin}(16)\to E_{8}.\]
By definition
\[R_{7}=E_{8}/\operatorname{HSpin}(16)\]
is the Rosenfeld projective plane of dimension \(128\).
Let \(w\) be the adjoint representation of \(E_{8}\). This is the minimal representation of \(E_{8}\). Then we may choose \(h\) so that in the natural notation
\[h^{*}(w)=\lambda^{2}\rho_{16}+\delta_{16}^{+},\]
see [1, Theorem 6.1]. This relation between representations once more leads to the relations in the presentation of \(H^{*}(R_{7};\mathbb{Q})\).
As before we get a surjective ring homomorphism
\[\phi:H^{*}(B\mathrm{Spin}(16);\mathbb{Q})\to H^{*}(R_{7};\mathbb{Q}).\]
This time we compute \(\mathrm{ch}(h^{*}w)\) in terms of the Pontryagin classes and the Euler class of \(\rho_{16}\) in \(H^{*}(B\mathrm{Spin}(16);\mathbb{Q})\).
**Theorem 8.4.1**.: _The cohomology ring \(H^{*}(R_{7};\mathbb{Q})\) is isomorphic to_
\[\frac{\mathbb{Q}[a_{8},a_{12},a_{16},a_{20}]}{(r_{36},r_{40},r_{48},r_{60})},\]
_where the generators are_
\[a_{8}=\phi(p_{2}(\rho_{16})),\quad a_{12}=\phi(p_{3}(\rho_{16})),\quad a_{16}= \phi(p_{4}(\rho_{16})),\quad a_{20}=\phi(p_{5}(\rho_{16})),\]
_and the relations are_
\[r_{36} =275a_{8}^{3}a_{12}-6150a_{8}^{2}a_{20}+5400a_{8}a_{12}a_{16}-756a_{12}^{3}-10800a_{16}a_{20},\] \[r_{40} =275a_{8}^{5}+4080a_{8}^{3}a_{16}+945a_{8}^{2}a_{12}^{2}-26460a_{8}a_{12}a_{20}-25920a_{8}a_{16}^{2}+27216a_{12}^{2}a_{16}+26460a_{20}^{2},\] \[r_{48} =-225875a_{8}^{6}-8037000a_{8}^{4}a_{16}+4233600a_{8}^{3}a_{12}^{2}-23020200a_{8}^{2}a_{12}a_{20}-29160000a_{8}^{2}a_{16}^{2}+\] \[\quad 28576800a_{8}a_{12}^{2}a_{16}-166698000a_{8}a_{20}^{2}+3000564a_{12}^{4}+57153600a_{12}a_{16}a_{20}+466560000a_{16}^{3},\] \[r_{60} =-2868125a_{8}^{6}a_{12}+22312500a_{8}^{5}a_{20}-36945000a_{8}^{4}a_{12}a_{16}-3307500a_{8}^{3}a_{12}^{3}-\] \[\quad 390600000a_{8}^{3}a_{16}a_{20}+222264000a_{8}^{2}a_{12}^{2}a_{20}-243000000a_{8}^{2}a_{12}a_{16}^{2}+71442000a_{8}a_{12}^{3}a_{16}-\] \[\quad 972405000a_{8}a_{12}a_{20}^{2}+1360800000a_{8}a_{16}^{2}a_{20}-18003384a_{12}^{5}+1000188000a_{12}^{2}a_{16}a_{20}-\] \[\quad 699840000a_{12}a_{16}^{3}-463050000a_{20}^{3}.\]
This seems to be the first explicit presentation of \(H^{*}(R_{7};\mathbb{Q})\) in the literature.
### The proof for \(R_{7}\)
The ring \(H^{*}(B\mathrm{Spin}(16);\mathbb{Q})\) is given by
\[H^{*}(B\mathrm{Spin}(16);\mathbb{Q})=\mathbb{Q}[p_{1},p_{2},p_{3},p_{4},e,p_{ 5},p_{6},p_{7}]\]
where the \(p_{i}=p_{i}(\rho_{16})\in H^{4i}(B\mathrm{Spin}(16);\mathbb{Q})\) are the Pontryagin classes of \(\rho_{16}\), the 16-dimensional real vector bundle over \(B\mathrm{Spin}(16)\) and \(e=e(\rho_{16})\in H^{16}(B\mathrm{Spin}(16);\mathbb{Q})\) is the Euler class of the same bundle. Following on from the previous section we now do the calculation of
\[\mathrm{ch}(h^{*}(w))=\mathrm{ch}(\lambda^{2}(\rho_{16}))+\mathrm{ch}(\delta_{1 6}^{+}).\]
We provide some details about this computation in Section 8.6.
We know that \(H^{*}(R_{7};\mathbb{Q})\) is generated by 4 elements of degrees \(8,12,16,20\). Since the map \(\phi:H^{*}(B\mathrm{Spin}(16);\mathbb{Q})\to H^{*}(R_{7};\mathbb{Q})\) is surjective we can take
\[a_{8}=\phi(p_{2}),\quad a_{12}=\phi(p_{3}),\quad a_{20}=\phi(p_{5})\]
as the generators in degree 8, 12, and 20. However, the ring \(H^{*}(B\mathrm{Spin}(16);\mathbb{Q})\) has two generators in degree 16 so we have to make a choice of an element in \(H^{16}(B\mathrm{Spin}(16);\mathbb{Q})\) which maps to a generator in \(H^{16}(R_{7};\mathbb{Q})\).
However, \(\phi(p_{1})=0\) since \(H^{4}(R_{7};\mathbb{Q})\) is zero. By definition \(a_{8}=\phi(p_{2})\), and \(a_{12}=\phi(p_{3})\). The computations show that
\[\mathrm{ch}_{8}(h^{*}(w))=\frac{1}{336}p_{2}^{2}+\frac{1}{28}p_{4}+\frac{1}{2}e \mod p_{1}.\]
Here \(\mathrm{mod}\ p_{1}\) means up to an element of the form \(xp_{1}\). This allows us to make a choice of the 16 dimensional generator and we choose \(a_{16}=\phi(p_{4})\). Since \(\operatorname{ch}_{8}(h^{*}(w))\) lies in the kernel of \(\phi\) and \(\phi(p_{1})=0\), applying \(\phi\) to the displayed formula gives
\[\phi(e)=-\frac{1}{168}a_{8}^{2}-\frac{1}{14}a_{16}.\]
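This last formula is forced by the previous display; a one-line sympy check (ours, not part of the paper) solves \(\tfrac{1}{336}a_{8}^{2}+\tfrac{1}{28}a_{16}+\tfrac{1}{2}\phi(e)=0\) for \(\phi(e)\).

```python
import sympy as sp

a8, a16, phi_e = sp.symbols('a8 a16 phi_e')
# phi kills ch_8(h^*(w)) = p_2^2/336 + p_4/28 + e/2 (mod p_1), and phi(p_1) = 0.
sol = sp.solve(sp.Eq(a8**2/336 + a16/28 + phi_e/2, 0), phi_e)[0]
print(sol)   # -a8**2/168 - a16/14
```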
Next we compute \(\mathrm{ch}_{12}(h^{*}(w))\) as a polynomial in \(p_{1},p_{2},p_{3},p_{4},e,p_{5},p_{6}\); it does not involve \(p_{7}\) since \(p_{7}\) has degree 28. Apply \(\phi\) to this polynomial using the above formulas. It gives
\[\phi(p_{6})=\frac{13}{1512}a_{8}^{3}+\frac{3}{14}a_{8}a_{16}-\frac{1}{20}a_{12} ^{2}.\]
Repeating this with \(\mathrm{ch}_{14}(h^{*}(w))\) gives
\[\phi(p_{7})=\frac{1}{168}a_{8}^{2}a_{12}-\frac{1}{12}a_{8}a_{20}+\frac{1}{14}a _{12}a_{16}.\]
We now have formulas for \(\phi\) applied to each of the 8 generators of \(H^{*}(B\mathrm{Spin}(16);\mathbb{Q})\) as polynomials in \(a_{8},a_{12},a_{16},a_{20}\). This allows us to factorise the homomorphism \(\phi\) as
\[H^{*}(B\mathrm{Spin}(16);\mathbb{Q})\xrightarrow{\theta}\mathbb{Q}[a_{8},a_{12 },a_{16},a_{20}]\xrightarrow{q}H^{*}(R_{7};\mathbb{Q})\]
where \(q\) is the quotient homomorphism, and \(\theta\) is defined by the above formulas for \(\phi(p_{i})\) for \(i=1,\ldots,7\) and \(\phi(e)\). It follows that \(\ker(q)\) is generated by the 4 elements
\[\theta(\mathrm{ch}_{18}(h^{*}w)),\quad\theta(\mathrm{ch}_{20}(h^{*}w)),\quad \theta(\mathrm{ch}_{24}(h^{*}w)),\quad\theta(\mathrm{ch}_{30}(h^{*}w)).\]
This gives us four polynomials
\[r_{36},r_{40},r_{48},r_{60}\in\mathbb{Q}[a_{8},a_{12},a_{16},a_{20}],\]
such that
\[H^{*}(R_{7})=\mathbb{Q}[a_{8},a_{12},a_{16},a_{20}]/(r_{36},r_{40},r_{48},r_{6 0}).\]
Calculating these four polynomials in detail gives the relations in Theorem 8.4.1.
### Comments on the computation
The powers of 2 in these calculations need care for two reasons. The first is that the kernel of the homomorphism \(h\) in our three examples is \(C_{4},C_{2},C_{2}\) and this introduces some extra factors of 2 in the detailed calculations.
The second is that to compute \(\mathrm{ch}_{q}(h^{*}(w))\) we must choose generators for \(H^{*}(B\mathrm{Spin}(2n);\mathbb{Q})\). The standard choices are the Pontryagin classes \(p_{i}\in H^{4i}\) and the Euler class in \(H^{2n}\). The Pontryagin classes are integral but they do not generate the integral cohomology. For example if \(V\) is a real spin bundle then \(p_{1}(V)\) is divisible by 2. See [1, 2, 3] for more details. However they do generate \(H^{*}(B\mathrm{Spin}(2n);\Lambda)\) if 2 is invertible in \(\Lambda\).
The paper [15], in particular Section 3, is very helpful for doing these calculations. We illustrate our methods by outlining one method of computing
\[\mathrm{ch}(h^{*}(w))=\mathrm{ch}(\lambda^{2}\rho_{16})+\mathrm{ch}(\delta_{16} ^{+})\]
in terms of the Pontryagin classes and Euler class in \(H^{*}(B\mathrm{Spin}(16);\mathbb{Q})\). This is the computation which leads to the presentation of \(H^{*}(R_{7};\mathbb{Q})\).
We begin with the computation of \(\mathrm{ch}(\delta_{16}^{+})\). The standard method is to use the splitting principle, as described in Jay Wood's paper [3], and then use the correspondence between symmetric functions and the Pontryagin classes and the Euler class to express \(\mathrm{ch}(\delta_{16}^{+})\) in terms of the generators chosen above. The explicit expansion up to degree 16 is
\[\mathrm{ch}(\delta_{16}^{+}) =128+16p_{1}+\frac{1}{3}p_{1}^{2}+\frac{4}{3}p_{2}+\frac{1}{360}p _{1}^{3}+\frac{1}{30}p_{1}p_{2}+\frac{2}{15}p_{3}+\frac{1}{80640}p_{1}^{4}+ \frac{1}{3360}p_{1}^{2}p_{2}+\] \[\frac{1}{315}p_{1}p_{3}+\frac{1}{5040}p_{2}^{2}+\frac{17}{1260}p_ {4}+\frac{1}{2}e+\cdots\]
Again, the standard method for calculating \(\mathrm{ch}(\lambda^{2}\rho_{16})\) is to use the splitting principle to express \(\mathrm{ch}_{k}(\lambda^{2}\rho_{16})\) as a polynomial in the Pontryagin classes and the Euler class. The explicit expansion up to degree 16 is
\[\mathrm{ch}(\lambda^{2}\rho_{16}) =120+14p_{1}+\frac{7}{6}p_{1}^{2}-\frac{4}{3}p_{2}+\frac{7}{180}p_ {1}^{3}-\frac{1}{30}p_{1}p_{2}-\frac{2}{15}p_{3}+\frac{1}{1440}p_{1}^{4}- \frac{1}{72}p_{1}p_{3}+\] \[\frac{1}{360}p_{2}^{2}+\frac{1}{45}p_{4}+\cdots\]
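The leading terms of both expansions can be reproduced directly from the splitting principle. The sketch below (sympy, ours; it only checks the \(H^{4}\) components \(16p_{1}\) and \(14p_{1}\)) uses the Chern roots \(\pm x_{1},\ldots,\pm x_{8}\) of \(\rho_{16}\) and the weights of \(\delta_{16}^{+}\).

```python
import sympy as sp
from itertools import combinations, product

x = sp.symbols('x1:9')                 # Chern roots of rho_16 are +-x_1, ..., +-x_8
roots = list(x) + [-xi for xi in x]
p1 = sum(xi**2 for xi in x)            # first Pontryagin class

# H^4-component of ch(lambda^2 rho_16): sum of (r+s)^2/2 over unordered pairs of roots.
deg4_lam2 = sp.expand(sum((r + s)**2 for r, s in combinations(roots, 2)) / 2)
print(sp.cancel(deg4_lam2 / p1))       # 14

# H^4-component of ch(delta_16^+): weights (e_1 x_1 + ... + e_8 x_8)/2 with an
# even number of minus signs among the e_i = +-1.
weights = [sum(e*xi for e, xi in zip(signs, x)) / 2
           for signs in product([1, -1], repeat=8) if signs.count(-1) % 2 == 0]
deg4_spin = sp.expand(sum(w**2 for w in weights) / 2)
print(sp.cancel(deg4_spin / p1))       # 16
```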
## 9. Computations in representation theory
The proofs of Theorem 4.3.1 and Lemma 7.0.2 both require computations in representation theory which follow a standard operating procedure.
1. First we need detailed information about the \(\Lambda\)-ring \(R(G)\) where \(G\) is an appropriate Lie group. Sometimes this is in the literature, but usually in the case of \(E_{6},E_{7},E_{8}\) we use Magma to get this information. The main issue is that if we start with a representation \(\rho\) of \(G\) and ask Magma to compute \(\lambda^{k}(\rho)\) it will give the answer as a decomposition into a sum of irreducible representations. However we want to express it as a polynomial in a chosen set of generators. All computational programmes currently use the fundamental representations as the choice of generators, so that is our default choice. We now have to use Magma to convert the decomposition of \(\lambda^{k}(w)\), where \(w\) is a fundamental representation, into a polynomial in the fundamental representations.
2. Next we have to soften this information to allow us to extract what we need. The first step in this process is to pass from the \(\Lambda\)-ring \(R(G)\) to the quotient \(\Lambda\)-ring \[S(G)=R(G)/I^{2}(G).\] Our standard notation is \([x]\in S(G)\) for the image of the element \(x\) in \(R(G)\) under the quotient homomorphism \(R(G)\to S(G)\). If \(x\in R(G)\) we write \(\epsilon_{x}\in\mathbb{Z}\) for the augmentation of \(x\). If \(x,y\in R(G)\) then \[(x-\epsilon_{x})(y-\epsilon_{y})\in I^{2}(G)\] and it follows that in \(S(G)\) \[[xy]=\epsilon_{y}[x]+\epsilon_{x}[y]-\epsilon_{x}\epsilon_{y}.\] This will give the \(\lambda\)-operations in \(S(G)\). The only practical way to handle the arithmetic in this step of the process is to use Magma.
3. Both of the results we prove using this procedure involve the \(\Psi\)-module \[V(G)=S(G)/\mathbb{Z}.\] Note that \(S(G)\) is a \(\Lambda\)-ring but \(V(G)\) is not a ring. However, while the quotient map \(S(G)\to V(G)\) is not compatible with the \(\lambda\)-operations, it is compatible with the Adams operations, i.e. it is a map of \(\Psi\)-modules.
### The proof of Theorem 4.3.1
We illustrate this procedure in the proof of Theorem 4.3.1 in the case \(G=F_{4}\). The fundamental representations of \(F_{4}\) are, in Bourbaki order ([10, Ch. VI, Planches I-IX]),
\[w_{1},w_{2},w_{3},w_{4},\quad\text{ of dimension }\quad 52,1274,273,26,\text{ respectively.}\]
In \(R(F_{4})\) the \(\lambda\)-operations are given by
\[\lambda^{2}w_{4}=w_{1}+w_{3},\]
\[\lambda^{3}w_{4}=w_{1}w_{4}+w_{2}-w_{4},\]
\[\lambda^{4}w_{4}=w_{1}^{2}+w_{1}w_{3}-w_{2}-w_{4}^{2}.\]
This is the input for step (i) of the standard operating procedure.
Next we reduce these formulas to the \(\Lambda\)-ring \(S(F_{4})\). This gives the following formulas.
\[\lambda^{1}[w_{4}]=[w_{4}],\]
\[\lambda^{2}[w_{4}]=[w_{1}]+[w_{3}],\]
\[\lambda^{3}[w_{4}]=26[w_{1}]+[w_{2}]+51[w_{4}]-1352,\]
\[\lambda^{4}[w_{4}]=377[w_{1}]-[w_{2}]+52[w_{3}]-52[w_{4}]-16224.\]
Now we reduce to \(V(F_{4})=S(F_{4})/\mathbb{Z}\). This leads to the \(4\times 4\) matrix.
\[\begin{pmatrix}0&1&26&377\\ 0&0&1&-1\\ 0&1&0&52\\ 1&0&51&-52\end{pmatrix}\]
The columns of this matrix are the coordinates, in the basis defined by the fundamental representations \(w_{1},w_{2},w_{3},w_{4}\), of the elements in \(V(F_{4})\) defined by \([w_{4}],\lambda^{2}[w_{4}],\lambda^{3}[w_{4}],\lambda^{4}[w_{4}]\). The determinant of this matrix is \(351\).
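This determinant is easy to recompute; for instance (an illustrative sketch, not part of the paper, using exact integer arithmetic in sympy):

```python
import sympy as sp

M = sp.Matrix([
    [0, 1, 26, 377],
    [0, 0,  1,  -1],
    [0, 1,  0,  52],
    [1, 0, 51, -52],
])
print(M.det())   # 351
```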
Now \(V(F_{4})\otimes\mathbb{Q}\) is the indecomposable quotient of the ring \(R(F_{4})\otimes\mathbb{Q}\). This shows that
\[w_{4},\lambda^{2}w_{4},\lambda^{3}w_{4},\lambda^{4}w_{4}\]
generate the ring \(R(F_{4})\otimes\mathbb{Q}\) and so \(R(F_{4})\otimes\mathbb{Q}\) is generated as a \(\Lambda\)-ring by \(w_{4}\). In turn this shows that \(S(F_{4})\otimes\mathbb{Q}\) is generated as a \(\Lambda\)-ring by \([w_{4}]\), and \(V(F_{4})\otimes\mathbb{Q}\) is generated as a \(\Psi\)-module by the element defined by \([w_{4}]\).
Implementing this procedure for \(E_{6}\), \(E_{7}\), and \(E_{8}\) leads to the following three matrices.
1. For \(E_{6}\) \[\begin{pmatrix}1&0&0&1&-702&-3483\\ 0&0&0&351&3834&17496\\ 0&1&0&-27&-77&-324\\ 0&0&1&0&-54&-236\\ 0&0&0&79&756&2511\\ 0&0&0&-405&-2759&2754\end{pmatrix}\] \[\det=-27208467.\]
2. For \(E_{7}\) \[\begin{pmatrix}0&0&0&0&-42504&-834648&-4655288\\ 0&0&0&0&8645&207424&2129083\\ 0&0&0&0&968&20902&166704\\ 0&0&0&1&0&-267&-2848\\ 0&0&1&0&-132&-856&-5831\\ 0&1&0&1&56&-14742&-179760\\ 1&0&1&0&-7371&-79648&-560651\end{pmatrix}\] \[\det=-1997102661696.\]
3. For \(E_{8}\) \[\begin{pmatrix}0&0&0&0&0&-401581533&-48147450080&-2346940420190\\ 0&0&0&0&0&6661497&860320742&49370806120\\ 0&0&0&0&0&185628&22371987&1105454390\\ 0&0&0&0&1&-1&-7999&-514878\\ 0&0&0&1&0&-3627&-112746&-2002508\\ 0&0&1&0&248&34255&-9767140&-735062326\\ 0&1&0&247&-496&-5059262&-205995340&-5451187498\\ 1&1&495&30380&2573495&304455619&12658360729&209067977980\end{pmatrix}\] \[\det=22804152835143344418390\]
### The proof of Lemma 7.0.2
This time we start with the case of \(E_{6}\). For ease of notation, let \(j=h_{6}\) be the inclusion map of \(\operatorname{Spin}(10)\) into \(E_{6}\) as in Section 7. The fundamental representations of \(E_{6}\) are
\[a,\quad\rho_{1},\quad\rho_{2},\quad\lambda^{2}\rho_{1},\quad\lambda^{2}\rho_{2 },\quad\lambda^{3}\rho_{1}=\lambda^{3}\rho_{2},\]
where \(a\) is the \(78\)-dimensional adjoint representation and \(\rho_{1}\), \(\rho_{2}\) are the two inequivalent \(27\)-dimensional minimal representations.
The fundamental representations of \(\mathrm{Spin}(10)\) are
\[v_{1},\quad v_{2}=\lambda^{2}v_{1},\quad v_{3}=\lambda^{3}v_{1},\quad\delta_{10} ^{-},\quad\delta_{10}^{+},\]
where \(v_{1}\) is the \(10\)-dimensional representation, and \(\delta_{10}^{\pm}\) are the two \(16\)-dimensional spin representations.
Restricting the six fundamental \(E_{6}\)-representations to \(\mathrm{Spin}(10)\) yields the following formulas, as in [10, Lemma 2].
1. \(j^{*}(a)=v_{2}+\delta^{+}+\delta^{-}+1\),
2. \(j^{*}(\rho_{1})=v_{1}+\delta^{-}+1\),
3. \(j^{*}(\rho_{2})=v_{1}+\delta^{+}+1\),
4. \(j^{*}(\lambda^{2}\rho_{1})=v_{1}+v_{2}+v_{3}+\delta^{-}+v_{1}\delta^{-}\),
5. \(j^{*}(\lambda^{2}\rho_{2})=v_{1}+v_{2}+v_{3}+\delta^{+}+v_{1}\delta^{+}\),
6. \(j^{*}(\lambda^{3}\rho_{1})=v_{2}+2v_{3}+v_{2}\delta^{+}+v_{2}\delta^{-}+v_{1} v_{3}\).
Next we reduce these formulas to the \(\Lambda\)-ring \(S(G)\). This produces the following six equations in \(S(\mathrm{Spin}(10))\).
1. \(j^{*}[a]=[v_{2}]+[\delta^{+}]+[\delta^{-}]+1\),
2. \(j^{*}[\rho_{1}]=[v_{1}]+[\delta^{-}]+1\),
3. \(j^{*}[\rho_{2}]=[v_{1}]+[\delta^{+}]+1\),
4. \(j^{*}[\lambda^{2}\rho_{1}]=17[v_{1}]+[v_{2}]+[v_{3}]+11[\delta^{-}]-160\),
5. \(j^{*}[\lambda^{2}\rho_{2}]=17[v_{1}]+[v_{2}]+[v_{3}]+11[\delta^{+}]-160\),
6. \(j^{*}[\lambda^{3}\rho_{1}]=120[v_{1}]+33[v_{2}]+12[v_{3}]+45[\delta^{-}]+45[ \delta^{+}]-2640\).
This gives the matrix with \(5\) rows and \(6\) columns which is the matrix of \(j^{*}:V(E_{6})\to V(\mathrm{Spin}(10))\) in the chosen bases for \(V(E_{6})\) and \(V(\mathrm{Spin}(10))\).
\[\begin{pmatrix}0&1&1&17&17&120\\ 1&0&0&1&1&33\\ 0&0&0&1&1&12\\ 1&1&0&11&0&45\\ 1&0&1&0&11&45\end{pmatrix}\]
The proof of part 1 of Lemma 7.0.2 is given by showing that the rank of this matrix is \(4\), so that its nullity is \(2\). It follows that the transpose also has rank \(4\) and nullity \(1\). This shows that the kernel of \(j_{*}:V^{*}(\mathrm{Spin}(10))\otimes\mathbb{Q}\to V^{*}(E_{6})\otimes\mathbb{Q}\) is one dimensional.
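The rank is easily confirmed by machine; for instance (an illustrative sketch, not part of the paper):

```python
import sympy as sp

M = sp.Matrix([
    [0, 1, 1, 17, 17, 120],
    [1, 0, 0,  1,  1,  33],
    [0, 0, 0,  1,  1,  12],
    [1, 1, 0, 11,  0,  45],
    [1, 0, 1,  0, 11,  45],
])
# rank of the matrix of j^*, and dimension of the kernel of its transpose
print(M.rank(), len(M.T.nullspace()))   # 4 1
```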
The same procedure gives the corresponding matrices for the homomorphisms \(h_{7}:\mathrm{Spin}(12)\to E_{7}\) and \(h_{8}:\mathrm{Spin}(16)\to E_{8}\). However in this case we have to use Magma to calculate Adams operations information and to soften it to give the linear maps \(h_{7}^{*}:V(E_{7})\otimes\mathbb{Q}\to V(\mathrm{Spin}(12))\otimes\mathbb{Q}\) and \(h_{8}^{*}:V(E_{8})\otimes\mathbb{Q}\to V(\mathrm{Spin}(16))\otimes\mathbb{Q}\).
Using the standard bases defined by fundamental representations, we find that the matrices of \(h_{7}^{*}\) and \(h_{8}^{*}\) are, respectively,
\[\begin{pmatrix}0&34&220&25840&1890&88&2\\ 1&0&66&890&56&2&0\\ 0&2&12&1320&34&0&0\\ 0&0&1&145&24&1&0\\ 2&12&200&9120&220&0&0\\ 0&2&0&120&210&24&1\end{pmatrix}\]
\[\begin{pmatrix}160&9056&597840&1005303376&13865488&125552&560&0\\ -1&-259&-15473&-49227299&-344942&4241&128&1\\ 0&144&4480&17099280&296400&3168&16&0\\ 1&-3&1808&-1025376&-23023&-240&-1&0\\ 0&16&560&2093696&22048&112&0&0\\ 0&-2&-105&-370711&3367&121&1&0\\ 16&816&58608&94909584&920192&4368&0&0\\ -1&15&-5568&-12391764&-22477&4944&119&1\end{pmatrix}.\]
In both cases the rank is \(4\). Taking account of the fact that the first is a \(6\times 7\) matrix and the second is an \(8\times 8\) matrix this shows that the nullity of the transpose of the first is \(2\) and for the second it is \(4\). This completes the proof.
|
2304.11382 | Proposal for detecting the $π-$shifted Cooper quartet supercurrent | The multiterminal Josephson effect aroused considerable interest recently, in
connection with theoretical and experimental evidence for correlations among
Cooper pairs, that is, the so-called Cooper quartets. It was further predicted
that the spectrum of Andreev bound states in such devices could host Weyl-point
singularities. However, the relative phase between the Cooper pair and quartet
supercurrents has not yet been addressed experimentally. Here, we propose an
experiment involving four-terminal Josephson junctions with two independent
orthogonal supercurrents, and calculate the critical current contours (CCCs)
from a multiterminal Josephson junction circuit theory. We predict a
generically $\pi$-shifted contribution of both the local or nonlocal
second-order Josephson harmonics. Furthermore, we show that these lead to
marked nonconvex shapes for the CCCs in zero magnetic field, where the
dissipative state reenters into the superconducting one. Eventually, we discuss
distinctive features of the non-local Josephson processes in the CCCs. The
experimental observation of the latter could allow providing firm evidence of
the $\pi$-shifted Cooper quartet current-phase relation. | Régis Mélin, Romain Danneau, Clemens B. Winkelmann | 2023-04-22T12:23:39Z | http://arxiv.org/abs/2304.11382v3 | # Proposal for detecting the \(\pi-\)shifted Cooper quartet supercurrent
###### Abstract
The multiterminal Josephson effect aroused considerable interest recently, in connection with theoretical and experimental evidence for correlations among Cooper pairs, that is, the so-called Cooper quartets. It was further predicted that the spectrum of Andreev bound states in such devices could host Weyl-point singularities. However, the relative phase between the Cooper pair and quartet supercurrents has not yet been addressed experimentally. Here, we propose an experiment involving four-terminal Josephson junctions with two independent orthogonal supercurrents, and calculate the critical current contours (CCCs) from a multiterminal Josephson junction circuit theory. We predict a generically \(\pi\)-shifted contribution of both the local or nonlocal second-order Josephson harmonics. Furthermore, we show that these lead to marked nonconvex shapes for the CCCs in zero magnetic field, where the dissipative state reenters into the superconducting one. Eventually, we discuss distinctive features of the non-local Josephson processes in the CCCs. The experimental observation of the latter could allow providing firm evidence of the \(\pi\)-shifted Cooper quartet current-phase relation.
## I Introduction
Entanglement in electronic superconducting circuits is central to quantum engineering, and prototypes of quantum processors were recently realized, unveiling a variety of physical phenomena [1]. Entanglement engines were proposed in the early 2000s, with normal metal-superconductor-normal metal (\(N\)-\(S\)-\(N\)) hybrids as sources of entangled Einstein-Podolsky-Rosen pairs of electrons [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. A series of experiments addressed nonlocality in the DC current response [13; 14; 15; 16; 17; 18; 19; 20] and quantum noise [21] as evidence for entangled split Cooper pairs [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. On the other hand, the emerging field of all-superconducting multiterminal Josephson junctions [22; 23; 24; 25; 26] offers new perspectives such as exotic transient quantum correlations among Cooper pairs, known as Cooper quartets [27; 28; 29; 30; 31; 32; 33]. While a series of experiments reported clear signatures of Cooper quartets [34; 35; 36; 37], these features were not observed by others [38; 39; 40; 41; 42; 43; 44; 45], possibly due to delicate material and device fabrication issues. In parallel, multiterminal Josephson junctions also focused strong interest recently as a testbed of Floquet theory [46; 47; 48; 49; 50; 51; 52; 53], as well as a platform for the emergence of energy level repulsion in Andreev molecules [54; 55; 56; 57; 58; 59; 60], the production of Weyl-point singularities in the Andreev spectrum [61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78], and the multiterminal superconducting diode effect [79; 80].
In spite of intense experimental efforts for observing signatures of the quartet state and its new physics beyond the standard Resistively Shunted Josephson Junction model [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78], novel schemes are necessary for ascertaining the Cooper quartets. When driving current between pairs of contacts in a multiterminal Josephson junction with an even number \(2n\) of superconducting leads, \(n\) equations of current conservation are imposed by the external circuit. Those \(n\) constraints (for a total of \(2n\) phase variables) allow for supercurrent inside a region in phase space parameterized by \(2n-n\equiv n\) independent variables. With four terminals, a DC supercurrent is thus established within a two-dimensional region in the plane of the bias currents, separated from the resistive state by a one-dimensional critical current contour (CCC). In a recent work, Pankratova _et al._[39] reported nonconvex shapes in the CCCs of four-terminal semiconductor-superconductor Josephson junctions. However, these nontrivial features appeared only at rather high magnetic fields, corresponding to about half a flux quantum threading the central part of the device. The observation of nonconvex CCCs was interpreted using Random Matrix Theory, assuming time-reversal symmetry breaking, either due to an applied magnetic field or preexisting in the normal state [39].
Here, we demonstrate that in the presence of at least one contact with an intermediate transmission, another mechanism for the emergence of nonconvex CCCs is possible, which does not require a magnetic field. Namely, we find correspondence between the _quartet physics_ and the emergence of nonconvex sharp-angled points in the CCCs at zero magnetic field. This distinctive signature stems from the interference between symmetric quartet channels, which are dephased by a transverse supercurrent (see Fig. 1). In other words, we demonstrate that _macroscopic_ critical current measurements can probe the _microscopic_ internal structure of entangled split Cooper pairs [2; 3; 4; 5; 6; 7; 8; 9].
The article is organized as follows. The \(\pi\)-shifted quartets are introduced in Sec. II. The device and the model are presented in Sec. III. The numerical results, analytical and numerical, are presented and discussed in Sec. IV. Concluding remarks are provided in Sec. V.
## II \(\pi\)-shifted Cooper quartets
In this section, we provide physical arguments supporting the \(\pi\)-shifted Cooper quartet current-phase relation. The key underlying concept can readily be understood starting from a three-terminal configuration of Josephson junctions in the DC superconducting state, connecting the leads \(S_{i}\), \(S_{j}\) and \(S_{k}\) [28; 54], biased with the respective phases \(\varphi_{i}\), \(\varphi_{j}\) and \(\varphi_{k}\). The corresponding spin-singlet wave-function of a split
Cooper pair for instance between \(S_{i}\) and \(S_{j}\) takes the form
\[\psi=\frac{1}{\sqrt{2}}\left(c^{+}_{i,\uparrow}c^{+}_{j,\downarrow}-c^{+}_{i, \downarrow}c^{+}_{j,\uparrow}\right), \tag{1}\]
where \(c^{+}_{i,\sigma}\) creates a spin-\(\sigma\) fermion in \(S_{i}\). The splitting event in Eq. (1) can come along with a second one. The resulting composite four-fermion transient state, i.e. a Cooper quartet [34; 35; 36; 37; 28], ends up as two Cooper pairs transmitted into \(S_{i}\) and \(S_{j}\), respectively, and described by
\[\langle\psi^{2}\rangle=-\langle c^{+}_{i,\uparrow}c^{+}_{i,\downarrow}\rangle \langle c^{+}_{j,\uparrow}c^{+}_{j,\downarrow}\rangle, \tag{2}\]
where \(\langle...\rangle\) is a quantum mechanical expectation value (details can be found in [81]). By _probing the internal structure of (double) split Cooper pairs_, we mean providing experimental evidence for the negative sign in Eq. (2), which is a direct consequence of both quantum mechanical exchange and the split Cooper pair structure of Eq. (1). Consequently, the relation between the quartet supercurrent \(I_{q}\) and the quartet phase \(\varphi_{q}\) is inverted:
\[I_{q}(\varphi_{q}) = -|F^{\ast,q}|\sin\varphi_{q} \tag{3}\] \[\varphi_{q} = \varphi_{a}+\varphi_{b}-2\varphi_{c}. \tag{4}\]
Eq. (3) can be rewritten as \(I_{q}(\varphi_{q})=|F^{\ast,q}|\sin(\varphi_{q}+\pi)\) and this \(\pi\)-shift is a macroscopic signature for the specific internal structure of single split Cooper pairs, see Eq. (1).
Another simple perspective on the \(\pi\)-shift of the quartets readily follows from considering a single two-terminal superconducting weak link with normal-state transmission \(\alpha\). Here, the energy-phase relation can be Fourier-expanded as \(E^{J}(\varphi)=E^{J}_{0}+E^{J}_{2e}\cos\varphi+E^{J}_{4e}\cos 2\varphi+...\). The \(\cos\varphi\) term represents the Josephson Cooper-pair energy, and is dominant in the limit of small transparency, while the \(\cos 2\varphi\) one describes correlated tunneling of two Cooper pairs. We find \(E^{J}_{4e}/E^{J}_{2e}\approx-\alpha/16\) in the small-\(\alpha\) limit and more generally \(E^{J}_{4e}/E^{J}_{2e}<0\) for all \(\alpha<1\), see Supplemental Material [81]. This negative sign echoes the above current-phase relation of the quartets. More generally, our work proposes a method to directly reveal these \(\pi\)-shifted second-order Josephson harmonics, using a multiterminal configuration.
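A minimal sympy sketch of this expansion is given below. It assumes the standard short-junction Andreev bound-state energy \(E_{A}(\varphi)=\Delta\sqrt{1-\alpha\sin^{2}(\varphi/2)}\) and \(E^{J}=-E_{A}\) for the ground state; this explicit formula is our assumption for the illustration and is not quoted from the Supplemental Material.

```python
import sympy as sp

phi, alpha = sp.symbols('phi alpha', positive=True)
# Ground-state junction energy of a short single-channel weak link (in units of Delta).
EJ = -sp.sqrt(1 - alpha*sp.sin(phi/2)**2)
EJ2 = sp.series(EJ, alpha, 0, 3).removeO()   # expand to second order in the transmission

def cos_coeff(expr, n):
    # coefficient of cos(n*phi) in the Fourier expansion of expr
    return sp.integrate(expr*sp.cos(n*phi), (phi, 0, 2*sp.pi)) / sp.pi

E2e = cos_coeff(EJ2, 1)   # Cooper-pair (first) harmonic
E4e = cos_coeff(EJ2, 2)   # two-pair (second) harmonic
print(sp.series(sp.cancel(E4e/E2e), alpha, 0, 2))   # leading term: -alpha/16
```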
## III The device and multiterminal Josephson circuit theory
In this section, we present the two types of devices and the approximations sustaining multiterminal Josephson circuit theory. The proposed device consists of four BCS superconducting leads \(S_{L},S_{R},S_{B}\) and \(S_{T}\), with the respective superconducting phase variables \(\varphi_{L}\), \(\varphi_{R}\), \(\varphi_{B}\) and \(\varphi_{T}\), and connected via a square-shaped normal conductor \(N_{0}\) as shown in Fig. 1. The external circuit imposes current in orthogonal directions, that is, a vertical current \(I_{v}\equiv I_{T}=-I_{B}\) and a horizontal one \(I_{h}\equiv I_{R}=-I_{L}\). The absence of coupling between \(I_{v}\) and \(I_{h}\) produces a square or rectangular CCC, while rounded CCCs are indicative of coupling.
Our main result is that, if one or two contacts have a transparency smaller than the others, nonconvex CCCs emerge in the \((I_{v},I_{h})\) plane already under zero applied magnetic field. We thus find reentrance of the dissipative state into the superconducting region as a distinctive signature of the \(\pi\)-shifted contribution of second-order Josephson harmonics. Furthermore, we show that the \(\pi\)-shifted Cooper quartet supercurrent produces distinctive reentrant sharp-angled points in the CCCs.
The four-terminal geometry is found by a straightforward generalization of Josephson circuits, where now the \(I_{v}\) and \(I_{h}\) supercurrents result from an interference between multipair processes involving the phases of more than two terminals
Figure 1: Sketch of the superconducting four-terminal device with either one current and one phase bias (a), or two orthogonal current biases (b). The superconductors \(S_{L}\), \(S_{R}\), \(S_{B}\) and \(S_{T}\) are connected to the normal metallic region \(N_{0}\). The four \(N_{0}\)-\(S_{i}\) junctions consist of tunable quantum point contacts, where the transmission of the \(N_{0}\)-\(S_{T}\) interface is reduced by a scaling factor \(\tau_{T}\). Panels c-e represent the lowest-order Josephson processes occurring in a simplified toy-model. Panel c shows the two-terminal DC-Josephson effect from \(S_{B}\) to \(S_{T}\), which is insensitive to the horizontal contacts. Panels d and e show the Cooper quartet processes, which take two Cooper pairs from \(S_{B}\), exchange partners and transmit the outgoing pairs into \((S_{T},S_{L})\) and \((S_{T},S_{R})\), respectively. In the presence of a horizontal phase drop, these two processes pick up opposite phases. Panels f and g show the resulting interfering quartet supercurrent components within this simplified model, without and with a horizontal phase drop, respectively. Due to the \(\pi\)-shift, the critical current along the vertical direction in panel f is reduced by the two quartet processes. On panel g, a phase drop along the horizontal direction dephases the negative contribution of both processes, resulting in an increased critical current and thus a nonconvex CCC.
[28; 31]. For instance, in a two-terminal Josephson junction, the terms corresponding to Cooper pairs transmitted from \(S_{i}\) to \(S_{j}\) couple to the difference \(\delta_{i,j}=\varphi_{i}-\varphi_{j}\). Similarly, with four terminals, the relevant phase variables are then given by gauge-invariant combinations such as \(\delta_{i,j}+\delta_{k,l}\)[31], which reduces to Eq. (4) for three terminals [28].
In our multiterminal Josephson circuit model, we assume tunable contacts with a few transmission modes connecting the four superconductors to a central normal metal island (see Fig. 1), as was recently demonstrated in bilayer graphene-based two-terminal Josephson devices [82] and in multiterminal semiconducting-superconducting quantum point contacts [37]. For intermediate contact transparencies, the DC-Josephson effect is dominant, but the next-order Cooper quartets still yield a sizable contribution, while the even higher-order terms are smaller. This _hierarchy_ justifies the approach of the article, considering within a single four-terminal device all the Josephson processes involving two, three and then four terminals. The calculation involves two steps: our starting points are the approximate analytical expressions of the current-phase relations discussed above, with sign and amplitude as free parameters. This allows comparing the CCCs with positive or negative Cooper quartet contributions, respectively. From this we arrive at the conclusion that nonconvex CCCs in zero field carry the unique signature of the microscopic \(\pi\)-shifted Cooper quartet current-phase relation, and would be absent with a 0-shift.
We consider intermediate transparency interfaces, with hopping amplitudes \(J_{L},J_{R},J_{B}\) and \(J_{T}\) connecting respectively the four superconducting leads \(S_{L}\), \(S_{R}\), \(S_{B}\) and \(S_{T}\) to a normal tight-binding lattice \(N_{0}\). The DC-Josephson supercurrent of Cooper pairs from lead \(S_{i}\) to lead \(S_{j}\) is written as \(I_{P}=I_{i,j}^{C,p}\sin\delta_{i,j}\). The _nonlocal_ DC-Josephson supercurrent of the Cooper quartets involves, at the lowest order in tunneling, the following three terms:
\[I_{q}=I_{i,j,(k)}^{C,q}\sin(\delta_{i,k}+\delta_{j,k})+I_{i,(j),k}^{C,q}\sin(\delta_{i,j}+\delta_{k,j})+I_{(i),j,k}^{C,q}\sin(\delta_{j,i}+\delta_{k,i}). \tag{5}\]
Here, \(I_{i,j,(k)}^{C,q}\) for instance represents the critical quartet current of two pairs emitted by \(S_{k}\) and recombining into \(S_{i}\) and \(S_{j}\). We introduce the individual channel transmissions \(\tau_{i}\) such that all \(J_{i}=\sqrt{\tau_{i}}J^{(0)}\), with \(J^{(0)}\) a constant smaller than the band-width \(W\). The critical currents scale as follows: \(I_{i,j}^{C,p}=\tau_{i}\tau_{j}I_{i,j}^{C(0)}\) for the Cooper pairs, and \(I_{i,j,(k)}^{C,q}=\tau_{i}\tau_{j}\tau_{k}^{2}I_{i,j,(k)}^{C(0)}\), \(I_{i,(j),k}^{C,q}=\tau_{i}\tau_{j}^{2}\tau_{k}I_{i,(j),k}^{C(0)}\) and \(I_{(i),j,k}^{C,q}=\tau_{i}^{2}\tau_{j}\tau_{k}I_{(i),j,k}^{C(0)}\) for the Cooper quartets, where the \(I^{C(0)}\)s do not scale with the transmissions.
## IV Results
### Polarization with one current and one phase bias
In this subsection we present analytical results for the device polarized with one current and one phase bias, see Fig. 1a. An external source drives a supercurrent from \(S_{B}\) to \(S_{T}\) and an external loop fixes the phase difference between \(S_{L}\) and \(S_{R}\). We additionally assume that the \(N_{0}\)-\(S_{T}\) link has a tunneling amplitude \(J_{T}\) small compared to \(J_{L}=J_{R}=J_{B}\equiv J^{(0)}\), i.e. \(\tau_{T}\lesssim\tau_{L},\tau_{R},\tau_{B}\lesssim 1\). Then, we make a perturbation expansion in tunneling of the Josephson circuit to the dominant order \(\tau_{T}\), neglecting the processes of order \(\tau_{T}^{2}\) (see Supplemental Material [81]). In the absence of the quartets, we find two types of processes: (i) The direct two-terminal DC-Josephson effect of the Cooper pairs from \(S_{B}\) to \(S_{T}\) (see Fig. 1c), and (ii) The two-terminal DC-Josephson processes of the Cooper pairs involving the lateral superconductors \(S_{L}\) and \(S_{R}\). Adding now the quartets, we include all possible processes appearing at the orders \(\tau_{T}^{0}\) and \(\tau_{T}\).
The cartoon shown in Fig. 1 illustrates the case where, at the order \(\tau_{T}\), the critical current \(I_{v}^{c}\) from \(S_{B}\) to \(S_{T}\) results from an interference between the amplitudes of the two-terminal DC Josephson effect and both Cooper quartets (see Figs. 1f,g). Taking an opposite relative sign of the two- and three-terminal contributions, respectively, leads to a reduction of \(I_{v}^{c}\) upon including the Cooper quartets. Notably, because each quartet process picks up an opposite phase \(\varphi_{L}=-\varphi_{R}\equiv-\varphi/2\), their respective contributions are dephased and the value of \(I_{v}^{c}\) is restored upon applying a supercurrent \(I_{h}\) (or a phase gradient) in the transverse direction, as shown in Figs. 1f and g.
Now, we evaluate the full set of microscopic two- and three-terminal processes at the relevant orders (details in Supplemental Material [81]). Using the notations \(\varphi_{L}=-\varphi_{R}=-\varphi/2\), we demonstrate in the Supplemental Material that, at small \(\tau_{T}\) and quartet Josephson energy \(E_{q}=(\hbar/2e)I_{q}^{c}\), the critical current \(I_{v}^{c}\) from \(S_{B}\) to \(S_{T}\) can be approximated as
\[I_{v}^{c}\simeq\tau_{T}I_{P}^{c}\left\{3+6\frac{I_{q}^{c}}{I_{P}^{c}}-\frac{ \varphi^{2}}{4}\left[1+14\frac{I_{q}^{c}}{I_{P}^{c}}\right]\right\}, \tag{6}\]
where \(I_{P}^{c}\) and \(I_{q}^{c}\) are proportional to the critical currents of the
Figure 2: The figure shows the shape of the CCC, i.e. \(F(\varphi/2\pi)\) as a function of \(\varphi/2\pi\), where \(F(\varphi/2\pi)\) is proportional to the critical current, see Eq. (8). Polarization is with one current and one phase bias, see Fig. 1a. The notation \(\varphi\) stands for the phase difference between the \(N_{0}-S_{R}\) and \(N_{0}-S_{L}\) contacts. Two regimes are obtained if \(I_{q}^{c}<-I_{P}^{c}/14\) or \(I_{q}^{c}>-I_{P}^{c}/14\), corresponding to nonconvex or convex CCCs respectively.
two- and three-terminal Cooper pair and Cooper quartet processes respectively. Eventually, Eq. (6) predicts nonconvex CCCs if the condition
\[I_{q}^{c}<-\frac{I_{P}^{c}}{14} \tag{7}\]
is fulfilled. In this case, the dissipative state reenters into the superconducting one, as a result of the \(\pi\)-shifted Cooper quartet current-phase relation coming from the spin-singlet minus signs in Eqs. (1) and (2).
We rewrite Eq. (6) as \(I_{v}^{c}(\varphi)=\tau_{T}I_{P}^{c}F(\varphi/2\pi)\), with
\[F\left(\frac{\varphi}{2\pi}\right)=3+6\frac{I_{q}^{c}}{I_{P}^{c}}-\pi^{2}\left( \frac{\varphi}{2\pi}\right)^{2}\left[1+14\frac{I_{q}^{c}}{I_{P}^{c}}\right]. \tag{8}\]
The variations of \(F(\varphi/2\pi)\) are shown in Fig. 2, confirming the emergence of nonconvex or convex CCCs for \(I_{q}^{c}<-I_{P}^{c}/14\) or \(I_{q}^{c}>-I_{P}^{c}/14\), respectively.
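As a quick numerical illustration of these two regimes, Eq. (8) can be tabulated directly; the ratios \(I_{q}^{c}/I_{P}^{c}\) used below are illustrative values only:

```python
import numpy as np

def F(x, r):
    # Eq. (8): x = phi / (2*pi), r = I_q^c / I_P^c
    return 3.0 + 6.0 * r - np.pi ** 2 * x ** 2 * (1.0 + 14.0 * r)

for r in (-0.2, -1.0 / 14.0, 0.0, 0.2):
    reentrant = F(0.5, r) > F(0.0, r)   # does the critical current grow with the transverse phase?
    print(f"I_q^c/I_P^c = {r:+.3f} -> {'nonconvex (reentrant)' if reentrant else 'convex'} CCC")
```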
### Polarization with two orthogonal current biases
In this subsection, we numerically solve a related model where we impose current biases in both horizontal and vertical directions, such that \(I_{v}=I_{T}=-I_{B}\) and \(I_{h}=I_{R}=-I_{L}\) (see Fig. 1b). The four superconducting phase variables adjust accordingly. The numerical calculations are based on evaluating convergence of the steepest descent algorithm for a multiterminal Josephson junction. A dichotomic search was implemented, in order to locate the CCCs to high accuracy. We use \(I_{k,l}^{c}\equiv I_{P}^{c}\) and \(I_{k,l(m)}^{c}\equiv I_{q}^{c}\) for the critical currents of the processes coupling to two and three superconducting phase variables, respectively. Fig. 3 shows the CCCs of a four-terminal device with the transmission coefficient scaling factors \(\tau_{B}=\tau_{L}=\tau_{R}=1\) and different values of \(\tau_{T}\). For positive values of \(I_{q}^{c}/I_{P}^{c}\), the CCCs have the shape of nested rounded rectangles. For sufficiently negative \(I_{q}^{c}/I_{P}^{c}\) however, the CCCs evolve from diamond-like to a shape presenting nonconvex sharp-angled points when lowering \(\tau_{T}\). Notably, the CCCs with nonconvex sharp-angled points are only obtained for a sufficiently negative Cooper quartet critical current (here \(I_{q}^{c}/I_{P}^{c}=-0.2\)), which is in agreement with the preceding analytical solution.
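A schematic version of this numerical procedure is sketched below: steepest descent on a tilted Josephson potential decides whether a zero-voltage state exists, and a dichotomic search locates the critical current along a chosen direction in the \((I_{v},I_{h})\) plane. The energy functional (two-terminal Cooper-pair terms plus the three-terminal quartet terms with the transmission scaling of Sec. III), the descent settings and the parameter values are illustrative assumptions for this sketch, not the production code behind Figs. 3-5.

```python
import numpy as np

# terminal indices: 0 = L, 1 = R, 2 = B, 3 = T
PAIR_I, PAIR_J = np.array([(i, j) for i in range(4) for j in range(i + 1, 4)]).T
QI, QJ, QK = np.array([(i, j, k) for k in range(4) for i in range(4)
                       for j in range(i + 1, 4) if k not in (i, j)]).T

def grad(phi, tau, e_p, e_q, i_v, i_h):
    """Gradient of the tilted Josephson potential
    U = -sum e_p tau_i tau_j cos(phi_i - phi_j)
        -sum e_q tau_i tau_j tau_k^2 cos(phi_i + phi_j - 2 phi_k)
        - i_v (phi_T - phi_B) - i_h (phi_R - phi_L)."""
    g = np.zeros(4)
    s_p = e_p * tau[PAIR_I] * tau[PAIR_J] * np.sin(phi[PAIR_I] - phi[PAIR_J])
    np.add.at(g, PAIR_I, s_p); np.add.at(g, PAIR_J, -s_p)
    s_q = e_q * tau[QI] * tau[QJ] * tau[QK] ** 2 * np.sin(phi[QI] + phi[QJ] - 2.0 * phi[QK])
    np.add.at(g, QI, s_q); np.add.at(g, QJ, s_q); np.add.at(g, QK, -2.0 * s_q)
    g[2] += i_v; g[3] -= i_v
    g[0] += i_h; g[1] -= i_h
    return g

def superconducting(i_v, i_h, tau, e_p, e_q, lr=0.05, steps=4000, tol=1e-4):
    phi = np.zeros(4)                          # steepest descent on the tilted potential
    for _ in range(steps):
        g = grad(phi, tau, e_p, e_q, i_v, i_h)
        phi -= lr * g
    return np.linalg.norm(g) < tol             # converged -> zero-voltage (superconducting) state

def critical_current(theta, tau, e_p=1.0, e_q=-0.2, i_max=5.0, n_bisect=20):
    lo, hi = 0.0, i_max                        # dichotomic search along direction theta
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        if superconducting(mid * np.cos(theta), mid * np.sin(theta), tau, e_p, e_q):
            lo = mid
        else:
            hi = mid
    return lo

tau = np.array([1.0, 1.0, 1.0, 0.5])            # (tau_L, tau_R, tau_B, tau_T): one weak link
for theta in np.linspace(0.0, np.pi / 2.0, 5):  # theta = 0: pure vertical bias, pi/2: pure horizontal
    print(f"theta = {theta:4.2f} rad,  I_c = {critical_current(theta, tau):.3f}")
```

Here the negative value of \(e_{q}\) mimics the \(\pi\)-shifted quartet coupling; scanning \(\theta\) over a full circle traces out a CCC of the kind shown in Fig. 3.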
In Fig. 4a, we further implement two weak links with \(\tau_{T}\), \(\tau_{L}\leq 1\), while maintaining \(\tau_{B}=\tau_{R}=1\), and we use a negative Cooper quartet critical current \(I_{q}^{c}/I_{P}^{c}=-0.2\). Focusing on the panels on the diagonal, i.e. \(\tau_{T}=\tau_{L}=1/4\), \(1/2\), \(1\), we obtain an evolution from diamond-like to square-like CCCs, as \(\tau_{T}=\tau_{L}\) decreases. Since a rectangular CCC is indicative of independent currents in orthogonal directions, this evolution demonstrates a loss of quantum mechanical coupling between \(I_{v}\) and \(I_{h}\) as the contact transmission coefficient scaling factor decreases. The intermediate value \(\tau_{T}=\tau_{L}=1/2\) yields reentrance on both supercurrent axes, which originates from the underlying diagonal mirror symmetry in the device. Considering now the off-diagonal panels in Fig. 4a, we obtain shapes with nonconvex sharp-angled points on the \(I_{v}^{c}\) axis if \(\tau_{T}=1/4\), \(1/2\) and \(\tau_{L}=1\), and the same on the \(I_{h}^{c}\) axis if \(\tau_{T}=1\) and \(\tau_{L}=1/4\), \(1/2\). This is again in qualitative agreement with the analytical model calculations presented in the above Sec. IV.1.
In Fig. 4b, we introduce all possible higher-order two-terminal \(I_{2T}^{c}\sin(2(\varphi_{i}-\varphi_{j}))\) coupling terms, in addition to the Cooper quartets. We observe the robustness of the reentrant sharp-angled points with respect to addition of these. Qualitatively, this can be interpreted as due to the fact that a smooth feature on top of a sharp cusp does not alter the latter. Figs. 4c and d comparatively show the CCCs with vanishingly small quartet critical current but with finite \(I_{2T}^{c}\) taking negative or positive values. The nonconvex sharp-angled points are absent in the corresponding CCCs if \(I_{q}^{c}=0\) and \(I_{2T}^{c}\neq 0\). Those nonconvex sharp-angled points are thus a unique signature of the nonlocally \(\pi\)-shifted Cooper quartets.
Eventually, we demonstrate robustness of the reentrant pockets upon including the DC-Josephson effect depending on all four superconducting phase variables [31]. At the lowest order in tunneling, the corresponding Josephson critical currents are denoted by \(I_{4T}^{c}\) and they scale like \(\tau_{L}\tau_{R}\tau_{B}\tau_{T}\). Fig. 5 provides the CCCs for variable combinations of \(I_{q}^{c}/I_{P}^{c}\) and \(I_{4T}^{c}/I_{P}^{c}\), and with \(\tau_{T}\lesssim 1\) and \(\tau_{B}=\tau_{L}=\tau_{R}=1\). The data with \(I_{q}^{c}\gtrsim 0\) reveal smooth nonreentrant variations, contrasting with the sharper reentrant-like variations on the other panels. We conclude that reentrant features in CCCs at negative Cooper quartet critical current \(I_{q}^{c}\) are robust with respect to including higher-order Josephson terms.
## V Conclusions
To conclude, it follows from basic theoretical arguments that the quartet supercurrent contribution must be \(\pi\)-shifted with respect to the lowest order Josephson Cooper pair
Figure 3: Critical current contours in the \((I_{v},I_{h})\) plane, with \(I_{q}^{c}/I_{P}^{c}=-0.2\), \(0,0.2\) (on panels a, b, c respectively), and with a single weak link. The contact transmission coefficients are such that \(\tau_{B}=\tau_{L}=\tau_{R}=1\) and \(\tau_{T}=1\) (magenta), \(\tau_{T}=0.5\) (green) and \(\tau_{T}=0.25\) (blue). Each panel is rescaled to full size on the \(I_{v}\) and \(I_{h}\)-axis. Polarization is with two orthogonal current biases, see Fig. 1b. Temperature is set to zero.
supercurrent. We demonstrated that the nonconvex two-dimensional critical current contours (CCCs) of a current-biased four-terminal Josephson junction are generically due to a relative \(\pi\)-shift of the higher-order terms in the current-phase relation. These can either originate simply from the two-terminal Josephson current-phase relation, or, more interestingly, from the Cooper quartets. Finally, we demonstrated that nonconvex sharp-angled points in the CCCs are a distinctive signature of negative Cooper quartet critical current contributions. However, we note that an insufficiently negative Cooper quartet critical current restores a convex CCC, which sets constraints on the transmissions for the observation of the characteristic reentrance. A recent experiment [39] reported the appearance of nonconvex CCCs only under applied magnetic field. However, in contrast to our assumptions, all contacts had large transparencies. Conclusive evidence for the \(\pi\)-shifted quartet term could be realized with bilayer graphene- or semiconducting-quantum point contacts [37; 82] with tunable contact transparencies.
## Acknowledgements
The authors benefited from fruitful discussions with M. d'Astuto, S. Collienne, T. Klein, F. Levy-Bertrand, M.A. Measson, P. Rodiere, and A. Silhanek. R.M. acknowledges a useful correspondence with V. E. Manucharyan. R.M. thanks the Infrastructure de Calcul Intensif et de Donnees (GRICAD) for use of the resources of the Mesocentre de Calcul Intensif de l'Universite Grenoble-Alpes (CIMENT). This work was supported by the International Research Project SUPRADEV-MAT between CNRS in Grenoble and KIT in Karlsruhe. This work received support from the French National Research Agency (ANR) in the framework of the Graphmon (ANR-19-CE47-0007) and JOSPEC (ANR-17-CE30-0030) projects. This work was partly supported by Helmholtz Society through program STN and the DFG via the Project No. DA 1280/7-1.
Figure 4: Critical current contours in the \((I_{v},I_{h})\) plane, with \(\tau_{B}=\tau_{R}=1\), and with two weak links. The values \((I_{q}^{c}/I_{P}^{c},I_{2T}^{c}/I_{P}^{c})=(-0.2,0)\), \((-0.2,-0.2)\), \((0,-0.2)\) and \((0,0.2)\) are used on panels a-d respectively. The panels are organized as a table, and the values of \(\tau_{L}\), \(\tau_{T}\) are indicated. Each panel is rescaled to full size on the \(I_{v}\) and \(I_{h}\)-axis. Polarization is with two orthogonal current biases, see Fig. 1b. Temperature is set to zero. |
2306.01930 | Structural Similarities Between Language Models and Neural Response
Measurements | Large language models (LLMs) have complicated internal dynamics, but induce
representations of words and phrases whose geometry we can study. Human
language processing is also opaque, but neural response measurements can
provide (noisy) recordings of activation during listening or reading, from
which we can extract similar representations of words and phrases. Here we
study the extent to which the geometries induced by these representations,
share similarities in the context of brain decoding. We find that the larger
neural language models get, the more their representations are structurally
similar to neural response measurements from brain imaging. Code is available
at \url{https://github.com/coastalcph/brainlm}. | Jiaang Li, Antonia Karamolegkou, Yova Kementchedjhieva, Mostafa Abdou, Sune Lehmann, Anders Søgaard | 2023-06-02T22:09:46Z | http://arxiv.org/abs/2306.01930v2 | # Large Language Models Converge on Brain-Like Word Representations
###### Abstract
**One of the greatest puzzles of all time is how understanding arises from neural mechanics. Our brains are networks of billions of biological neurons transmitting chemical and electrical signals along their connections. Large language models are networks of millions or billions of digital neurons, implementing functions that read the output of other functions in complex networks. The failure to see how meaning would arise from such mechanics has led many cognitive scientists and philosophers to various forms of dualism - and many artificial intelligence researchers to dismiss
large language models as stochastic parrots or jpeg-like compressions of text corpora. We show that human-like representations arise in large language models. Specifically, the larger neural language models get, the more their representations are structurally similar to neural response measurements from brain imaging.**
On a daily basis, we depend heavily on interactions with dozens, if not hundreds, of other intelligent beings, some human, some artificial (your car, Alexa, ChatGPT, etc.). In this work, we address two deep, profound mysteries in one go: _whether_ there is understanding taking place inside large language models (LLMs) such as Alexa TM or ChatGPT, and _how_ understanding takes place in our brains. Our main research question is: Are LLMs as intelligent as they seem, or are they merely'stochastic parrots'?1 The (even more interesting) flipside of this question, however, is: Are humans as intelligent as they seem, or are they merely (something very similar to) LLMs?
Footnote 1: The term ‘stochastic parrot’ was coined by Emily Bender and colleagues[1] to suggest that an LLM ‘is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind.’ They continue: ’It can’t have been, because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that.’ There is growing evidence LLMs _do_ have (reasonably good) models of the world.[2, 3, 4] We show that, moreover, these models are increasingly brain-like.
There are many ways to approach this question. Artificial intelligence researchers evaluate LLMs by measuring their performance on benchmark data and protocols.[5, 4] Doing so, they aim to infer what LLMs have learned, from how they behave. The methodology is behaviorist and has obvious limitations. We instead suggest exploring the inside of LLMs and our brains - or, to be precise, their representational _geometries_. Our experimental flow is represented in Figure 1; see Methods for details.
As Leibniz remarked three centuries ago, if you could blow the brain up to the size of a mill and walk about inside, you would not find understanding. This argument is referred to as Leibniz's Mill. Nothing in how individual axons and dendrites pass messages back and forth seems to explain how we come to have thoughts about the world, how we come to _understand_. In the same way, nothing in how numbers are passed around in neural large language models, explains how semantics would arise from these activities. Several researchers have claimed, relying on arguments remark
Figure 1: **Experimental flow and main results.** We run experiments with three families of LLMs (comparing LLMs of different sizes within families), three fMRI datasets, and three state-of-the-art projection algorithms, and results are the same across all 27 combinations: LLMs converge toward human-like representations, enabling precision@10 (P@10) retrieval rates of up to 25%, i.e., half of all concepts can be decoded from the fMRI signals. The datasets, our Gaussian smoothing technique, and the projection methods are described in Methods. Right side: **Convergence results for three families of LLMs across three datasets, using Procrustes Analysis**. The task here is: Given a neural response, which word (in a vocabulary of 1,291 words) was read at the time the response was recorded? Chance (baseline) P@10 is \(<\) 0.01 (the dotted red lines). Convergence is consistent, and some retrieval rates are remarkably high, decoding almost half of the words correctly to a neighborhood of 10 word forms. More results are presented in the Supplementary Materials.
ably similar to Leibniz', that LLMs are nothing but'stochastic parrots', 'jpegs of the Internet', or'simple pattern matchers'. We think that such metaphors are misleading and fail to explain many properties of LLMs. One such property is presented here - the structural similarity between LLM representations and neural response measurements - and we think that this property has serious implications for Leibniz's Mill and its more recent reincarnations.[6, 7]
What we find is remarkable structural similarity between how words are represented in LLMs, and the neural response measurements of humans reading the same words. The LLM representations of a vocabulary form a geometry in a \(d\)-dimensional vector space; and the neural response measurements from one or more participants reading these words in a brain scanner, form in a similar way a geometry in a \(d^{\prime}\)-dimensional space. We can compute the structural similarity (degree of isomorphism) between these two geometries by multi-way ridge regression, Representational Similarity Analysis,[8] and Procrustes Analysis[9] (if \(d=d^{\prime}\)), for example. We present experiments for three families of LLMs (as well as static word embeddings), all three of these evaluation methods, across three different datasets. Across the board, we see high degrees of isomorphism, enabling decoding or retrieval performance of up to P@10\(\approx\)0.2 (with random performance being P@10\(<\)0.01). Word-level brain decoding thus seems much more feasible than previously expected.
**Results: Convergence to Word-Level fMRI Measurements**. Our main results are presented in Figure 6 and in the Supplementary Material and concern the convergence of three families of LLMs on representations that are remarkably similar to those seen in neural response measurements. These results are consistent across three large-scale fMRI datasets and three mapping methods. The projections map the fMRI vectors to LLM representations, enabling direct comparison of geometries; see Figure 2. We use precision@\(k\) (P@\(k\)) to quantify the alignment precision of linear projections; see Methods for details. In brief, the P@\(k\) score is the fraction of words for which the LLM's representation is a \(k\)-nearest neighbor of the fMRI encoding. Word-level decoding thus amounts to simple nearest neighbor retrieval in the projected space. The plots show average and maximum performance across participants.
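As a minimal sketch of this retrieval metric, the snippet below assumes the projected fMRI vectors and the LLM word representations are available as row-aligned NumPy arrays and uses cosine nearest neighbors; the array names and the toy data are ours, for illustration only:

```python
import numpy as np

def precision_at_k(projected_fmri, llm_embeddings, k=10):
    """Fraction of words whose own LLM vector is among the k nearest
    LLM vectors of the corresponding projected fMRI vector (cosine similarity)."""
    a = projected_fmri / np.linalg.norm(projected_fmri, axis=1, keepdims=True)
    b = llm_embeddings / np.linalg.norm(llm_embeddings, axis=1, keepdims=True)
    sims = a @ b.T                               # (n_words, n_words) similarity matrix
    nearest = np.argsort(-sims, axis=1)[:, :k]   # indices of the k most similar LLM vectors
    hits = [i in nearest[i] for i in range(len(a))]
    return float(np.mean(hits))

rng = np.random.default_rng(0)
fmri_proj = rng.normal(size=(1291, 300))         # e.g. 1,291 words in a 300-dim shared space
llm_vecs = fmri_proj + 0.5 * rng.normal(size=fmri_proj.shape)   # toy, partially aligned targets
print(precision_at_k(fmri_proj, llm_vecs, k=10))
```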
The scores are plotted by model size, showing the convergence toward brain-like rep
Figure 2: **t-SNE plot of fMRI and LLM representations** using OPT-30b (large, uncased) over select target words from dataset 10. We evaluate the retrieval performance of our alignments using precision@\(k\), which measures the ratio of word tokens \(w_{i}\), e.g., for \(k=5\), ’Potter\({}_{2183}\)’, whose fMRI representations are projected into the LLM space such that the LLM representation for the word type \(w\), e.g., _Potter_, is among the 5-nearest neighbors of \(w_{i}\). In this case, the neural activity associated with ’never\({}_{313}\)’ is not read as _writing_ directly – but still with _words_ as the top-5 guess. That said, the words _Potter_ and _would_ are decoded correctly by our alignment (top-1 guess or P@1).
resentations as LLMs increase in size. The best scores indicate that LLMs up to 1.5B parameters can achieve alignments such that a bit more than 1 in 5 words are decoded correctly,2 and a bit more than 2 in 5 almost correctly (within neighborhoods of 20-30 word forms); see Supplementary Materials for more analysis. The results are obtained with limited supervision for learning the mapping. In fact, we only rely on 950 data points to induce this linear projection, a small number given the high dimensionality of the derived word representations; see Methods for details. It is likely that somewhat better mappings can be induced with more supervision - a question we briefly explore in the Supplementary Materials.
Footnote 2: The reason we count P@5 or P@10 as correct decoding is that a neighborhood of 5-10 words will tend to consist of inflections of the same lemma or synonymous words [11]. P@1 would amount to guessing the lemma, the exact inflection, and the correct spelling variant.
**Results: Where are LLMs Most Brain-Like?** We also consider at what layers the different language models align best with the representations extracted from the fMRI data. The results presented in Figure 3 and the Supplementary Materials are unambiguous and show that deeper representations align better with neural response measurements. This holds across all architectures and model sizes.
Interestingly, the alignment improvements at deeper layers do not wear off to reach a plateau. Our results, in fact, suggest that better alignment results can be achieved by training even deeper models. This may also explain the strong correlation between
Figure 3: Alignment precision results across layers. The alignment with fMRI improves with model depth, for BERT and Procrustes Analysis; see Supplementary Materials for similar results for other LLMs and projection methods.
depth and generalization often observed in the literature.[12]
**Discussion**. Our results have direct philosophical and practical implications. As for the philosophical impact, there is currently a very lively debate about whether large language models exhibit 'understanding'.[3, 4] Half of the AI community (51%) - according to a recent survey[4] - are willing to attribute non-trivial understanding to large language models (LLMs). The other half of the community (49%) argue that the illusion of understanding is the result of an Eliza effect. The research question, as formulated by [4], is:
"do these systems (or will their near-term successors) actually, even in the absence of physical experience, create something like the rich concept-based mental models that are central to human understanding, and, if so, does scaling these models create even better concepts?"
Our results suggest that the answers to both questions are affirmative. Our finding, by the way, is also corroborated by previous results suggesting that language models for different languages converge on similar knowledge representations.[13] This may be a side-effect of inducing 'something like the rich concept-based mental models that are central to human understanding'.
This provides a new point of departure for studying how semantics arises in biological systems. LLMs are trained to predict the next stimulus and implicitly induce word models. Philosophers argue whether structural similarities (isomorphisms, homomorphisms, etc.) between representations and what is represented are sufficient for content.[14] One observation that goes all the way back to Carnap's _Aufbau[15]_ is: Structural similarities are generally trivial to obtain, but if the relations (distances in the vector space) serve a purpose, structural similarities can ground content. Structural similarity is evidently sufficient to solve semantic problems, such as bilingual dictionary induction[16] or multi-modal alignment.[17] The fact that fMRI vectors exhibit structural similarities to LLMs (and, by transitivity, across languages and to computer vision models) is suggestive of such similarities playing a role in grounding.
## Methods
### Data description and pre-processing
#### fMRI datasets
fMRI is a non-invasive neural response measurement technique that records at a spatial resolution in the region of 1 to 6 millimetres, higher than that of any other technique. fMRI records activity (blood flow) in the entire network of brain areas engaged when subjects undertake particular tasks. On the downside, fMRI is somewhat susceptible to influences of non-neural changes, and the temporal response is poor relative to the electrical signals that define neuronal communication. To compensate for low temporal resolution, we introduce Gaussian smoothing below.
fMRI measurements of human neural responses during reading or listening are somewhat difficult to obtain. It is expensive, requires a large number of participants, and requires the participants to lie still during reading for extended periods of time. We list three datasets below that we use in our experiments. We see small differences across the three datasets (like we see small differences across language model families and alignment methods), but the general trend remains the same.
The datasets are: the Harry Potter Dataset [18] (8 subjects), the Pereira Dataset [19] (16 subjects), and the Natural Stories Audio Dataset [20] (19 subjects). All three datasets are publicly available.
#### Gaussian smoothing
Gaussian smoothing has been used before to study speech-aligned fMRI data [21, 22]. In cases where fMRI data is not collected at the granularity of individual words, we can use Gaussian smoothing to generate word-level fMRI information. For instance, to obtain the fMRI vector for a specific word like "Harry" at a given time point t (Harry\({}_{t}\)), we can extract the fMRI vectors for a certain timeframe T around t, such as \(t\pm T\) seconds. We then apply Gaussian smoothing to this set of vectors, resulting in a final vector that represents the fMRI information for the word "Harry\({}_{t}\)".
This approach has potential benefits for fMRI analysis in various applications, such as studies of language processing and cognitive neuroscience. By generating word-level fMRI information using Gaussian smoothing, we can potentially extend the scope from the sequence level to the word level,
and improve the interpretability and accuracy of the results obtained from fMRI analyses. Extracting word-level signals differentiates our work from much other work, but was also shown to be crucial in recent work on brain decoding.[23]
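A minimal sketch of this word-level smoothing step is shown below; it assumes fMRI frames on a regular time grid and a known word onset time, and the window and kernel width are illustrative placeholders rather than the values used in our experiments:

```python
import numpy as np

def word_level_fmri(frames, frame_times, word_time, window=4.0, sigma=1.0):
    """Gaussian-weighted average of the fMRI frames recorded within +/- `window`
    seconds of the word onset `word_time` (all times in seconds)."""
    mask = np.abs(frame_times - word_time) <= window
    weights = np.exp(-0.5 * ((frame_times[mask] - word_time) / sigma) ** 2)
    weights /= weights.sum()
    return weights @ frames[mask]                 # (n_voxels,) word-level fMRI vector

# toy usage: 200 frames sampled every 2 s, 5000 voxels, word onset at 131.4 s
frames = np.random.default_rng(1).normal(size=(200, 5000))
frame_times = np.arange(200) * 2.0
harry_vec = word_level_fmri(frames, frame_times, word_time=131.4)
print(harry_vec.shape)
```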
## Models
### Non-auto-regressive models
Non-auto-regressive models are a type of machine learning model that takes in an entire input sequence of text and generates a single output vector representation for the entire sequence. To train such models, we mask a fraction of the words in the input text. The language model is then expected to predict the masked words based on the other words in the text. We use the BERT family of language models[24] as an example of a non-auto-regressive model family.
### Auto-regressive models
Auto-regressive models generate output sequences by predicting each element in the sequence based on the previously generated elements. In other words, the output is generated one element at a time, with the model conditioned on the previous output elements. These language models are used to generate text, but typically provide slightly worse similarity estimates. We use two auto-regressive language model families: GPT2[25] and OPT.[26]
## Comparison and projection methods
### Representational Similarity Analysis
Representational Similarity Analysis (RSA) is a multivariate analysis technique commonly used in cognitive neuroscience and computational linguistics to compare the similarity between two sets of representations. RSA can be used to measure the similarity between the neural activity patterns observed in the fMRI data and the representations learned by LLMs. RSA operates by first representing the neural activity and language model features as vectors in a high-dimensional space. The similarity between these vectors is then quantified using a rank-based correlation metric.[27] Specifically, we use the second-order Spearman's correlation coefficient as a way of measuring structural similarity or degree of isomorphism.
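A minimal sketch of this second-order comparison is given below, assuming two row-aligned sets of word representations (fMRI-derived and LLM-derived) held in NumPy arrays; the dissimilarity metric and the toy data are illustrative choices:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa(reps_a, reps_b, metric="cosine"):
    """Second-order similarity: Spearman correlation between the flattened
    upper triangles of the two pairwise-dissimilarity matrices."""
    rdm_a = pdist(reps_a, metric=metric)
    rdm_b = pdist(reps_b, metric=metric)
    return spearmanr(rdm_a, rdm_b).correlation

rng = np.random.default_rng(2)
fmri_reps = rng.normal(size=(100, 512))                            # 100 words, fMRI-derived features
llm_reps = fmri_reps @ rng.normal(size=(512, 768)) / 512 ** 0.5    # toy, geometry-preserving map
print(rsa(fmri_reps, llm_reps))
```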
### Ridge regression
Ridge regression is a widely used method in statistics and machine learning to address the issue of multicollinearity, which can arise when there are highly correlated predictor variables in a linear regression model. We use ridge regression for every target dimension in our data, modeling the relationship between the brain signals and the individual dimensions in the language model representations. Specifically, ridge regression adds a penalty term to the standard least squares regression objective function, which shrinks the magnitude of the regression coefficients towards zero, effectively reducing the impact of less informative predictors. This regularization technique helps prevent overfitting and can improve the generalization performance of the model. See more details in the Supplementary Materials.
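A minimal sketch of this mapping with scikit-learn is shown below; fitting a single multi-output ridge model with a shared penalty is equivalent to fitting an independent ridge regression for every target dimension. The array shapes and penalty value are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(950, 2000))                 # 950 training words, 2000 fMRI features
W = rng.normal(size=(2000, 768)) / 2000 ** 0.5
Y = X @ W + 0.1 * rng.normal(size=(950, 768))    # toy LLM targets (768 dimensions)

model = Ridge(alpha=10.0)                        # one shared L2 penalty for all target dimensions
model.fit(X, Y)                                  # multi-output fit == one ridge per dimension
Y_pred = model.predict(rng.normal(size=(100, 2000)))   # projected fMRI vectors in LLM space
print(Y_pred.shape)
```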
### Procrustes Analysis
We use Procrustes Analysis, a form of statistical shape analysis, to align brain fMRI representations with those of language models, using a bimodal dictionary. Procrustes Analysis is a method for matching corresponding points in two shapes and finding the transformation (translation, rotation, and scaling) that best aligns them. Specifically, we seek the orthogonal matrix \(\Omega\) that optimally maps the brain fMRI matrix \(A\), representing brain responses to the words, onto the language model matrix \(B\), i.e. the language model representations of the words, by solving \(\min_{\Omega}\|A\Omega-B\|_{F}\) subject to \(\Omega^{T}\Omega=I\); this problem is solved by the singular value decomposition \(A^{T}B=U\Sigma V^{T}\), which yields \(\Omega=UV^{T}\). To evaluate the alignment, we induce it from a small set of point pairs and test it on held-out data. We stress that successful alignment is contingent on the source and target spaces having the same dimensionality; in case of a mismatch, we employ principal component analysis to reduce the dimensionality of the larger space. |
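A minimal sketch of this orthogonal Procrustes step, assuming the two matrices have already been brought to a common dimensionality (variable names and toy data are ours):

```python
import numpy as np

def procrustes_map(A, B):
    """Orthogonal matrix Omega minimizing ||A @ Omega - B||_F, via SVD of A^T B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(4)
rotation = np.linalg.qr(rng.normal(size=(300, 300)))[0]    # hidden orthogonal map
A = rng.normal(size=(950, 300))                            # fMRI-derived vectors (training pairs)
B = A @ rotation + 0.05 * rng.normal(size=(950, 300))      # toy LLM targets of equal dimension
Omega = procrustes_map(A, B)
print(np.linalg.norm(A @ Omega - B) / np.linalg.norm(B))   # small relative alignment error
```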
2302.02716 | Ultrafast entropy production in pump-probe experiments | The ultrafast control of materials has opened the possibility to investigate
non-equilibrium states of matter with striking properties, such as transient
superconductivity and ferroelectricity, ultrafast magnetization and
demagnetization, as well as Floquet engineering. The characterization of the
ultrafast thermodynamic properties within the material is key for their control
and design. Here, we develop the ultrafast stochastic thermodynamics for
laser-excited phonons. We calculate the entropy production and heat absorbed
from experimental data for single phonon modes of driven materials from
time-resolved X-ray scattering experiments where the crystal is excited by a
laser pulse. The spectral entropy production is calculated for SrTiO$_3$ and
KTaO$_3$ for different temperatures and reveals a striking relation with the
power spectrum of the displacement-displacement correlation function by
inducing a broad peak beside the eigenmode-resonance. | Lorenzo Caprini, Hartmut Löwen, R. Matthias Geilhufe | 2023-02-06T11:44:14Z | http://arxiv.org/abs/2302.02716v2 | # Ultrafast entropy production in pump-probe experiments
###### Abstract
The ultrafast control of materials has opened the possibility to investigate non-equilibrium states of matter with striking properties, provided the sample of interest can sustain the strong fields often required. Hence, understanding ultrafast thermodynamic processes within the material is key. While slow processes lead to quasi-static changes in equilibrium, we focus here on the opposite limit of extremely short time scales, where the system rapidly approaches a non-equilibrium regime. Thermodynamic processes under fast driving were considered before, e.g. in Ref. [1], but have never been brought into connection with controlled experiments. Here, we derive a mesoscopic model for the entropy production due to non-equilibrium phonons excited by a THz laser pulse. While entropy cannot be measured directly, we show that the spectral entropy production can be obtained from experimentally observable quantities, which we illustrate using time-resolved X-ray scattering data for SrTiO\({}_{3}\). Further, we compute the spectral entropy production as a function of frequency for SrTiO\({}_{3}\) and KTaO\({}_{3}\) for various temperatures. Finally, we predict that the power spectrum of the displacement-displacement correlation function exhibits a broad peak besides the eigenmode-resonance, which is associated with entropy production.
Entropy production was introduced in the 19th century to describe the amount of irreversibility in thermodynamic cycles. It is behind the formulation of the Clausius inequality and the second law of thermodynamics. More generally, it characterizes heat and mass transfer processes at macroscopic scales [2], such as heat exchange, fluid flow, or mixing of chemical species. Furthermore, in terms of information-entropy, it plays a significant role in information theory [3].
Subsequently, entropy production has been linked to microscopic dynamics [4] to quantify the amount of irreversibility and dissipation at the atomistic (single-particle) level [5; 6]. In the framework of gases, soft materials, or living organisms, each microscopic particle evolves in the presence of stochastic forces. These forces are usually generated by internal mechanisms, e.g., metabolic processes, internal motors, or collisions due to solvent molecules. The stochastic nature of the dynamics allows us to characterize macroscopic observables as averages of fluctuating variables, by considering the probability of observing a path of the microscopic trajectory. This approach is at the basis of stochastic thermodynamics [4], which aims at building the thermodynamic laws in terms of fluctuating work, heat, and entropy which on average are consistent with macroscopic thermodynamics [7].
In ordered phases of matter, we argue that thermal fluctuations of, e.g., ionic positions, spins, or charge lead to stochastic forces on microscopic degrees of freedom. Entropy is produced in non-equilibrium regimes, by excitations of the material with an external drive. This is motivated by immense progress in ultrafast control and characterization of crystalline solids [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. We put specific focus on light-induced phonon dynamics [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. Here, selected phonon modes are excited by strong THz laser pulses [33; 34]. Remarkably, the ionic dynamics can be resolved with high precision with time-resolved X-ray scattering present at coherent X-ray light sources [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47]. We deduce that the information obtained from such a scattering experiment is sufficient to reproduce the spectral entropy production rate within the material, giving rise to information about the ultrafast entropy production process.
Characterizing materials in terms of thermal properties in the ultrafast regime has emerged as a powerful path [16]. Hence, developing stochastic thermodynamics properties generated at short time scales, e.g. entropy production, could open new perspectives for the comprehension of functional materials. In the following, we show that non-equilibrium crystals, driven by a laser pulse, are characterized by spectral entropy production. As illustrated in Fig. 1, we propose to measure entropy production from ionic displacements, e.g., obtained from time-resolved X-ray scattering experiments. Further, we show that the power spectrum of ionic displacement shows a close connection to the spectral entropy production. We compare our theory to experimental data for SrTiO\({}_{3}\) and support our approach by providing estimates for the soft modes of KTaO\({}_{3}\) and SrTiO\({}_{3}\).
## II Ultrafast stochastic thermodynamics of crystals
We model the dynamics of an optical phonon mode by the equation of motion [48; 49; 50; 51; 52; 53; 54; 55; 56]
\[\ddot{u}(t)+\eta\dot{u}(t)+\omega_{0}^{2}u(t)=\sqrt{2\eta\,k_{B}T}\,\xi(t)+F(t)\,, \tag{1}\]
Here, \(k_{B}\) is the Boltzmann constant while \(u(t)\) is a phonon normal mode (units \(\mathrm{\AA}\,\sqrt{\rm a.m.u.}\)) with frequency
\(\omega_{0}\), and damping or line width \(\eta\). \(F(t)\) is an external driving field, which, for a laser excitation can be written as \(F(t)=Z\tilde{E}(t)\). \(Z\) is the mode effective charge [57], \(\tilde{E}(t)=\epsilon^{-1}E(t)\) the screened electric field, and \(\epsilon\) the relative permittivity.
For simplicity, we neglect nonlinear effects [55; 56; 23; 58]. In contrast to previous work, we add an uncorrelated noise \(\sqrt{2\eta\,k_{B}T}\,\xi(t)\) modelling the interaction of the phonon normal mode with thermally excited lattice fluctuations \(\xi\) with the temperature of the environment \(T\). The equation of motion (1) has a formal solution in Fourier space, given by
\[\hat{u}(\omega)=\chi(\omega)\left(\sqrt{2\eta\,k_{B}T}\,\hat{\xi}(\omega)+ \hat{F}(\omega)\right)\,, \tag{2}\]
with the susceptibility \(\chi(\omega)=\left(\omega_{0}^{2}-\omega^{2}+\mathrm{i}\eta\omega\right)^{-1}\). An example of the solution in real time is reported in the appendix. Let \(\{u\}\) denote a specific solution or trajectory between the initial time \(t_{0}\) and the final time \(\mathcal{T}\), with the initial conditions \(u_{0}\). The presence of thermal noise in the equation of motion assigns a probability to the realization of \(\{u\}\), given by \(P\left[\{u\}|u_{0}\right]\). The force \(F(t)\) breaks the time-reversal symmetry. As a consequence, the probability of observing the time-reversed path \(P_{r}\left[\{u\}|u_{0}\right]\) differs from \(P\left[\{u\}|u_{0}\right]\)[59; 60; 61]. This leads to entropy production, \(\Sigma\),
\[\Sigma=k_{B}\log\frac{P\left[\{u\}|u_{0}\right]}{P_{r}\left[\{u\}|u_{0}\right] }=\int\mathrm{d}t\,\sigma(t)\,. \tag{3}\]
where \(\sigma(t)\) is the entropy production rate. In the case of uncorrelated noise \(\left\langle\xi(t)\xi(t^{\prime})\right\rangle\sim\delta(t-t^{\prime})\), the entropy production rate (Eq. (3)) is given by \(\sigma(t)=\left\langle v(t)F(t)\right\rangle/T\), with \(v(t)=\dot{u}(t)\)[4; 60]. Note that this relation is general and thus also holds for non-linear phonon dynamics. By decomposing \(\Sigma\) in Fourier waves, we introduce the spectral entropy production \(\hat{\sigma}(\omega)\) as
\[\hat{\sigma}(\omega)=\int\mathrm{d}\omega^{\prime}\,S_{r}(\omega,\omega^{\prime})\,, \tag{4}\]
with the entropy spectral density
\[S_{r}(\omega,\omega^{\prime})=\frac{\mathrm{i}}{T}\omega^{\prime}\chi(\omega^ {\prime})\hat{F}(\omega^{\prime})\hat{F}(\omega-\omega^{\prime})\,. \tag{5}\]
Equations (4) and (5) are central theoretical results of the paper. With the knowledge of the susceptibility and the shape of the applied drive, quantities typically accessible in experiments, the spectral entropy production can be determined (Fig. 1). As a result, our predictions hold beyond phonons and can be applied for other excitations. In stochastic systems, the entropy production rate is a real fluctuating observable but its time average is positive in agreement with the second law of thermodynamics. In contrast, spectral entropy production is generally complex. To shed light on the interpretation of the spectral entropy production \(\hat{\sigma}(\omega)\), we note it can be evaluated analytically for a periodic driving field \(F(t)=A\exp\left(i\omega_{d}t\right)\). The imaginary part of the entropy production follows to be \(\Im\hat{\sigma}=\delta(\omega-2\omega_{d})A^{2}(T)^{-1}\omega_{d}(\omega_{0}^{2 }-\omega_{d}^{2})\left((\omega_{0}^{2}-\omega_{d}^{2})^{2}+\eta^{2}\omega_{d}^ {2}\right)^{-1}\). Hence, it shows a delta peak at twice the driving frequency \(\omega_{d}\). Furthermore, it is negative (positive) if the driving frequency \(\omega_{d}\) is larger (smaller) than the eigenfrequency \(\omega_{0}\). In contrast, the real part \(\Re\hat{\sigma}=\delta(\omega-2\omega_{d})A^{2}(T)^{-1}\omega_{d}^{2}\eta \left((\omega_{0}^{2}-\omega_{d}^{2})^{2}+\eta^{2}\omega_{d}^{2}\right)^{-1}\) is an odd function of the damping \(\eta\). Therefore, \(\Re\hat{\sigma}\) vanishes for zero damping. Hence, \(\Re\hat{\sigma}\) is a measure of dissipation associated with \(\eta\). Both, \(\Im\hat{\sigma}\) and \(\Re\hat{\sigma}\) decrease with the distance between eigenfrequency \(\omega_{0}\) and driving frequency \(\omega_{d}\) as well as with increasing temperature.
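The relation \(\sigma(t)=\langle v(t)F(t)\rangle/T\) can also be illustrated by direct stochastic simulation of Eq. (1). The sketch below uses an Euler-Maruyama scheme in dimensionless units; the parameter values are illustrative and do not correspond to a specific material:

```python
import numpy as np

rng = np.random.default_rng(5)
omega0, eta, kT = 1.0, 0.3, 0.5            # mode frequency, damping, k_B T (dimensionless)
omega_d, amp, tau = 0.75, 1.0, 5.0          # drive frequency, amplitude, pulse width
dt, n_steps, n_traj = 1e-3, 40000, 2000

t = np.arange(n_steps) * dt - 20.0          # pulse centred at t = 0
force = amp * np.cos(omega_d * t) * np.exp(-0.5 * (t / tau) ** 2)

u = np.zeros(n_traj)                        # one entry per noise realization
v = np.zeros(n_traj)
sigma = np.zeros(n_steps)                   # entropy production rate sigma(t) = <v F> / (k_B T)
for i in range(n_steps):
    sigma[i] = np.mean(v) * force[i] / kT
    noise = rng.normal(size=n_traj)
    v = v + (-eta * v - omega0 ** 2 * u + force[i]) * dt + np.sqrt(2.0 * eta * kT * dt) * noise
    u = u + v * dt

print("time-integrated entropy production (units of k_B):", np.sum(sigma) * dt)
```

The time-integrated entropy production obtained in this way is positive, in line with the second law, while \(\sigma(t)\) itself fluctuates along the pulse.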
## III The power spectrum and spectral entropy production
The spectral entropy production can be determined from the frequency profile of the external force, e.g. THz laser pulses, and the susceptibility of the system. Alternatively, the power spectrum \(\left\langle u(t)^{2}\right\rangle\) can be expressed in terms of the entropy production generated by the laser excitation and, therefore, can be used to extract ultrafast thermodynamics properties of the system. Evaluating \(\left\langle u(t)^{2}\right\rangle\) in Fourier space, the power spectrum can be decomposed in two contributions as (see detail in the appendix)
\[\mathcal{F}\left[\left\langle u(t)^{2}\right\rangle\right](\omega)=\mathcal{F }\left[\left\langle u(t)^{2}\right\rangle\right]_{\mathrm{eq}}(\omega)+ \mathcal{F}\left[\left\langle u(t)^{2}\right\rangle\right]_{\mathrm{neq}}( \omega)\,. \tag{6}\]
The first one \(\mathcal{F}\left[\left\langle u(t)^{2}\right\rangle\right]_{\mathrm{eq}}\) has an equilibrium origin and, indeed, arises from thermal fluctuations,
\[\mathcal{F}\left[\left\langle u(t)^{2}\right\rangle\right]_{\mathrm{eq}}=2 \eta\,k_{B}T\delta(\omega)\,\int\frac{d\omega^{\prime}}{2\pi}\hat{\chi}(\omega^ {\prime})\hat{\chi}(-\omega^{\prime})\,. \tag{7}\]
Figure 1: Schematic representation of a crystal (SrTiO\({}_{3}\) or KTaO\({}_{3}\)) excited by a THz laser pulse. From a direct measure of the diffraction pattern, for instance, obtained from time-resolved X-ray scattering experiments, the ionic displacement can be deduced. Combining this measure with the shape of the THz laser pulse, we can calculate the ultrafast entropy production by applying our theoretical results.
As a result, it is \(\propto T\delta(\omega)\) and fully determined by the susceptibility \(\chi\).
In contrast, the term \(\mathcal{F}\left[\langle u(t)^{2}\rangle\right]_{\text{neq}}\) originates from the external field (THz laser pulse) and, thus, reflects the non-equilibrium part of the dynamics. Indeed, this term (see appendix) can be expressed in terms of the entropy spectral density, \(S_{r}(\omega,\omega^{\prime})\), and reads
\[\mathcal{F}_{\omega}\langle u^{2}(t)\rangle_{\text{neq}}=T\int\frac{d\omega^{ \prime}}{2\pi}\frac{\hat{\chi}(\omega-\omega^{\prime})}{(i\omega^{\prime})} \hat{S}_{r}(\omega,\omega^{\prime})\,. \tag{8}\]
Relation (8) is a key result of the paper providing an alternative route to measure the spectral entropy production. It shows that the ultrafast spectral entropy production in crystals can be measured from the power spectrum of the phonon displacement, an observable signature.
## IV Application to SrTiO\({}_{3}\) and KTaO\({}_{3}\) under laser pulses
To show that the entropy production rate can be obtained from experiments, we compare our model to time-resolved X-ray scattering data obtained by Kozina _et al._, for the nonlinear excitation of phonons in SrTiO\({}_{3}\)[23]. The spectral components of the used THz laser pulse are shown in Fig. 2 (a). To sufficiently reproduce the shape of the spectrum, we assume a superposition of two Gaussian laser pulses, one at frequency \(\omega_{d}=0.75\) THz and a higher-harmonic component with \(2\omega_{d}\), \(F(t)=Z\tilde{E}_{0}\left(\exp{(2\pi\mathrm{i}\omega_{d}t)}+\alpha\exp{(4\pi\mathrm{i}\omega_{d}t)}\right)\exp{\left(-\frac{1}{2}\frac{t^{2}}{\tau^{2}}\right)}\), with \(\alpha\approx 0.2858\). The in-medium field strength is \(\beta E_{0}\), with \(\beta=0.215\) and \(E_{0}=480\) kV cm\({}^{-1}\), while the pulse width is \(\tau=0.5\) ps. The experiment was performed at 100 K, where the soft mode frequency is measured to be \(\omega_{0}/2\pi\approx 1.669\) THz with a damping of \(\eta/2\pi\approx 0.9\) THz. The mode effective charge of SrTiO\({}_{3}\) is \(Z=2.6\) e\({}^{-}\)u\({}^{-1/2}\)[62; 23], with e\({}^{-}\) the elementary charge and u the atomic mass unit.
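The frequency dependence of the soft-mode spectral entropy production can be evaluated from these numbers with a short script. The sketch below follows Eqs. (2)-(5): it computes the driven (noise-free) response through \(\chi(\omega)\), forms \(\sigma(t)=v(t)F(t)/T\), and Fourier transforms it, which reproduces the convolution of Eqs. (4)-(5) up to constant prefactors. The overall prefactor \(Z\beta E_{0}\) and the temperature are set to one, so only the shape of \(|\hat{\sigma}(\omega)|\), to be compared with Fig. 2 (c), is meaningful:

```python
import numpy as np

# SrTiO3 soft mode at 100 K; angular frequencies in rad/ps (1 THz = 2*pi rad/ps)
w0, eta = 2 * np.pi * 1.669, 2 * np.pi * 0.90
wd, alpha, tau = 2 * np.pi * 0.75, 0.2858, 0.5
T = 1.0                                         # temperature only sets the overall scale here

n, dt = 2 ** 15, 0.002                          # time grid in ps
t = (np.arange(n) - n // 2) * dt
F_t = np.real(np.exp(1j * wd * t) + alpha * np.exp(2j * wd * t)) * np.exp(-0.5 * (t / tau) ** 2)

w = 2 * np.pi * np.fft.fftfreq(n, d=dt)         # angular frequency grid
chi = 1.0 / (w0 ** 2 - w ** 2 + 1j * eta * w)   # susceptibility, Eq. (2)
u_w = chi * np.fft.fft(F_t)                     # driven (noise-free) response in frequency space
v_t = np.real(np.fft.ifft(1j * w * u_w))        # velocity of the driven mode in the time domain

sigma_t = v_t * F_t / T                         # entropy production rate sigma(t) = v(t) F(t) / T
sigma_w = np.abs(np.fft.fft(sigma_t)) * dt      # proportional to |sigma_hat(omega)|, cf. Eqs. (4)-(5)

freq_THz = w[: n // 2] / (2 * np.pi)            # positive-frequency axis, ready for plotting
idx = np.argmin(np.abs(freq_THz - 2.33))
print("weight of |sigma(omega)| at 2.33 THz relative to its maximum: %.2f"
      % (sigma_w[idx] / sigma_w[: n // 2].max()))
```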
The measured spectral component of the time-domain X-ray data [23] is scaled against the computed amplitude of the soft mode according to Eq. (2) and shown in Fig. 2 (b). The soft mode contribution to the experimental spectrum is shaded in light blue. Data are used to compute the spectral entropy production, \(|\hat{\sigma}|\), of the soft mode as a function of \(\omega\) and compared against our theory in Fig. 2 (c). \(|\hat{\sigma}|\) computed from the experimental data exhibits a peak at frequency \(\omega_{\sigma_{1}}/2\pi\approx 2.33\) THz, which is reproduced by our model. Furthermore, we reconstruct the power spectrum, \(|\mathcal{F}[\langle u^{2}\rangle]_{neq}|\), given in Fig. 2 (d), which is off-resonance with twice the soft-mode frequency. The shape of \(|\mathcal{F}[\langle u^{2}\rangle]_{neq}|\) shows strong overlap with the computed entropy production. Due to nonlinear coupling between phonons, discussed in Ref. [23], a peak of the second optical mode at \(\approx 5.19\) THz can be clearly observed in Fig. 2 (b). We note that this mode (not considered in our model) is silent because there is no spectral overlap with the driving field, which is almost zero for \(\omega>3\) THz. As a result, the entropy production generated by the second optical mode and the laser field is negligible (compare Fig. 2 (b) and Fig. 2 (c), see also appendix).
To shed light on the process of entropy production due to laser fields, we compute the spectral entropy production for the two different materials SrTiO\({}_{3}\) and KTaO\({}_{3}\), upon assuming a simple Gaussian laser pulse, \(F(t)\sim e^{2\pi\mathrm{i}\omega_{d}t}\,e^{-t^{2}/(2\tau^{2})}\), without a higher-harmonic contribution. We fix the in-medium field strength to be \(\tilde{E}_{0}=100\) kV cm\({}^{-1}\). As before, the frequency of the driving field is \(\omega_{d}=0.75\) THz and the pulse width is \(\tau=1\) ps. The mode effective charges are \(Z=2.5\) for SrTiO\({}_{3}\)[62; 23] and \(Z=1.4\) for KTaO\({}_{3}\)[62; 63]. We focus on the soft mode only, where frequency and line width strongly depend on temperature [64; 65] (see appendix).
SrTiO\({}_{3}\) is a cubic perovskite with a tetragonal phase transition at \(\approx 105\) K [66]. Further, SrTiO\({}_{3}\) exhibits a diverging dielectric constant at low temperatures as well as an asymptotic vanishing of the soft-mode fre
Figure 2: Entropy production in SrTiO\({}_{3}\) after exposure to an intense THz laser pulse at 100 K. We compare an estimate computed from time-resolved X-ray scattering data taken from Kozina _et al._[23] with model data. (a) Fourier transform of the THz laser pulse (solid dark blue), compared with a theoretical Gaussian laser pulse (orange dashed) with frequency \(\omega_{d}=0.75\) THz, superposed with a higher-harmonic at \(2\omega_{d}\). (b) Comparison of experimental (solid dark blue) and computed (dashed orange) Fourier transform of the phonon normal mode amplitude, \(\hat{u}(\omega)\). The soft mode contribution is shaded in light blue. (c) Comparison of the spectral entropy production, \(|\hat{\sigma}|\), computed from the full experimental data of the phonon normal mode amplitude (solid dark blue) with our model taking into account the soft mode only (dashed orange). (d) Comparison of the power spectrum, \(|\mathcal{F}[\langle u^{2}\rangle]_{neq}|\), computed from the full experimental data (solid dark blue) with our soft-mode-only model (dashed orange).
quency, both indicative of a ferroelectric phase transition [65; 67]. However, the transition is avoided due to quantum fluctuations, making SrTiO\({}_{3}\) a quantum critical paraelectric [67]. According to Ref. [64], the soft-mode frequency of SrTiO\({}_{3}\) is in resonance with the driving frequency, \(\omega_{0}=\omega_{d}=0.75\) THz at \(T\approx 52\) K. In Fig. 3 (a), we show the computed spectral entropy production for SrTiO\({}_{3}\) at various temperatures ranging from 30 K to 60 K. Due to the temperature dependence and softening of the damping, the real part becomes maximal slightly above 50 K. In contrast, the imaginary part of \(\hat{\sigma}(\omega)\) increases with decreasing temperature, showing a clear sign change below 52 K. The absolute value of the spectral entropy production shows a local maximum around this temperature. However, as low temperatures suppress the thermal noise, entropy production by the non-equilibrium force is enhanced. As a result, the absolute value will increase rapidly for very low temperatures (\(<\)15 K, not shown here). Due to the narrow width of the Gaussian laser field (1 ps), neither \(\mathrm{Re}\,\hat{\sigma}\), \(\mathrm{Im}\,\hat{\sigma}\), nor \(|\hat{\sigma}|\) have a peak at exactly \(2\omega_{d}\), but instead show a decreasing peak frequency with decreasing temperature. Plotting \(\mathcal{F}_{\omega}\langle u^{2}\rangle_{\mathrm{neq}}\) reveals clear peaks at twice the soft-mode frequency, which is indicated by dashed lines. A non-symmetric broadening of the peak for frequencies occurs in agreement with the spectral weight of the spectral entropy production \(\hat{\sigma}\). This becomes specifically apparent for the temperatures 30 K, 40 K and 60 K, in agreement with the notion of entropons [68].
In contrast to SrTiO\({}_{3}\), KTaO\({}_{3}\) remains cubic down to liquid helium temperatures [69]. It is also regarded as a quantum paraelectric, but outside the quantum critical regime [70]. As a result, the decrease of the soft-mode frequency and damping is slower compared to SrTiO\({}_{3}\), being in resonance with the driving frequency \(\omega_{d}=0.75\) THz at \(\approx 26.4\) K [64]. Therefore, we evaluate the spectral entropy production for temperatures between 10 K and 40 K, plotted in Fig. 3 (b). The steady increase of \(\mathrm{Re}\,\hat{\sigma}\) with decreasing temperature shows that the entropy production process dominates the decrease of the soft-mode damping. As before, the sign change of \(\mathrm{Im}\,\hat{\sigma}\) for low temperatures can be clearly revealed. Due to the low temperatures considered, \(|\hat{\sigma}|\) shows a rapid increase for decreasing temperatures. In agreement with the absence of a theoretical ferroelectric transition at low temperatures, the soft mode frequency remains finite at low temperatures. As a result, the soft-mode peaks at \(2\omega_{0}\) in \(\mathcal{F}_{\omega}\langle u^{2}\rangle_{\mathrm{neq}}\) remain at higher frequencies, compared to SrTiO\({}_{3}\). Furthermore, the peaks occur fairly close to the maxima of \(|\hat{\sigma}|\) making the entropon broadening less pronounced, in comparison to SrTiO\({}_{3}\).
## IV Conclusion
We provide one of the first studies of ultrafast thermodynamic processes by deriving the ultrafast entropy production due to transient phonons in materials excited by a THz laser pulse. Specifically, the soft modes of SrTiO\({}_{3}\) and KTaO\({}_{3}\) are evaluated by comparing our theory to experimental data and simulation results. The entropy production takes place on the picosecond timescale and can be deduced from the collective ionic displacement as observed, e.g., by time-resolved X-ray scattering. While entropy production and sample heating are well-known concepts in general, our work sheds light on the microscopic mechanism behind entropy production in driven quantum materials, using the framework of stochastic thermodynamics. While the maximal energy transfer from the laser to the sample is determined by the laser intensity, the production of entropy strongly increases with decreasing temperature. Furthermore, the temporal signature of this process is tightly bound to the soft-mode frequency.
More generally, we envision ultrafast thermodynamics to provide characteristic signatures of complex systems, beyond phononic processes. We showed, in particular, that, in the presence of uncorrelated noise, entropy production depends on the materials' response function. As a consequence, Eq. (4) can be applied to other systems, e.g., magnons [9; 71; 17], considering the corresponding magnetic susceptibility.
Figure 3: Ultrafast thermodynamics properties for SrTiO\({}_{3}\) (a) and KTaO\({}_{3}\) (b). In each case, real part \(\mathrm{Re}\,\hat{\sigma}(\omega)\), imaginary part \(\mathrm{Im}\,\hat{\sigma}(\omega)\) and modulus \(|\hat{\sigma}(\omega)|\) of the spectral entropy production, \(\hat{\sigma}(\omega)\), (see definition (4)), are shown together with the Fourier transform of the non-equilibrium contribution of the power spectrum \(\mathcal{F}_{\omega}\langle u^{2}(t)\rangle_{\mathrm{neq}}\). Temperature-dependent soft modes are considered. Dashed lines in the plot for \(\mathcal{F}_{\omega}\langle u^{2}\rangle_{\mathrm{neq}}\) denote twice the soft-mode frequency.
Furthermore, our theory for the power spectrum of the displacement-displacement correlation exhibits spectral weight besides a sharp peak at twice the eigenfrequency of the soft mode. We have shown that this part of the power spectrum can be associated with spectral entropy production. The emergence of such a feature is closely related to the concept of entropons recently introduced for intrinsic non-equilibrium systems reaching a steady-state [68]. In contrast, here, the system is away from the steady state, and entropy production is generated by the transient force due to laser pulses.
## Acknowledgements
We thank Jeremy Vachier for valuable insights and discussions. LC acknowledges support from the Alexander Von Humboldt foundation. HL acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) through the SPP 2265 under the grant number LO 418/25-1. RMG acknowledges support from the Swedish Research Council (VR starting grant No. 2022-03350) and Chalmers University of Technology. Computational resources were provided by the Swedish National Infrastructure for Computing (SNIC) via the National Supercomputer Centre (NSC).
## Appendix
### Spectral entropy production
By applying the time Fourier transform to the equation of motion for \(u(t)\), Eq. (1) (see Fig. 4 for an example of its solution in real-time), we obtain the dynamics in the domain of frequency \(\omega\)
\[\left(-\omega^{2}+\omega_{0}^{2}+i\omega\eta\right)\hat{u}(\omega)=\sqrt{2\eta \,k_{B}T}\,\hat{\xi}(\omega)+\hat{F}(\omega)\,, \tag{9}\]
where the hat-symbol denotes the time-Fourier transform of a variable and \(\hat{\xi}(\omega)\) is a Gaussian noise with zero average and \(\langle\hat{\xi}(\omega)\hat{\xi}(\omega^{\prime})\rangle=\delta(\omega+\omega^{\prime})\). By defining the velocity \(v(t)=\dot{u}(t)\), so that \(\hat{v}(\omega)=i\omega\hat{u}(\omega)\), Eq. (9) can be expressed as
\[i\omega\hat{u}(\omega)=\hat{v}(\omega) \tag{10}\] \[\left(i\omega+\eta\right)\hat{v}(\omega)+\omega_{0}^{2}\hat{u}( \omega)=\sqrt{2\eta\,k_{B}T}\,\hat{\xi}(\omega)+\hat{F}(\omega)\,. \tag{11}\]
The path-probability of the phonon normal mode, \(P[\{u\}|u_{0}]\), conditioned on the initial value \(u_{0}\), can be estimated from the probability distribution of the noise history \(p[\{\xi\}|\xi_{0}]\), conditioned on the initial value \(\xi_{0}\). Here, curly brackets denote the time history from the initial to the final time. The Gaussian properties of the noise allow us to express \(p[\{\xi\}|\xi_{0}]\) as [61]
\[\mathrm{p}[\{\xi\}|\xi_{0}] \sim\exp\left(-\frac{1}{2}\int dt\,\xi(t)^{2}\right) \tag{12}\] \[=\exp\left(-\frac{1}{2}\int dt\,\int\frac{d\omega}{2\pi}e^{-i \omega t}\int ds\,e^{i\omega s}\xi(s)^{2}\right)\] \[=\exp\left(-\frac{1}{2}\int dt\,\int\frac{d\omega}{2\pi}e^{-i \omega t}\int\frac{d\omega^{\prime}}{2\pi}\hat{\xi}(\omega^{\prime})\hat{\xi} (\omega-\omega^{\prime})\right)\,,\]
where in the second and third equalities we have applied the properties of Fourier transforms. From here, we can switch to the probability of the trajectory of the phonon mode \(\{u\}\) by performing the change of variables \(\xi\to u\), i.e. by using the equation of motion in Fourier space
\[\hat{\xi}(\omega)=\frac{1}{\sqrt{2\eta k_{B}T}}\left[\left(i\omega+\eta\right) \hat{v}(\omega)+\omega_{0}^{2}\hat{u}(\omega)-\hat{F}(\omega)\right]\,. \tag{13}\]
Such a change of variables should involve the determinant of the transformation. We ignore this term because it is irrelevant to the calculation of the entropy production since it provides only an even term under time-reversal transformation [61]. As a consequence, the following relation holds
\[\mathrm{P}[\{u\}|u_{0}]\sim\mathrm{p}[\{\xi\}|\xi_{0}]\,. \tag{14}\]
The path-probability of the backward trajectory of the phonon normal mode, \(\mathrm{P}_{r}[\{u\}|u_{0}]\), can be obtained by simply applying the time-reversal transformation (TRT) to the particle dynamics. By denoting time-reversed variables by a subscript \(r\), the path-probability of the time-reversed noise history, \(\mathrm{p}_{r}[\{\xi\}|\xi_{0}]\), is still Gaussian and reads
\[\mathrm{p}_{r}[\{\xi\}|\xi_{0}] \sim\exp\left(-\frac{1}{2}\int dt\,\xi_{r}(t)^{2}\right)\] \[=\exp\left(-\frac{1}{2}\int dt\,\int\frac{d\omega}{2\pi}e^{-i \omega t}\int\frac{d\omega^{\prime}}{2\pi}\hat{\xi}_{r}(\omega^{\prime})\hat{ \xi}_{r}(\omega-\omega^{\prime})\right)\,. \tag{15}\]
To switch to \(\mathrm{P}_{r}[\{u\}|u_{0}]\), we first have to evaluate the backward dynamics by applying the TRT to Eq. (1). By using \(u_{r}=u\) and \(v_{r}=-v\), we conclude that all the terms in Eq. (1) are invariant under TRT except for the friction force. Applying the Fourier transform to Eq. (1) and expressing the noise \(\hat{\xi}_{r}(\omega)\) as a function of \(u_{r}(\omega)\) and \(v_{r}(\omega)\), we can perform the change of variables \(\xi_{r}\to u\), which allows us to use the following relation
\[\hat{\xi}_{r}(\omega)=\frac{1}{\sqrt{2\eta k_{B}T}}\left[\left(i\omega-\eta \right)\hat{v}(\omega)+\omega_{0}^{2}\hat{u}(\omega)-\hat{F}(\omega)\right]\,. \tag{16}\]
By neglecting again the determinant of the change of variables, \(\mathrm{P}_{r}[\{u\}|u_{0}]\) reads
\[\mathrm{P}_{r}[\{u\}|u_{0}]\sim\mathrm{p}_{r}[\{\xi\}|\xi_{0}]\,. \tag{17}\]
Figure 4: Illustration of an ensemble of solutions of the equation of motion with uncorrelated noise for generic parameters. The blue solid line represents the mean solution, while dashed lines are single trajectories that illustrate the standard deviation.
To calculate the entropy production \(\Sigma\), we use the definition (3), i.e. the log-ratio between the probabilities of forward and backward trajectories of the phonon normal mode,
\[(2T)\Sigma =(2k_{B}T)\log\frac{p(\{u\}|u_{0})}{p_{r}(\{u\}|u_{0})}\] \[=\int dt\int\frac{d\omega}{2\pi}e^{-i\omega t}\int\frac{d\omega^{ \prime}}{2\pi}\times\] \[\times\left(\langle\hat{v}(\omega^{\prime})\hat{F}(\omega-\omega ^{\prime})\rangle+\langle\hat{v}(\omega-\omega^{\prime})\hat{F}(\omega^{\prime })\rangle\right)\,. \tag{18}\]
By comparing Eq. (18) with the definition
\[\Sigma=\int dt\,\dot{s}(t)\,, \tag{19}\]
one can identify the entropy production rate, \(\dot{s}(t)\), as
\[\dot{s}(t) =\int\frac{d\omega}{2\pi}e^{-i\omega t}\int\frac{d\omega^{\prime} }{2\pi}\frac{1}{2T}\times\] \[\qquad\times\left(\langle\hat{v}(\omega^{\prime})\hat{F}(\omega- \omega^{\prime})\rangle+\langle\hat{v}(\omega-\omega^{\prime})\hat{F}(\omega ^{\prime})\rangle\right)\,. \tag{20}\]
Applying the Fourier transform, we introduce the spectral entropy production rate, \(\dot{s}(\omega)\), as
\[\dot{s}(t)=\int\frac{d\omega}{2\pi}e^{-i\omega t}\dot{s}(\omega) \tag{21}\]
and, by comparison with Eq. (20), we obtain
\[\dot{s}(\omega)=\int\frac{d\omega^{\prime}}{2\pi}\frac{1}{2T}\left( \langle\hat{v}(\omega^{\prime})\hat{F}(\omega-\omega^{\prime})\rangle+\langle \hat{v}(\omega-\omega^{\prime})\hat{F}(\omega^{\prime})\rangle\right)\,. \tag{22}\]
We remark that expressions (18) and (22) do not depend on the choice of the force in the dynamics of \(\hat{u}(\omega)\). As a result, they are unchanged when a non-linear force, e.g. due to phonon-phonon coupling, is added to Eq. (9).
### Entropy spectral density
The formal solution of the equation of motion (1) in Fourier space is given by
\[\hat{u}(\omega)=\frac{\sqrt{2\eta\,k_{B}T}\,\hat{\xi}(\omega)+\hat{F}(\omega) }{\omega_{0}^{2}-\omega^{2}+i\omega\eta}=\chi(\omega)\hat{A}(\omega). \tag{23}\]
Here, \(\chi(\omega)\) is the (linear) susceptibility
\[\chi(\omega)=\frac{1}{\omega_{0}^{2}-\omega^{2}+i\omega\eta}\,, \tag{24}\]
and \(\hat{A}(\omega)=\sqrt{2\eta\,k_{B}T}\,\hat{\xi}(\omega)+\hat{F}(\omega)\). By using that \(\hat{v}(\omega)=i\omega\hat{u}(\omega)\) and \(\langle\hat{\xi}(\omega)\rangle=0\), the spectral entropy production, \(\hat{s}(\omega)\), can be expressed as
\[\hat{s}(\omega)=\frac{i}{T}\int\frac{d\omega^{\prime}}{2\pi}\,\omega^{\prime}\hat{F}(\omega-\omega^{\prime})\chi(\omega^{\prime})\hat{F}(\omega^{\prime})\,. \tag{25}\]
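The step leading to Eq. (25) can be made explicit: inserting the formal solution (23) into \(\hat{v}(\omega^{\prime})=i\omega^{\prime}\hat{u}(\omega^{\prime})\) and using that the noise has zero mean while \(\hat{F}\) is deterministic gives

\[\langle\hat{v}(\omega^{\prime})\hat{F}(\omega-\omega^{\prime})\rangle=i\omega^{\prime}\chi(\omega^{\prime})\hat{F}(\omega^{\prime})\hat{F}(\omega-\omega^{\prime})\,,\]

while the second term in Eq. (22) yields an identical contribution after the substitution \(\omega^{\prime}\to\omega-\omega^{\prime}\), producing the overall prefactor \(1/T\) in Eq. (25).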
By introducing the entropy spectral density, \(\hat{S}_{r}(\omega,\omega^{\prime})\), as
\[\hat{s}(\omega)=\int\frac{d\omega^{\prime}}{2\pi}\hat{S}_{r}(\omega,\omega^{ \prime})\,, \tag{26}\]
we can immediately identify
\[\hat{S}_{r}(\omega,\omega^{\prime})=\frac{(i\omega^{\prime})}{T}\hat{F}(\omega-\omega^{\prime})\chi(\omega^{\prime})\hat{F}(\omega^{\prime})\,. \tag{27}\]
Equation (27) coincides with formula (5) of the main text. Non-linear force terms do not allow the system to have a formal solution in terms of \(\chi(\omega)\). Thus, formula (27) holds only in the linear case.
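As an illustration of how Eqs. (24)-(27) can be evaluated numerically, the following minimal Python sketch computes \(\chi(\omega)\), \(\hat{S}_{r}(\omega,\omega^{\prime})\) and \(\hat{s}(\omega)\) on a discrete frequency grid for a Gaussian driving pulse; all parameter values are illustrative placeholders in reduced units (\(k_{B}=1\)), not the values used for Figs. 2 and 3.

```python
import numpy as np

# Illustrative parameters in reduced units (k_B = 1); not the fitted values used in the paper
omega0 = 2.0 * np.pi * 0.75    # soft-mode frequency
eta = 0.5                      # damping
T = 1.0                        # temperature
omega_d = 2.0 * np.pi * 0.75   # driving frequency
tau = 1.0                      # Gaussian pulse duration
F0 = 1.0                       # pulse amplitude

w = np.linspace(-20.0, 20.0, 1001)   # frequency grid
dw = w[1] - w[0]

def chi(x):
    """Linear susceptibility, Eq. (24)."""
    return 1.0 / (omega0**2 - x**2 + 1j * eta * x)

def F_hat(x):
    """Fourier transform of the pulse F(t) = F0 exp(-t^2 / 2 tau^2) cos(omega_d t)."""
    gauss = lambda y: np.exp(-0.5 * (tau * y) ** 2)
    return 0.5 * F0 * tau * np.sqrt(2.0 * np.pi) * (gauss(x - omega_d) + gauss(x + omega_d))

# Entropy spectral density on a 2D (omega, omega') grid, Eq. (27)
W, Wp = np.meshgrid(w, w, indexing="ij")
S_r = (1j * Wp / T) * F_hat(W - Wp) * chi(Wp) * F_hat(Wp)

# Spectral entropy production, Eq. (26): simple Riemann approximation of the omega' integral
s_hat = S_r.sum(axis=1) * dw / (2.0 * np.pi)
```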
### Dynamical correlation of the normal phonon mode
By using Eq. (23) the Fourier transform of the dynamical correlation, \(\mathcal{F}\langle u^{2}(t)\rangle\), is given by
\[\mathcal{F}\langle u^{2}(t)\rangle =\int\frac{d\omega^{\prime}}{2\pi}\langle\hat{u}(\omega^{\prime} )\hat{u}(\omega-\omega^{\prime})\rangle \tag{28}\] \[=\int\frac{d\omega^{\prime}}{2\pi}\langle\hat{A}(\omega^{\prime} )\hat{A}(\omega-\omega^{\prime})\rangle\hat{\chi}(\omega^{\prime})\hat{\chi}( \omega-\omega^{\prime})\,.\]
First, we applied the convolution theorem and, second, we used Eq. (23). Using the definition of \(\hat{A}(\omega)\), \(\mathcal{F}_{\omega}\langle u^{2}(t)\rangle\) can be decomposed into two terms,
\[\mathcal{F}_{\omega}\langle u^{2}(t)\rangle=\mathcal{F}_{\omega}\langle u^{2}( t)\rangle_{eq}+\mathcal{F}_{\omega}\langle u^{2}(t)\rangle_{neq}\,. \tag{29}\]
The first term, \(\mathcal{F}_{\omega}\langle u^{2}(t)\rangle_{eq}\), in the right-hand side of Eq. (29), has an equilibrium origin: it arises from the Brownian noise and is given by the convolution of the susceptibility with itself. For uncorrelated noise, we have \(\langle\hat{\xi}(\omega^{\prime})\hat{\xi}(\omega-\omega^{\prime})\rangle= \delta(\omega)\), and this term reads
\[\mathcal{F}_{\omega}\langle u^{2}(t)\rangle_{eq} =2\eta k_{B}T\int\frac{d\omega^{\prime}}{2\pi}\langle\hat{\xi}( \omega^{\prime})\hat{\xi}(\omega-\omega^{\prime})\rangle\hat{\chi}(\omega^{ \prime})\hat{\chi}(\omega-\omega^{\prime}) \tag{30}\] \[=2\eta k_{B}T\delta(\omega)\int\frac{d\omega^{\prime}}{2\pi}\hat{ \chi}(\omega^{\prime})\hat{\chi}(-\omega^{\prime})\,.\]
As an equilibrium term, \(\mathcal{F}_{\omega}\langle u^{2}(t)\rangle_{eq}\) gives a DC contribution (\(\omega=0\)) to the dynamical correlation and does not prevent the system from reaching a steady state.
In contrast, the second term \(\mathcal{F}_{\omega}\langle u^{2}(t)\rangle_{neq}\) in the right-hand side of Eq. (29) has a non-equilibrium origin. It disappears when the non-equilibrium force vanishes and is given by
\[\mathcal{F}_{\omega}\langle u^{2}(t)\rangle_{neq}=\int\frac{d\omega^{\prime}}{2 \pi}\hat{F}(\omega^{\prime})\hat{F}(\omega-\omega^{\prime})\hat{\chi}(\omega ^{\prime})\hat{\chi}(\omega-\omega^{\prime})\,. \tag{31}\]
This term can be linked to the entropy spectral density \(\hat{S}_{r}(\omega,\omega^{\prime})\), defined in Eq. (27). As a result, Eq. (31) can be written as follows,
\[\mathcal{F}_{\omega}\langle u^{2}(t)\rangle_{neq}=T\int\frac{d\omega^{\prime}}{2 \pi}\frac{\hat{\chi}(\omega-\omega^{\prime})}{(i\omega^{\prime})}\hat{S}_{r}( \omega,\omega^{\prime})\,, \tag{32}\]
which corresponds to Eq. (8) of the main text.
### Temperature dependence of the soft mode
The soft-mode frequency and the damping (line width) are strongly temperature dependent. We model their temperature dependence by fitting a second-order polynomial to data taken from Vogt [64],
\[x(T)=a_{0}+a_{1}T+a_{2}T^{2}\,. \tag{33}\]
Here, \(x=\omega_{0},\eta\) is either the soft mode frequency \(\omega_{0}\) or the damping \(\eta\). In the past, other parametrizations of the soft mode have been proposed, e.g., the four-parameter model by Barrett [72]. However, for our purpose, a fit according to Eq. (33) provides a reasonable accuracy within the discussed temperature range. The fitting parameters are given in Tab. 1, while a comparison of the quadratic fit with experimental data is reported in Fig. 5 for SrTiO\({}_{3}\) and KTaO\({}_{3}\) materials, showing good agreement both for the soft mode frequency and damping.
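A minimal sketch of this fit, assuming the digitized \((T,\omega_{0})\) and \((T,\eta)\) data from Ref. [64] are available as plain arrays (the numbers below are placeholders, not the actual data of Ref. [64]):

```python
import numpy as np

# Placeholder (T, omega_0) pairs standing in for the digitized soft-mode data of Ref. [64]
T_data = np.array([30.0, 40.0, 50.0, 60.0])        # K
omega0_data = np.array([0.60, 0.68, 0.75, 0.82])   # THz, illustrative values only

# Second-order polynomial fit, Eq. (33): x(T) = a0 + a1*T + a2*T^2
a2, a1, a0 = np.polyfit(T_data, omega0_data, deg=2)   # np.polyfit returns highest order first

def soft_mode_frequency(T):
    return a0 + a1 * T + a2 * T**2

# The same procedure is applied to the damping eta(T).
```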
|
2301.04637 | A Systematic Study of Ia-CSM Supernovae from the ZTF Bright Transient
Survey | Among the supernovae (SNe) that show strong interaction with the
circumstellar medium, there is a rare subclass of Type Ia supernovae, SNe
Ia-CSM, that show strong narrow hydrogen emission lines much like SNe IIn but
on top of a diluted over-luminous Type Ia spectrum. In the only previous
systematic study of this class (Silverman et al. 2013), 16 objects were
identified, 8 historic and 8 from the Palomar Transient Factory (PTF). Now
using the successor survey to PTF, the Zwicky Transient Facility (ZTF), we have
classified 12 additional objects of this type through the systematic Bright
Transient Survey (BTS). In this study, we present and analyze the optical and
mid-IR light curves, optical spectra, and host galaxy properties of this
sample. Consistent with previous studies, we find the objects to have slowly
evolving light curves compared to normal SNe Ia with peak absolute magnitudes
between -19.1 and -21, spectra having weak H$\beta$, large Balmer decrements of
~7 and strong Ca NIR emission. Out of 10 SNe from our sample observed by
NEOWISE, 9 have $3\sigma$ detections, along with some showing a clear reduction
in red-wing of H$\alpha$, indicative of newly formed dust. We do not find our
SN Ia-CSM sample to have a significantly different distribution of equivalent
width of He I $\lambda5876$ than SNe IIn as observed in Silverman et al. 2013.
The hosts tend to be late-type galaxies with recent star formation. We also
derive a rate estimate of 29$^{+27}_{-21}$ Gpc$^{-3}$ yr$^{-1}$ for SNe Ia-CSM
which is ~0.02--0.2 % of the SN Ia rate. This work nearly doubles the sample of
well-studied Ia-CSM objects in Silverman et al. 2013, increasing the total
number to 28. | Yashvi Sharma, Jesper Sollerman, Christoffer Fremling, Shrinivas R. Kulkarni, Kishalay De, Ido Irani, Steve Schulze, Nora Linn Strotjohann, Avishay Gal-Yam, Kate Maguire, Daniel A. Perley, Eric C. Bellm, Erik C. Kool, Thomas Brink, Rachel Bruch, Maxime Deckers, Richard Dekany, Alison Dugas, Samantha Goldwasser, Matthew J. Graham, Melissa L. Graham, Steven L. Groom, Matt Hankins, Jacob Jencson, Joel P. Johansson, Viraj Karambelkar, Mansi M. Kasliwal, Frank J. Masci, Michael S. Medford, James D. Neill, Guy Nir, Reed L. Riddle, Mickael Rigault, Tassilo Schweyer, Jacco H. Terwel, Lin Yan, Yi Yang, Yuhan Yao | 2023-01-11T18:47:44Z | http://arxiv.org/abs/2301.04637v1 | # A Systematic Study of Ia-CSM Supernovae from the ZTF Bright Transient Survey
###### Abstract
Among the supernovae (SNe) that show strong interaction with the circumstellar medium, there is a rare subclass of Type Ia supernovae, SNe Ia-CSM, that show strong narrow hydrogen emission lines much like SNe IIn but on top of a diluted over-luminous Type Ia spectrum. In the only previous systematic study of this class (Silverman et al., 2013), 16 objects were identified, 8 historic and 8 from the Palomar Transient Factory (PTF). Now using the successor survey to PTF, the Zwicky Transient Facility (ZTF), we have classified 12 additional objects of this type through the systematic Bright Transient Survey (BTS). In this study, we present and analyze the optical and mid-IR light curves, optical spectra and host galaxy properties of this sample. Consistent with previous studies, we find the objects to have slowly evolving light curves compared to normal SNe Ia with peak absolute magnitudes between \(-19.1\) and \(-21\), spectra having weak H\(\beta\), large Balmer decrements of \(\sim 7\) and strong Ca NIR emission. Out of 10 SNe from our sample observed by NEOWISE, 9 have \(3\sigma\) detections, along with some showing a clear reduction in red-wing of H\(\alpha\), indicative of newly formed dust. We do not find our SN Ia-CSM sample to have significantly different distribution of equivalent width of He I \(\lambda 5876\) than SNe IIn as observed in Silverman et al. (2013). The hosts tend to be late-type galaxies with recent star formation. We also derive a rate estimate of 29\({}^{+27}_{-21}\) Gpc\({}^{-3}\) yr\({}^{-1}\) for SNe Ia-CSM which is \(\sim\)0.02-0.2% of the SN Ia rate. This work nearly doubles the sample of well studied Ia-CSM objects in Silverman et al. (2013), increasing the total number to 28.
keywords: circumstellar matter - supernovae: general - supernovae: individual (SN 1997cy, SN 2002ic, SN 2005gj, SN 2005ip, SN 2006jc, SN 2008J, SN 2009ip, SN 2010jl, PTF11kx, SN 2012ca, SN 2013dn, SN 2018crl, SN 2018gkx, SN 2018evt, SN 2019agi, SN 2019ibk, SN 2019rvb, SN 2020onv, SN 2020qxz, SN 2020uem, SN 2020xtg, SN 2020abfe, SN 2020aekp)
## 1 Introduction
When it comes to supernovae (SNe) interacting with circumstellar material (CSM), a number of sub-types of core-collapse SNe (CCSNe) show signs of strong interaction, like SNe IIn (Schlegel, 1990; Filippenko, 1997), SNe Ibn (Pastorello et al., 2008; Foley et al., 2007; Chugai, 2009; Hosseinzadeh et al., 2017) and most recently SNe Icn (Gal-Yam et al., 2021, 2022; Perley et al., 2022). SN IIn progenitors are generally thought to be massive stars (like Luminous Blue Variables, LBVs) that lose their hydrogen envelopes to wind-driven mass loss and outbursts (Gal-Yam et al., 2007; Gal-Yam & Leonard, 2009; Kiewe et al., 2012; Taddia et al., 2013; Smith, 2014). Helium-rich but hydrogen-deficient CSM in the case of SNe Ibn (Pastorello et al., 2008; Foley et al., 2007; Chugai, 2009) and both hydrogen and helium deficient CSM in SNe Icn (Gal-Yam et al., 2022; Perley et al., 2022; Pellegrino et al., 2022) are thought to arise from high-velocity wind mass loss or stripping of the envelope in binary configurations of massive Wolf-Rayet (WR) like stars. For SNe IIn, in most cases, the mass-loss rate derived from the CSM velocity is consistent with estimates from LBV-like eruptive mass loss.
However, there exists a rare sub-type of thermonuclear supernovae (SNe Ia) which also interacts strongly with CSM i.e. SNe Ia-CSM. This class poses a challenge to the progenitor debate of SNe Ia. There is some consensus on there being at least two major progenitor channels for SNe Ia; the double-degenerate (DD) channel (Webbink, 1984; Iben & Tutukov, 1984) which is the merging of two C/O white dwarfs and the single-degenerate (SD) channel (Whelan & Iben, 1973) where the white dwarf accretes enough material from a non-degenerate companion to explode. Although there are more arguments for the DD scenario from observations of nearby SNe Ia (Nugent et al., 2011; Li et al., 2011; Brown et al., 2012; Bloom et al., 2011), the strongest observational evidence for the SD scenario are SNe Ia with CSM.
Indications of CSM around SNe Ia range from detection of time-varying narrow Na I D absorption lines (Patat et al., 2007; Blondin et al., 2009; Simon et al., 2009) in high-resolution spectra (found in at least 20% of SNe Ia in spiral hosts, Sternberg et al., 2011; Maguire et al., 2013; Clark et al., 2021), to strong intermediate and narrow Balmer emission features in the spectra and large deviations of the light curves from the standard shape. The latter phenomena have been named SNe Ia-CSM (Silverman et al., 2013), but were earlier referred to as "SNe IIna" or "SNe Ian" due to the strong similarity between their spectra and those of SNe IIn. The first two examples of this class studied in detail were SNe 2002ic (Hamuy et al., 2003; Deng et al., 2004; Wang et al., 2004; Wood-Vasey et al., 2004; Kotak & Meikle, 2005; Chugai et al., 2004) and 2005gj (Aldering et al., 2006; Prieto et al., 2007), but for a long time there was ambiguity regarding their thermonuclear nature (Benetti et al., 2006). These SNe were dominated by interaction from the first spectrum and were quite over-luminous compared to normal SNe Ia. The first clear example of a thermonuclear SN Ia-CSM was PTF11kx (Dilday et al., 2012; Silverman et al., 2013). It looked like a luminous SN Ia (99aa-like) at early phases but started showing interaction at \(\sim 60\) days from explosion and thereafter strongly resembled SNe 2002ic and 2005gj at late times. Higher resolution spectra taken at early times indicated multiple shells of CSM with some evacuated regions in between. Dilday et al. (2012) suggested a symbiotic nova progenitor involving a WD and a red giant (similar to RS Ophiuchi) could produce such a CSM distribution; however, later studies argued that the massive CSM of PTF11kx was inconsistent with the mass-loss rates from symbiotic nova systems (Silverman et al., 2013; Soker et al., 2013).
Ever since, a handful of SNe of this class have been studied in detail to investigate their progenitors and to distinguish them from their spectroscopic cousins, the Type IIn SNe. Both SN Ia-CSM and SN IIn spectra share a blue quasi-continuum, a strong H\(\alpha\) feature with an intermediate and a narrow component, and often a broad Ca NIR triplet feature, but they differ with regards to the line strength of H\(\beta\), strength/presence of helium and presence of emission lines from intermediate mass elements often found in CCSNe. There are some individual SNe with unclear type often referred to as SN Ia-CSM/IIn, like SN 2012ca for which some papers argue for core-collapse (Inserra et al., 2014, 2016) and others for a thermonuclear origin (Fox et al., 2015). This ambiguity becomes more dominant as the underlying SN flux gets smaller compared to the interaction power (Leloudas et al., 2015). Silverman et al. (2013, hereafter S13) is the only study to analyze a sample of SNe Ia-CSM, 16 objects in total including 6 previously known, 3 re-discovered (re-classified SNe IIn) and 7 new from the Palomar Transient Factory (PTF). Their paper presents the common properties of optical light curves, spectra and host galaxies and contrast them against SN IIn properties. In this paper, we present 12 new SNe Ia-CSM discovered as part of the Zwicky Transient Facility's (ZTF; Bellm et al., 2019; Graham et al., 2019;
Dekany et al., 2020) Bright Transient Survey (BTS; Fremling et al., 2020; Perley et al., 2020) and analyze their optical light curves, spectra, hosts and rates. Throughout this paper, we have compared the results derived from our sample to the ones in S13.
This paper is organised as follows: we first discuss the sample selection criteria and the photometric and spectroscopic data collection in §2, then the analysis of light and color curves and the bolometric luminosities is done in §3.1. The analysis of early- and late-time spectra and emission line identification is presented in §3.2, and analysis of the host galaxies is provided in §3.3. The rates are estimated from the BTS survey in §3.4. We end with a discussion about the nature of SN Ia-CSM progenitors and a summary in §4 and §5.
## 2 Observations and Data Reduction
In this section, we outline our selection criteria, and present the optical photometry and spectroscopic observations of the 12 SNe Ia-CSM in our sample.
### Selection Criteria
To carefully curate our sample of SNe Ia-CSM, we used the BTS sample and its publicly available BTS Sample Explorer1 website to obtain the list of all classified Type Ia subtypes during the period 2018-05-01 to 2021-05-01. We then filter out oddly behaving Type Ia SNe based on their light-curve properties. We used two criteria: the primary being rest-frame duration considering flux above 20% of peak flux, and the second being change in magnitude after 30 days from peak (\(\Delta m_{30}\)). We calculated these two properties from either \(g\) or \(r\)-band light curves (whichever had the maximum number of detections) grouped into 3-day bins and used Gaussian Process Regression2 to interpolate the light curves where coverage was missing. For the first filtering, we calculated the mean (\(\mu\approx 35\) days) and standard deviation (\(\sigma\approx 16\) days) of the duration distribution and selected everything that had a duration greater than \(\mu+3\sigma\). Given the large sample size (\(N=3486\)), the standard error on the mean is \(\sim 0.5\) days, hence our duration cut of \(3\sigma\) is suitable. This filtering selected 41 out of 3486 BTS SNe Ia. Then from these 41 SNe, we calculated the mean and standard deviation of the \(\Delta m_{30}\) distribution and removed SNe that were more than 1\(\sigma\) away from the mean on the higher side to reject the relatively steeply declining long SNe, which resulted in 35 SNe being kept. Again, the mean and standard deviation of the \(\Delta m_{30}\) distribution of these 41 long-duration SNe are 0.48 mag and 0.27 mag respectively and the standard error on the mean is \(\sim 0.04\), making our 1\(\sigma\) cut suitable. Finally, we manually inspected the 35 selected SNe Ia to confirm their classification. 20 out of the 35 SNe that passed the above filtering criteria were just normal SNe Ia either caught late or missing some post-peak coverage in ZTF or had spurious detections that resulted in long duration estimates, 2 had incorrect duration estimates due to an interpolation error and were recalculated and 1 (AT2020ca; Soraisam et al., 2021) had some detections before the SN explosion which could be connected to a different SN (i.e. a sibling; Graham et al., 2022).
Footnote 1: [https://sites.astro.caltech.edu/zt/bts/explorer.php](https://sites.astro.caltech.edu/zt/bts/explorer.php)
Footnote 2: Pedregosa et al. (2011) [https://scikit-learn.org/stable/modules/gaussian.process.html](https://scikit-learn.org/stable/modules/gaussian.process.html)
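To make the light-curve filtering described above concrete, the following minimal Python sketch computes the two selection quantities for a single binned light curve; the Gaussian-process kernel, the grid resolution, and the use of 30 observer-frame days for \(\Delta m_{30}\) are illustrative assumptions rather than the exact configuration used for the BTS sample.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def duration_and_dm30(t, flux, flux_err, z):
    """Duration above 20% of peak flux (rest-frame days) and magnitude drop 30 d after peak."""
    gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=20.0),
                                  alpha=flux_err**2, normalize_y=True)
    gp.fit(t[:, None], flux)

    t_grid = np.linspace(t.min(), t.max(), 1000)
    f_grid = gp.predict(t_grid[:, None])

    f_peak, t_peak = f_grid.max(), t_grid[f_grid.argmax()]
    above = t_grid[f_grid > 0.2 * f_peak]                 # epochs above 20% of the peak flux
    duration = (above.max() - above.min()) / (1.0 + z)    # rest-frame duration (sketch assumes a single episode)

    f_30 = gp.predict(np.array([[t_peak + 30.0]]))[0]     # interpolated flux 30 days after peak
    dm30 = -2.5 * np.log10(f_30 / f_peak)
    return duration, dm30

# Cut 1: keep objects with duration > mu + 3*sigma of the full BTS SN Ia duration distribution.
# Cut 2: among those, drop objects with dm30 more than 1*sigma above the mean of the survivors.
```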
The remaining 12 long-duration SNe Ia all turned out to be spectroscopically classified SNe Ia-CSM in BTS, and none of the classified BTS SNe Ia-CSM were missed in this filtering. No other SNe apart from these stood out in particular, indicating the classification reliability of the BTS sample. During the same period, 9 SNe Ia-CSM were reported to the Transient Name Server (TNS), out of which 7 are already in our sample, 1 was detected by ZTF but did not meet the BTS criteria, and 1 was not detected in ZTF as the transient location fell too close to the field edges and was masked by the automated image subtraction pipeline. Yao et al. (2019) presented early photometric observations of one SN Ia-CSM in our sample, SN 2018crl. Table 1 summarizes the coordinates, redshifts, peak absolute magnitudes, durations, host galaxy information and Milky Way extinction for the 12 SNe Ia-CSM in our sample.
Furthermore, we re-checked the classifications of 142 SNe IIn classified in BTS during the same period as above, in case any SN Ia-CSM was masquerading among them and found 6 to have ambiguous classifications. These are discussed further in Appendix A.
### Discovery
All SNe Ia-CSM were detected by the ZTF (Bellm et al., 2019; Graham et al., 2019; Dekany et al., 2020) and passed the criteria for the BTS (Fremling et al., 2020; Perley et al., 2020) automatic filtering, i.e. extra-galactic real transients with peak magnitudes brighter than 19 mag. These were saved and classified as part of BTS which aims to classify all transients brighter than 18.5 magnitude, and reported to the Transient Name Server3 (TNS) during the period 2018-05-01 to 2021-05-01. Out of the 12 SNe, 6 were first reported to TNS (i.e. discovered) by ZTF (AMPEL, Nordin et al., 2019; Soumagnac and Ofek, 2018 and BTS), 3 were first reported by GaiaAlerts (Hodgkin et al., 2021), 2 by ATLAS (Smith et al., 2020) and 1 by ASAS-SN (Shappee et al., 2014). For classification, 9 were classified by the ZTF group, 1 by ePESSTO (Smartt et al., 2015; Stein et al., 2018), 1 by SCAT (Tucker et al., 2018; Payne et al., 2019) and 1 by the Trinity College Dublin (TCD) group (Prentice et al., 2020). The follow-up spectral series for these SNe were obtained as part of the
BTS classification campaign as many were difficult to classify with the ultra-low resolution spectrograph P60/SEDM (Blagorodnova et al., 2018) and hence were followed up with intermediate resolution spectrographs. The SEDM spectra were helpful in determining an initial redshift but the template matches were unclear (matched to SN IIn as well as SN Ia-CSM and SN Ia-pec templates, some matched poorly to SN Ia/Ic at early times). SNe 2019agi (classification and spectrum taken from TNS), 2019rvb, 2020onv, 2020qxz and 2020uem were classified as Ia-CSM \(\sim 1-2\) month after discovery using spectra at phases of 42, 26, 38, 45 and 51 days respectively. SNe 2018crl, 2018gkx and 2019ibk were classified \(\sim 2-3\) months after discovery using spectra at phases of 92, 75 and 103 days respectively. SNe 2018evt, 2020abfe and 2020aekp were classified \(\sim 4-5\) months after discovery using the spectra at phases of 144, 146 and 132 days respectively. SN 2020xtg immediately went behind the sun after its first detection in ZTF therefore its first spectrum (using SEDM) was taken at 91 days since explosion which was dominated by strong H\(\alpha\) emission, and thus SN 2020xtg was initially classified as a Type II. As this SN was exhibiting a long lasting light curve, an intermediate resolution spectrum was taken at 340 days which matched very well to SN Ia-CSM and therefore its classification was updated. SNe 2020uem and 2020aekp showed peculiar features and were followed up for more optical spectroscopy for single object studies (to be presented in future papers).
### Optical photometry
To assemble our sample light curves, we obtained forced PSF photometry via the ZTF forced-photometry service (Masci et al., 2019; IRSA, 2022) in \(g\), \(r\) and \(i\) bands and also added data from ATLAS (Tonry et al., 2018; Smith et al., 2020) forced-photometry service in \(c\) and \(o\) bands. The high cadence ZTF partnership survey in \(i\) band contributed some photometry to SNe 2018crl, 2018gkx, 2019agi, 2019ibk and 2019rvb. The ZTF and ATLAS data were supplemented with data from the Rainbow camera (RC, Ben-Ami et al., 2012) on the robotic Palomar 60-inch telescope (P60, Cenko et al., 2006) and the Optical wide field camera (IO:O) on the Liverpool telescope (LT, Steele et al., 2004). The P60 data was processed with the automatic image subtraction pipeline FPipe(Fremling et al., 2016) using reference images from SDSS when available, and otherwise from Pan-STARRS1. The IO:O data was initially reduced with their standard pipeline4 then image subtraction was carried out using the method outlined in Taggart (2020). For SN 2018evt, some early time data available from ASAS-SN (Shappee et al., 2014; Kochanek et al., 2017) in the \(V\) band was obtained through their _Sky Patrol5_ interface.
Footnote 4: [https://telescope.livjm.ac.uk/Tellast/Pipelines/](https://telescope.livjm.ac.uk/Tellast/Pipelines/)
Footnote 5: [https://asas-sn.osu.edu/](https://asas-sn.osu.edu/)
We corrected all photometry for Milky Way extinction with the Python package extinction (Barbary, 2016), using the dust extinction function from Fitzpatrick (1999), the Schlafly & Finkbeiner (2011) dust map, and an R\({}_{V}\) of 3.1. Then we converted all measurements into flux units for analysis and considered anything less than a 3\(\sigma\) detection an upper limit. There is moderate to good coverage in the \(g\), \(r\), \(c\) and \(o\) bands for all SNe in our sample. Figure 1 shows the multi-panel light curves of the objects in our sample.
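As an illustration of this correction for a single band, the sketch below dereddens an observed magnitude, assuming the line-of-sight \(E(B-V)\) has already been looked up from the Schlafly & Finkbeiner (2011) map; the \(E(B-V)\) value, the magnitude, and the effective wavelength are placeholders.

```python
import numpy as np
import extinction

# Placeholder inputs: observed r-band magnitude and line-of-sight E(B-V) from the SFD dust map
mag_observed = 18.5
ebv = 0.02
r_v = 3.1

lam_eff = np.array([6370.0])                                     # approximate ZTF r-band effective wavelength (AA)
a_lambda = extinction.fitzpatrick99(lam_eff, r_v * ebv, r_v)[0]  # A(lambda) in mag, Fitzpatrick (1999) law

mag_corrected = mag_observed - a_lambda                          # Galactic-extinction-corrected magnitude
flux_scale = 10.0 ** (0.4 * a_lambda)                            # equivalent multiplicative correction in flux
```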
### Mid-IR photometry
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{1}{c}{**ZTF Name**} & **IAU Name** & **z** & \(M_{r}^{\rm peak}\) & **Duration1** & **Host Name** & **Host Mag2** \\ & & & (mag) & (days) & & (\(m_{r}\)) \\ \hline ZTF18aaykjei & SN 2018crl & 0.097 & -19.66 & 130 & SDSS J161938.90+491104.5 & 18.89 \\ ZTF18abuatfp & SN 2018gkx & 0.1366 & -20.07 & 322 & SDSS J135219.22+553830.2 & 18.23 \\ ZTF18actuhrs & SN 2018evt & 0.02378 & -19.10 & 447 & MCG-01-35-011 & 14.07 \\ ZTF19aaeotst & SN 2019agi & 0.0594 & \(<\)-18.76 & \(>\)303 & SDSS J162244.06+240113.4 & 17.82 \\ ZTF19abidbqp & SN 2019ibk & 0.04016 & \(<\)-17.55 & \(>\)576 & SDSS J014611.93-161701.1 & 15.55 \\ ZTF19acbjddp & SN 2019rvb & 0.1835 & -20.74 & 172 & WISEA J163809.90+682746.3 & 20.44 \\ ZTF20abmlxrx & SN 2020onv & 0.095 & \(<\)-20.36 & \(>\)154 & WISEA J231646.31-231839.9 & 17.95 \\ ZTF20abqkbfx & SN 2020qxz & 0.0964 & -20.00 & 166 & WISEA J180400.99+740050.0 & 17.65 \\ ZTF20accmutv & SN 2020uem & 0.041 & \(<\)-20.17 & \(>\)279 & WISEA J082423.32-032918.6 & 15.88 \\ ZTF20aciwcuz & SN 2020xtg & 0.0612 & \(<\)-19.60 & \(>\)336 & SDSS J153317.64+450022.8 & 15.42 \\ ZTF20acqikeh & SN 2020abfe & 0.093 & -20.24 & 171 & SDSS J200003.30+100904.2 & 20.18 \\ ZTF21aaabwzx & SN 2020aekp & 0.046 & -19.62 & 458 & SDSS J154311.45+174843.7 & 18.41 \\ \hline \end{tabular}
* Rest frame duration above 20% of \(r\)-band peak flux, uncertainty of \(\pm 2-3\) days from ZTF cadence.
* Corrected for Galactic extinction.
\end{table}
Table 1: Properties of the 12 BTS SNe Ia-CSM
Figure 1: Optical light curves of the ZTF BTS SN Ia-CSM sample. The SNe Ia-CSM have longer durations than the average SN Ia, with some variety such as bumpy light curves or long plateaus. The one SN marked with an asterisk (SN 2020uem) has an unconstrained explosion time estimate (\(\sim\pm 50\) d). The decline rate from Cobalt decay is marked with a black dashed line, and the light-curve decline rates measured from \(r\)-band data are shown in the subplot legends.
The transients were observed during the ongoing NEOWISE all-sky mid-IR survey in the \(W1\) (\(3.4\,\mu\)m) and \(W2\) (\(4.5\,\mu\)m) bands (Wright et al., 2010; Mainzer et al., 2014). We retrieved time-resolved coadded images of the field created as part of the unWISE project (Lang, 2014; Meisner et al., 2018). To remove contamination from the host galaxies, we used a custom code (De et al., 2020) based on the ZOGY algorithm (Zackay et al., 2016) to perform image subtraction on the NEOWISE images using the full-depth coadds of the WISE and NEOWISE mission (obtained during 2010-2014) as reference images. Photometric measurements were obtained by performing forced PSF photometry at the transient position on the subtracted WISE images until the epoch of the last NEOWISE data release (data acquired until December 2021). Further analysis of the mid-IR photometry is presented in §3.1.4.
### Optical spectroscopy
The main instruments used for taking spectra and the software used to reduce the data are summarized in Table 2. Additionally, the spectrum Reguitti (2020) obtained using the Asiago Faint Object Spectrograph and Camera (AFOSC) on the 1.8 m telescope at Cima Ekar, and the spectrum Stein et al. (2018) obtained using the ESO Faint Object Spectrograph and Camera version 2 (EFOSC2) on the ESO New Technology Telescope (NTT), were taken from TNS.
The details for all optical spectra (61 for the sample in total) presented in this paper are provided in Table 3. Furthermore, all spectra were corrected for Milky Way extinction using extinction and the same procedure as for the photometry. The SN redshifts were derived using narrow host lines for the objects which did not already have a host redshift available in the NASA/IPAC Extragalactic Database6 (NED). Photometric calibration was done for all spectra, i.e. they were scaled such that the synthetic photometry from the spectrum matched the contemporaneous host-subtracted ZTF \(r\)-band data. For SN 2018crl, a host galaxy spectrum taken using P200/DBSP was available, which was subtracted from the P200/DBSP SN spectrum taken at +92 days. For SN 2020aekp, more spectra beyond \(\sim 350\) days were obtained but will be presented in a future study of the object (34 additional spectra up to \(\sim\)600 days).
Footnote 6: [https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/)
These processed spectra were used for the rest of the analysis as detailed in §3.2 and will be available on WISeREP7 (Yaron and Gal-Yam, 2012).
Footnote 7: [https://www.wiserep.org/](https://www.wiserep.org/)
## 3 Analysis
### Photometry
#### 3.1.1 Explosion epoch estimates
For the purpose of this paper, the 'explosion time' simply refers to the time when optical flux rises above the zero-point baseline (i.e. first light). We used pre-peak \(g,r,i\)-band ZTF photometry and \(c,o\)-band ATLAS photometry (binned in 1-day bins), when available, for our analysis. For each SN, the light curve was interpolated using Gaussian process regression to obtain the peak flux epoch, then a power-law (PL) model was fit using epochs from baseline to 60% of peak brightness in each band following Miller et al. (2020). The PL fits converged in at least one band for 6 out of 12 BTS SNe Ia-CSM. For the rest, we simply took the middle point between the first \(5\sigma\) detection and the last upper limit before this detection as the explosion epoch with half of the separation between these two points as the uncertainty.
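A minimal single-band sketch of such a fit is given below, assuming a model in which the flux sits at a constant baseline before first light \(t_{0}\) and rises as a power law afterwards (the functional form follows the spirit of Miller et al. 2020, but the synthetic data, initial guesses, and bounds are illustrative only):

```python
import numpy as np
from scipy.optimize import curve_fit

def rising_powerlaw(t, t0, amp, index, baseline):
    """Constant baseline before first light t0, power-law rise afterwards."""
    dt = np.clip(t - t0, 0.0, None)
    return baseline + amp * dt**index

# Synthetic placeholder photometry standing in for one band of pre-peak forced photometry
rng = np.random.default_rng(0)
t = np.linspace(-20.0, 10.0, 60)                              # days relative to an arbitrary reference
flux = rising_powerlaw(t, 0.0, 1.0, 2.0, 0.0) + rng.normal(0.0, 0.5, t.size)
flux_err = np.full(t.size, 0.5)

p0 = [-2.0, 1.0, 2.0, 0.0]                                    # illustrative initial guesses
bounds = ([-20.0, 0.0, 0.5, -5.0], [10.0, 10.0, 5.0, 5.0])
popt, pcov = curve_fit(rising_powerlaw, t, flux, sigma=flux_err,
                       p0=p0, bounds=bounds, absolute_sigma=True)
t0_best, t0_err = popt[0], np.sqrt(pcov[0, 0])                # first-light epoch and its 1-sigma uncertainty
```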
The explosion time estimates, the light curve bands used for the PL fits and the \(1\sigma\) uncertainties on explosion times are listed in Table 4. The entries with an unfilled 'PL fit filters' column in the table are the SNe for which the PL fit did not converge and averages were used. The PL fits typically constrain the time of explosion to within a fraction of a day. Given the high cadence of the ZTF survey, even in the cases where we
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Inst.** & **Telescope** & **Reduction Software** \\ \hline SEDM1 & Palomar 60-inch (P60) & pySEDM2 \\ ALFOSC3 & Nordic Optical Telescope & IRAF4 , PyNOT4 , PypeIt \\ DBSP5 & Palomar 200-inch (P200) & IRAF6 , DBSPDPR7 \\ KAST4 & Shane 3-m & IRAF \\ LRIS9 & Keck-I & LPipe10 \\ SPRAT11 & Liverpool Telescope & Barnsley et al. (2012) \\ DIS12 & APO13 & IRAF \\ \hline \end{tabular}
\end{table}
Table 2: Description of spectrographs used for follow-up and the corresponding data reduction pipelines
use only the last non-detection, the uncertainty range is typically less than 3 days. Only for SN 2020uem is the date of explosion virtually unconstrained (\(\pm 57\) days), as it was behind the sun at the time of explosion.
Although for SN 2019ibk the explosion time is formally constrained with a \(\pm 3\) day uncertainty, this estimate was derived using only ATLAS \(o\)-band data right after the SN emerged from behind the sun. There is no clear rise observed over a few epochs, only two non-detections before a 5\(\sigma\) detection. It is possible that the actual peak of this SN occurred earlier while it was behind the sun, and that the rising \(o\)-band points after it emerged are due to a second peak or bump (similar to SN 2018evt; in that case the actual rise was caught before the SN went behind the sun in ASAS-SN data). If the explosion epoch estimate from the \(o\)-band data is to be believed, then SN 2019ibk would be the most sub-luminous among the SNe Ia-CSM, peaking at \(-17.5\) mag.
#### 3.1.2 Duration and absolute magnitudes
Figure 2 shows the SNe Ia-CSM (colored squares) in our sample in the duration-luminosity and duration-\(\Delta m_{30}\) phase space. In the top panel, the x-axis is duration above half-max and the y-axis is the peak absolute magnitude (see Table 1) when we have photometric coverage both pre-peak and post-peak. For SNe missing the pre-peak coverage, their discovery magnitude is taken to be the upper limit to peak absolute magnitude and the duration from discovery the lower limit
\begin{table}
\begin{tabular}{c c c c c||c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{**SN**}} & \multicolumn{1}{c}{**JD**} & \multicolumn{1}{c}{**Epoch**} & \multicolumn{1}{c}{**Telescope/Instrument**} & \multicolumn{1}{c}{**Int**} & \multicolumn{1}{c}{**SN**} & \multicolumn{1}{c}{**JD**} & \multicolumn{1}{c}{**Epoch**} & \multicolumn{1}{c}{**Tel./Instr.**} & \multicolumn{1}{c}{**Int**} \\ & \((-2450000)\) & (days) & & (sec) & & \((-2450000)\) & (days) & & (sec) \\ \hline SN 2018crl & 8282 & 9 & APO/DIS & 2400 & SN 2020uem & 9128 & 11 & P60/SEDM & 1800 \\ & 8288 & 15 & P60/SEDM & 2700 & & 9136 & 18 & P60/SEDM & 1800 \\ & 8295 & 21 & P60/SEDM & 2700 & & 9170 & 51 & Ekar/AFOSC & 1200 \\ & 8306 & 31 & P60/SEDM & 2700 & & 9222 & 101 & Lick-3m/KAST & 3600 \\ & 8373 & 92 & P200/DBSP & 600 & & 9252 & 130 & Lick-3m/KAST & 2700 \\ (Host) & 8627 & 324 & P200/DBSP & 900 & & 9263 & 140 & Lick-3m/KAST & 2400 \\ SN 2018gkx & 8457 & 75 & Keck1/LRIS & 300 & & 9291 & 167 & NOT/ALFOSC & 900 \\ SN 2018evt & 8343 & 9 & NTT/EFOSC2 & 300 & & 9481 & 349 & P60/SEDM & 2160 \\ & 8465 & 127 & P60/SEDM & 1200 & & 9492 & 360 & Keck1/LRIS & 600 \\ & 8481 & 143 & P60/SEDM & 1200 & & 9583 & 448 & P60/SEDM & 2160 \\ & 8481 & 144 & L1/SPFAT & 1000 & & 9586 & 451 & P60/SEDM & 2160 \\ & 8534 & 195 & P60/SEDM & 1200 & SN 2020xtg & 9226 & 91 & P60/SEDM & 2160 \\ SN 2019agi & 8547 & 42 & UH88/SNIFS & 1820 & & 9491 & 340 & Keck1/LRIS & 600 \\ SN 2019ibk & 8691 & 35 & P60/SEDM & 2250 & & 9606 & 448 & Keck1/LRIS & 1200 \\ & 8695 & 39 & P60/SEDM & 2250 & SN 2020abfe & 9189 & 27 & P60/SEDM & 2700 \\ & 8697 & 41 & P60/SEDM & 2250 & & 9319 & 146 & Keck1/LRIS & 400 \\ & 8748 & 90 & P60/SEDM & 2250 & SN 2020aekp & 9224 & 19 & P60/SEDM & 2160 \\ & 8761 & 103 & P200/DBSP & 600 & & 9342 & 132 & P60/SEDM & 2160 \\ SN 2019rvb & 8766 & 14 & P60/SEDM & 2250 & & 9343 & 132 & NOT/ALFOSC & 1200 \\ & 8780 & 26 & P200/DBSP & 600 & & 9362 & 151 & P60/SEDM & 2700 \\ SN 2020onv & 9058 & 23 & P60/SEDM & 1800 & & 9381 & 169 & NOT/ALFOSC & 2400 \\ & 9062 & 27 & P60/SEDM & 1800 & & 9404 & 191 & P60/SEDM & 2700 \\ & 9069 & 33 & P60/SEDM & 1800 & & 9425 & 211 & NOT/ALFOSC & 1800 \\ & 9070 & 34 & L1/SPRTAT & 750 & & 9434 & 220 & P60/SEDM & 2700 \\ & 9073 & 37 & P60/SEDM & 1800 & & 9448 & 233 & P60/SEDM & 2700 \\ & 9074 & 38 & NOT/ALFOSC & 450 & 9468 & 252 & P60/SEDM & 2700 \\ SN 2020qxz & 9076 & 13 & P60/SEDM & 2250 & & 9569 & 348 & P60/SEDM & 2700 \\ & 9087 & 22 & P60/SEDM & 2250 & & & & \\ & 9092 & 26 & NOT/ALFOSC & 1800 & & & & \\ & 9098 & 32 & P60/SEDM & 2250 & & & & \\ & 9101 & 34 & NOT/ALFOSC & 1200 & & & \\ & 9107 & 40 & P200/DBSP & 900 & & & & \\ & 9112 & 45 & Keck1/LRIS & 300 & & & & \\ & 9121 & 53 & P60/SEDM & 2250 & & & & \\ & 9141 & 71 & Keck1/LRIS & 399 & & & & \\ \hline \end{tabular}
\end{table}
Table 3: Summary of optical spectra
to duration above half-max (marked by arrows in Figure 2). The BTS SN Ia sample is shown with gray points, and we also show the SNe Ia-CSM presented in S13 with empty triangles for comparison in the top panel. In the bottom panel, the x-axis is duration above 20% of peak flux (\(\Delta t_{20}\)) and the y-axis is \(\Delta m_{30}\), the two parameters used in the selection criteria. Most of the SNe Ia-CSM lie on the longer duration and brighter luminosity side, and are even more distinctly separated in the \(\Delta t_{20}\)-\(\Delta m_{30}\) phase space. This makes the SN initial decline rate and duration useful tools for identifying thermonuclear SNe potentially interacting with CSM, if they have not revealed themselves already in their early time spectra. The gray points lying in the same phase space as SNe Ia-CSM are the false positive cases described in §2.1. Also worth noting is that the duration calculated by taking the flux above half of the peak flux does not capture the true duration of the light curve when the plateau phase falls below half-max, as is the case for SN 2020aekp (\(>500\) day light curve), but \(\Delta t_{20}\) and \(\Delta m_{30}\) do.
#### 3.1.3 Light and color curves
We have good pre-peak coverage in ZTF data for 8 of the 12 SNe in our sample8. SN 2018evt was discovered by ASAS-SN on JD 2458341.91 (Nicholls & Dong, 2018) and classified by ePESSTO the next day (Stein et al., 2018), around 115 days before the first detection in ZTF when the SN came back from behind the sun. Hence we have only one epoch of pre-peak photometry and one early spectrum for SN 2018evt.
Footnote 8: except for SNe 2018evt, 2019ibk, 2020onv and 2020uem.
Our mixed bag of SNe Ia-CSM show post-maximum decline rates ranging from 0.5 to 2.0 mag 100d\({}^{-1}\) in the \(r\) band from peak to \(\sim 100\) days post peak. The median decline rate is 1.07 mag 100d\({}^{-1}\), which is much slower than the decline rates of normal SNe Ia. We see a variety of changes in decline rates after around 100 days from peak. Two SNe (2020onv and 2020abfe) show no change and have a constant slow decline throughout. Four SNe (2018gkx, 2019agi, 2019ibk and 2019rvb) evolve to a shallower slope going from \(\sim 0.6\)-1 mag 100d\({}^{-1}\) to \(\sim 0.2\)-0.5 mag 100d\({}^{-1}\). Three SNe (2018crl, 2020qxz and 2020aekp) show a major change in decline rate with the light curves becoming
\begin{table}
\begin{tabular}{c c c c} \hline \hline IAU Name & PL fit filters & \(t_{o}\) & \(1\sigma\) interval \\ & & (MJD) & (days) \\ \hline SN 2018crl & \(g,r,o\) & 58271.83 & [\(-\)0.48,+0.38] \\ SN 2018gkx & \(r,o\) & 58371.34 & [\(-\)0.64,+0.53] \\ SN 2018evt & - & 58334.26 & [\(-\)2.00,+2.00] \\ SN 2019agi & - & 58502.48 & [\(-\)1.51,+1.51] \\ SN 2019ibk & - & 58654.61 & [\(-\)2.99,+2.99] \\ SN 2019rvb & \(g,r,i,o\) & 58749.16 & [\(-\)0.79,+0.60] \\ SN 2020onv & \(o\) & 59032.75 & [\(-\)2.49,+1.10] \\ SN 2020qxz & \(g,r,o\) & 59063.05 & [\(-\)0.51,+0.45] \\ SN 2020uem & - & 59117.03 & [\(-\)56.63,+56.63] \\ SN 2020xtg & - & 59130.14 & [\(-\)0.04,+0.04] \\ SN 2020abfe & \(g,r,o\) & 59159.36 & [\(-\)2.16,+2.23] \\ SN 2020aekp & - & 59204.53 & [\(-\)5.50,+5.50] \\ \hline \end{tabular}
\end{table}
Table 4: Explosion time epoch estimates derived from pre-peak multi-band light curves. For 6 out of 12 SNe Ia-CSM, we were able to fit a power-law model to multi-band data following Miller et al. (2020). For the remaining 6 SNe, the explosion epoch was estimated by taking the mean of the first \(5\sigma\) detection and last upper-limit before the first detection.
Figure 2: _Top:_ Location of our 12 SNe Ia-CSM in the peak absolute magnitude vs. rest-frame duration above half max phase space. The colored points are the BTS SNe Ia-CSM and the gray points are the rest of the BTS SNe Ia. Also shown with empty triangles are the SNe Ia-CSM from S13. The vertical arrows mark the upper limits to peak absolute magnitudes and horizontal arrows mark the lower limits to durations of SNe not having pre-peak coverage. _Bottom:_ Change in magnitude 30 days after peak (\(\Delta m_{30}\)) vs. rest-frame duration above 20% of peak-flux for BTS SNe Ia and SNe Ia-CSM. These criteria were used to filter out potential SNe Ia-CSM from all SNe Ia and demonstrate that SNe Ia-CSM occupy a distinct portion in this phase space. However some gray points (not SN Ia-CSM) remain on the longer duration side and are the false positive cases described in §2.1.
almost flat, and SN 2020aekp shifts back to a slow decline from this plateau after \(\sim 200\) days. In three cases, the decline rate actually becomes steeper: SN 2018evt goes from 0.52 mag 100d\({}^{-1}\) to 1.4 mag 100d\({}^{-1}\), SN 2020uem goes from 0.52 mag 100d\({}^{-1}\) to 1.25 mag 100d\({}^{-1}\), and SN 2020xtg seems to go from 0.61 mag 100d\({}^{-1}\) to 1.35 mag 100d\({}^{-1}\) (even though there is only one epoch at late times to measure this change). The 3 SNe with the fastest initial decline rates (\(\gtrsim 1.5\) mag 100d\({}^{-1}\) in the \(r\) band) are similar to SN 2002ic (initial decline of 1.66 mag 100d\({}^{-1}\) in \(V\)) and PTF11kx (initial decline of 3.3 mag 100d\({}^{-1}\) in \(R\)), and coincidentally are also the ones that evolve into a plateau. The rest of the sample have initial decline rates comparable to SN 1997cy (0.75 mag 100d\({}^{-1}\)) and SN 2005gj (0.88 mag 100d\({}^{-1}\)) (Inserra et al. 2016). From these observations, we can conclude that SNe Ia-CSM exhibit a range of slow evolution, indicating that there exists a continuum of phases at which strong CSM interaction begins to dominate the powering of the light curves for these SNe. It is, however, difficult to pinpoint the exact phase when interaction starts from the light curve without modeling. CSM interaction could be affecting the peak brightness significantly even in cases where interaction only appears to dominate after a few weeks (SNe 2018crl, 2020qxz and 2020aekp). Considering the average peak phase to be \(\sim 20\) days past explosion from the light curves and assuming an ejecta velocity of \(\sim 20000\) km s\({}^{-1}\), the CSM is located at \(\sim 3.5\times 10^{15}\) cm. This estimate can be refined by considering the phase of the earliest spectrum that shows interaction signatures (see §3.2). At late times, all the decline rates are slower than that expected from Cobalt decay (0.98 mag 100d\({}^{-1}\)), confirming that the power from CSM interaction dominates the light curve behaviour for a long time.
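For reference, the CSM distance quoted above follows from the simple free-expansion estimate

\[r_{\rm CSM}\approx v_{\rm ej}\,t\approx\left(2\times 10^{9}\,{\rm cm\,s^{-1}}\right)\times\left(20\,{\rm d}\times 86400\,{\rm s\,d^{-1}}\right)\approx 3.5\times 10^{15}\,{\rm cm}\,.\]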
Figure 3 shows the \(g-r\) color evolution of our sample SNe as a function of phase (rest-frame days from \(r\)-band maximum), comparing them with some famous SNe Ia-CSM (SNe 2005gj, 1997cy, 1999E), and SNe 2012ca (Ia-CSM/IIn), 2010jl (IIn) and 1991T (over-luminous Type Ia). The color evolution of normal SNe Ia from ZTF (Dhawan et al. 2022) is shown with grey lines. We use \(g-r\) colors when available; otherwise, we estimate the \(g-r\) color by fitting Planck functions to estimate the black-body temperatures from the \(V-R\) colors. Our SNe Ia-CSM show a similar color evolution to the older Type Ia-CSM/IIn interacting SNe, i.e. the \(g-r\) color increases gradually for about 100 days and then settles onto a plateau or slowly declines, and one object (SN 2019ibk) becomes redder at late times similar to SN 2012ca. The interacting SNe are redder at late times compared to the normal SNe Ia.
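A minimal sketch of this color conversion is given below; it uses only the filter effective wavelengths (a simplification of fitting Planck functions through the full bandpasses) and neglects zero-point offsets between the photometric systems, and all wavelengths are approximate placeholders.

```python
import numpy as np
from scipy.optimize import brentq

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16       # cgs constants

def planck_nu(lam_aa, T):
    """Blackbody B_nu at wavelength lam_aa (Angstrom) and temperature T (K)."""
    nu = c / (lam_aa * 1e-8)
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))

def bb_color(lam1_aa, lam2_aa, T):
    """Synthetic color between two effective wavelengths (zero-point offsets neglected)."""
    return -2.5 * np.log10(planck_nu(lam1_aa, T) / planck_nu(lam2_aa, T))

# Approximate effective wavelengths (Angstrom); placeholders
lam_V, lam_R, lam_g, lam_r = 5500.0, 6580.0, 4810.0, 6370.0

def g_minus_r_from_VR(vr_color):
    """Fit a blackbody temperature to the V-R color, then synthesize g-r at that temperature."""
    T_bb = brentq(lambda T: bb_color(lam_V, lam_R, T) - vr_color, 2000.0, 5.0e4)
    return bb_color(lam_g, lam_r, T_bb)

print(g_minus_r_from_VR(0.3))   # example V-R color
```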
#### 3.1.4 Mid-IR brightness comparison
Out of the 12 SNe in our sample, only one of those observed (SN 2020abfe) did not have 3\(\sigma\) detections post explosion in the unWISE difference photometry light curves, and two (SNe 2019rvb and 2020qxz) did not have coverage post explosion. The unWISE light curves for the rest of the SNe Ia-CSM, having \(>3\sigma\) detections in the W1 (3.3 \(\mu\)m) and W2 (4.6 \(\mu\)m) bands, are shown in Figure 4 (black and red stars) along with _Spitzer_ IRAC survey data of SN 2008cg (indigo and magenta empty triangles), SN 2008J (indigo and magenta empty squares) (both Ia-CSM) and some SNe IIn (blue and orange crosses) taken from Fox et al. (2011). The nearest SN in our sample, SN 2018evt, is among the brightest (\(\sim 17\) AB mag) in the MIR at least until \(\sim\)1000 days after explosion and has a bumpy light curve. SNe 2019ibk and 2018crl, however, are the most luminous, with an absolute magnitude of \(-18.7\) mag in the W1 band. The brightness of the BTS SNe Ia-CSM is comparable with that of other interacting SNe and spans a similar range (\(-16\) to \(-19\) mag). However, SNe IIn have been detected at even later epochs (up to 1600 days) than SNe Ia-CSM, probably due to the larger number of SNe IIn at closer distances. SN 2020abfe has upper limits around \(\sim-18\) in the W1 band and \(\sim-18.5\) in the W2 band up to \(\sim\)300 days post explosion, shown with upside-down filled triangles. As the mid-IR luminosity can be fainter than these limits for SNe Ia-CSM (as can be seen for other nearby SNe in this sample) and SN 2020abfe is at a redshift of 0.093, it might just be out of reach for WISE.
The mid-IR brightness of SNe Ia-CSM can be indicative of pre-existing or newly formed dust. A clear signature of new dust is reduced flux in the red wing of the H\(\alpha\) emission line at late phases, as the new dust formed in the cold dense shell behind the forward shock absorbs the far-side (redshifted) intermediate and narrow line emission (see bottom panel of Fig. 7). For our sample, this reduction in the H\(\alpha\) red wing is most pronounced for SN 2018evt.
#### 3.1.5 Bolometric luminosity
As the SN Ia-CSM luminosity is dominated by CSM interaction, their spectra comprise a pseudo-continuum on the blue side and strong H\(\alpha\) emission on the red side; hence, a blackbody fit to multi-band photometric data is not appropriate for estimating the bolometric luminosity. Instead, we calculate a pseudo-bolometric luminosity from the available multi-band optical data by linearly interpolating the flux between the bands and integrating over the optical wavelength range spanned by the ATLAS and ZTF bands. The individual band light curves are first interpolated using Gaussian process regression to fill in the missing epochs. This estimate places a strict lower limit on the bolometric luminosity.
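A minimal sketch of the integration at a single epoch is shown below, assuming GP-interpolated monochromatic fluxes are already available at each band's effective wavelength; the wavelengths, flux values, and distance are placeholders.

```python
import numpy as np
from scipy.integrate import trapezoid

def pseudo_bolometric_luminosity(lam_eff_aa, f_lambda, d_cm):
    """Integrate f_lambda (erg/s/cm^2/AA) over the band effective wavelengths and scale by distance."""
    order = np.argsort(lam_eff_aa)
    integrated_flux = trapezoid(f_lambda[order], lam_eff_aa[order])   # erg/s/cm^2
    return 4.0 * np.pi * d_cm**2 * integrated_flux                    # erg/s

# Placeholders: approximate effective wavelengths of ZTF g, r, i and ATLAS c, o (Angstrom),
# GP-interpolated fluxes at one epoch, and a luminosity distance in cm
lam_eff = np.array([4810.0, 6370.0, 7920.0, 5330.0, 6790.0])
f_lam = np.array([2.1e-16, 3.0e-16, 2.4e-16, 2.6e-16, 2.8e-16])
d_lum = 4.3e26

L_pseudo = pseudo_bolometric_luminosity(lam_eff, f_lam, d_lum)
```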
In Figure 5 we show the pseudo-bolometric luminosity of our SN Ia-CSM sample in comparison with SN 1991T (Type Ia), SNe 1997cy, 1999E, 2002ic, 2005gj, 2013dn and PTF11kx (Ia-CSM). Multi-band photometric data were taken from the Open Supernova Catalog (Guillochon et al. 2017)
Figure 3: Color evolution (\(g-r\)) of BTS SNe Ia-CSM from \(r\)-band maximum (plotted in black) compared with SNe 2005gj, 1997cy, 1999E (Ia-CSM), SN 2012ca (IIn/Ia-CSM), SN 2010jl (IIn), SN 1991T (SN Ia) and ZTF SNe Ia (gray lines). As can be seen for up to \(\sim 150\) days, our SNe Ia-CSM tend to be redder than SNe Ia and at late times develop a plateau similar to other interacting SNe (IIn/Ia-CSM).
for SN 1991T (Filippenko et al., 1992; Ford et al., 1993; Schmidt et al., 1994) to generate the bolometric luminosity light curve through black body fitting. The pseudo-bolometric luminosity light curve for SN 1997cy was obtained from Germany et al. (2000), for SN 2013dn from Fox et al. (2015) and for SNe 2002ic, 2005gj, 1999E and PTF11kx from Inserra et al. (2016).
All BTS SNe Ia-CSM show a slow evolution in bolometric luminosity, inconsistent with the decay of \({}^{56}\)Co to \({}^{56}\)Fe. The sample's overall luminosity decline rates are comparable to those of SNe 1997cy and 2013dn, as shown in Figure 5. Only SNe 2018crl and 2020aekp show an early, SN 1991T-like decline in their pseudo-bolometric light curves for about 40 days after peak, as seen in SN 2002ic and PTF11kx. Another BTS interacting SN Ia, ZTF20aatxryt (Kool et al., 2022), was found to follow the PTF11kx light-curve evolution very closely, and as its light curve fell into a plateau, the SN started showing signs of interaction with a helium-rich CSM and evolved into a helium-rich SN Ia-CSM. We have excluded ZTF20aatxryt from the sample as we focus on typical SNe Ia-CSM interacting with hydrogen-rich CSM in this study. At late phases (\(\sim 300\) days), the SNe Ia-CSM are approximately 100 times brighter than normal SNe Ia at the same epoch. Therefore, at these late phases, the luminosity and spectral features of SNe Ia-CSM are entirely dominated by CSM interaction with little emergent SN flux. From the pseudo-bolometric light curves, we place a lower limit on the total radiated energy of SNe Ia-CSM of 0.1-1.5 \(\times 10^{50}\) erg. This is well below the thermonuclear budget (E\({}_{kin}\sim 10^{51}\) erg), but as this is a lower limit and some SNe in the sample have unconstrained peaks, the true total radiative energy might come close to the thermonuclear budget, requiring a high conversion efficiency to achieve their luminosity.
### Spectroscopy
Figure 6 displays the spectral series obtained for the BTS SNe Ia-CSM. Most of the early time spectra were taken with the SEDM, the BTS workhorse instrument (R \(\sim\)100), which is not able to resolve the narrow CSM lines. Therefore, these SNe were followed up with higher resolution instruments to get more secure classifications. For each spectrum in Figure 6, the phase is provided with respect to the explosion epoch estimate given in Table 4. We have spectra ranging from a few to around 470 days from explosion. Considering the well constrained explosion time of SN 2018evt, the presence of narrow H\(\alpha\) in its first spectrum at 8 days since explosion, and assuming a typical ejecta velocity of \(\sim\)20000 km s\({}^{-1}\), the CSM interaction must have started as close in as \(\sim\)1.4\(\times 10^{15}\) cm.
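The quoted radius follows directly from the assumed ejecta velocity and the phase of the first spectrum; a two-line check with the values stated above:

```python
v_ej = 2.0e4 * 1.0e5          # 20,000 km/s in cm/s
t_first = 8 * 86400.0         # 8 days in seconds
print(f"{v_ej * t_first:.2e} cm")   # ~1.4e15 cm, the inner CSM radius quoted above
```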
Figure 7 shows the early time (left) and late time (right) spectral behaviour of the BTS SNe Ia-CSM together with a few historical SNe for comparison, namely SNe Ia-CSM SN 2011jb (Silverman et al., 2013), SN 2005gj and PTF11kx, the Type Ia SN 1991T and the well-observed Type IIn SN 2010jl. Vertical gray regions mark typical SN Ia absorption features
Figure 4: unWISE detections in the W1 and W2 bands of BTS SNe Ia-CSM. The W1 and W2 points are marked with black and red filled stars respectively. Spitzer IRAC photometry of SNe IIn (blue and orange crosses) and two SNe Ia-CSM from Fox et al. (2011) (SNe 2008cg and 2008J in empty triangle and square) are also shown for comparison. 9 out of 12 BTS SNe Ia-CSM are as bright in mid-IR as other interacting SNe (\(\sim-16\) to \(\sim-19\)). The upper limits for SN 2020abfe are shown in black and red filled upside down triangles.
Figure 5: Pseudo-bolometric luminosity light curves of BTS SNe Ia-CSM compared with pseudo-bolometric light curves of SNe 1991T, 1997cy, 1999E, 2002ic, 2005gj, 2013dn, and PTF11kx from literature. The light curves in each filter having more than 10 epochs were interpolated using Gaussian process regression to fill in the missing epochs, and at each epoch the fluxes between the bands were linearly interpolated and integrated over the optical wavelength range spanned by ZTF and ATLAS filters to get the pseudo-bolometric luminosity. For BTS SNe, the phases are with respect to the estimated explosion epochs, while for comparison SNe the phases are with respect to discovery.
Figure 6: Spectral series of all SNe Ia-CSM presented in this paper. The rest-frame phases are shown alongside the spectra in each subplot and have been calculated using the explosion epoch estimate. The colors depict different instruments used to obtain this data. Major emission lines are marked with vertical dashed lines.
and [Fe II/III] line regions, and vertical dashed lines mark the Balmer emission lines. The sample spectra have been multiplied by a constant factor to magnify relevant spectral features. In the following paragraphs, we compare the observations of some of the spectral features with previous analysis of this class (Silverman et al., 2013; Fox et al., 2015; Inserra et al., 2016).
A few of our early time SNe Ia-CSM show underlying SN Ia absorption features like PTF11kx and SN 2002ic (most are, however, quite diluted and also affected by the low resolution and signal-to-noise ratio (SNR) of the SEDM spectra), the most notable being SNe 2018evt, 2020qxz and 2020aekp. SNe 2020qxz and 2020aekp also have among the fastest initial post-peak decline rates in the sample, similar to PTF11kx, while coverage around peak is not available for SN 2018evt. On the other hand, SNe with slower decline rates similar to SN 1997cy and SN 2005gj have more SN IIn-like early time spectra dominated by blue pseudo-continuum and Balmer emission. The faster decline rate suggests we are still seeing some of the emission from the ejecta at those phases. To unveil the nature of the progenitor of interacting SNe, it is therefore necessary to obtain some spectroscopic follow-up before peak light. Spectroscopic data at the phase of transition to interaction-dominated luminosity would also help in deducing the extent and density structure of the optically thick CSM.
Figure 7: Top left: Early-time spectra of BTS SNe Ia-CSM with phases between 0 and 30 days since explosion compared to spectra of SNe 2011jb, 2005gj, 1991T and PTF11kx (phases in days since discovery). Top right: Late-time spectra of BTS SNe Ia-CSM (phases ranging from 40 to 370 days since explosion) compared to spectra of SNe 2011jb, 2005gj, 2010jl and PTF11kx (phases in days since discovery).
The spectra were reduced and processed as outlined in SS2.5 for the emission line analysis, the results of which are described in the next section. We used only good SNR SEDM spectra and intermediate resolution spectra for line identification and analysis.
#### 3.2.1 H\(\alpha\), H\(\beta\) and He I emission lines
To analyze the H\(\alpha\) line emission, we first fit the continuum level using the fit_continuum function of the specutils Python package, where the continuum is estimated by a cubic function fitted on regions on each side of the line. We remove this continuum level and then fit the H\(\alpha\) line with a broad and a narrow component Gaussian function using the fit_lines function of specutils, which returns the best fit Gaussian model and the 1\(\sigma\) uncertainty on the model parameters. We generate 1000 sample models within 1\(\sigma\) uncertainties of the parameters centered around the best-fit values and calculate the intensity, flux and velocity (FWHM) of the broad and narrow components for each model. Then we take the median and standard deviation of the intensity, flux and velocity FWHM distributions to get their final best value and 1\(\sigma\) uncertainty. The equivalent width was also calculated for the H\(\alpha\) line using the model fit as well as directly from the data, and the difference between the values derived from model and data is reported as the error on the EW. All values are reported in Table 5. For 3 SNe in our sample, we have a series of intermediate resolution spectra through which we can trace the evolution of the H\(\alpha\) line with phase. Figure 8 shows this trend of the H\(\alpha\) line parameters (integrated flux in the top panel and equivalent width in the bottom panel) versus phase for all SNe in our sample. The un-filled markers represent the narrow emission while the filled markers represent the broad emission. For SNe where this analysis could be done on multiple spectra, we see that the H\(\alpha\) equivalent width generally increases over time, with some SNe showing fluctuations up to 100 days, possibly due to interaction of the ejecta with multiple CSM shells of varying density. For SN 2018evt, Yang et al. (2022) analyzed H\(\alpha\) line properties from a comprehensive spectral series, which are plotted in Figure 8 as gray circles and seem to agree well with our analysis at comparable epochs.
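A minimal sketch of this two-component fit using specutils and astropy is given below. The continuum windows, initial component widths and the rest-frame assumption are illustrative choices rather than the exact values used for Table 5, and the Monte-Carlo resampling of the fitted parameters is only indicated in a comment.

```python
import numpy as np
import astropy.units as u
from astropy.modeling import models
from specutils import Spectrum1D, SpectralRegion
from specutils.fitting import fit_continuum, fit_lines

C_KMS = 2.99792458e5
HALPHA = 6563.0  # rest wavelength in Angstrom

def fit_halpha(wave_aa, flux):
    """Fit a broad + narrow Gaussian to a continuum-subtracted H-alpha profile.

    wave_aa: rest-frame wavelength in Angstrom; flux: F_lambda in erg/s/cm^2/A.
    """
    spec = Spectrum1D(spectral_axis=wave_aa * u.AA,
                      flux=flux * u.Unit("erg s-1 cm-2 AA-1"))
    # cubic continuum estimated from (assumed) line-free windows around H-alpha
    windows = SpectralRegion([(6300 * u.AA, 6420 * u.AA), (6700 * u.AA, 6800 * u.AA)])
    cont = fit_continuum(spec, model=models.Polynomial1D(3), window=windows)
    line = Spectrum1D(spectral_axis=spec.spectral_axis,
                      flux=spec.flux - cont(spec.spectral_axis))
    peak = np.max(line.flux)
    init = (models.Gaussian1D(amplitude=0.3 * peak, mean=HALPHA * u.AA, stddev=50 * u.AA)
            + models.Gaussian1D(amplitude=peak, mean=HALPHA * u.AA, stddev=5 * u.AA))
    fitted = fit_lines(line, init)
    broad, narrow = fitted[0], fitted[1]
    def fwhm_kms(g):  # Gaussian FWHM converted to a velocity width
        return 2.3548 * float(g.stddev.quantity / (HALPHA * u.AA)) * C_KMS
    # In practice ~1000 parameter sets are drawn within the 1-sigma fit uncertainties,
    # and the medians / standard deviations of the derived flux and FWHM distributions
    # give the values and errors reported in Table 5.
    return fitted, fwhm_kms(broad), fwhm_kms(narrow)
```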
From the Gaussian profile line fitting analysis of the H\(\alpha\) emission line, we found that the broader component has velocities ranging from \(\sim\)1000 to \(\sim\)4000 km s\({}^{-1}\) (intermediate width) and the narrow component has velocities of about \(\sim\)200 km s\({}^{-1}\) to \(\sim\)1000 km s\({}^{-1}\) (see Figure 9). The narrow component could only be resolved down to \(\sim\)300 km s\({}^{-1}\), limited by the mediocre resolution of the spectrographs used (KeckI/LRIS R\(\sim\)800, P200/DBSP R\(\sim\)1000, NOT/ALFOSC R\(\sim\)360). While we know that the narrow lines originate in the unshocked ionized CSM, the exact origin of the intermediate components is uncertain. They could arise from the post-shock gas behind the forward shock or from the shocked dense clumps in the CSM (Chugai & Danziger, 1994).
The luminosities of the H\(\alpha\) line measured from the BTS SNe Ia-CSM lie in the range 2.5-37\(\times 10^{40}\) erg s\({}^{-1}\) which are comparable to the values from S13 who reported most of their SNe in the 1-10\(\times 10^{40}\) erg s\({}^{-1}\) range except one object that had a luminosity of 39\(\times 10^{40}\) erg s\({}^{-1}\). From the broad H\(\alpha\) luminosity, we did a simple estimate of the mass-loss rate assuming spherically symmetric CSM deposited by a stationary wind \(\rho\propto r^{-2}\) having velocity \(v_{w}\)(Chugai, 1991; Salamanca et al., 1998). The mass-loss rate \(\dot{M}\) can be related to the broad H\(\alpha\) luminosity \(L_{H\alpha}^{Broad}\) as (Salamanca et al., 1998, their Eq. 2)
\[L_{H\alpha}^{Broad}=\frac{1}{4}\epsilon_{H\alpha}\frac{\dot{M}}{v_{w}}v_{s}^{3}\]
where \(v_{s}\) is the shock velocity (obtained from the broad component velocity of the H\(\alpha\) line). We used a value of 100 km s\({}^{-1}\) for \(v_{w}\), considering previous high resolution spectral studies of SNe Ia-CSM (Kotak & Meikle, 2005; Aldering et al., 2006; Dilday et al., 2012), as we cannot fully resolve the narrow component, and a maximum value of 0.1 for the efficiency factor \(\epsilon_{H\alpha}\) (Salamanca et al., 1998). The mass-loss rates were estimated from the available spectra and are shown in Figure 10 as a function of years before explosion (\(t_{w}=\frac{v_{s}t}{v_{w}}\), where \(t\) is the phase of the spectra). For most SNe in the sample, the mass-loss rates lie between 0.001-0.02 \(M_{\odot}\) yr\({}^{-1}\), except for SN 2019rvb which has \(\sim\)0.07 \(M_{\odot}\) yr\({}^{-1}\) lost within 2 years prior to the explosion. These rates are much higher than what could be attained from a red giant superwind (\(\sim 3\times 10^{-4}\) \(M_{\odot}\) yr\({}^{-1}\)) but are comparable to previous estimates (calculated through multiple methods) for SNe Ia-CSM and require some unusual mechanism to reach such persistently high mass-loss rates in the decades prior to explosion. Also to consider is that the simplistic assumption of spherical symmetry likely does not apply for SNe Ia-CSM. Evidence of multiple thin shells and asymmetric CSM was observed for PTF11kx (Dilday et al., 2012), and light curve modeling of SNe 1997cy and 2002ic suggested a better fit to a flat density profile rather than a stationary wind (Chugai & Yungelson, 2004). An asymmetric or clumpy CSM might be the norm for SNe Ia-CSM (and some SNe IIn) rather than the exception.
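Rearranging the relation above for \(\dot{M}\) gives a quick way to reproduce the numbers in Figure 10. The sketch below uses the same assumptions stated in the text (\(v_{w}=100\) km s\({}^{-1}\), \(\epsilon_{H\alpha}=0.1\)); the example luminosity and shock velocity are chosen only for illustration.

```python
M_SUN = 1.989e33     # g
YEAR = 3.156e7       # s

def mass_loss_rate(L_halpha_broad, v_shock_kms, v_wind_kms=100.0, eps=0.1):
    """Mdot = 4 L_Halpha(broad) v_w / (eps v_s^3), returned in Msun/yr."""
    v_s = v_shock_kms * 1e5      # cm/s
    v_w = v_wind_kms * 1e5       # cm/s
    mdot_gs = 4.0 * L_halpha_broad * v_w / (eps * v_s ** 3)
    return mdot_gs * YEAR / M_SUN

# e.g. a broad-component luminosity of 1e41 erg/s and v_s ~ 4000 km/s:
print(mass_loss_rate(1e41, 4000.0))   # ~0.01 Msun/yr, within the quoted range
```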
The same analysis as for the H\(\alpha\) line was also carried out for H\(\beta\) and He I \(\lambda\)5876 with a one component Gaussian fit. For cases where a Gaussian model could not fit the data, we integrate the flux value in a 100 A region centered at 5876 A for He I. The Na ID absorption lines are also prevalent in some spectra and blend with the He I line, resulting in positive EWs for some SNe. The cumulative distributions of H\(\beta\) and He I equivalent widths are shown in the top and bottom panels of Figure 11 respectively.
The H\(\beta\) median EW measured from the BTS SN Ia-CSM sample is 7.1 A, close to the S13 value of \(\sim\)6 A and quite weak compared to what S13 measured for SNe IIn (\(\sim\)13 A ). The overall cumulative distribution of H\(\beta\) EW is also comparable to that of the S13 SNe Ia-CSM rather than to the S13 SNe IIn. For the He I \(\lambda\)5876 line, the median EW measured for our BTS SN Ia-CSM sample, considering only significant emission features, is 2.4 A. This is close to the value of \(\sim\)2 A reported in S13, and again significantly different from their SN IIn value of \(\sim\)6 A (\(\sim\)4 A with upper limits); however, the overall distribution seems to be closer to the S13 SNe IIn (but still weaker) than to the S13 SNe Ia-CSM. This indicates that He I is perhaps not as good a discriminant between the populations as H\(\beta\). Among the most He-rich SNe in our sample are SNe 2019ibk, 2020uem, 2020xtg, 2020aekp and 2018evt, and these SNe also have among the highest H\(\alpha\) equivalent widths in the sample.
Figure 12 plots the cumulative distribution of the Balmer decrements (\(\frac{F_{H\alpha}}{F_{H\beta}}\)) measured for our sample SNe. The higher Balmer decrement values (\(>\)15) have large errors associated with them because of the low SNR of the spectra from which they were derived, particularly near the H\(\beta\) line. Consistent with the results of S13, the SNe Ia-CSM from this sample also have a high median Balmer decrement value of \(\sim\)7 (\(\sim\)5 in S13), indicating that the emission line mechanism is probably collisional excitation or self-absorption rather than recombination, for which the expected Balmer decrement value is \(\sim\)3. In the case of SNe Ia-CSM, if the CSM distribution consists of multiple shells as suggested for PTF11kx, moderately high densities could be created when fast-moving ejecta overtake slowly moving thin dense CSM shells, creating a large enough optical depth in the H\(\alpha\) line that the H\(\beta\) transition decays as Pa\(\alpha\)\(+\) H\(\alpha\) (Xu et al. 1992). For some individual SNe where multiple spectra are available, the Balmer decrement is observed to first increase and later decrease with phase.
### Host galaxies
We retrieved science-ready co-added images from the _Galaxy Evolution Explorer_ (GALEX) general release 6/7 (Martin et al. 2005), the Sloan Digital Sky Survey DR 9 (SDSS; Ahn et al. 2012), the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS, PS1) DR1 (Chambers et al. 2016), the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006), and preprocessed WISE im
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline
**SN Name** & **Phase** & **Broad Flux** & **Narrow Flux** & **Total Flux** & **Broad Velocity** & **Narrow Velocity** \\ & (days) & (\(10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\)) & (\(10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\)) & (\(10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\)) & FWHM (km s\({}^{-1}\)) & FWHM (km s\({}^{-1}\)) \\ \hline SN 2018crl & 92 & 135.4\(\pm\)10.0 & 32.8\(\pm\)2.0 & 168.2\(\pm\)12.0 & 4137\(\pm\)312 & \(<214\) \\ SN 2018gkx & 75 & 9.9\(\pm\)0.7 & 3.9\(\pm\)0.2 & 13.7\(\pm\)0.9 & 2640\(\pm\)398 & \(<375\) \\ SN 2018evt & 144 & 2020.3\(\pm\)128.5 & 1247.4\(\pm\)52.8 & 3267.7\(\pm\)181.3 & 6465\(\pm\)997 & 1816\(\pm\)973 \\ SN 2019agi & 42 & 52.7\(\pm\)3.6 & 23.7\(\pm\)1.1 & 76.4\(\pm\)4.7 & 3836\(\pm\)349 & 464\(\pm\)301 \\ SN 2019ibk & 103 & 85.6\(\pm\)1.7 & 17.0\(\pm\)0.5 & 102.6\(\pm\)2.3 & 2431\(\pm\)217 & 272\(\pm\)214 \\ SN 2019rvb & 26 & 22.0\(\pm\)3.0 & 10.4\(\pm\)1.0 & 32.5\(\pm\)4.1 & 2321\(\pm\)298 & 374\(\pm\)216 \\ SN 2020onv & 38 & 32.8\(\pm\)5.2 & 33.3\(\pm\)2.0 & 66.1\(\pm\)7.2 & 2714\(\pm\)879 & \(<\)834 \\ SN 2020qxz & 26 & 76.6\(\pm\)6.2 & 13.8\(\pm\)1.7 & 90.4\(\pm\)7.9 & 11294\(\pm\)1106 & \(<836\) \\ SN 2020qxz & 34 & 55.1\(\pm\)5.0 & 10.8\(\pm\)1.8 & 65.9\(\pm\)6.8 & 8252\(\pm\)1039 & 1070\(\pm\)845 \\ SN 2020qxz & 40 & 12.9\(\pm\)1.7 & 7.6\(\pm\)0.5 & 20.5\(\pm\)2.2 & 2049\(\pm\)284 & 245\(\pm\)215 \\ SN 2020qxz & 45 & 20.7\(\pm\)1.6 & 9.1\(\pm\)0.4 & 29.8\(\pm\)2.1 & 3429\(\pm\)419 & \(<375\) \\ SN 2020qxz & 71 & 39.1\(\pm\)1.3 & 10.4\(\pm\)0.4 & 49.5\(\pm\)1.7 & 5013\(\pm\)395 & 400\(\pm\)375 \\ SN 2020uem & 51 & 246.3\(\pm\)47.2 & 151.1\(\pm\)16.8 & 397.4\(\pm\)64.0 & 6520\(\pm\)1163 & 1178\(\pm\)840 \\ SN 2020uem & 101 & 655.2\(\pm\)28.9 & 241.2\(\pm\)9.6 & 896.4\(\pm\)38.4 & 7456\(\pm\)309 & 1066\(\pm\)217 \\ SN 2020uem & 130 & 552.9\(\pm\)17.6 & 281.8\(\pm\)6.2 & 834.8\(\pm\)23.8 & 7465\(\pm\)265 & 1269\(\pm\)215 \\ SN 2020uem & 140 & 545.4\(\pm\)20.0 & 283.4\(\pm\)6.8 & 828.8\(\pm\)26.7 & 7457\(\pm\)275 & 1308\(\pm\)216 \\ SN 2020uem & 167 & 424.3\(\pm\)19.0 & 312.0\(\pm\)7.7 & 736.3\(\pm\)26.6 & 6852\(\pm\)854 & 1439\(\pm\)834 \\ SN 2020uem & 360 & 179.8\(\pm\)4.0 & 77.4\(\pm\)1.4 & 257.2\(\pm\)5.4 & 5377\(\pm\)382 & 1170\(\pm\)375 \\ SN 2020xtg & 340 & 129.2\(\pm\)4.2 & 52.1\(\pm\)1.6 & 181.3\(\pm\)5.8 & 4242\(\pm\)382 & 1258\(\pm\)376 \\ SN 2020xtg & 448 & 131.7\(\pm\)7.7 & 96.3\(\pm\)3.2 & 228.0\(\pm\)10.9 & 4452\(\pm\)395 & 1566\(\pm\)377 \\ SN 2020abfe & 146 & 33.6\(\pm\)1.1 & 3.0\(\pm\)0.3 & 36.6\(\pm\)1.4 & 4411\(\pm\)389 & \(<376\) \\ SN 2020aekp & 132 & 149.5\(\pm\)4.0 & 33.0\(\pm\)1.0 & 182.5\(\pm\)5.0 & 7728\(\pm\)846 & \(<833\) \\ SN 2020aekp & 169 & 231.0\(\pm\)4.5 & 32.3\(\pm\)1.3 & 263.3\(\pm\)5.8 & 6775\(\pm\)839 & \(<834\) \\ SN 2020aekp & 211 & 251.0\(\pm\)9.5 & 58.6\(\pm\)3.4 & 309.6\(\pm\)12.8 & 7422\(\pm\)852 & 1342\(\pm\)836 \\ \hline \end{tabular}
\end{table}
Table 5: Summary of H\(\alpha\) line properties obtained from two-component Gaussian fitting.
ages (Wright et al., 2010) from the unWISE archive (Lang, 2014)9.
Footnote 9: [http://unwisc.me](http://unwisc.me)
We used the software package LAMBDAR (Lambda Adaptive Multi-Band Deblending Algorithm in R) (Wright et al., 2016) and tools presented in Schulze et al. (2021), to measure the brightness of the host galaxy. The spectral energy distribution (SED) was modelled with the software package Prospector10(Johnson et al., 2021). We assumed a linear-exponential star-formation history, the Chabrier (2003) initial mass function, the Calzetti et al. (2000) attenuation model, and the Byler et al. (2017) model for the ionized gas contribution. The priors were set as described in Schulze et al. (2021).
Footnote 10: [https://github.com/bd-j/prospector](https://github.com/bd-j/prospector) version 0.3
Figure 13 shows the log of star formation rate (SFR) as a function of stellar mass for hosts of BTS SNe Ia-CSM. We also use a Galaxy-zoo (Lintott et al., 2011) sample of elliptical and spiral galaxies (randomly sampled in the redshift range \(z=0.015-0.05\)) and BTS SN Ia hosts as comparison samples, collected and used for comparison in Irani et al. (2022). We find the SN Ia-CSM host galaxy population to be consistent with late-type spirals and irregulars with a recent star formation history. 4 out of 12 SNe have clearly spiral hosts, 3 have edge-on host galaxies, 4 seem to have irregulars as hosts and 1 has an unclear host type. Host galaxies of 10 out of 12 SNe have \(w2-w3\) measurements available, which are all \(>1\) mag, putting them in the late-type category (Irani et al., 2022); 1 (SN 2019rvb) does not have a W3 measurement but has \(NUV-PS1_{r}\sim 1\) mag, again putting it towards late-type; and 1 (SN 2020abfe) does not have any of the above information available except the \(PS1_{r}\) band magnitude of 20.766, making it the faintest host galaxy (absolute SDSS \(r\)-band magnitude of \(-17.4\)) in our BTS SN Ia-CSM sample. As noted in S13, the SN Ia-CSM hosts of their sample had generally low luminosities (\(-19.1<M_{r}<-17.6\)), except for MW-like spiral hosts. Our BTS SN Ia-CSM host luminosities lie in the range of \(-21.8<M_{r}<-17.4\), covering low to MW-like luminosities.
### Rates
Following the methodology for calculating the volumetric rate of transients found in the Bright Transient Survey from Perley et al. (2020), we use their equation 2 to calculate the
Figure 8: Integrated fluxes and equivalent widths of H\(\alpha\) emission line with respect to SN phases for the BTS SN Ia-CSM sample. Broad component values are shown with filled markers and narrow component values with un-filled markers. SN 2018evt H\(\alpha\) luminosities and EWs presented in Yang et al. (2022) are also shown in gray circles.
Figure 10: Mass-loss rates estimated from the luminosity of the broad component of H\(\alpha\) for the BTS SNe Ia-CSM. A value of 100 km s\({}^{-1}\) was assumed for the wind velocity.
Figure 9: Velocity of H\(\alpha\) emission line with respect to SN phases for the BTS SN Ia-CSM sample. Broad component values are shown with filled markers and narrow component values with un-filled markers.
SN Ia-CSM rate:
\[R=\frac{1}{T}\sum_{i=1}^{N}\frac{1}{(\frac{4\pi}{3}D_{max,i}^{3})f_{sky}f_{ext}f _{rec}f_{cl,i}}\]
where \(T\) is the duration of the survey, \(N\) is the number of transients that pass the quality cut, \(D_{max,i}\) is the distance out to which the \(i^{th}\) transient with peak absolute magnitude \(M_{i}\) can be detected above the survey magnitude limit \(m_{lim}\) (=19 mag for BTS SNe Ia-CSM) at peak light without any extinction, \(f_{sky}\) is the average active survey coverage as a fraction of full sky, \(f_{ext}\) is average reduction in effective survey volume due to Galactic extinction, \(f_{rec}\) is the average recovery efficiency for a detectable transient within the survey coverage area, and \(f_{cl,i}\) is the classification efficiency dependent on apparent magnitude.
The duration of the survey in which these 12 SNe Ia-CSM were detected is from 2018-05-01 to 2021-05-01, i.e. \(T=3\) years. We calculate \(f_{sky}\) during this time period by averaging the sky area coverage of the public MSIP survey considering 3 day cadence for ZTF Phase I (2018-05-01 to 2020-10-31) and 2 day cadence for ZTF Phase II (since 2020-11-01), which turns out to be 12505 deg\({}^{2}\) for Phase I and 14831 deg\({}^{2}\) for Phase II, giving a mean \(f_{sky}=0.32\). We use the same value of 0.82 for \(f_{ext}\) as calculated in Perley et al. (2020) given there has not been any change in the number and positions of ZTF fields.
To estimate \(f_{rec}\), we consider SNe Ia-CSM brighter than \(-18.5\) peak absolute magnitude and brighter than 18 apparent magnitude (total 5) of which 4 pass the quality cut, giving an \(f_{rec}\) of 0.8. We take classification completeness of 0.75 at
Figure 11: Cumulative distributions of equivalent width of H\(\beta\) and He I \(\lambda\)5876 emission lines calculated from the BTS SNe Ia-CSM (in grey) compared with the respective distributions presented in S13 for SNe Ia-CSM (blue) and SNe IIn (red). Vertical dashed lines mark the median EW of the distributions.
Figure 12: Cumulative distribution of \(H\alpha/H\beta\) intensity ratio (Balmer decrement) calculated from intermediate resolution spectra of BTS SN Ia-CSM sample (grey shaded region). The red line is the distribution of Balmer decrement of SNe IIn measured in S13, the blue line is the SN Ia-CSM Balmer decrement distribution from S13. The black circles are a few representative points indicating the high Balmer decrement values and the uncertainties on them. The vertical dashed line is the median Balmer decrement measured from BTS SNe Ia-CSM.
19 mag, 0.9 at 18.5 mag and 1 at 17.2 mag and linearly interpolate in between these values to get \(f_{cl,i}\).
Then using \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), ignoring cosmological effects11 as in Perley et al. (2020) and applying a uniform K-correction (\(K=2.5\times\log_{10}(1+z)\)), we get a rate of 29.35\({}^{+27.53}_{-21.37}\) Gpc\({}^{-3}\) yr\({}^{-1}\) for SNe Ia-CSM. We also calculate a SN Ia rate of 2.88\({}^{+0.28}_{-0.25}\times 10^{4}\) Gpc\({}^{-3}\) yr\({}^{-1}\) from SNe Ia observed in the same period following the same method, which is close to the value of 2.35\(\times 10^{4}\) Gpc\({}^{-3}\) yr\({}^{-1}\) calculated in Perley et al. (2020). This puts SNe Ia-CSM at 0.02-0.2% of SNe Ia. However, this rate estimate should be considered a lower limit given various caveats in the correct identification of SNe Ia-CSM (see discussion SS4.3). If the ambiguous classification cases outlined in Appendix A are considered to be SNe Ia-CSM and included in the rate calculation, we obtain a rate upper limit of \(97.7^{+135.8}_{-77.3}\) Gpc\({}^{-3}\) yr\({}^{-1}\), which is 0.07-0.8% of SNe Ia.
Footnote 11: Contraction of control time window approximately compensated by increase in the star-formation rate density in the low redshift regime for redshift dependent SN rates.
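A compact sketch of this calculation is given below. The survey parameters follow the values quoted above (\(T=3\) yr, \(f_{sky}=0.32\), \(f_{ext}=0.82\), \(f_{rec}=0.8\), \(m_{lim}=19\)); the peak magnitudes and per-object classification efficiencies in the example call are placeholders, not the actual sample values.

```python
import numpy as np

def d_max_mpc(M_peak, m_lim=19.0, K=0.0):
    """Distance (Mpc) at which a source of absolute magnitude M_peak reaches m_lim,
    with an optional uniform K-correction term K = 2.5 log10(1+z)."""
    return 10.0 ** ((m_lim - M_peak - K - 25.0) / 5.0)

def volumetric_rate(M_peaks, f_cl, T_yr=3.0, f_sky=0.32, f_ext=0.82, f_rec=0.8):
    """R = (1/T) * sum_i 1 / (V_max,i f_sky f_ext f_rec f_cl,i), in Gpc^-3 yr^-1."""
    d_gpc = np.array([d_max_mpc(M) for M in M_peaks]) / 1000.0
    v_max = (4.0 * np.pi / 3.0) * d_gpc ** 3
    return np.sum(1.0 / (v_max * f_sky * f_ext * f_rec * np.asarray(f_cl))) / T_yr

# placeholder peak magnitudes and classification efficiencies for three objects:
print(volumetric_rate([-19.0, -18.6, -19.4], [0.9, 0.85, 0.95]), "Gpc^-3 yr^-1")
```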
### Precursor rates
The ZTF precursor rates were calculated following the method in Strotjohann et al. (2021) which studied the frequency of precursors in interacting SNe found in ZTF. Strotjohann et al. (2021) included 6 of the SNe Ia-CSM presented in this paper in addition to 4 other SNe Ia-CSM not in this paper (see Appendix A for details) for their search but did not find any robust 5\(\sigma\) precursor detections. This non-detection was concluded to be due to the small sample size of SNe Ia-CSM (or that they are more distant) compared to the SN IIn sample, so even if the precursors were as bright or frequent as for SNe IIn, it would be difficult to detect them.
The same search was carried out here for our larger sample by taking the ZTF forced photometry multi-band (\(g,r,i\)) light curves generated by the pipeline outlined in Masci et al. (2019) and stacking them in 1, 3 and 7-day long bins to search for faint outbursts. There were 7389 total available pre-explosion epochs for BTS SNe Ia-CSM, the earliest epoch being 1012 days prior to the explosion and the median phase 340 days prior. Hence the results are valid for typical SN Ia-CSM progenitors at about \(\sim\)1 year before the SN. We did not find any robust 5\(\sigma\) precursor detections. The upper limits for the precursor rates in different bands are shown in Figure 14, where the solid lines indicate up to what fraction of the time a precursor of a given brightness could have been detected while being consistent with the ZTF non-detections. A precursor of \(-15\) magnitude could occur as frequently as \(\sim\)10% of the time given the ZTF non-detections. A continuous search for precursors, as more SNe Ia-CSM are found and classified and their sample size increases, could yield a detection if the precursors are as frequent and bright as for SNe IIn. The dense and massive CSM around these objects is close enough to have been deposited within decades prior to the SN, but the lack of precursors within 1 year indicates that there is likely no violent event that ejects a lot of mass in that period. Probing for precursors could potentially constrain the progenitor in at least some cases. For example, Soker et al. (2013) predict, for their core degenerate (CD) model of PTF11kx-like SNe, the release of significant energy (\(\sim\)10\({}^{49}\) erg) before explosion over a timescale of several years, implying a precursor 3-7 magnitudes fainter than the SN explosion, spread over several years and peaking in the near-IR.
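The binning-and-stacking step can be summarised with a short sketch. It assumes calibrated forced-photometry fluxes with Gaussian uncertainties and uses a simple inverse-variance stack per bin, which is only an approximation of the procedure of Strotjohann et al. (2021).

```python
import numpy as np

def stack_and_search(mjd, flux, flux_err, bin_days=7.0, threshold=5.0):
    """Stack a pre-explosion forced-photometry light curve in fixed-width bins and
    return the bins whose combined flux is a > threshold-sigma detection."""
    mjd, flux, flux_err = map(np.asarray, (mjd, flux, flux_err))
    edges = np.arange(mjd.min(), mjd.max() + bin_days, bin_days)
    idx = np.digitize(mjd, edges)
    detections = []
    for b in np.unique(idx):
        sel = idx == b
        w = 1.0 / flux_err[sel] ** 2                 # inverse-variance weights
        f_bin = np.sum(w * flux[sel]) / np.sum(w)
        sigma_bin = 1.0 / np.sqrt(np.sum(w))
        if f_bin / sigma_bin > threshold:
            detections.append((edges[b - 1], f_bin, f_bin / sigma_bin))
    return detections
```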
## 4 Discussion
### Fraction of SNe Ia-CSM with delayed interaction
The fastest declining SNe in our sample (SNe 2018crl, 2020qxz and 2020aekp) are also the ones that develop a plateau and show relatively stronger SN Ia-like absorption features in their early spectra. They seem to have a delayed start of the interaction like PTF11kx but not as fast a decline, and thus bridge the gap between PTF11kx and the rest of the strongly interacting SNe Ia-CSM. It remains to be seen how many SNe Ia are weakly interacting, with the CSM interaction starting in earnest on timescales of \(\sim\)a year or more after explosion; this requires searching for faint detections in care
Figure 13: Host galaxies of BTS SN Ia-CSM (black circles) on SFR vs stellar mass plot with Galaxy-zoo spiral (blue contours) and elliptical (red contours) galaxies for comparison. BTS SN Ia hosts are also shown for comparison in green circles. Equal sSFR lines are marked with grey dashed lines.
fully calibrated forced photometry light curves (stacked to go fainter), a study currently undertaken by Tervel et al. (in prep). From the current sample, it appears that in addition to SNe Ia-CSM being intrinsically rare, delayed interaction SNe Ia-CSM are even rarer and only constitute about a quarter of all SNe Ia-CSM. This delayed interaction behaviour could also be an effect of asymmetric or clumpy CSM wherein part of the SN ejecta shine through depending on the viewing angle. Observational campaigns that capture the inner boundary of the CSM and the geometry robustly could shed light on the distribution of the inner CSM radius and reveal if it is a continuous distribution or if there are multiple progenitor scenarios within the SN Ia-CSM class.
### Implications for progenitor based on observed mass loss
From Figure 10, the estimated mass-loss rates from a simple spherical treatment of the CSM and a stationary wind lie between \(\sim 10^{-3}\) to \(10^{-1}\) M\({}_{\odot}\) yr\({}^{-1}\) over a period of less than \(\sim 60\) years before explosion. That gives a total mass loss of \(\sim 0.1\) to \(\sim 1\) M\({}_{\odot}\). Dilday et al. (2012) estimated \(\sim 5\) M\({}_{\odot}\) of CSM around PTF11kx while Graham et al. (2017) revised it to be \(\sim 0.06\) M\({}_{\odot}\). Light curve modeling of SN 1997cy and SN 2002ic by Chugai & Yungelson (2004) resulted in \(\sim 5\) M\({}_{\odot}\) estimates for both SNe. Inserra et al. (2016) also fit analytical models to some SNe Ia-CSM and found the CSM mass to lie between 0.4 and 4.4 M\({}_{\odot}\). Since from Figure 5, the pseudo-bolometric luminosities of our SNe Ia-CSM lie somewhere between PTF11kx and SNe 1997cy, 2002ic and 2005gj, with SN 1999E somewhere in the middle, we can say that the total CSM mass in our sample of SN Ia-CSM should also be several solar masses. A WD\(+\)AGB star system has typically been suggested for historical SNe Ia-CSM to explain this massive CSM. The WD could either gain mass through Roche Lobe overflow (RLOF) from the companion that drives an optically thick wind (OTW) or merge with the core of the AGB star that then explodes in or soon after the common envelope phase. Meng & Podsiadlowski (2019) model WD\(+\)MS systems for their common envelope wind (CEW) model and find \(\sim 1\) M\({}_{\odot}\) CSM around SNe Ia-CSM. Thus, given the large observed CSM mass range, the nature of the companion cannot be solely determined from total mass lost. High resolution spectroscopy that can resolve the narrow unshocked CSM wind velocity is also needed to determine the compactness of the companion.
### Implications for progenitor based on observed volumetric rate
Robust observed rate estimates for SNe Ia-CSM have been few and far between. Dilday et al. (2010) found 1 interacting SN Ia (SN 2005gj) in a sample of 79 SNe Ia at \(z<0.15\) in the SDSS-II SN survey, giving a rate of \(\sim\)1%. After the PTF11kx discovery in the Palomar Transient Factory (PTF) survey, the SN Ia-CSM rate was estimated to be \(\sim\)0.1% (1 in 1000 classified SNe Ia; Dilday et al. 2012) but without spectroscopic completeness determination. S13 identified 7 more SNe Ia-CSM from the PTF SN IIn sample, bumping up the estimate to \(\sim\)0.8%. With this sample we have improved the rate estimate, providing a robust value (along with an uncertainty estimate on that value) from an unbiased survey with high spectroscopic completeness up to 18.5 magnitude. However, this rate quite possibly still underestimates the true value for two reasons. The first is possible thermonuclear SNe that are enshrouded so completely by CSM interaction that they are misclassified as SNe IIn in the absence of good early time data. In the BTS SN IIn sample, we found 6 SNe IIn to have ambiguous classifications which could possibly be SNe Ia-CSM; these are described in Appendix A. Including these ambiguous cases in the rate estimation results in a rate upper limit of 0.07-0.8% for strongly interacting thermonuclear SNe, while excluding them gives an underestimated rate of 0.02-0.2%.
The second issue with the rates is that, if there is indeed a continuum of delayed-interaction SNe Ia-CSM like PTF11kx, interaction in SNe Ia may present itself hundreds of days later at magnitudes fainter than ZTF's limit (\(\sim\)20.5), resulting in those SNe not being counted even though they may share the same progenitor as the rest of the interacting SNe Ia-CSM. Lastly, in some rare cases, the SN might appear normal in its light curve shape and duration (and thus would be missed by the selection criteria used in this paper) but show peculiar narrow H\(\alpha\) in its spectrum or bright mid-IR flux (as in the case of SN 2020aaym; Thevenot et al. 2021).
Figure 14: Precursor rate as a function of magnitude calculated from BTS SN Ia-CSM pre-explosion ZTF forced photometry stacked in 7-day bins. The different colored shaded regions correspond to different ZTF bands (\(r\)-red, \(g\)-green, \(i\)-grey). The solid lines depict the upper limits on fraction of the time a precursor of the corresponding magnitude would have been detected which is consistent with the ZTF non-detections.
Han & Podsiadlowski (2006) predicted a rate of 0.1-1% for 02ic-like events for their delayed dynamical instability SD model but could not naturally explain the delayed interaction and multiple CSM shells in PTF11kx (which is relevant for some SNe in our sample). A symbiotic nova-like progenitor was suggested by Dilday et al. (2012) for PTF11kx and they quoted the theoretical rates for the same to lie between 1-30%, however the model could not explain the massive CSM. Soker et al. (2013) suggested a core degenerate (CD) scenario in which the explosion is set by the violent prompt merger of the core of the giant companion on to the WD and could naturally explain the massive CSM of PTF11kx (Livio & Riess, 2003). Soker et al. (2013) estimated the occurrence of such SNe (M\({}_{core}+\) M\({}_{WD}\gtrsim 2\) M\({}_{\odot}\) and M\({}_{env}\gtrsim 4\) M\({}_{\odot}\)) through population synthesis and found it to be 0.002 per 1000 M\({}_{\odot}\) stars formed. Assuming \(\sim\)1-2 SNe Ia occur per 1000 M\({}_{\odot}\) stars formed (Maoz et al., 2012), this corresponds to 0.1-0.2%, which compares well with our observed rate estimate.
The CEW model by Meng & Podsiadlowski (2019) predicts that the SNe Ia-CSM like objects could arise in the SD CEE scenario when CONe White Dwarfs (WD) steadily accrete material at the base of the CE without quickly spiraling in due to the driving of a CEW wind (10-100 km s\({}^{-1}\)). The WD explodes when it reaches the Chandrasekhar mass (1.38 M\({}_{\odot}\)) and could possibly explode within the CE before it is ejected. The CEW model predicts that 25-40% of the SNe Ia from CONe WD in Common envelope evolution with a Main Sequence (MS) companion will show SN Ia-CSM like properties. Meng & Podsiadlowski (2019) also give the ratio of SNe Ia from CONe WDs to normal SNe Ia from CO WDs to be between 1/9 and 1/5 (considering normal SNe Ia only come from CO WD + MS systems). Combining that with the estimate that roughly 10-20% of all SNe Ia may come from the SD scenario (Hayden et al., 2010; Bianco et al., 2011), SNe Ia-CSM from CONe WD according to the CEW model should be 0.28% to 1.6% of all SNe Ia. A spin-down before explosion of the WD (Justham, 2011; Di Stefano & Kilic, 2012) could also explain the time delay between explosion and interaction.
Soker (2022) estimated the common envelope to explosion delay time distribution (CEEDTD) shortly after the CEE (t\({}_{CEED}<10^{4}\) yr), from SN in planetary nebula rates and SN Ia-CSM observed rates, to be roughly constant rather than having a t\({}^{-1}\) dependence; that is, the SN explosion could occur very soon after the CEE as well. Our observed rates are on the lower side compared to these theoretical model estimates but agree within the observational uncertainties, though the CEW model seems to best account for the overall SN Ia-CSM properties.
## 5 Summary
In this paper, we have presented optical and mid-IR photometry, optical spectra and detailed analysis of 12 new SNe Ia-CSM identified in the Zwicky Transient Facility Bright Transient Survey, nearly doubling the total number of such objects discussed previously by Silverman et al. (2013). The properties of the sample extracted in this paper agree very well with the similar analysis conducted in S13: in particular, the median EW of H\(\beta\) is found to be significantly weaker in SNe Ia-CSM compared with SNe IIn, and consequently the Balmer decrements are ubiquitously higher in SNe Ia-CSM. The brightness of SNe Ia-CSM in the mid-IR is comparable to that of SNe IIn, and the observation of reduced flux in the red side of the H\(\alpha\) wing together with the mid-IR brightness points to the formation of new dust in the cooling post-shock gas. The host galaxies of SNe Ia-CSM lie towards late-type galaxies with recent star formation. Unlike SNe IIn, no precursors were found within \(\sim\)1000 days before explosion for SNe Ia-CSM, which could be an observational bias (smaller number of SNe Ia-CSM compared to SNe IIn). We provide a robust rate estimate of 0.02-0.2% of all SNe Ia for SNe Ia-CSM on account of the BTS survey being unbiased and spectroscopically highly complete. The simple mass-loss rate estimates from the broad H\(\alpha\) luminosity of \(\sim 10^{-2}\) M\({}_{\odot}\) yr\({}^{-1}\) are similar to previous estimates from various methods and indicate several solar masses of CSM around these SNe. The observed rate agrees, within the observational uncertainties, with the CEW model by Meng & Podsiadlowski (2019), which can also explain the interaction delay and the massive CSM.
There are still many unanswered questions about the nature of the progenitors and whether we are accurately identifying all potential members of this class. As ZTF Phase II continues, we are identifying more and more SNe Ia-CSM (interacting with hydrogen-rich and helium-rich CSM); looking further to the future, if ZTF continues for a Phase III and when LSST survey operations begin, a larger sample would further improve upon the observed rate calculation. However, individual object studies are just as important, and detailed spectroscopic and multi-wavelength follow-up is essential to capture the CSM configuration and mass.
## 6 Acknowledgment
Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grants No. AST-1440341 and AST-2034437 and a collaboration including current partners Caltech, IPAC, the Weizmann Institute of Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence
-Livermore National Laboratories, IN2P3, University of Warwick, Ruhr University Bochum, Northwestern University and former partners the University of Washington, Los Alamos National Laboratories, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW. The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant #12540303 (PI: Graham). This work was supported by the GROWTH project (Kasliwal et al. 2019) funded by the National Science Foundation under PIRE Grant No 1545949. The Oskar Klein Centre was funded by the Swedish Research Council. Partially based on observations made with the Nordic Optical Telescope, operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. Some of the data presented here were obtained with ALFOSC. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA; the observatory was made possible by the generous financial support of the W. M. Keck Foundation. The SED Machine is based upon work supported by the National Science Foundation under Grant No. 1106171. This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. The Asteroid Terrestrial-impact Last Alert System (ATLAS) project is primarily funded to search for near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen' s University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile. This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. Y. Sharma thanks the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cybertraining Grant #1829740, the Brinson Foundation, and the Moore Foundation; her participation in the program has benefited this work. S. Schulze acknowledges support from the G.R.E.A.T research environment, funded by _Vetenskapsradet_, the Swedish Research Council, project number 2016-06012. This work has been supported by the research project grant "Understanding the Dynamic Universe" funded by the Knut and Alice Wallenberg Foundation under Dnr KAW 2018.0067, The research of Y. Yang is supported through a Bengier-Winslow-Robertson Fellowship. Fritz (van der Walt et al., 2019; Duev et al., 2019) and GROWTH marshal (Kasliwal et al., 2019) (dynamic collaborative platforms for time-domain astronomy) were used in this work. 
LAMBDAR (Wright et al., 2016), Prospector (Johnson et al., 2021), pySEDM (Rigault et al., 2019), IRAF (Tody, 1986, 1993), pyNOT ([https://github.com/jkrogager/PyNOT](https://github.com/jkrogager/PyNOT)), LPipe (Perley, 2019), pypeit (Prochaska et al., 2020), extinction (Barbary, 2016), pyraf-dbsp (Bellm & Sesar, 2016), FPipe (Fremling et al., 2016), DBSP_DRP (dbs, 2021), ztfquery (Rigault, 2018), astropy (Astropy Collaboration et al., 2013, 2018, 2022), matplotlib (Hunter, 2007).
|
2302.05079 | Output tracking based on extended observer for nonlinear uncertain
systems | A high-gain extended observer is designed for a class of nonlinear uncertain
systems. This observer has the ability of estimating system uncertainty, and it
can be used to estimate the derivatives of signal up to order n. The controller
based on this extended observer can make the tracking error and its derivatives
converge to zero rapidly even when uncertainties and disturbances exist. The
result of simulation indicates that this method has satisfactory control
performance for nonlinear uncertain systems. | Xinhua Wang, Zengqiang Chen, Zhuzhi Yuan | 2023-02-10T06:42:19Z | http://arxiv.org/abs/2302.05079v1 | # Output tracking based on extended observer for nonlinear uncertain systems
###### Abstract
A high-gain extended observer is designed for a class of nonlinear uncertain systems. This observer is able to estimate the system uncertainty, and it can be used to estimate the derivatives of a signal up to order \(n\). The controller based on this extended observer can make the tracking error and its derivatives converge to zero rapidly even when uncertainties and disturbances exist. The simulation results indicate that this method has satisfactory control performance for nonlinear uncertain systems.
_Keywords:_ nonlinear uncertain system, high-gain extended observer, estimating uncertainty, tracking error
## 1 Introduction
Nonlinear control is a main research field in control theory and engineering [1-5]. The output tracking problem for nonlinear uncertain systems is a topic of active research, and many control methods have been proposed. In practice, the derivatives of the tracked signal are generally unknown, which increases the difficulty of controller design. Output tracking problems under the assumption that the tracked signal and its derivatives are known were considered in [3-5]. Many state observers are poor at observing nonlinear systems and tend to converge slowly. In [3], a high-gain observer and a sliding mode control are used for output feedback of nonlinear systems, and the derivatives of the tracked signal are assumed to be known. However, for this method, an obvious tracking error exists in the tracking control of uncertain systems. In [6,7], the proposed extended state observer has precise estimation performance and a strong disturbance rejection ability. However, the system stability was not considered.
In this paper, for a class of nonlinear uncertain systems, a high-gain extended observer is presented to estimate the system uncertainty and the unknown states. The observer has a rapid convergence rate and accurate estimation, and the stability of the extended observer is proved. Furthermore, a controller is designed based on the extended observer to make the convergence rate and accuracy of the output tracking errors meet the control requirements.
## 2 Problem analysis
The following nonlinear uncertain system is considered:
\[\left\{\begin{array}{l}x^{(n)}=f\left(x,\dot{x},\cdots,x^{(n-1)},t\right)+b \cdot u,\\ y=x\end{array}\right. \tag{1}\]
where, the function \(f\left(x,\dot{x},\cdots,x^{(n-1)},t\right)\) includes the uncertainties and disturbances, and its first-order derivative exists; \(u\) is the control input; \(b\) is a non-zero constant; the reference signal is \(y_{d}\), and its derivatives are unknown.
Define \(x_{1}=x\), \(x_{2}=\dot{x}\), \(x_{3}=\dot{x}_{2}=x^{(2)}\), and \(x_{n}=\dot{x}_{n-1}=x^{(n-1)}\). Then, \(\dot{x}_{n}=x^{(n)}=f\left(x_{1},x_{2},\cdots,x_{n},t\right)+b\cdot u\).
Also, define \(x_{n+1}=f\left(x_{1},x_{2},\cdots,x_{n},t\right)\), \(\dot{x}_{n+1}=f^{(1)}\left(x_{1},x_{2},\cdots,x_{n},t\right)=g\left(x_{1},x_{ 2},\cdots,x_{n},t\right)\), and \(\left|g\left(x_{1},x_{2},\cdots,x_{n},t\right)\right|+\left|y_{d}^{(n+1)} \right|\leq M\), where, \(0\leq M<+\infty\). Therefore, the system (1) can be expressed
\[\left\{\begin{array}{l}\hat{\hat{e}}_{1}=\widehat{e}_{2}-\frac{h_{1}}{\varepsilon} \left(\widehat{e}_{1}-e_{1}\right)\\ \vdots\\ \hat{\hat{e}}_{n}=\widehat{e}_{n+1}-\frac{h_{n}}{\varepsilon^{n}}\left( \widehat{e}_{1}-e_{1}\right)+b\cdot u\\ \hat{\hat{e}}_{n+1}=-\frac{h_{n+1}}{\varepsilon^{n+1}}\left(\widehat{e}_{1}-e _{1}\right)\end{array}\right. \tag{6}\]
Therefore, we get the following conclusions:
1)
\[\lim_{\varepsilon\to 0^{+}}\left\|\delta\left(t\right)\right\|=0 \tag{7}\]
2) When a \(\varepsilon\in\left(0,1\right)\) is selected, we get
\[\lim_{t\rightarrow+\infty}\left\|\delta\left(t\right)\right\|\leq\frac{ \varepsilon M}{\lambda}\left\|T\right\|\left\|T^{-1}\right\| \tag{8}\]
where,
\[\delta\left(t\right) = \left[\begin{array}{ccc}\delta_{1}&\cdots&\delta_{n+1}\end{array} \right]^{T}\] \[= \left[\begin{array}{ccc}\widehat{e}_{1}-e_{1}&\cdots&\widehat{ e}_{n+1}-e_{n+1}\end{array}\right]^{T},\] \[\widehat{e}\left(t\right) = \left[\begin{array}{ccc}\widehat{e}_{1}&\cdots&\widehat{e}_{n+1 }\end{array}\right]^{T},\]
\(\varepsilon\in\left(0,1\right)\) is the perturbation parameter.
_Proof:_ The system error between (4) and (3) is
\[\dot{\delta}\left(t\right)=A\delta\left(t\right)+B\left(-g\left(x_{1},x_{2}, \cdots,x_{n},t\right)-y_{d}^{\left(n+1\right)}\right) \tag{9}\]
Then, the solution to (9) can be expressed by
\[\delta\left(t\right)=\exp\left(A\cdot t\right)\delta\left(0\right)+\int_{0}^{ t}\exp\left(A\left(t-\tau\right)\right)\left(-g\left(x_{1},x_{2},\cdots,x_{n},t \right)-y_{d}^{\left(n+1\right)}\right)d\tau B \tag{10}\]
Therefore, we get
\[\left\|\delta\left(t\right)\right\| \leq \left\|\exp\left(A\cdot t\right)\right\|\left\|\delta\left(0\right)\right\|+M\left\|\int_{0}^{t}\exp\left(A\left(t-\tau\right)\right)d\tau\right\|\left\|B\right\| \tag{11}\] \[\leq \left\|T\right\|\left\|T^{-1}\right\|\exp\left(-\frac{\lambda}{\varepsilon}t\right)\left\|\delta\left(0\right)\right\|\] \[+\left\|T\right\|\left\|T^{-1}\right\|M\int_{0}^{t}\exp\left(-\frac{\lambda}{\varepsilon}\left(t-\tau\right)\right)d\tau\left\|B\right\|\] \[\leq \left\|T\right\|\left\|T^{-1}\right\|\left[\exp\left(-\frac{\lambda}{\varepsilon}t\right)\left\|\delta\left(0\right)\right\|+M\frac{\varepsilon}{\lambda}\left(1-\exp\left(-\frac{\lambda}{\varepsilon}t\right)\right)\left\|B\right\|\right]\]
Because \(\left\|B\right\|=1\) and \(\left\|\delta\left(0\right)\right\|\) is bounded, \(\lim\limits_{\varepsilon\to 0^{+}}\left\|\delta\left(t\right)\right\|=0\). When \(\varepsilon\in\left(0,1\right)\) is selected, from (11), we can get \(\lim\limits_{t\rightarrow+\infty}\left\|\delta\left(t\right)\right\|\leq\frac{ \varepsilon M}{\lambda}\left\|T\right\|\left\|T^{-1}\right\|\). This concludes the proof. \(\blacksquare\)
The aim of observer design is to make \(\widehat{e}_{1}\to e_{1}\), \(\cdots\), \(\widehat{e}_{n+1}\to e_{n+1}\). This extended observer can be used to estimate the system uncertainty and signal derivatives up to order \(n\). Although the convergence speed of common linear extended observers are slower than that of nonlinear extended observers in the neighborhood of the equilibrium, the use of high gains in the observer can speed up the convergence. In addition to estimation of unknown error variables and system uncertainty in (3), the controller \(u\) is designed according to the observer estimation to implement the system tracking.
## 4 Controller design
**Theorem 2:** For the system error (3) and the extended observer (6), a sliding variable is select as
\[\sigma\left(t\right)=\widehat{e}_{n}+a_{n-1}\widehat{e}_{n-1}+\cdots+a_{1} \widehat{e}_{1} \tag{12}\]
where, the polynomial \(s^{n-1}+a_{n-1}s^{n-2}+\cdots+a_{1}=0\) is Hurwitz. The controller is designed as
\[u = -b^{-1}\left(U_{0}\mbox{sign}\left(\sigma\left(t\right)\right)-\left(\frac{h_{n}}{\varepsilon^{n}}+a_{n-1}\frac{h_{n-1}}{\varepsilon^{n-1}}+\cdots+a_{1}\frac{h_{1}}{\varepsilon^{1}}\right)\left(\widehat{e}_{1}-e_{1}\right)\right. \tag{13}\]
Then, we can get
\[\lim\limits_{t\rightarrow\infty}\left\|e\left(t\right)\right\|\leq k_{p}\sqrt {\varepsilon} \tag{14}\]
where, \(k_{p}\) and \(U_{0}\) are the positive constants.
_Proof:_ Select a Lyapunov function candidate as \(V=\frac{1}{2}\sigma^{2}\left(t\right)\). Then, we get
\[\dot{V} = \sigma\left(t\right)\left(\hat{\widehat{e}}_{n}+a_{n-1}\hat{ \widehat{e}}_{n-1}+\cdots+a_{1}\hat{\widehat{e}}_{1}\right) \tag{15}\] \[= \sigma\left(t\right)\left(\widehat{e}_{n+1}-\frac{h_{n}}{ \varepsilon^{n}}\left(\widehat{e}_{1}-e_{1}\right)+bu+a_{n-1}\left(\widehat{ e}_{n}-\frac{h_{n-1}}{\varepsilon^{n-1}}\left(\widehat{e}_{1}-e_{1}\right) \right)+\cdots+a_{1}\left(\widehat{e}_{2}-\frac{h_{1}}{\varepsilon}\left( \widehat{e}_{1}-e_{1}\right)\right)\right)\] \[= -U_{0}\left|\sigma\left(t\right)\right|=-\sqrt{2}U_{0}V^{\frac{1} {2}}\]
Therefore, there exist a finite time \(T_{0}\), for \(t\geq T_{0}\), the variables are in the sliding surface \(\sigma\left(t\right)=0\)[8]. From (3) and \(\sigma\left(t\right)=0\), for \(t\geq T_{0}\), we get
\[\dot{e}_{n-1} = e_{n}=\widehat{e}_{n}-\delta_{n}=-\left(a_{n-1}\widehat{e}_{n-1 }+\cdots+a_{1}\widehat{e}_{1}\right)-\delta_{n} \tag{16}\] \[= -\left\{a_{1}\left(e_{1}+\delta_{1}\right)+\cdots+a_{n-1}\left(e _{n-1}+\delta_{n-1}\right)\right\}-\delta_{n}\] \[= -a_{1}e_{1}-\cdots-a_{n-1}e_{n-1}-a_{1}\delta_{1}-\cdots-a_{n-1} \delta_{n-1}-\delta_{n}\]
From (3) and (16), we get
\[\dot{\widehat{e}}=\widetilde{A}\cdot\widetilde{e}+H\cdot\delta \left(t\right) \tag{17}\]
where,
\[\widetilde{e} = \left[\begin{array}{cccc}e_{1}&\cdots&e_{n-1}\end{array}\right]^{T}, \text{ }\delta\left(t\right)=\left[\begin{array}{cccc}\delta_{1}&\cdots&\delta_{n+1} \end{array}\right]^{T},\] \[\widetilde{A} = \left[\begin{array}{cccc}0&1&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&\cdots&&&1\\ -a_{1}&-a_{2}&\cdots&-a_{n-1}\end{array}\right],\] \[H = \left[\begin{array}{cccc}0&\cdots&0&0&0\\ \vdots&\ddots&\vdots&\vdots&\vdots\\ 0&\cdots&0&0&0\\ -a_{1}&\cdots&-a_{n-1}&-1&0\end{array}\right] \tag{18}\]
Because both \(\widetilde{A}\) and \(A\) are Hurwitz, for the given positive-definite matrices \(Q_{1}\) and \(Q_{2}\), there exist the positive-define matrices \(P_{1}\) and \(P_{2}\), such that
\[P_{1}\widetilde{A}+\widetilde{A}^{T}P_{1}=-Q_{1},\text{ }P_{2}A+A^{T}P_{2}=-Q_{2} \tag{19}\]
Define \(\Phi\left(\widetilde{e},\text{ }\delta\left(t\right)\right)=\widetilde{e}^{T}P_{1} \widetilde{e}+\delta^{T}\left(t\right)P_{2}\delta\left(t\right)\). Taking derivative for \(\Phi\left(\widetilde{e},\text{ }\delta\left(t\right)\right)\) along the solutions of equations (9) and (17), we get
\[\dot{\Phi}\left(\widetilde{e},\text{ }\delta\left(t\right)\right)<-\eta_{1} \Phi\left(\widetilde{e},\text{ }\delta\left(t\right)\right),\text{ }\Phi\left( \widetilde{e},\text{ }\delta\left(t\right)\right)>r_{1}\varepsilon \tag{20}\]
where, \(\eta_{1}\) and \(r_{1}\) are the positive constants. Select \(r_{2}>r_{1}\), and define
\[\Omega=\left\{\widetilde{e},\text{ }\delta\left(t\right)\right|\Phi\left( \widetilde{e},\text{ }\delta\left(t\right)\right)\leq r_{2}\varepsilon\right\} \tag{21}\]
Then, we can find that there exists a finite time \(t_{1}\) such that, for \(t>t_{1}\), \(\Phi\left(\widetilde{e},\text{ }\delta\left(t\right)\right)\in\Omega\). Therefore, from (16) and \(\lim\limits_{t\rightarrow+\infty}\left\|\delta\left(t\right)\right\|\leq\frac{\varepsilon M}{\lambda}\left\|T\right\|\left\|T^{-1}\right\|\), we get \(\lim\limits_{t\rightarrow\infty}\left\|e\left(t\right)\right\|\leq k_{p}\sqrt{\varepsilon}\), where \(k_{p}\) is a positive constant. This concludes the proof. \(\blacksquare\)
## 5 Simulation example
The following system is considered:
\[\dot{x}_{1} = x_{2}\] \[\dot{x}_{2} = \cos\frac{\pi}{2}x_{1}-x_{1}^{1/3}-4x_{2}^{1/3}+u\] \[y = x_{1}\]
The reference signal \(y_{d}=2\sin t\). From (2), (3), (6) and \(e_{1}=y-y_{d}\), the designed extended observer is as follows:
\[\begin{array}{l}\dot{\widetilde{e}}_{1}=\widetilde{e}_{2}-\frac{6}{0.1} \left(\widetilde{e}_{1}-e_{1}\right)\\ \dot{\widetilde{e}}_{2}=\widehat{e}_{3}-\frac{11}{0.1^{2}}\left(\widehat{e}_ {1}-e_{1}\right)+u\\ \dot{\widetilde{e}}_{3}=-\frac{6}{0.1^{3}}\left(\widetilde{e}_{1}-e_{1}\right) \end{array}\]
Select the sliding variable \(\sigma\left(t\right)=\widehat{e}_{2}+\widehat{e}_{1}\), and the controller is designed as
\[u=-\left(4\text{sign}\left(\sigma\left(t\right)\right)-\left(\frac{11}{0.1^{2}}+ \frac{6}{0.1}\right)\left(\widehat{e}_{1}-e_{1}\right)+\widehat{e}_{3}+\widehat {e}_{2}\right)\]
The plot of output tracking errors is shown in Figure 1.
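For reference, a minimal forward-Euler simulation of this example (plant, extended observer and sliding-mode controller) can be written as below. The initial conditions, step size and simulation length are illustrative choices; the observer gains and the controller follow the expressions above.

```python
import numpy as np

dt, T = 1e-4, 20.0
h1, h2, h3, eps = 6.0, 11.0, 6.0, 0.1          # observer gains of this example
x1, x2 = 1.0, 0.0                              # plant states (illustrative initial condition)
e1h, e2h, e3h = 1.0, 0.0, 0.0                  # observer initialised at the measured e1(0)

for k in range(int(T / dt)):
    t = k * dt
    yd = 2.0 * np.sin(t)
    e1 = x1 - yd                               # measured tracking error
    sigma = e2h + e1h                          # sliding variable
    u = -(4.0 * np.sign(sigma)
          - (h2 / eps**2 + h1 / eps) * (e1h - e1)
          + e3h + e2h)                         # controller of this example
    # extended observer
    de1h = e2h - (h1 / eps) * (e1h - e1)
    de2h = e3h - (h2 / eps**2) * (e1h - e1) + u
    de3h = -(h3 / eps**3) * (e1h - e1)
    # plant: x1' = x2, x2' = cos(pi/2 x1) - x1^(1/3) - 4 x2^(1/3) + u
    dx1 = x2
    dx2 = np.cos(np.pi / 2 * x1) - np.cbrt(x1) - 4.0 * np.cbrt(x2) + u
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    e1h, e2h, e3h = e1h + dt * de1h, e2h + dt * de2h, e3h + dt * de3h

print("tracking error:", x1 - 2.0 * np.sin(T), "derivative error:", x2 - 2.0 * np.cos(T))
```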
## 6 Conclusion
This paper has presented a method of output tracking control based on an extended observer. The theoretical analysis and simulation show that the convergence rate and precision of estimation and control are satisfactory. Future work is to design observers and controllers for nonlinear non-minimum-phase systems using the extended observer and centre-manifold theory.
|
2304.01191 | Fast Numerical Multivariate Multipoint Evaluation | We design nearly-linear time numerical algorithms for the problem of
multivariate multipoint evaluation over the fields of rational, real and
complex numbers. We consider both \emph{exact} and \emph{approximate} versions
of the algorithm. The input to the algorithms are (1) coefficients of an
$m$-variate polynomial $f$ with degree $d$ in each variable, and (2) points
$a_1,..., a_N$ each of whose coordinate has value bounded by one and
bit-complexity $s$.
* Approximate version: Given additionally an accuracy parameter $t$, the
algorithm computes rational numbers $\beta_1,\ldots, \beta_N$ such that
$|f(a_i) - \beta_i| \leq \frac{1}{2^t}$ for all $i$, and has a running time of
$((Nm + d^m)(s + t))^{1 + o(1)}$ for all $m$ and all sufficiently large $d$.
* Exact version (when over rationals): Given additionally a bound $c$ on the
bit-complexity of all evaluations, the algorithm computes the rational numbers
$f(a_1), ... , f(a_N)$, in time $((Nm + d^m)(s + c))^{1 + o(1)}$ for all $m$
and all sufficiently large $d$. .
Prior to this work, a nearly-linear time algorithm for multivariate
multipoint evaluation (exact or approximate) over any infinite field appears to
be known only for the case of univariate polynomials, and was discovered in a
recent work of Moroz (FOCS 2021). In this work, we extend this result from the
univariate to the multivariate setting. However, our algorithm is based on
ideas that seem to be conceptually different from those of Moroz (FOCS 2021)
and crucially relies on a recent algorithm of Bhargava, Ghosh, Guo, Kumar &
Umans (FOCS 2022) for multivariate multipoint evaluation over finite fields,
and known efficient algorithms for the problems of rational number
reconstruction and fast Chinese remaindering in computational number theory. | Sumanta Ghosh, Prahladh Harsha, Simão Herdade, Mrinal Kumar, Ramprasad Saptharishi | 2023-04-03T17:57:17Z | http://arxiv.org/abs/2304.01191v1 | # Fast Numerical Multivariate Multipoint Evaluation
###### Abstract
We design nearly-linear time numerical algorithms for the problem of multivariate multipoint evaluation over the fields of rational, real and complex numbers. We consider both _exact_ and _approximate_ versions of the algorithm. The input to the algorithms are (1) coefficients of an \(m\)-variate polynomial \(f\) with degree \(d\) in each variable, and (2) points \(\mathbf{a}_{1},\ldots,\mathbf{a}_{N}\) each of whose coordinate has value bounded by one and bit-complexity \(s\).
Approximate version: Given additionally an accuracy parameter \(t\), the algorithm computes rational numbers \(\beta_{1},\ldots,\beta_{N}\) such that \(|f(\mathbf{a}_{i})-\beta_{i}|\leq\nicefrac{{1}}{{2^{t}}}\) for all \(i\), and has a running time of \(\left((Nm+d^{m})(s+t)\right)^{1+o(1)}\) for all \(m\) and all sufficiently large \(d\).
Exact version (when over rationals): Given additionally a bound \(c\) on the bit-complexity of all evaluations, the algorithm computes the rational numbers \(f(\mathbf{a}_{1}),\ldots,f(\mathbf{a}_{N})\), in time \(\left((Nm+d^{m})(s+c)\right)^{1+o(1)}\) for all \(m\) and all sufficiently large \(d\).
Our results also naturally extend to the case when the input is over the field of real or complex numbers under an appropriate standard model of representation of field elements in such fields.
Prior to this work, a nearly-linear time algorithm for multivariate multipoint evaluation (exact or approximate) over any infinite field appears to be known only for the case of univariate polynomials, and was discovered in a recent work of Moroz [13]. In this work, we extend this result from the univariate to the multivariate setting. However, our algorithm is based on ideas that seem to be conceptually different from those of Moroz [13] and crucially relies on a recent algorithm of Bhargava, Ghosh, Guo, Kumar & Umans [2] for multivariate multipoint evaluation over finite fields, and known efficient algorithms for the problems of rational number reconstruction and fast Chinese remaindering in computational number theory.
## 1 Introduction
In this paper, we study the problem of designing fast algorithms for the following natural computational problem.
Given an \(m\) variate polynomial \(f\) of degree less than \(d\) in each variable over an underlying field \(\mathbf{F}\) as a list of coefficients, and (arbitrary) evaluation points \(\mathbf{a}_{1},\mathbf{a}_{2},\ldots,\mathbf{a}_{N}\in\mathbb{F}^{m}\), output \(f(\mathbf{a}_{i})\) for every \(i\).
This computational task is referred to as _Multivariate Multipoint Evaluation (MME)_ in literature and fast algorithms for MME are of fundamental interest in computational algebra, not only due to the evident natural appeal of the problem but also due to potential applications of MME as an important subroutine for algorithms for many other algebraic problems (see [11] for a detailed discussion on these applications).
The input for MME is specified by \((d^{m}+Nm)\) field elements, or alternatively \((d^{m}+Nm)\cdot s\) bits, where \(s\) is an upper bound on the bit complexity of any field constant in the input. For finite fields, \(s\) can be taken to be \(\log|\mathbf{F}|\). Clearly, there is an algorithm for this problem that takes roughly \((d^{m}\cdot Nm)^{1+o(1)}\) many field operations or about \((d^{m}\cdot Nm\cdot s)^{1+o(1)}\) bit operations: we iteratively evaluate the polynomial one input point at a time. Obtaining significantly faster and ideally _nearly-linear_1 time algorithms for MME is the main question motivating this work. Here the time complexity of an algorithm could be measured either in terms of the number of field operations (in case the algorithm is "algebraic2" in the sense that only uses field operations over the underlying field, e.g. like the trivial algorithm outlined above) or the number of bit operations.
Footnote 1: We say that an algorithm has time complexity nearly-linear in the input size if for all sufficiently large \(n\), the algorithms runs on inputs of size \(n\) in time \(n^{1+o(1)}\).
Footnote 2: Algorithms for MME that only need arithmetic over the underlying field in their execution, or in other words can be modelled as an arithmetic circuit over the underlying field are referred to as algebraic.
### Prior work
Before describing the precise problem studied in this work and our main results, we start with a brief discussion of the current state of art of algorithms for MME. While the results in this paper are over infinite fields like reals, rationals and complex numbers, we begin our discussion of prior work on MME by recalling the state of affairs over finite fields.
#### 1.1.1 Multipoint evaluation over finite fields
Multipoint evaluation of polynomials is a non-trivial problem even for the case of univariate polynomials, and a non-trivial algorithm is unclear even for this case over any (sufficiently large) field. When the set of input points have additional structure, for instance, they are all roots of unity of some order over the underlying field, the Fast Fourier Transform (FFT) gives us a nearly-linear time algorithm for this problem. However, it is not immediately clear whether ideas based on FFT can be easily extended to the case of arbitrary evaluation points.
In a beautiful work in 1974, Borodin and Moenck [14] designed a significantly faster algorithm for univariate multipoint evaluation by building on FFT and a fast algorithm for division with remainder for univariate polynomials. The algorithm of Borodin and Moenck worked over all fields and was algebraic, in the sense mentioned earlier, the number of field operations needed by the algorithm was \((N+d)^{1+o(1)}\), nearly-linear in the number of field elements in the input.
Extending these fast algorithms for multipoint evaluation from the univariate to the multivariate case proved to be quite challenging, even for the bivariate case. Nusken and Ziegler [13] gave a non-trivially fast algorithm for this problem over all fields, although the precise time complexity of their algorithm was not nearly linear in the input size. The state of art for this problem saw a significant improvement in the works of Umans [12] and Kedlaya & Umans [11] (see also [11]) who gave fast algorithms for MME for the case when the number of variables \(m\) is significantly smaller than the degree parameter \(d\), i.e. \(m=d^{o(1)}\), over fields of small characteristic and all finite fields respectively.
This case of large number of variables was addressed recently in works of Bhargava, Ghosh, Kumar & Mohapatra [1] and Bhargava, Ghosh, Guo, Kumar & Umans [1] who gave fast3 algorithms for MME over fields of small characteristic and over all finite fields respectively, for all sufficiently large \(m,d\).
Footnote 3: Strictly speaking, these algorithms do not run in nearly-linear time, since the running time has \((\log|\mathbf{F}|)^{c}\) factor where \(c\) is a fixed constant that can be greater than one. However, the dependence of the running time on the term \((d^{m}+Nm)\) is nearly-linear.
We also note that the algorithms of Kedlaya & Umans [11] and those of Bhargava, Ghosh, Guo, Kumar & Umans [1] for MME over all finite fields are not algebraic, and in particular rely on bit access to the input field elements and bit operations on them. This is in contrast to the algorithms of Umans [12] and Bhargava, Ghosh, Kumar & Mohapatra [1] for MME over finite fields of small characteristic that are algebraic in nature. Designing algebraic algorithms for MME over all finite fields continues to be a very interesting open problem in this line of research.
#### 1.1.2 Multipoint evaluation over infinite fields
As we shall see, our understanding of the problem here is rather limited compared to that over finite fields. However, before moving on to the results, we first discuss some subtleties with the definition of this problem itself over infinite fields.
Field operations vs bit complexity:Field arithmetic over finite fields preserves the worst case bit complexity of the constants generated, but this is not the case over infinite fields. This increase in bit-complexity in intermediate computations leads to some issues that we discuss next.
The first issue is that even the bit complexity of the output may not be nearly-linear in the input bit complexity, thereby ruling out any hope of having an algorithm with time complexity nearly-linear in the bit complexity of the input. The second issue is that even for inputs where the
bit complexity of the input field elements and the output field elements are promised to be small, it might be the case that in some intermediate stage of its run, an algorithm for MME generates field elements of significantly large bit complexity. For instance, the classical algorithm of Borodin & Moenck for univariate multipoint evaluation has near linear complexity in terms of the number of field operations, but it is not clear if the bit complexity of the algorithm is also nearly-linear in the input and output bit complexities.
The input and output model:For fields such as real or complex numbers, we need to specify a model for providing the inputs which potentially require infinite precision. The standard model used in numerical algorithms is via black-boxes that we refer to as _approximation oracles_ (formally defined in Definition2.7). Informally an approximation oracle for a real number \(\alpha\in(-1,1)\) provides, for every \(k\in\mathbb{N}\), access to the first \(k\) bits of \(\alpha\) after the decimal, and its sign in time \(\tilde{O}(k)\) (for complex numbers, we will assume the real and imaginary parts are provided via such oracles).
For the output, we could either ask to compute the evaluations to the required precision, or compute the evaluations exactly when, say, in the case of rational numbers. In this paper, we consider both versions of these problems.
Note that computing a real number \(\alpha\in(-1,1)\) within a given error \(\varepsilon<1\) is essentially the same as computing the most significant \(\Omega(\log\nicefrac{{1}}{{\varepsilon}})\) bits of the output correctly. In this sense, \(O(\log\nicefrac{{1}}{{\varepsilon}})\) provides a natural upper bound on the bit complexity of the output for an instance of approximate multipoint evaluation. Perhaps a bit surprisingly, we did not know an algorithm for multipoint evaluation with bit complexity nearly-linear in input size and \((\log\nicefrac{{1}}{{\varepsilon}})\) even for the setting of univariate polynomials till very recently. This is in contrast to the result of Borodin & Moenck [1] that obtains an upper bound on the number of field operations (but not the number of bit operations) for (exact) univariate multipoint evaluation over all fields.
In a beautiful recent work, Moroz [13] designed such an algorithm for the approximation version of univariate multipoint evaluation. Formally, he proved the following theorem.
**Theorem 1.1** (Moroz [13]).: _There is a deterministic algorithm that takes as input a univariate polynomial \(f(x)=\sum_{i=0}^{d}f_{i}x^{i}\in\mathbb{C}[x]\) as a list of complex coefficients, with \((|f|_{1}:=\sum_{i=0}^{d}|f_{i}|\leq 2^{\tau})\) and inputs \(a_{1},a_{2},\ldots,a_{d}\in\mathbb{C}\) of absolute value less than one, and outputs \(\beta_{1},\beta_{2},\ldots,\beta_{d}\in\mathbb{C}\) such that for every \(i\),_
\[|f(a_{i})-\beta_{i}|\leq|f|_{1}\cdot 2^{-t}\,,\]
_and has bit complexity at most \(\tilde{O}(d(\tau+t))\)._
As our main result in this paper, we prove a generalization of Theorem1.1 to the multivariate setting.
### Our results
Before stating our results, we formally define the problems that we study. The first question of approximate-MME is essentially the generalization of the univariate version of the problem studied by Moroz [13]. For convenience, we state the problem for the fields of rational and real numbers, but they extend in a straightforward manner to complex numbers and other natural subfields of it.
**Problem 1.2** (Approximate multivariate multipoint evaluation (approximate-MME)).: _We are given as input an \(m\)-variate polynomial \(f\in\mathbb{R}[\mathbf{x}]\) of degree at most \((d-1)\) in each variable as a list of coefficients, points \(\mathbf{a}_{1},\ldots,\mathbf{a}_{N}\in\mathbb{R}^{m}\), and an accuracy parameter \(t\in\mathbb{N}\). Here every field element is assumed to be in \((-1,1)\) and is given via an approximation oracle._
_Compute rational numbers \(\beta_{1},\ldots,\beta_{N}\) such that \(|f(\mathbf{a}_{i})-\beta_{i}|<1/2^{t}\) for all \(i\in[N]\)._
We also study the following variant of MME in the paper.
**Problem 1.3** (Exact multivariate multipoint evaluation (exact-MME)).: _We are given as input an \(m\)-variate polynomial \(f\in\mathbb{Q}[\mathbf{x}]\) of degree at most \((d-1)\) in each variable as a list of coefficients, points \(\mathbf{a}_{1},\ldots,\mathbf{a}_{N}\in\mathbb{Q}^{m}\) and an integer parameter \(s>0\), such that all rational numbers in the input and output are expressible in the form \(p/q\) for integers \(p,q\) with \(|p|,|q|<2^{s}\) and every rational number in the input has absolute value less than one._
_Compute \(f(\mathbf{a}_{1}),\ldots,f(\mathbf{a}_{N})\)._
The restriction that the absolute value of all constants is at most one requires a short discussion. The restriction on the coefficients of \(f\) is without loss of generality (by scaling), but the restriction on the coordinates of points is _not_ without loss of generality but is nevertheless well-motivated. See Remark 5.1 for details.
Our main result is fast algorithms for Problem 1.2 and Problem 1.3 for all sufficiently large \(d\).
**Theorem 1.4** (Approximate multipoint evaluation - informal).: _There is a deterministic algorithm for approximate-MME (Problem 1.2) that runs in time_
\[((Nm+d^{m})t)^{1+o(1)}\,\]
_for all sufficiently large \(d,t\) and all \(m\)._
**Theorem 1.5** (Exact multipoint evaluation - informal).: _There is a deterministic algorithm for exact-MME over rational numbers (Problem 1.3) that runs in time_
\[((Nm+d^{m})s)^{1+o(1)}\]
_for all sufficiently large \(d,s\) and all \(m\)._
Theorem 1.4 is a generalization (by scaling coefficients) of Theorem 1.1 of Moroz in the sense that it handles an arbitrarily large number of variables. Interestingly, our proof is _not_ an extension of the proof of Theorem 1.1 to larger number of variables. It relies on a different set of ideas and appears to be conceptually different from the proof of Moroz [13]. Moroz's algorithm relies on geometric ideas, and does not involve any modular arithmetic, whereas ours crucially relies on various reductions from an instance of MME (approximate or exact) over rational, real or complex numbers to instances of MME over finite fields. In fact, a generalization of Moroz's univariate algorithm to higher dimensions is not immediately clear to us, and would be interesting to understand.
As discussed in the introduction, while measuring the complexity of algorithms for MME over the field of rational numbers in terms of the number of bit operations, the dependence of the running time on the bit complexity of the output, as in Theorem 1.5, is quite natural and essentially unavoidable. However, the fact that Theorem 1.5 takes the bit complexity of the output as a part of its input does not seem very natural and desirable. It would be very interesting to have an algorithm for exact-MME over rationals that does not need a bound on the output complexity as a part of the input, but runs in time nearly-linear in the input and output bit complexity.
### Overview of the proofs
In this section, we outline the main ideas in the proofs of Theorem 1.4 and Theorem 1.5. For this discussion, we assume for simplicity that the input is over the field of rational numbers, and the field constants in the input are given to us exactly. The ideas here generalize to the setting of real inputs (for approximate-MME) by some clean and simple properties of approximation oracles.
#### 1.3.1 A naive first attempt
We start by setting up some notation. Let \(f\in\mathbb{Q}[\mathbf{x}]\) be an \(m\) variate polynomial of degree at most \((d-1)\) in each variable and let \(\mathbf{a}_{1},\mathbf{a}_{2},\ldots,\mathbf{a}_{N}\in\mathbb{Q}^{m}\) be the input points of interest. For now, let us assume that our goal is to output the value of \(f\) on each \(\mathbf{a}_{i}\) exactly. We are also given the positive integer \(t\) such that the numerator and the denominator of each of the field constants in the input, and the output are at most \(2^{t}\).
From the prior work of Bhargava, Ghosh, Guo, Kumar and Umans [1] we have fast algorithms for MME over all finite fields. Therefore, a natural strategy for solving MME over rational numbers is to somehow reduce the problem over rationals to instances of the problem over finite fields, and use the known algorithms for this problem over finite fields to solve these instances. A first step towards such a reduction would be to clear all the denominators in the input instance by taking their LCM and obtain an instance of MME over the ring of integers, and then work modulo a large enough prime (or several small enough primes if needed for efficiency reasons), thereby reducing to instances of MME over finite fields. However, this seemingly natural approach runs into fundamental issues even for the simpler setting where each evaluation point
has integer coordinates, and the only rational numbers appear in the coefficients of the polynomial \(f\). We now briefly elaborate on this issue.
Let us consider an input instance where every denominator in the coefficient vector of \(f\) is a distinct prime. For instance, we can get such an instance where each of the first \(d^{m}\) primes appears as a denominator of some coefficient of \(f\). Note that the input bit complexity parameter \(t\) needs to be at most \(\operatorname{poly}(\log d,m)\) for this case. Since the length of this coefficient vector is \(d^{m}\), the LCM of these denominators is a natural number that is at least as large as the product of the first \(d^{m}\) primes, which is at least \(2^{d^{m}}\), and hence has bit complexity at least \(d^{m}\). Thus, if we clear out the denominators of the coefficients of \(f\) to obtain a polynomial \(\hat{f}\) with integer coefficients, each of the coefficients of \(\hat{f}\) can have bit complexity as large as \(d^{m}\). In this case, the total bit complexity of the coefficient vector of \(\hat{f}\) is at least \(d^{2m}\), which is roughly quadratic in the original input size, and thus, any algorithm obtained via this approach will have prohibitively large time complexity.
In both our algorithms for approximate-MME and exact-MME, we indeed crucially rely on the algorithms for MME over finite fields due to Bhargava et al [1]. However, this reduction is somewhat delicate and needs some care. On the way we rely on some well known tools from computational algebra and number theory, like fast Chinese remaindering, fast rational reconstruction, Kronecker and inverse Kronecker maps. Perhaps a bit surprisingly, our algorithm for exact-MME uses the algorithm for approximate-MME as a subroutine.
We now give an overview of the main ideas in these algorithms. We start with a very simple algorithm for exact-MME for the special case of integer inputs that serves as a crucial subroutine for the algorithm for approximate-MME.
#### 1.3.2 Algorithm for exact-MME over integers
For this problem, all the field elements in the input are integers and the absolute value of each of these input field elements and those in the output is at most \(2^{s}\) for a given parameter \(s\).
Figure 1: Overview of reductions

The algorithm for MME for this case simply does this computation by working modulo a large enough prime (based on the given input and output complexities), thereby giving us a reduction from the problem over integers to that over a large enough finite field. At this point, we essentially invoke the algorithm of Bhargava et al for MME over finite fields to solve this problem. One subtlety here is that as stated in their paper [1], the algorithm does not quite run in nearly-linear time due to two factors. The first issue is that the running time has a \(\operatorname{poly}(\log|\mathbb{F}|)\) term, where the degree of \(\operatorname{poly}()\) term can be strictly larger than one. The other issue is that even in terms of the dependence on \(d,m\), their algorithm is nearly-linear time only when \(m\) is growing. So, for constant \(m\), we cannot directly invoke the algorithm in [1] for our applications.
We get around both these issues using some simple ideas. To address the issue of a constant number of variables, we artificially increase the number variables, while reducing the individual degree bound by applying an inverse-Kronecker map to the polynomial. Then, to deal with the issue of dependence of the running time on the field size, we first do a lift to integers and a Chinese remaindering to reduce this problem to many such instances of MME over smaller finite fields. This is essentially the same as the reduction used by Kedlaya & Umans in [13]. To keep the running time nearly-linear, we do the Chinese remaindering using the well known nearly-linear time algorithms. The details can be found in Section3 and Section4.
#### 1.3.3 Algorithm for approximate-MME
Recall that for approximate-MME, we do not need to compute the value of the polynomial on the input points exactly, but only require the outputs to be within some error of the actual evaluations. For simplicity, let us assume that the input polynomial and the evaluation points are all rational numbers, and are given exactly. As alluded to earlier in this section, it seems difficult to simply clear the denominators (via an LCM) and reduce to the integer case since there are instances, like when the denominators are all distinct primes, where this process prohibitively blows up the size of the coefficients. However, working with approximations gives us the necessary wiggle room to make something close to this idea work.
As the first step of the algorithm, we approximate all the field constants, the coefficients of the given polynomial as well as the coordinates of the input points by truncating their decimal representation to \(k\) bits after decimal (for some sufficiently large \(k\) to be chosen carefully). Rounding a real number \(\alpha\) of absolute value at most \(1\) like this gives us a rational number \(\hat{\alpha}\) of the form \(\nicefrac{{a}}{{2^{k}}}\) for some integer \(a\) with \(|a|\leq 2^{k}\). Moreover, we have that \(|\alpha-\hat{\alpha}|<\nicefrac{{1}}{{2^{k}}}\). We now solve MME on this instance obtained after rounding. The crucial advantage now is that since all the denominators in this rounded instance are \(2^{k}\), their LCM is just \(2^{k}\), and clearing the denominator no longer incurs a prohibitive increase in the bit complexity. We now invoke the algorithm for exact-MME for integer instances described in the earlier subsection. The details can be found in Section5.
#### 1.3.4 Algorithm for exact-MME over rationals
For our algorithm for exact-MME, we start by first invoking the algorithm for approximate-MME on the instance for a sufficiently good accuracy parameter \(t\). The choice of \(t\) depends upon the output bit complexity that is given to us as a part of the input. From the guarantees on the output of the approximate-MME algorithm, we know that the approximate-MME outputs rational numbers that are at most \(\nicefrac{{1}}{{2^{t}}}\) away from the true evaluations. If we can somehow recover the true evaluations from these approximations, we would be done! What we have here are instances of the following problem: our goal is to find a hidden rational number, denoted by \(\nicefrac{{a}}{{b}}\) (the true evaluation) and we have access to another rational number, denoted by \(A/B\) (an approximation to the true evaluation), with the guarantee that \(\left|\nicefrac{{A}}{{B}}-\nicefrac{{a}}{{b}}\right|<\nicefrac{{1}}{{2^{t}}}\) and \(|A|,|B|<2^{O(t)}\). Crucially, we also have a parameter \(s\) (given to us as a part of the input) and a guarantee that \(|a|,|b|<2^{s}\).
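To make the last reconstruction step concrete, here is a minimal sketch (our own illustration, not the paper's implementation) of recovering a hidden rational \(a/b\) with \(|a|,|b|<2^{s}\) from a sufficiently close approximation, by scanning the convergents of its continued fraction; all helper names are ours.

```python
from fractions import Fraction

def convergents(x: Fraction):
    """Yield the continued-fraction convergents of a non-negative rational x."""
    p_prev, p = 1, int(x)
    q_prev, q = 0, 1
    yield Fraction(p, q)
    frac = x - int(x)
    while frac != 0:
        x = 1 / frac
        a = int(x)                       # next partial quotient
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield Fraction(p, q)
        frac = x - a

def reconstruct(approx: Fraction, s: int):
    """Return the last convergent of `approx` whose numerator and denominator are < 2**s."""
    best = None
    for c in convergents(approx):
        if abs(c.numerator) < 2 ** s and c.denominator < 2 ** s:
            best = c
        else:
            break
    return best

# Toy usage: hide 22/7 and hand the procedure only a 40-bit approximation of it.
hidden = Fraction(22, 7)
approx = Fraction(round(hidden * 2 ** 40), 2 ** 40)
print(reconstruct(approx, s=6))          # Fraction(22, 7)
```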
This is essentially an instance of rational number reconstruction, which is a well-studied and classical problem of interest in computational algebra and number theory. We rely on these results (essentially in a black-box manner), and in particular the notion and properties of continued fractions to solve this problem efficiently. We observe that our choice of the parameter \(t\) (as a function of \(s\)) implies that \(\nicefrac{{a}}{{b}}\) is a _convergent_ (a rational number obtained by a truncation of the continued fraction representation of \(\nicefrac{{A}}{{B}}\)). This observation along with some of the properties of convergents lets us find \(\nicefrac{{a}}{{b}}\) in nearly-linear time given \(\nicefrac{{A}}{{B}}\). The details can be found in Section6.
## 2 Preliminaries
### Notation
* We will use boldface letters \(\mathbf{a},\mathbf{b}\) etc. for finite-dimensional vectors. We will also use this to denote tuples of variables \(\mathbf{x}=(x_{1},\ldots,x_{n})\) etc. Usually the dimension of the vectors would be clear from context.
* For _exponent vectors_\(\mathbf{e}=(e_{1},\ldots,e_{n})\in\mathbb{Z}_{\geq 0}^{n}\) and a vector \(\mathbf{x}=(x_{1},\ldots,x_{n})\), we will use \(\mathbf{x^{e}}\) to denote the monomial \(x_{1}^{e_{1}}\cdots x_{n}^{e_{n}}\).
* For a real number \(\alpha\), we use \(\left\lfloor\alpha\right\rceil\) to denote the closest integer to \(\alpha\). When \(\alpha=a+\frac{1}{2}\) for some integer \(a\), \(\left\lfloor\alpha\right\rceil\) is defined as \(a\).
### Useful inequalities
**Lemma 2.1** (Bounds on binomial series).: _For \(d\in\mathbb{N}\) and \(\varepsilon>0\) with \(|\varepsilon|<\nicefrac{{1}}{{d^{2}}}\), we have_
\[1+d\varepsilon\leq(1+\varepsilon)^{d}\leq 1+d\varepsilon+d^{2}\varepsilon^{2}.\]
Proof.: The inequalities are clearly true for \(d=1,2\), so for the rest of this discussion, we assume without loss of generality that \(d\geq 3\).
For any \(i\geq 3\), we have \(\binom{d}{i}\leq\binom{d}{2}\cdot d^{i-2}\). Hence,
\[\left|\sum_{i=3}^{d}\binom{d}{i}\varepsilon^{i}\right|\leq\sum_{i=3}^{d}\binom{d} {i}\left|\varepsilon^{i}\right|\leq\binom{d}{2}\cdot\varepsilon^{2}\cdot\sum_{i= 1}^{d-2}|d\varepsilon|^{i}<\binom{d}{2}\cdot\varepsilon^{2}\]
where the last inequality uses \(\varepsilon<\nicefrac{{1}}{{d^{2}}}\). Therefore,
\[(1+\varepsilon)^{d}=1+d\varepsilon+\binom{d}{2}\varepsilon^{2}+\sum_{i=3}^{d} \binom{d}{i}\varepsilon^{i}\geq 1+d\varepsilon\]
and
\[(1+\varepsilon)^{d} =1+d\varepsilon+\binom{d}{2}\varepsilon^{2}+\sum_{i=3}^{d} \binom{d}{i}\varepsilon^{i}\] \[\leq 1+d\varepsilon+\binom{d}{2}\varepsilon^{2}+\binom{d}{2} \varepsilon^{2}\] \[\leq 1+d\varepsilon+d^{2}\varepsilon^{2}.\qed\]
### Kronecker map for base-\(d\)
The Kronecker map is a commonly used tool used to perform a variable reduction without changing the underlying sparsity. This map is defined formally as follows.
**Definition 2.2** (Kronecker map for base-\(d\)).: _The \(c\)-variate Kronecker map for base-\(d\), denoted by \(\Phi_{d,m;c}\), maps \(cm\)-variate polynomials to \(c\)-variate polynomials via_
\[\Phi_{d,m;c}(f(x_{1,1},\ldots,x_{1,m},\ldots,x_{c,1},\ldots,x_{c,m}))=f\left(y_{1},y_{1}^{d},y_{1}^{d^{2}},\ldots,y_{1}^{d^{m-1}},\ldots,y_{c},y_{c}^{d},y_{c}^{d^{2}},\ldots,y_{c}^{d^{m-1}}\right).\]
_If \(f\) is a polynomial of individual degree less than \(d\), then the monomial \(\mathbf{x}_{1}^{\mathbf{e}_{1}}\cdots\mathbf{x}_{c}^{\mathbf{e}_{c}}\) is mapped to the monomial \(y_{1}^{e_{1}}\cdots y_{c}^{e_{c}}\) where \(\mathbf{e}_{i}\) is the base-\(d\) representation of \(e_{i}\)._
_In the same spirit, we define the inverse Kronecker map, denoted by \(\Phi_{d,m;c}^{-1}\), that maps a \(c\)-variate polynomial of individual degree less than \(d^{m}\) to a \(cm\)-variate polynomial of individual degree less than \(d\), given by extending the following map linearly over monomials:_
\[\Phi_{d,m;c}^{-1}(y_{1}^{e_{1}}\cdots y_{c}^{e_{c}})=\mathbf{x}_{1}^{\mathbf{e}_{1}}\cdots\mathbf{x}_{c}^{\mathbf{e}_{c}}\]
_where \(\mathbf{x}_{i}=(x_{i,1},\ldots,x_{i,m})\) and \(\mathbf{e}_{i}\in\{0,\ldots,d-1\}^{m}\) is the base-\(d\) representation of \(e_{i}<d^{m}\)._
_Associated with the inverse Kronecker map, we also define \(\psi_{d,m;c}:\mathbb{F}^{c}\rightarrow\mathbb{F}^{cm}\) that acts on points, given by_
\[\psi_{d,m;c}:(a_{1},\ldots,a_{c})\mapsto(a_{1},a_{1}^{d},\ldots,a_{1}^{d^{m-1}},\ldots,a_{c},a_{c}^{d},\ldots,a_{c}^{d^{m-1}}).\]
The inverse Kronecker map is defined so that we have the following observation.
**Observation 2.3** (Kronecker maps and evaluations).: _If \(f(x_{1},\ldots,x_{c})\) is a polynomial of individual degree less than \(d^{m}\), then for any \(\mathbf{a}\in\mathbb{F}^{c}\), we have that \(\Phi_{d,m;c}^{-1}(f)(\psi_{d,m;c}(\mathbf{a}))=f(\mathbf{a})\). _
The above observation would be useful to _trade-off_ degree with the number of variables as needed in some of our proofs.
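As an illustration of Observation 2.3, the following small Python sketch (ours, for the case \(c=1\) and with exact integer arithmetic) spells out the inverse Kronecker map and the point map \(\psi\) and checks that the two evaluations agree.

```python
import random

def base_d_digits(e, d, m):
    """Base-d representation of e < d**m, least-significant digit first."""
    digits = []
    for _ in range(m):
        digits.append(e % d)
        e //= d
    return digits

def inverse_kronecker_eval(coeffs, d, m, a):
    """Evaluate Phi^{-1}(f) at psi(a) for a univariate f (the case c = 1).

    coeffs[e] is the coefficient of x**e in f, which has degree < d**m;
    psi(a) = (a, a**d, ..., a**(d**(m-1))).
    """
    point = [a ** (d ** j) for j in range(m)]
    total = 0
    for e, fe in enumerate(coeffs):
        term = fe
        for xj, ej in zip(point, base_d_digits(e, d, m)):
            term *= xj ** ej
        total += term
    return total

# Check Observation 2.3 on a random integer example.
d, m = 3, 4                                        # individual degree < d**m = 81
coeffs = [random.randint(-5, 5) for _ in range(d ** m)]
a = 2
direct = sum(fe * a ** e for e, fe in enumerate(coeffs))
assert inverse_kronecker_eval(coeffs, d, m, a) == direct
print("Observation 2.3 verified:", direct)
```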
### Computing all primes less than a given number
The classical Prime Number Theorem [10, 11] asserts that there are \(\Theta(N/\log N)\) prime numbers less than \(N\), asymptotically. We can compute all prime numbers less than \(N\) in deterministic \(\tilde{O}(N)\) time.
```
Input : An integer \(N>1\)
Output : All prime numbers less than \(N\).
1  Initialise an array \(S\) indexed with \(2,3,\ldots,N\) with all values set to True.
2  for \(i\gets 2\) to \(\sqrt{N}\) do
3      if \(S[i]\) is True then
4          Set \(j\gets 2i\).
5          while \(j\leq N\) do
6              Set \(S[j]\) to False.
7              \(j\gets j+i\).
8  return \(\{i\ :\ S[i]\text{ is True}\}\).
```
**Algorithm 1** PrimeSieve
**Lemma 2.4** (Computing primes less than a given number).: _There is a deterministic algorithm (Algorithm 1) that computes the set of all primes less than \(N\) in deterministic time \(\tilde{O}(N)\)._
### Fast Chinese Remaindering
We also rely on the following two theorems concerning fast algorithms for questions related to the Chinese Remainder Theorem (CRT). We refer the reader to the book by von zur Gathen and Gerhard [1] for proofs.
**Lemma 2.5** (Fast-CRT: moduli computation).: _There is an algorithm that, when given as input coprime positive integers \(p_{1},\ldots,p_{r}\) and a positive integer \(N\) with \(N<\prod p_{i}<2^{c}\), computes the remainders \(a_{i}=N\bmod p_{i}\) for \(i=1,\ldots,r\) in deterministic \(\tilde{O}(c)\) time._
For proof of the above lemma see [1, Theorem 10.24].
**Lemma 2.6** (Fast-CRT: reconstruction).: _There is an algorithm that, when given as input coprime positive integers \(p_{1},\ldots,p_{r}\) and \(a_{1},\ldots,a_{r}\) such that \(0\leq a_{i}<p_{i}\) outputs the unique integer \(0\leq N<\prod p_{i}\) such that \(N=a_{i}\bmod p_{i}\) for \(i=1,\ldots,r\) in deterministic \(\tilde{O}(c)\) time where \(\prod p_{i}<2^{c}\)._
For proof of the above lemma see [1, Theorem 10.25].
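For intuition, the sketch below performs both directions of Chinese remaindering on a toy input; it uses naive quadratic-time arithmetic, whereas Lemmas 2.5 and 2.6 accomplish the same tasks in nearly-linear time.

```python
from math import prod

def crt_reconstruct(residues, moduli):
    """Naive CRT: the unique 0 <= N < prod(moduli) with N = a_i (mod p_i) for all i."""
    M = prod(moduli)
    N = 0
    for a_i, p_i in zip(residues, moduli):
        M_i = M // p_i
        N += a_i * M_i * pow(M_i, -1, p_i)   # pow(M_i, -1, p_i) is the modular inverse
    return N % M

moduli = [5, 7, 11, 13]
N = 1234
residues = [N % p for p in moduli]             # the direction of Lemma 2.5
assert crt_reconstruct(residues, moduli) == N  # the direction of Lemma 2.6
print(residues, "->", crt_reconstruct(residues, moduli))
```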
### Input model for arbitrary precision reals
Throughout this section, we will assume that all real numbers that are "inputs" (namely the coefficients of the polynomial and the coordinates of the evaluation points) are in the range \((-1,1)\) and are provided via _approximation oracles_ with the following guarantees:
**Definition 2.7** (Approximation oracle).: _The approximation oracle for \(\alpha\in(-1,1)\) can provide the sign of \(\alpha\) in \(O(1)\) time, and on input \(k\) returns an integer \(b_{k}\in[-2^{k},2^{k}]\) satisfying_
\[\left|\alpha-\nicefrac{{b_{k}}}{{2^{k}}}\right|<\nicefrac{{1}}{{2^{k}}}.\]
_We will use \(\lfloor\alpha\rceil_{k}\) to refer to the fraction \(\nicefrac{{b_{k}}}{{2^{k}}}\) obtained from the approximation oracle._
_The running time of the approximation oracle is the time taken to output \(b_{k}\). We will say that the approximation oracle is efficient if the running time is \(\tilde{O}(k)\). \(\Diamond\)_
Such efficient approximation oracles can be obtained for any "natural" real number from any sufficiently convergent series. For algebraic reals of the form \(\sqrt{2}\) etc., the standard Taylor series is sufficient. Even for "natural" transcendental numbers, we may have such approximation oracles:
\[e =1+\frac{1}{1!}+\frac{1}{2!}+\cdots,\] \[\pi =4\cdot\tan^{-1}(1)\] \[=4\cdot\left(\tan^{-1}(\nicefrac{{1}}{{2}})+\tan^{-1}(\nicefrac{ {1}}{{3}})\right)\] \[=4\cdot\left(\nicefrac{{1}}{{2}}-\frac{\nicefrac{{1}}{{2^{3}}}}{ 3}+\frac{\nicefrac{{1}}{{2^{5}}}}{5}-\cdots\right.\qquad+\qquad\nicefrac{{1}} {{3}}-\frac{\nicefrac{{1}}{{3^{3}}}}{3}+\frac{\nicefrac{{1}}{{3^{5}}}}{5}- \cdots\right).\]
Any explicit series with \(\tilde{O}(k)\) terms of the series having an error less than \(\nicefrac{{1}}{{2^{k}}}\) would qualify as an efficient approximation oracle for the purposes of the approximate-MME algorithm over reals.
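For instance, an efficient approximation oracle in the sense of Definition 2.7 for the algebraic number \(\sqrt{2}/2\in(-1,1)\) can be built from exact integer square roots; the sketch below is our own illustration.

```python
from math import isqrt

def sqrt2_over_2_oracle(k: int) -> int:
    """Return b_k with |sqrt(2)/2 - b_k / 2**k| < 1/2**k (cf. Definition 2.7).

    Since sqrt(2)/2 * 2**k = sqrt(2**(2k - 1)), one exact integer square root suffices.
    """
    return isqrt(2 ** (2 * k - 1))

for k in (4, 16, 64):
    print(k, sqrt2_over_2_oracle(k) / 2 ** k)   # converges to 0.7071067811...
```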
**Lemma 2.8** (Repeated exponentiation for approximation oracles).: _Given an approximation oracle \(A\) for a real number \(\alpha\in(-1,1)\) with running time \(T(k)\), and any positive integer \(D\), we can build an approximation oracle \(A^{D}\) for \(\alpha^{D}\) with running time \(T(k+O(\log D))+\tilde{O}(k\log D)\)._
Proof.: On an input \(k\), we wish to find an integer \(r_{k}\in[-2^{k},2^{k}]\) such that \(\left|\alpha^{D}-\nicefrac{{r_{k}}}{{2^{k}}}\right|<\nicefrac{{1}}{{2^{k}}}\).
Let us first consider the case when \(D\) is even. Let \(t=k+3\) and suppose we recursively compute an integer \(a_{t}\in[-2^{t},2^{t}]\) such that \(\left|\alpha^{D/2}-\nicefrac{{a_{t}}}{{2^{t}}}\right|<\nicefrac{{1}}{{2^{t}}}\). Let \(\delta=\nicefrac{{a_{t}}}{{2^{t}}}-\alpha^{D/2}\).
\[\left|\alpha^{D}-\nicefrac{{a_{t}^{2}}}{{2^{2t}}}\right|=\left|(\alpha^{D/2})^{2}-(\alpha^{D/2}+\delta)^{2}\right|<4\cdot\nicefrac{{1}}{{2^{t}}}\leq\nicefrac{{1}}{{2^{k+1}}}\]
Thus, if \(R_{k}=r_{k}\cdot 2^{2t-k}\) is the multiple of \(2^{2t-k}\) that is closest to \(a_{t}^{2}\), then
\[\left|\alpha^{D}-r_{k}/2^{k}\right| \leq\left|\alpha^{D}-a_{t}^{2}/2^{2t}\right|+\left|a_{t}^{2}/2^{2t}-r_{k}2^{2t-k}/2^{2t}\right|\] \[<\nicefrac{{1}}{{2^{k+1}}}+2^{2t-k-1}/2^{2t}\leq\nicefrac{{1}}{{2^{k}}}.\]
If \(D\) is odd, then let \(t=k+4\). We use the approximation oracle \(A\) to obtain an integer \(b_{t}\in[-2^{t},2^{t}]\) such that \(|\alpha-\nicefrac{{b_{t}}}{{2^{t}}}|<\nicefrac{{1}}{{2^{t}}}\), and recursively compute an integer \(a_{t}\in[-2^{t},2^{t}]\) such that \(\left|\alpha^{(D-1)/2}-a_{t}/2^{t}\right|<\nicefrac{{1}}{{2^{t}}}\). Then,
\[\left|\alpha^{D}-a_{t}^{2}b_{t}/2^{3t}\right| \leq|\alpha|\left|\left(\alpha^{(D-1)/2}\right)^{2}-a_{t}^{2}/2^{2t}\right|+\left|a_{t}^{2}/2^{2t}\right|\left|\alpha-\nicefrac{{b_{t}}}{{2^{t}}}\right|\] \[<4\cdot\nicefrac{{1}}{{2^{t}}}+\nicefrac{{1}}{{2^{t}}}\leq\nicefrac{{1}}{{2^{k+1}}}.\]
Similarly, if \(R_{t}=r_{t}\cdot 2^{3t-k}\) is the multiple of \(2^{3t-k}\) that is closest to \(a_{t}^{2}b_{t}\), then
\[\left|\alpha^{D}-r_{t}/2^{k}\right|<\nicefrac{{1}}{{2^{k}}}.\]
If \(\mathcal{T}(k,D)\) is the running time of this algorithm (namely Algorithm 2) to compute \(r_{k}\in[-2^{k},2^{k}]\) such that \(\left|\alpha^{D}-r_{k}/2^{k}\right|\leq\nicefrac{{1}}{{2^{k}}}\), then we have
\[\mathcal{T}(k,D) \leq\mathcal{T}(k+4,D/2)+\tilde{O}(k)\] \[\leq\mathcal{T}(k+O(\log D),1)+\tilde{O}(k\log D)\] \[=T(k+O(\log D))+\tilde{O}(k\log D).\qed\]
```
Input : An approximation oracle \(A\) for a real number \(\alpha\), an integer \(D>0\), and an integer \(k>0\).
Output : An integer \(r_{k}\in[-2^{k},2^{k}]\) such that \(\left|\alpha^{D}-r_{k}/2^{k}\right|<\nicefrac{{1}}{{2^{k}}}\).
1  if \(D=1\) then
2      return \(r_{k}=A(k)\).
3  if \(D\) is even then
4      Let \(t=k+3\).
5      Compute \(a_{t}=\textsc{Approximation Oracle-Powering}(A,\nicefrac{{D}}{{2}},t)\).
6      return \(\left\lfloor a_{t}^{2}/2^{2t-k}\right\rceil\).
7  else
8      Let \(t=k+4\).
9      Compute \(b_{t}=A(t)\) and \(a_{t}=\textsc{Approximation Oracle-Powering}(A,\nicefrac{{(D-1)}}{{2}},t)\).
10     return \(\left\lfloor a_{t}^{2}\cdot\nicefrac{{b_{t}}}{{2^{3t-k}}}\right\rceil\).
```
**Algorithm 2** Approximation Oracle-Powering
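A direct transcription of Algorithm 2 into Python might look as follows (a sketch only; \(A\) is any function behaving as in Definition 2.7, and nearest-integer rounding is done with shifts on exact integers):

```python
def oracle_power(A, D: int, k: int) -> int:
    """Given an oracle A for alpha (as in Definition 2.7) and D >= 1, return an integer
    r_k with |alpha**D - r_k / 2**k| < 1/2**k, following the recursion of Algorithm 2."""
    def nearest(num: int, shift: int) -> int:
        # round(num / 2**shift), assuming shift >= 1
        return (num + (1 << (shift - 1))) >> shift

    if D == 1:
        return A(k)
    if D % 2 == 0:
        t = k + 3
        a_t = oracle_power(A, D // 2, t)
        return nearest(a_t * a_t, 2 * t - k)
    t = k + 4
    b_t = A(t)
    a_t = oracle_power(A, (D - 1) // 2, t)
    return nearest(a_t * a_t * b_t, 3 * t - k)

# Example (assuming the sqrt2_over_2_oracle sketch from above is in scope):
# print(oracle_power(sqrt2_over_2_oracle, D=10, k=50) / 2**50, (2**0.5 / 2)**10)
```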
We also note that this notion of approximation oracles naturally extends to representation of
complex numbers. Here, each complex number is given by two such oracles, corresponding to the real and the imaginary part respectively.
## 3 Revisiting MME over prime fields
We recall the result of Bhargava, Ghosh, Guo, Kumar and Umans [1].
**Theorem 3.1** (Fast multivariate multipoint evaluation over finite fields [1]).: _There is a deterministic algorithm that when given as input the coefficient vector of an \(m\) variate polynomial \(f\) of degree less than \(d\) in each variable over some finite field \(\mathbb{F}\), and \(N\) points \(\mathbf{a}_{1},\mathbf{a}_{2},\ldots,\mathbf{a}_{N}\in\mathbb{F}^{m}\) outputs \(f(\mathbf{a}_{1}),f(\mathbf{a}_{2}),\ldots,f(\mathbf{a}_{N})\) in time_
\[(d^{m}+Nm)^{1+o(1)}\cdot\operatorname{poly}(m,d,\log|\mathbb{F}|),\]
_for all \(m\in\mathbb{N}\) and sufficiently large \(d\in\mathbb{N}\)._
The above running time is not _quite_ nearly-linear in the input considered as bits due to the factor of \(\operatorname{poly}(\log|\mathbb{F}|)\). Also, in the setting when \(m\) is a constant, we can no longer absorb \(\operatorname{poly}(d)\) within \((Nm+d^{m})^{o(1)}\). However, we show below that for the case of prime fields, we can get around these issues and obtain the following nearly linear-time bound.
**Theorem 3.2** (Nearly-linear time MME over prime fields).: _There is a deterministic algorithm (namely Algorithm 3) that, when given as input the coefficient vector of an \(m\)-variate polynomial \(f\) of degree less than \(d\) in each variable over a prime field \(\mathbb{F}_{p}\), and \(N\) points \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\in\mathbb{F}^{m}\), outputs \(f(\mathbf{a}^{(1)}),\ldots,f(\mathbf{a}^{(N)})\) in time_
\[((d^{m}+Nm)\cdot\log p)^{1+o(1)}\]
_for all \(m\in\mathbb{N}\) and sufficiently large \(d\in\mathbb{N}\)._
We first discuss how we handle the two cases when the number of variables is constant and growing with the input respectively in the following two subsections and then prove Theorem 3.2.
### Handling cases when the number of variables is too small
As mentioned above, in the setting when the number of variables is too small (say \(m\leq c\) for a constant \(c\)), we may no longer have that \(\operatorname{poly}(d)=d^{o(m)}\). However, we can use the inverse-Kronecker map (Definition 2.2) to trade-off degree with the number of variables.
To make the parameters more informative, we rename them and let \(f\) be a \(c\)-variate polynomial of individual degree less than \(D\), and let \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\in\mathbb{F}_{p}^{c}\) be the points at which we wish to evaluate the polynomial.
Let \(d=\lfloor\log D\rfloor\) and \(m\) be the smallest integer such that \(d^{m}>D\). Note that \(d^{m}>D\geq d^{m-1}\) and \(m=\Theta(\nicefrac{{\log D}}{{\log\log D}})\). If \(f(x_{1},\dots,x_{c})=\sum_{\mathbf{e}}f_{\mathbf{e}}\cdot\mathbf{x^{e}}\), define the polynomial \(g(y_{1,1},\dots,y_{c,m})=\Phi_{d,m;c}^{-1}(f)\), as defined in Definition 2.2.
For all \(i\in[N]\), define \(\widetilde{\mathbf{a}^{(i)}}=\psi_{d,m;c}(\mathbf{a}^{(i)})\), as defined in Definition 2.2. Then, from Observation 2.3, we have that \(f(\mathbf{a}^{(i)})=g(\widetilde{\mathbf{a}^{(i)}})\) for all \(i\in[N]\). The following observation shows that \(\widetilde{\mathbf{a}^{(i)}}\) can be computed efficiently from \(\mathbf{a}^{(i)}\).
**Observation 3.3**.: _Given \(\mathbf{a}\in\mathbb{F}_{p}^{c}\), the point \(\widetilde{\mathbf{a}}:=\psi_{d,m}^{(c)}(\mathbf{a})\in\mathbb{F}_{p}^{cm}\) can be computed in \(\operatorname{poly}(d,m,c)\cdot\tilde{O}(\log p)\) time._
Proof.: The running time bound follows from repeated exponentiation as \(a^{d^{k}}\bmod p=(a^{d^{k-1}}\bmod p)^{d}\bmod p\) and the fact that additions and multiplications modulo \(p\) can be performed in \(\tilde{O}(\log p)\) time.
Thus, the task of computing \(f(\mathbf{a}^{(1)}),\dots,f(\mathbf{a}^{(N)})\) reduces to the task of computing the evaluations \(g(\widetilde{\mathbf{a}^{(1)}}),\dots,g(\widetilde{\mathbf{a}^{(N)}})\) where \(\widetilde{\mathbf{a}^{(i)}}=\psi_{d,m}^{(c)}(\mathbf{a}^{(i)})\). Also, the reduction runs in time \(((D^{c}+Nc)\cdot\log p)^{1+o(1)}\) since \(d,m=D^{o(1)}\).
### When individual degree and number of variables are moderately growing
We return to the familiar variable convention of \(f(x_{1},\dots,x_{m})\in\mathbb{F}_{p}[x_{1},\dots,x_{m}]\) with degree in each variable less than \(d\). From the previous subsection, we may assume without loss of generality that \(d,m=\omega(1)\) and hence \(\operatorname{poly}(d,m)=(d^{m}+Nm)^{o(1)}\). Let \(f\) be written as a sum of monomials as follows.
\[f(x_{1},\dots,x_{m})=\sum_{\mathbf{e}}f_{\mathbf{e}}\cdot x_{1}^{e_{1}}\cdots x_{m}^{e_{m}}.\]
Interpreting the above as a polynomial over integers with each coefficient in \(\{0,1,\dots,p-1\}\), and for any \(\mathbf{a}\in\{0,\dots,p-1\}^{m}\), the integer \(f(\mathbf{a})\) is bounded by \(d^{m}\cdot p\cdot p^{dm}\). The idea is to use Chinese Remainder Theorem to reduce the problem to MME over smaller prime fields.
Proof of Theorem 3.2.: The correctness of Algorithm 3 is evident.
As for the running time, Lines 1 to 3 takes \((d^{m}+Nm)^{1+o(1)}\) time by Observation 3.3 and reduces to the case when \(m\geq\log\log d\). In this case, Lines 4 and 5 require \(\tilde{O}(\tilde{L})\) time (Lemma 2.4), which is \(\tilde{O}(\log p)\cdot\operatorname{poly}(d,m)\).
Using Lemma 2.5, we have that Lines 6 to 9 require time \((d^{m}+Nm)\cdot\tilde{O}(\log M)=((d^{m}+Nm)\cdot\log p)^{1+o(1)}\).
From Theorem 3.1, we have that Line 13 runs in time \((d^{m}+Nm)^{1+o(1)}\cdot\operatorname{poly}(d,m,\log p_{i})\), and since \(p_{i}<\tilde{O}(\tilde{L})=\tilde{O}(dm\log p)\), the entire loop in Lines 10 to 13 takes time \((d^{m}+Nm)^{1+o(1)}\cdot\tilde{O}(\log p)=((d^{m}+Nm)\log p)^{1+o(1)}\).
And finally, from Lemma 2.6 we have that the entire loop in Lines 14 to 15 takes time \((Nm)\cdot\tilde{O}(\log M)=((d^{m}+Nm)\cdot\log p)^{1+o(1)}\). Hence, Algorithm 3 runs in time \(((d^{m}+Nm)\log p)^{1+o(1)}\).
## 4 Exact-MME over integers

**Input:** An integer \(s>0\), a polynomial \(f(x_{1},\ldots,x_{m})\in\mathbb{Z}[x_{1},\ldots,x_{m}]\) of individual degree less than \(d\), given as a list of \(d^{m}\) integer coefficients, a set of points \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\in\mathbb{Z}^{m}\) with each coordinate of magnitude at most \(2^{s}\), with the guarantee that all coefficients of \(f\), coordinates of \(\mathbf{a}^{(i)}\)'s, and evaluations \(f(\mathbf{a}^{(i)})\) are bounded in magnitude by \(2^{s}\).
**Output:** Integers \(b_{1},\ldots,b_{N}\) that are the evaluations, i.e. \(b_{i}=f(\mathbf{a}^{(i)})\) for \(i\in[N]\).
**Theorem 4.1** (Exact-MME over integers).: _There is a deterministic algorithm (namely Algorithm 4) that on input as mentioned above returns the required output as mentioned above and runs in deterministic time \(((d^{m}+Nm)\cdot s)^{1+o(1)}\) for all \(m\in\mathbb{N}\) and sufficiently large \(d\in\mathbb{N}\)._
The main idea is to use the Chinese Remainder Theorem and reduce to the case of MME over finite fields. Since we wish to obtain a nearly-linear time algorithm, we would once again need to use Chinese Remainder Theorem implemented in nearly-linear time (Lemmas 2.5 and 2.6) and make use of the nearly-linear time algorithm for MME over prime fields (Theorem 3.2).
```
Input : \(f(x_{1},\ldots,x_{m})\in\mathbb{Z}[x_{1},\ldots,x_{m}]\) and \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\in\mathbb{Z}^{m}\), and an integer \(s>0\) such that all coefficients of \(f\), coordinates of \(\mathbf{a}^{(i)}\) and evaluations \(f(\mathbf{a}^{(i)})\) have magnitude bounded by \(2^{s}\).
Output : Evaluations \(b_{i}=f(\mathbf{a}^{(i)})\) for \(i\in[N]\).
1  Compute the first \(s\) prime numbers \(\{p_{1},\ldots,p_{s}\}\).
2  Let \(L\leq s\) be the smallest integer such that \(p_{1}\cdots p_{L}=:M>2^{s+1}\).
3  for \(\mathbf{e}\in\{0,\ldots,d-1\}^{m}\) do
4      Compute \(f_{\mathbf{e}}^{(\ell)}=f_{\mathbf{e}}\bmod p_{\ell}\) for \(\ell\in[L]\) using Lemma 2.5.
5  for \(i\in[N],k\in[m]\) do
6      Compute \(a_{i,k,\ell}=\mathbf{a}_{k}^{(i)}\bmod p_{\ell}\) for \(\ell\in[L]\) using Lemma 2.5.
7  for \(\ell\in[L]\) do
8      Let \(f^{(\ell)}(x_{1},\ldots,x_{m})=\sum_{\mathbf{e}}f_{\mathbf{e}}^{(\ell)}\mathbf{x}^{\mathbf{e}}\in\mathbb{F}_{p_{\ell}}[\mathbf{x}]\).
9      Let \(\mathbf{a}^{(i,\ell)}=(a_{i,1,\ell},\ldots,a_{i,m,\ell})\in\mathbb{F}_{p_{\ell}}^{m}\) for each \(i\in[N]\).
10     Compute \(b_{i,\ell}=f^{(\ell)}(\mathbf{a}^{(i,\ell)})\) for all \(i\in[N]\) using Algorithm 3.
11 for \(i\in[N]\) do
12     Compute the unique \(b_{i}\in[-\nicefrac{{M}}{{2}},\nicefrac{{M}}{{2}}]\) such that \(b_{i}=b_{i,\ell}\bmod p_{\ell}\) for all \(\ell\in[L]\), using Lemma 2.6.
13 return \(\{b_{i}\::\:i\in[N]\}\).
```
**Algorithm 4** ExactMME-integers
Proof of Theorem 4.1.: We are guaranteed that \(\left|f(\mathbf{a}^{(i)})\right|<2^{s}\) for all \(i\in[N]\). Hence, by the Chinese Remainder Theorem, it is sufficient to compute \(f(\mathbf{a})\bmod p_{i}\) for each \(i\in[L]\) since \(p_{1}\cdots p_{L}>2^{s+1}\). Hence, the correctness of Algorithm 4 is evident. As for the running time, we will do an analysis very similar to the analysis for Algorithm 3.
Using Lemma 2.5, we have that Lines 1 to 2 require time \(\tilde{O}(s)\). By the Prime Number Theorem [12, 13], we also have that each \(p_{i}=\tilde{O}(s)\) and hence \(p_{1}\cdots p_{L}<2^{s+1}\cdot\tilde{O}(s)\).
From Theorem 3.1, we have that Line 10 runs in time \(((d^{m}+Nm)\cdot\log p_{\ell})^{1+o(1)}\), and hence the entire loop in Lines 7 to 10 takes time \(((d^{m}+Nm)(\sum_{\ell}\log p_{\ell}))^{1+o(1)}=((d^{m}+Nm)\cdot s)^{1+o(1)}\).
And finally, from Lemma 2.6 we have that the entire loop in Lines 11 to 12 takes time \((Nm)\cdot\tilde{O}(\log M)=((d^{m}+Nm)\cdot s)^{1+o(1)}\). Hence, Algorithm 4 runs in time \(((d^{m}+Nm)\cdot s)^{1+o(1)}\) as claimed.
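The structure of Algorithm 4 is easy to mirror in code. The sketch below is purely illustrative: naive per-prime evaluation stands in for the fast finite-field MME of Algorithm 3, and naive prime generation and CRT stand in for Lemmas 2.4 to 2.6, so it does not achieve the stated running time.

```python
from itertools import count
from math import prod

def primes_with_product_above(bound):
    """Collect primes (by trial division) until their product exceeds `bound`."""
    primes, M = [], 1
    for n in count(2):
        if all(n % p for p in primes):
            primes.append(n)
            M *= n
            if M > bound:
                return primes, M

def exact_mme_integers(coeffs, points, s):
    """coeffs: dict mapping exponent tuples to ints; points: list of integer tuples.

    All inputs and all evaluations are assumed bounded in magnitude by 2**s,
    as in Theorem 4.1.
    """
    primes, M = primes_with_product_above(2 ** (s + 1))
    results = []
    for a in points:
        residues = []
        for p in primes:
            # Stand-in for Algorithm 3: naive evaluation of f mod p.
            v = sum(fe * prod(pow(ai % p, ei, p) for ai, ei in zip(a, e))
                    for e, fe in coeffs.items()) % p
            residues.append(v)
        # Naive CRT reconstruction into the symmetric range (-M/2, M/2].
        b = 0
        for r, p in zip(residues, primes):
            Mi = M // p
            b += r * Mi * pow(Mi, -1, p)
        b %= M
        if b > M // 2:
            b -= M
        results.append(b)
    return results

# Toy usage: f(x, y) = 3*x*y - 2*y**2 at (2, -1) and (0, 5).
coeffs = {(1, 1): 3, (0, 2): -2}
print(exact_mme_integers(coeffs, [(2, -1), (0, 5)], s=10))  # [-8, -50]
```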
**Remark 4.2**.: If we are only given that all coefficients of \(f\) and all coordinates of the points are integers bounded in magnitude by \(2^{s}\) with no a-priori bound on the bit complexity of the evaluations, a naive bound on the size of evaluations is
\[|f(\mathbf{a})|\leq d^{m}\cdot 2^{s}\cdot 2^{sdm}\leq 2^{sdm+s+m\log d}.\]
Thus, we may use \(s^{\prime}=(sdm+s+m\log d)\) in Theorem 4.1 to get the time complexity bounded by \(((d^{m}+Nm)\cdot(sdm))^{1+o(1)}\). If \(m\) is a growing function, then the output complexity is nearly-linear in the input complexity since \(\operatorname{poly}(d)=(d^{m}+Nm)^{o(1)}\). But, in the regime when \(m\) is a constant, this is super-linear in the input size \((d^{m}+Nm)\cdot s\) because of the additional factor of \(d\). However, a slightly worse running time is to be expected in this case since the output complexity is \(\Omega(N\cdot sdm)\) in the worst case. \(\Diamond\)
## 5 Approximate-MME over reals
Throughout this section, we will assume that all real numbers as part of the input are in the interval \((-1,1)\).
**Remark 5.1** (On the restriction on absolute value of constants).: _Given any arbitrary polynomial \(f(\mathbf{x})\in\mathbb{R}[\mathbf{x}]\), we can scale the polynomial by the largest coefficient to obtain \(\tilde{f}\) and run the approximate-MME algorithm on this scaled polynomial. If we have \(|\tilde{f}(\mathbf{a})-\beta_{i}|\leq\varepsilon\), then we immediately have \(|f(\mathbf{a})-(\max|f_{\mathbf{e}}|)\,\beta_{i}|\leq\varepsilon\cdot(\max|f_{\mathbf{e}}|)\). Thus, we may assume without loss of generality that all coefficients of \(f\) have absolute value at most \(1\)._
_However, the assumption that coordinates of all evaluation points have absolute value bounded by one is not without loss of generality but is well-motivated nevertheless. Even in the case of univariate integer polynomials, the evaluation \(f(\mathbf{a})\) could be as large as \(|\mathbf{a}|^{d}\) where \(|\mathbf{a}|=\max|\mathbf{a}_{i}|\). Therefore, the output bit-complexity for MME is potentially \(O(d\cdot N)\) which is super-linear in the input bit-complexity._
_The restriction of insisting that evaluation points consist of coordinates with absolute value at most \(1\) ensures that the evaluations are never prohibitively large in magnitude, thereby making the quest for approximate-MME in nearly-linear time more meaningful. \(\Diamond\)_
### The problem statement and algorithm
We now state the precise problem statement and our results for approximate-MME over the field of real numbers.
* **Input:** A polynomial \(f(x_{1},\ldots,x_{m})\in\mathbb{R}_{(-1,1)}[x_{1},\ldots,x_{m}]\) of individual degree less than \(d\), given as a list of \(d^{m}\) efficient approximation oracles for each coefficient, a set of points \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\in(-1,1)^{m}\) each of whose coordinates are also provided via efficient approximation oracles, and an accuracy parameter \(t\).
* **Output:** Rational numbers \(b_{1},\ldots,b_{N}\) such that \(\left|f(\mathbf{a}^{(i)})-b_{i}\right|<\nicefrac{{1}}{{2^{t}}}\) for all \(i\in[N]\).
**Theorem 5.2** (approximate-MME over reals).: _There is a deterministic algorithm (namely Algorithm 5) that on input as mentioned above returns the required output as mentioned above and runs in time \(((d^{m}+Nm)\cdot t)^{1+o(1)}\) for all \(m\in\mathbb{N}\) and sufficiently large \(d\in\mathbb{N}\)._
The rest of the section is devoted to the proof of the above theorem.
High-level idea:The algorithm is a suitable reduction to the task of exact-MME over integers (Theorem 4.1). We will replace each of the real numbers by appropriately chosen approximations of the form \(\nicefrac{{a_{i}}}{{2^{k}}}\) (for a suitable large \(k=O(t)\)) so that the evaluations of the perturbed polynomial at the perturbed points are not too far from the original evaluations. Since we now have all denominators of the form \(2^{k}\), we can _clear_ the denominators and reduce to the case of computing MME over integers.
As expected, there are some subtleties that need to be addressed to make sure that the entire algorithm runs in nearly-linear time.
### Rounding coefficients of \(f\)
Let \(k\) be a parameter to be chosen shortly. Define the polynomial \(\left\lfloor f\right\rceil_{k}\) as
\[\left\lfloor f\right\rceil_{k}(x_{1},\ldots,x_{m}):=\sum_{\mathbf{e}}\left\lfloor f _{\mathbf{e}}\right\rceil_{k}\cdot\mathbf{x^{e}}.\]
**Observation 5.3** (Error due to rounding coefficients of \(f\)).: _For any \(\mathbf{a}\in(-1,1)^{m}\), we have that_
\[\left|f(\mathbf{a})-\left\lfloor f\right\rceil_{k}(\mathbf{a})\right|\leq \nicefrac{{1}}{{2^{k-m\log d}}}.\]
Proof.: \[f(\mathbf{a})-\left\lfloor f\right\rceil_{k}(\mathbf{a}) =\sum_{\mathbf{e}}(f_{\mathbf{e}}-\left\lfloor f_{\mathbf{e}} \right\rceil_{k})\cdot\mathbf{a^{e}}\] \[\implies\left|f(\mathbf{a})-\left\lfloor f\right\rceil_{k}( \mathbf{a})\right| \leq\sum_{\mathbf{e}}\left|f_{\mathbf{e}}-\left\lfloor f_{\mathbf{e}} \right\rceil_{k}\right|\cdot\left|\mathbf{a^{e}}\right|\leq d^{m}\cdot \nicefrac{{1}}{{2^{k}}}.\qed\]
### Rounding points
Let \(k\) be a parameter to be chosen shortly. For any \(\mathbf{a}=(a_{1},\ldots,a_{m})\in(-1,1)^{m}\), define \(\lfloor\mathbf{a}\rceil_{k}\) as
\[\lfloor\mathbf{a}\rceil_{k}:=\left(\left\lfloor a_{1}\right\rceil_{k},\ldots, \left\lfloor a_{m}\right\rceil_{k}\right).\]
**Observation 5.4** (Error due to rounding points).: _Let \(\mathbf{e}=(e_{1},\ldots,e_{m})\in\{0,\ldots,d-1\}^{m}\) and \(\mathbf{a}\in(-1,1)^{m}\). Suppose \(k\in\mathds{N}\) such that \(2^{k}>4d^{2}m^{2}\). Then,_
\[\left|\mathbf{a}^{\mathbf{e}}-\lfloor\mathbf{a}\rceil_{k}^{\mathbf{e}}\right|\leq\nicefrac{{1}}{{2^{k-\log(4dm)}}}\]
Proof.: Note that all \(a_{i}\in(-1,1)\). Let \(\delta_{i}=\left\lfloor a_{i}\right\rceil_{k}-a_{i}\) for \(i\in[m]\); we have that \(\left\lvert\delta_{i}\right\rvert\leq\nicefrac{{1}}{{2^{k}}}\leq\nicefrac{{ 1}}{{4d^{2}m^{2}}}\). Hence,
\[\left\lfloor a_{1}\right\rceil_{k}^{e_{1}}\cdots\left\lfloor a_{m }\right\rceil_{k}^{e_{m}} =(a_{1}+\delta_{1})^{e_{1}}\cdots(a_{m}+\delta_{m})^{e_{m}}\] \[=a_{1}^{e_{1}}\cdots a_{m}^{e_{m}}+\sum_{\begin{subarray}{c}j_{1 }\leq e_{1},\ldots,j_{m}\leq e_{m}\\ \text{not all }j_{i}=0\end{subarray}}\binom{e_{1}}{j_{1}}\cdots\binom{e_{m}}{j_{m}} \cdot\prod_{i=1}^{m}\left(a_{i}^{e_{i}-j_{i}}\cdot\delta_{i}^{j_{i}}\right)\]
\[\implies\left|\left\lfloor a_{1}\right\rceil_{k}^{e_{1}}\cdots\left\lfloor a_{m}\right\rceil_{k}^{e_{m}}-a_{1}^{e_{1}}\cdots a_{m}^{e_{m}}\right| \leq\sum_{\begin{subarray}{c}j_{1}\leq e_{1},\ldots,j_{m}\leq e_{m}\\ \text{not all }j_{i}=0\end{subarray}}\binom{e_{1}}{j_{1}}\cdots\binom{e_{m}}{j_{m}}\cdot\prod_{i=1}^{m}\left|\delta_{i}\right|^{j_{i}}\] \[\leq\prod_{i=1}^{m}\left(1+\left|\delta_{i}\right|\right)^{d}-1\] \[\leq(1+2d(\nicefrac{{1}}{{2^{k}}}))^{m}-1\leq 4dm(\nicefrac{{1}}{{2^{k}}}).\qed\]
### Handling the case when the number of variables is too small
To make the variables suggestive, we will rename them and say \(f(x_{1},\ldots,x_{c})\) is a \(c\)-variate polynomial in \(\mathds{R}_{(-1,1)}[x_{1},\ldots,x_{c}]\) with degree in each variable less than \(D\). We wish to evaluate the polynomial on points \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\in(-1,1)^{c}\).
Once again, let \(d=\lfloor\log D\rfloor\) and let \(m\) be the smallest integer such that \(d^{m}>D\). Note that \(d^{m}>D\geq d^{m-1}\) and \(m=\Theta(\log D/\log\log D)\). Define the polynomial \(g(y_{1,1},\ldots,y_{c,m})=\Phi_{d,m;c}^{-1}(f)\), as defined in Definition 2.2. Define \(\widehat{\mathbf{a}^{(i)}}=\psi_{d,m;c}(\mathbf{a}^{(i)})\). From Observation 2.3, we have that \(f(\mathbf{a}^{(i)})=g(\widehat{\mathbf{a}^{(i)}})\) for all \(i\in[N]\).
Even if \(\mathbf{a}^{(i)}\) consisted of only rational numbers, unlike the setting in Theorem 3.2 where we could use Observation 3.3, the rational numbers in \(\widehat{\mathbf{a}^{(i)}}\) have much larger bit complexity due to the exponentiation. However, by Lemma 2.8, we have efficient approximation oracles for \(\widehat{\mathbf{a}^{(i)}}\) and that suffices for our algorithm.
### Reduction to exact-MME over integers
From the previous subsection, we may now assume without loss of generality that we are working with an \(m\)-variate polynomial \(f(x_{1},\ldots,x_{m})\) of individual degree less than \(d\), with both \(m,d\) as growing parameters, and wish to evaluate this polynomial on \(N\) points \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\in(-1,1)^{m}\), with all coefficients and coordinates provided via approximation oracles running in time \(\tilde{O}(k+m\log d)\). We wish to compute integers \(b_{1},\ldots,b_{N}\) such that \(\left|f(\mathbf{a}^{(i)})-\nicefrac{{b_{i}}}{{2^{t}}}\right|<\nicefrac{{1}}{{2^{t}}}\). We now describe the algorithm (Algorithm 5).
```
Input : An \(m\)-variate polynomial \(f(x_{1},\ldots,x_{m})\in\mathbb{R}_{(-1,1)}[\mathbf{x}]\) of individual degree less than \(d\), and points \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\in\mathbb{R}_{(-1,1)}^{m}\) (with all real numbers provided via approximation oracles) and an integer \(t>0\). Output : Integers \(b_{1},\ldots,b_{N}\) such that \(\left|f(\mathbf{a}^{(i)})-\nicefrac{{b_{i}}}{{2^{i}}}\right|<\nicefrac{{1}}{{ 2^{i}}}\) for all \(i\in[N]\).
1if\(m<\log\log d\)then
2 Let \(d^{\prime}=\lfloor\log d\rfloor\) and \(m^{\prime}\) be the smallest integer such that \((d^{\prime})^{m^{\prime}}>d\).
3 Replace \(f\) by \(\Phi_{d^{\prime},m^{\prime};m}^{-1}(f)\) and each \(\mathbf{a}^{(i)}\) by \(\psi_{d^{\prime},m^{\prime};m}(\mathbf{a}^{(i)})\).
4 Let \(k_{1}=\lceil t+m\log d+2\rceil\) and \(k_{2}=\lceil t+m\log d+\log(4md)+2\rceil\); let \(k=\max(k_{1},k_{2})=k_{2}\).
5 Compute \(\left|f\right|_{k_{1}}=\sum_{\mathbf{e}}g_{\mathbf{e},k_{1}}/\nicefrac{{2^{ 1}}}{{2^{1}}}\cdot\mathbf{x}^{\mathbf{e}}=\nicefrac{{1}}{{2^{1}}}\cdot\sum_{ \mathbf{e}}g_{\mathbf{e},k_{1}}\cdot\mathbf{x}^{\mathbf{e}}\).
6for\(i\in[N]\)do
7 Compute \(\left|\mathbf{a}^{(i)}\right|_{k_{2}}=\left(\nicefrac{{a_{i},k_{2}}}{{2^{k_ {2}}}},\ldots,\nicefrac{{a_{i},m,k_{2}}}{{2^{k_{2}}}}\right)=\nicefrac{{1}}{{ 2^{k_{2}}}}\cdot(a_{i,1,k_{2}},\ldots,a_{i,m,k_{2}})\).
8 Let \(\widehat{\mathbf{a}^{(i)}}=(a_{i,1,k_{2}},\ldots,a_{i,m,k_{2}})\).
9 Compute the polynomial \(G(x_{1},\ldots,x_{m})\) defined as \[G(x_{1},\ldots,x_{n})=\sum_{\mathbf{e}\in\{0,\ldots,d-1\}^{m}}g_{\mathbf{e},k _{1}}\cdot 2^{(k_{2}dm)-k_{2}|\mathbf{e}|}\cdot\mathbf{x}^{\mathbf{e}}\] where \(|\mathbf{e}|\) refers to the sum of the coordinates (i.e., the degree of the monomial \(\mathbf{x}^{\mathbf{e}}\)).
10 Run Algorithm 4 (Exact-MME-integers) with inputs \(\left(G,\left(\widehat{\mathbf{a}^{(1)}},\ldots,\widehat{\mathbf{a}^{(N)}} \right),s=3kdm\right)\) to obtain \(B_{1},\ldots,B_{N}\) such that, for all \(i\in[N]\), we have \[B_{i}=G(\widehat{\mathbf{a}^{(i)}}).\] Let \(b_{i}=\left\lfloor\nicefrac{{B_{i}}}{{2^{k_{1}+k_{2}dm-t}}}\right\rfloor\) for each \(i\in[N]\). return\((b_{1},\ldots,b_{N})\).
```
**Algorithm 5**approximate-MME-Reals
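To make the scaling in the algorithm concrete, the following Python sketch (ours, purely illustrative; the function name, the toy parameters, and the crude choices of \(k_{1},k_{2}\) are not from the text, and exact big-integer arithmetic stands in for the modular evaluation of Algorithm 4) mimics the reduction for a single point:

```python
from fractions import Fraction
from math import prod

# Illustrative single-point version of the scaling step (hypothetical names;
# k1, k2 are crude stand-ins for the choices in the algorithm).
def approximate_eval(coeffs, point, d, m, t):
    # coeffs: dict mapping exponent tuples (entries < d, length m) to floats in (-1, 1)
    # point:  list of m floats in (-1, 1); returns b/2^t close to f(point)
    k1 = t + m * d.bit_length() + 2
    k2 = k1 + (4 * m * d).bit_length()
    g = {e: round(c * 2**k1) for e, c in coeffs.items()}      # k1-bit rounding of f
    a_hat = [round(x * 2**k2) for x in point]                 # k2-bit rounding of the point
    # B = G(a_hat) = 2^(k1 + k2*d*m) * round(f)(round(point)), an exact integer
    B = sum(ge * 2**(k2 * d * m - k2 * sum(e)) *
            prod(ai**ei for ai, ei in zip(a_hat, e)) for e, ge in g.items())
    b = round(Fraction(B, 2**(k1 + k2 * d * m - t)))          # rescale to t bits
    return Fraction(b, 2**t)

# toy check: f(x1, x2) = 0.5*x1*x2^2 - 0.25*x2 at the point (0.3, -0.7)
approx = approximate_eval({(1, 2): 0.5, (0, 1): -0.25}, [0.3, -0.7], d=3, m=2, t=30)
assert abs(float(approx) - (0.5 * 0.3 * 0.49 + 0.25 * 0.7)) < 2**-30
```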
Proof of correctness:Without loss of generality, we may assume that \(d,m\) are growing parameters (from Lines 1 to 3).
Note that for any \(\mathbf{a}^{(i)}\), we have
\[\left|f(\mathbf{a}^{(i)})-\lfloor f\rceil_{k_{1}}(\lfloor\mathbf{a}^{(i)}\rceil_{k_{2}})\right|\leq\left|f(\mathbf{a}^{(i)})-\lfloor f\rceil_{k_{1}}(\mathbf{a}^{(i)})\right|+\left|\lfloor f\rceil_{k_{1}}(\mathbf{a}^{(i)})-\lfloor f\rceil_{k_{1}}(\lfloor\mathbf{a}^{(i)}\rceil_{k_{2}})\right|\]
\[\leq\nicefrac{{1}}{{2^{t+2}}}+\nicefrac{{1}}{{2^{t+2}}}\leq\nicefrac{{1}}{{2^{t+1}}}.\]
where the last inequality uses Observation 5.3 and Observation 5.4 with our choice of \(k_{1}\) and \(k_{2}\). Thus, it suffices to compute \(\left\lfloor f\right\rceil_{k_{1}}\left(\left\lfloor\mathbf{a}^{(i)}\right \rceil_{k_{2}}\right)\) for each \(i\in[N]\). The polynomial \(\left\lfloor f\right\rceil_{k_{1}}\) is computed in Line 5 and the points \(\left\lfloor\mathbf{a}^{(i)}\right\rceil_{k_{2}}\) are computed in Lines 6 to 8. Let \(\widehat{\mathbf{a}^{(i)}}=2^{k_{2}}\lfloor\mathbf{a}^{(i)}\rceil_{k_{2}}\in( -2^{k_{2}},2^{k_{2}})^{m}\). Since each coefficient of \(2^{k_{1}}\cdot\left\lfloor f\right\rceil_{k_{1}}\) is bounded in magnitude by \(2^{k_{1}}\), we have
\[\left|G(\widehat{\mathbf{a}^{(i)}})\right|=\left|\sum_{\mathbf{e}\in\{0,\ldots d -1\}^{m}}g_{\mathbf{e},k_{1}}\cdot 2^{(k_{2}dm)-k_{2}\left\lvert\mathbf{e} \right\rvert}\cdot\widehat{\mathbf{a}^{(i)}}^{\mathbf{e}}\right|\leq d^{m} \cdot 2^{k_{1}}\cdot 2^{k_{2}dm}\cdot 2^{k_{2}dm}\leq 2^{3kdm}.\]
From the definition of \(G(x_{1},\ldots,x_{m})\), note that
\[G(\widehat{\mathbf{a}^{(i)}}) =\sum_{\mathbf{e}\in\{0,\ldots,d-1\}^{m}}g_{\mathbf{e},k_{1}}\cdot 2 ^{(k_{2}dm)-k_{2}\left\lvert\mathbf{e}\right\rvert}\cdot\widehat{\mathbf{a}^{ (i)}}^{\mathbf{e}}\] \[=\sum_{\mathbf{e}\in\{0,\ldots,d-1\}^{m}}g_{\mathbf{e},k_{1}}\cdot 2 ^{(k_{2}dm)}\cdot\left(\nicefrac{{1}}{{2^{k_{2}}}}\cdot\widehat{\mathbf{a}^{ (i)}}\right)^{\mathbf{e}}\] \[=2^{(k_{2}\cdot d\cdot m)}\cdot\sum_{\mathbf{e}\in\{0,\ldots,d-1 \}^{m}}g_{\mathbf{e},k_{1}}\cdot\lfloor\mathbf{a}^{(i)}\rceil_{k_{2}}^{ \mathbf{e}}\] \[=2^{k_{1}+k_{2}dm}\cdot\left\lfloor f\right\rceil_{k_{1}}( \lfloor\mathbf{a}^{(i)}\rceil_{k_{2}}).\]
Since Theorem 4.1 correctly computes the evaluations of \(G(\mathbf{x})\) on the \(\widehat{\mathbf{a}^{(i)}}\)'s, we have, for each \(i\in[N]\),
\[\nicefrac{{1}}{{2^{k_{1}+k_{2}dm}}}\cdot G\left(\widehat{\mathbf{a}^{(i)}} \right)=\left\lfloor f\right\rceil_{k_{1}}(\lfloor\mathbf{a}^{(i)}\rceil_{k_{2 }})=\nicefrac{{B_{i}}}{{2^{k_{1}+k_{2}dm}}}.\]
Finally, if \(b_{i}=\left\lfloor\nicefrac{{B_{i}}}{{2^{k_{1}+k_{2}dm-t}}}\right\rceil\), then
\[\left|\nicefrac{{b_{i}}}{{2^{t}}}-\nicefrac{{B_{i}}}{{2^{k_{1}+k_{2}dm}}}\right|=\nicefrac{{1}}{{2^{t}}}\cdot\left|b_{i}-\nicefrac{{B_{i}}}{{2^{k_{1}+k_{2}dm-t}}}\right|\leq\nicefrac{{1}}{{2^{t+1}}}.\]
Hence,
\[\left|f(\mathbf{a}^{(i)})-\nicefrac{{b_{i}}}{{2^{t}}}\right|\leq\left|f(\mathbf{a}^{(i)})-\lfloor f\rceil_{k_{1}}(\lfloor\mathbf{a}^{(i)}\rceil_{k_{2}})\right|+\left|\lfloor f\rceil_{k_{1}}(\lfloor\mathbf{a}^{(i)}\rceil_{k_{2}})-\nicefrac{{b_{i}}}{{2^{t}}}\right|\leq\nicefrac{{1}}{{2^{t}}}.\]
Running time analysis:After Lines 1 to 3, we may assume that \(d,m=\omega(1)\) and all coefficients of \(f\) and coordinates of points are provided via approximation oracles with running time \(\tilde{O}(r+m\log d)\) to compute an \(r\)-bit approximation.
Lines 5 to 8 overall takes time
\[(d^{m}+Nm)\cdot\tilde{O}(k+O(m\log d))=(d^{m}+Nm)\cdot\tilde{O}(t+O(m\log d))= ((d^{m}+Nm)\cdot t)^{1+o(1)}.\]
Computing the coefficients of \(G(\mathbf{x})\) takes time \((d^{m})\cdot\tilde{O}(kdm)\). By Theorem4.1, Line10 takes time
\[((d^{m}+Nm)\cdot 3kdm)^{1+o(1)}=((d^{m}+Nm)\cdot t)^{1+o(1)}.\]
Therefore, Algorithm5 takes \(((d^{m}+Nm)\cdot t)^{1+o(1)}\) time overall.
This completes the proof of Theorem5.2.
## 6 Exact-MME over rationals with known output complexity
We now use our algorithm for approximate-MME over real numbers to obtain a fast algorithm for exact-MME over the field of rational numbers. We start by formally stating the precise problem that we solve and then build upon some necessary preliminaries that we need for our algorithm.
### The problem statement
Input:A polynomial \(f(x_{1},\ldots,x_{m})\in\mathbb{Q}_{(-1,1)}[x_{1},\ldots,x_{m}]\) of individual degree less than \(d\), given as a list of \(d^{m}\) coefficients, a list of points \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\in\mathbb{Q}_{(-1,1)}^{m}\), and an integer parameter \(s>0\) such that all rational numbers in the coefficients of \(f\), the coordinates of points and evaluations \(f(\mathbf{a}^{(i)})\) are expressible as rational numbers of the form \(\nicefrac{{p}}{{q}}\) with \(\left|p\right|,\left|q\right|<2^{s}\).
Output:Integers \(b_{1},\ldots,b_{N},c_{1},\ldots,c_{N}\) such that \(f(\mathbf{a}^{(i)})=\nicefrac{{b_{i}}}{{c_{i}}}\) for all \(i\in[N]\).
**Theorem 6.1** (Exact-MME over rationals).: _There is a deterministic algorithm (namely Algorithm7) that on input as mentioned above returns the required output as mentioned above and runs in time \(((d^{m}+Nm)\cdot s)^{1+o(1)}\) for all \(m\in\mathbb{N}\) and sufficiently large \(d\in\mathbb{N}\)._
Main idea:The main idea would be a reduction to approximate-MME (Theorem5.2) followed by a _rational number reconstruction_ step. If we can compute \(f(\mathbf{a}^{(i)})\) to a reasonable degree of accuracy (depending on the output guarantee \(s\)), we can recover the rational number exactly from it. Before we present the algorithm for the above theorem, we discuss the notion of continued fractions which would be the key to reconstructing the rational number of interest.
### Continued fractions, rational approximations, and extended Euclid's algorithm
**Definition 6.2** (Continued fractions).: _A finite continued fraction expressed by a sequence of integers \([q_{1},\ldots,q_{t}]\) computes the rational number_
\[q_{1}+\frac{1}{q_{2}+\frac{1}{\ddots+\frac{1}{q_{t-1}+\frac{1}{q_{t}}}}}.\]
_An infinite continued fraction expressed by an infinite sequence of integers \([q_{1},q_{2},\ldots]\) satisfying4\(q_{2},\ldots,q_{n}>0\) is said to compute a real number \(\alpha\) if_
Footnote 4: Traditionally, continued fractions with this condition are called ‘simple’ continued fractions but we will drop this qualifier as we will only deal with continued fractions with this additional constraint.
\[\alpha=q_{1}+\frac{1}{q_{2}+\frac{1}{q_{3}+\frac{1}{\ddots}}}.\]
_in the sense that \(\lim_{n\to\infty}|\alpha-[q_{1},\ldots,q_{n}]|=0\). \(\Diamond\)_
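As a small illustration (ours, not part of the text), a finite continued fraction can be evaluated exactly from the innermost term outward; in Python:

```python
from fractions import Fraction

# Evaluate a finite continued fraction [q1, ..., qt] from the innermost term outward.
def eval_continued_fraction(q):
    val = Fraction(q[-1])
    for qi in reversed(q[:-1]):
        val = qi + 1 / val
    return val

assert eval_continued_fraction([3, 7, 16]) == Fraction(355, 113)
```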
We note some basic properties of continued fractions which may be found in most standard texts (cf. Schmidt [13, Chapter 1]).
**Proposition 6.3** (Uniqueness of continued fractions (Lemma 4C, 4D in [13])).: _Every real number has a unique continued fraction expansion up to the following exceptions:_
1. _If_ \(\alpha\) _is an integer, then there are exactly two continued fraction representations for_ \(\alpha\) _namely_ \([\alpha]\) _and_ \([\alpha-1,1]\)_._
2. _If_ \(\alpha\) _is a non-integral rational number, then there are exactly two continued fraction representations for_ \(\alpha\)_: one of the form_ \([q_{1},\ldots,q_{n}]\) _with_ \(q_{n}\geq 2\)_, and_ \([q_{1},\ldots,q_{n}-1,1]\) _being the other._
3. _If_ \(\alpha\) _is irrational, then there is exactly one continued fraction representation for_ \(\alpha\)_._
**Definition 6.4** (Convergents).: _For a real number \(\alpha\) with \([q_{1},q_{2},\ldots]\) being the unique5 continued fraction representation, the rational number \(a_{i}/b_{i}\) corresponding to the \(i\)-th prefix \([q_{1},\ldots,q_{i}]\) is called the \(i\)-th convergent of \(\alpha\). \(\Diamond\)_
Footnote 5: As a convention, for rational numbers, we will only consider continued fraction representations of the first form described in Proposition 6.3 Items 1 and 2.
**Lemma 6.5** (Properties of convergents).: _Suppose \(\left\{a_{i}/b_{i}\right\}_{i}\) be the convergents of a real number \(\alpha=[q_{1},q_{2},\ldots]\). Then_
1. _For any_ \(n\geq 3\)_, we have_ \[a_{n} =q_{n}a_{n-1}+a_{n-2},\] \[b_{n} =q_{n}b_{n-1}+b_{n-2}.\] _In particular, the denominator sequence_ \(\left\{b_{n}\right\}_{n\geq 2}\) _is increasing._
2. _For all_ \(n\geq 1\)_,_ \[\frac{a_{n+1}}{b_{n+1}}-\frac{a_{n}}{b_{n}}=\frac{(-1)^{n-1}}{b_{n}(q_{n+1}b_{n }+b_{n-1})}=\frac{(-1)^{n-1}}{b_{n}b_{n+1}}.\]
3. _For any_ \(n\geq 1\)_, unless_ \(\alpha=\frac{a_{n}}{b_{n}}\)_, we have_ \[\frac{1}{b_{n}(b_{n}+b_{n+1})}\leq\left|\alpha-\frac{a_{n}}{b_{n}}\right|\leq \frac{1}{b_{n}b_{n+1}}.\]
4. _Suppose_ \(\nicefrac{{a}}{{b}}\) _is a rational number satisfying_ \(|\alpha-\nicefrac{{a}}{{b}}|<\nicefrac{{1}}{{2b^{2}}}\)_. Then,_ \(\nicefrac{{a}}{{b}}\) _is one of the convergents of_ \(\alpha\)_._
Proof.: Items 1, 2 and 4 are just [12, Lemma 3A, Lemma 3E, Theorem 5C] respectively.
For Item 3, if \(\alpha\neq\nicefrac{{a_{n}}}{{b_{n}}}\), we have that \(q_{n+1}\) exists. Let \(\alpha_{n+1}=[q_{n+1},\ldots]\). Then, we may abuse notation and express \(\alpha\) as the "continued fraction" \(\alpha=[q_{1},\ldots,q_{n},\alpha_{n+1}]\). Item 2 for this expression yields
\[\left|\alpha-\frac{a_{n}}{b_{n}}\right|=\frac{1}{b_{n}(\alpha_{n+1}b_{n}+b_{n- 1})}.\]
Note that \(q_{n+1}\leq\alpha_{n+1}\leq q_{n+1}+1\) and hence
\[\left|\alpha-\frac{a_{n}}{b_{n}}\right| =\frac{1}{b_{n}(\alpha_{n+1}b_{n}+b_{n-1})}\] \[\leq\frac{1}{b_{n}(q_{n+1}b_{n}+b_{n-1})}=\frac{1}{b_{n}b_{n+1}}, \quad\text{(by Item 1)}\] \[\text{and}\quad\left|\alpha-\frac{a_{n}}{b_{n}}\right| =\frac{1}{b_{n}(\alpha_{n+1}b_{n}+b_{n-1})}\] \[\geq\frac{1}{b_{n}(q_{n+1}b_{n}+b_{n-1}+b_{n})}=\frac{1}{b_{n}(b_ {n}+b_{n+1})}.\qed\]
#### Extended Euclid's Algorithm
Closely related to continued fractions is the classical Extended Euclid's Algorithm for computing the greatest common divisor of two numbers.
**Definition 6.6** (Remainder and quotient sequences).: _For a pair of integers \(a,b>0\), we define the remainder sequence \(\left\{r_{i}\right\}_{i=0,\ldots,t+1}\) and the quotient sequence \(\left\{q_{i}\right\}_{i=1,\ldots,t}\) for the pair \((a,b)\) as follows:_
* \(r_{0}=a\) _and_ \(r_{1}=b\)_,_
* _For all_ \(i\geq 1\)_, define_ \(q_{i},r_{i+1}\) _as the quotient and remainder respectively when_ \(r_{i-1}\) _is divided by_ \(r_{i}\)_. Thus,_ \[r_{i+1}=r_{i-1}\bmod r_{i}=r_{i-1}-q_{i}r_{i}.\]
* \(r_{t+1}\) _is the first element of the sequence that is equal to zero._ \(\Diamond\)__
**Observation 6.7** (Continued fractions for a rational number and quotient sequences).: _Suppose \(a,b>0\) are a pair of integers and \(\left\{q_{1},\ldots,q_{t}\right\}\) is the associated quotient sequence. Then, the continued fraction representation of the rational number \(\nicefrac{{a}}{{b}}\) is \([q_{1},\ldots,q_{t}]\):_
\[\frac{a}{b}=q_{1}+\frac{1}{q_{2}+\frac{1}{\ddots+\frac{1}{q_{t}}}}.\qed\]
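A direct way to see Observation 6.7 in action is to run the plain (quadratic-time) Euclid algorithm; the following Python sketch is ours and is not the nearly-linear-time routine of Theorem 6.8:

```python
# Quotient sequence of Definition 6.6 via the plain Euclid algorithm.
def quotient_sequence(a, b):
    qs = []
    while b:
        q, r = divmod(a, b)
        qs.append(q)
        a, b = b, r
    return qs

# Observation 6.7: the quotient sequence of (a, b) is the continued fraction of a/b.
assert quotient_sequence(355, 113) == [3, 7, 16]
```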
Computing the gcd of two given integers, and more generally computing the entire quotient sequence, can be done in deterministic nearly-linear time; this is attributed to Knuth and Schonhage (cf. Moller [14] for a complete description and a detailed history).
**Theorem 6.8** (Fast Extended Euclid Algorithm (cf. Moller [14] )).: _There is a deterministic algorithm that, on input a pair of integers \(a>b>0\) with \(a,b\leq 2^{s}\), computes the entire quotient sequence \(q_{1},\ldots,q_{t}\) for the pair \((a,b)\) in time \(\tilde{O}(s)\)._
**Corollary 6.9** (Fast computation of convergents).: _There is a deterministic algorithm that, on input a pair of integers \(M,N>0\) with \(M,N\leq 2^{s}\), and an integer \(i>0\), computes integers \(a_{i},b_{i}\) such that \(\nicefrac{{a_{i}}}{{b_{i}}}\) is the \(i\)-th convergent of the rational number \(\nicefrac{{M}}{{N}}\), with running time \(\tilde{O}(s)\)._
Proof.: Let \(q_{1},\ldots,q_{t}\) be the quotient sequence for the pair \((M,N)\), which may be computed using Theorem6.8 in \(\tilde{O}(s)\) time. By Observation6.7, this is the continued fraction representation of \(\nicefrac{{M}}{{N}}\). Thus, it is easy to note that
\[\begin{bmatrix}a_{i}&a_{i-1}\\ b_{i}&b_{i-1}\end{bmatrix}=\begin{bmatrix}q_{1}&1\\ 1&0\end{bmatrix}\ldots\begin{bmatrix}q_{i}&1\\ 1&0\end{bmatrix}\]
where \(\nicefrac{{a_{j}}}{{b_{j}}}\) is the \(j\)-th convergent. Note that \(\left|q_{j}\right|<\nicefrac{{r_{j-1}}}{{r_{j}}}\) where \(\{r_{0},\ldots,r_{t}\}\) is the associated remainder sequence and hence we have \(\left|q_{1}\cdots q_{t}\right|\leq M\leq 2^{s}\). Thus, this matrix product can be computed in \(\tilde{O}(s)\) time.
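For concreteness, the recurrence of Lemma 6.5 Item 1 (equivalently, the matrix product above) can be evaluated with a linear pass; this Python sketch is ours and does not use the fast product:

```python
# Convergents a_i/b_i of [q1, q2, ...] via the recurrence of Lemma 6.5, Item 1.
def convergents(qs):
    a_prev, a = 1, qs[0]      # conventionally a_0 = 1, b_0 = 0
    b_prev, b = 0, 1
    out = [(a, b)]
    for q in qs[1:]:
        a, a_prev = q * a + a_prev, a
        b, b_prev = q * b + b_prev, b
        out.append((a, b))
    return out

# for 355/113 = [3, 7, 16] the convergents are 3/1, 22/7 and 355/113
assert convergents([3, 7, 16]) == [(3, 1), (22, 7), (355, 113)]
```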
### Rational number reconstruction
**Lemma 6.10** (Fast rational number reconstruction).: _There is a deterministic algorithm (namely Algorithm6) that, given as input an integer parameter \(s>0\) and integers \(A,B\) with the guarantee that \(\left|B\right|<2^{2s+1}\) and there exists a unique rational number (in reduced form) \(\nicefrac{{a}}{{b}}\) with \(\left|b\right|<2^{s}\) and_
\[\left|\frac{A}{B}-\frac{a}{b}\right|<\frac{1}{2^{2s+1}},\]
_finds the integers \(a,b\) in time \(\tilde{O}(s)\)._
Proof.: The algorithm is straightforward given Corollary6.9 and Lemma6.5.
```
Input : Integers \(A,B\) and an integer parameter \(s>0\) such that \(\left|A\right|,\left|B\right|\leq 2^{2s+1}\) and there is some rational number \(\nicefrac{{a}}{{b}}\) such that \(\left|b\right|<2^{s}\) and \(\left|\nicefrac{{A}}{{B}}-\nicefrac{{a}}{{b}}\right|<\nicefrac{{1}}{{2^{2s+1}}}\). Output : The integers \(a,b\).
1 Using Theorem6.8, compute the quotient sequence \(q_{1},\ldots,q_{\ell}\) for the pair \(A,B\).
2 Using Corollary6.9 and binary search, compute the largest index \(i\) such that the \(i\)-th convergent \(\nicefrac{{a_{i}}}{{b_{i}}}\) satisfies \(\left|b_{i}\right|<2^{s}\).
3return\(a_{i},b_{i}\).
```
**Algorithm 6**Fast-Rational-Number-Reconstruction
The running time of the algorithm is clearly \(\tilde{O}(s)\) as claimed, since \(\ell=O(\log(A+B))=O(s)\) and thus we have at most \(O(\log\ell)=O(\log s)\) uses of Corollary 6.9 in Line 2.
For correctness, assume that \(\nicefrac{{A}}{{B}}\) is in its reduced form. Since we know \(b_{1}=1\), let \(i\) be the largest index such that the denominator \(b_{i}\) of the convergent \(\nicefrac{{a_{i}}}{{b_{i}}}\) satisfies \(b_{i}<2^{s}\). If this is the last convergent, then \(\nicefrac{{A}}{{B}}=\nicefrac{{a_{i}}}{{b_{i}}}\) and we are done. Thus, we may assume that \(\nicefrac{{A}}{{B}}\neq\nicefrac{{a_{i}}}{{b_{i}}}\).
Since we are given that \(\left|\nicefrac{{A}}{{B}}-\nicefrac{{a}}{{b}}\right|<\nicefrac{{1}}{{2^{2s+1}}}<\nicefrac{{1}}{{2b^{2}}}\), by Lemma 6.5 Item 4, \(\nicefrac{{a}}{{b}}\) is one of the convergents of \(\nicefrac{{A}}{{B}}\). For any \(\ell>i\), the \(\ell\)-th convergent \(\nicefrac{{a_{\ell}}}{{b_{\ell}}}\) has denominator larger than \(2^{s}\). For any \(j<i\), from Lemma 6.5 Item 3 and Item 1 we have
\[\left|\frac{A}{B}-\frac{a_{j}}{b_{j}}\right|\geq\frac{1}{b_{j}(b_{j}+b_{j+1})} >\frac{1}{2\cdot b_{i}^{2}}\geq\frac{1}{2^{2s+1}}.\]
Thus, \(\nicefrac{{a}}{{b}}\) must be the \(i\)-th convergent \(\nicefrac{{a_{i}}}{{b_{i}}}\).
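As a quick sanity check of Lemma 6.10 on a toy instance (ours), Python's `Fraction.limit_denominator` performs the same convergent-based search as Algorithm 6, though it is a library shortcut rather than the \(\tilde{O}(s)\) routine of the text:

```python
from fractions import Fraction

# Recover the hidden rational 22/7 from a (2s+1)-bit approximation of it.
s = 10
a, b = 22, 7                              # hidden rational with b < 2**s
B = 2**(2 * s + 1)
A = round(Fraction(a, b) * B)             # approximation with |A/B - a/b| < 1/2**(2s+1)
assert Fraction(A, B).limit_denominator(2**s - 1) == Fraction(a, b)
```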
### Algorithm for exact-MME over rationals
We now have all the necessary ingredients to describe the algorithm to prove Theorem 6.1.
```
Input : A polynomial \(f(x_{1},\ldots,x_{m})\in\mathbb{Q}_{(-1,1)}[\mathbf{x}]\), points \(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\in\mathbb{Q}_{(-1,1)}^{m}\), with all rational numbers provided via the numerator and denominator, and an integer parameter \(s\) such that all numerators and denominators of the coefficients of \(f\), coordinates of the points, and evaluations \(f(\mathbf{a}^{(i)})\) are at most \(2^{s}\). Output : Integers \(b_{1},\ldots,b_{N}\) and \(c_{1},\ldots,c_{N}\) such that \(f(\mathbf{a}^{(i)})=\nicefrac{{b_{i}}}{{c_{i}}}\) for all \(i\in[N]\).
1 Using the numerators and denominators for the required approximation oracles, run approximate-MME-Reals\(\left(f,\left\{\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(N)}\right\},t=2s+1\right)\) (Algorithm 5) to obtain integers \((B_{1},\ldots,B_{N})\) such that \[\left|f(\mathbf{a}^{(i)})-\frac{B_{i}}{2^{t}}\right|<\frac{1}{2^{t}}=\frac{1} {2^{2s+1}}.\]
2for\(i\in[N]\)do
3 Run Fast-Rational-Number-Reconstruction\(\left(B_{i},2^{2s+1},s\right)\) (Algorithm 6) to get \(b_{i},c_{i}\) with \(|c_{i}|<2^{s}\) such that \[\left|\frac{B_{i}}{2^{2s+1}}-\frac{b_{i}}{c_{i}}\right|<\frac{1}{2^{2s+1}}.\]
4return\(\left(b_{1},\ldots,b_{N}\right),\left(c_{1},\ldots,c_{N}\right).\)
```
**Algorithm 7**Exact-MME-Rationals
The correctness of Algorithm 7 is evident from Theorem 5.2 and Lemma 6.10. Theorem 5.2 asserts that Algorithm 5 correctly provides the required approximations for the evaluations, and
Lemma 6.10 asserts that Algorithm 6 reconstructs the correct rational number.
As for running time, given the numerators and denominators, we can build approximation oracles for each rational number with nearly-linear running time. Thus, Line 1 takes \(((d^{m}+Nm)\cdot s)^{1+o(1)}\) time and the loop in Lines 2 to 3 takes \(\tilde{O}(N\cdot s)\) time. Thus, the total running time is \(((d^{m}+Nm)\cdot s)^{1+o(1)}\) as claimed.
This completes the proof of Theorem 6.1.
## 7 Approximate-MME over complex numbers
In this section, we briefly discuss the extension of Theorem 5.2 to the field of complex numbers. As discussed in the preliminaries, the field constants in this case are given by two approximation oracles, one for the real part of the complex number, and one for the imaginary part. The ideas needed for this extension, on top of the ideas in the proof of Theorem 3.1 are quite standard and were introduced by Kedlaya & Umans [11] for designing fast algorithms for MME for finite fields that are not prime. This approach also found a subsequent application in the work of Bhargava et al. [2], again in the context of dealing with non-prime finite fields while designing algorithms for MME. In the interest of keeping this discussion succinct and to avoid repetition, we outline the main steps needed for this generalization, but skip the formal details. The structure of the algorithm closely follows that of Algorithm 5, with some additional care.
As in the proof of Algorithm 5, we first make sure that the number of underlying variables is growing. Next, we round each of the field constants (both the real and the imaginary parts) by rational numbers with denominator \(2^{k}\) for some sufficiently large integer \(k\) to be chosen later. At this point, we have introduced some error (which turns out to be small if \(k\) is sufficiently large), but have reduced the problem instance over \(\mathbb{C}\) to an instance over \(\mathbb{Q}[\omega]\), where \(\omega\) is a square root of \(-1\). Moreover, all the denominators of the field constants in the problem are of the form \(2^{k}\). We now clear out the denominators, as in Algorithm 5, and get an instance of MME where the constants in the problem are from the ring \(\mathbb{Z}[\omega]\). At this point, we replace \(\omega\) in the constants in the input by a new formal variable \(z\), and instead of working over the ring \(\mathbb{Z}[\omega]\), we work over the ring \(\mathbb{Z}[z]/\langle z^{2}+1\rangle\). Note that this is sufficient, since given a solution to MME over this ring, we can obtain a solution to the original problem by just replacing \(z\) by \(\omega\). Now, the idea is to just invoke the algorithm for exact MME over integers (Algorithm 4) for this problem instance. However, we cannot quite do that directly since the instance at hand is over \(\mathbb{Z}[z]/\langle z^{2}+1\rangle\) and not over \(\mathbb{Z}\) as desired. Nevertheless, we proceed as in Algorithm 4 by picking sufficiently many primes \(p_{1},p_{2},\ldots,p_{s}\) and reducing the problem instance over \(\mathbb{Z}[z]/\langle z^{2}+1\rangle\) modulo these primes to obtain instances over the rings \(\mathbb{F}_{p_{i}}[z]/\langle z^{2}+1\rangle\) for every \(i\). In Algorithm 4, we just invoked the result of [2] over prime fields at this stage, and then combined the outputs using fast Chinese remaindering. However, in this case, what we have are instances over the finite rings \(\mathbb{F}_{p_{i}}[z]/\langle z^{2}+1\rangle\). But this does not turn out to be an issue as the algorithm of Bhargava et al
continues to work over such rings, and indeed the results and proofs in [11] are stated in this form. One final thing to note is that the small optimizations that we make over the results in [11] in Section 3, to ensure that the dependence of the running time on the field size is nearly-linear, continue to hold for the extension rings that we have here. Once we have solved all the instances over \(\mathbb{F}_{p_{i}}[z]/\langle z^{2}+1\rangle\), we can recover the solution over \(\mathbb{Z}[z]/\langle z^{2}+1\rangle\) by an application of fast Chinese remaindering as in Algorithm 4, and an appropriate scaling of these evaluations (again, as in Algorithm 5) gives us approximations of the original evaluations over \(\mathbb{C}\). The error analysis and the bound on the running time are essentially the same as in the analysis of Algorithm 5. We skip the rest of the details.
## 8 Discussion and open problems
We conclude with some open problems.
1. Perhaps the most natural question here is to seek an algebraic algorithm for multivariate multipoint evaluation over general fields, both finite and infinite. Currently, we only know such algebraic algorithms over finite fields of small characteristic [15, 11].
2. The aforementioned question of having an algebraic algorithm for MME is also interesting in the non-uniform setting. For instance, we do not know if the linear transformation given by a multivariate Vandermonde matrices can be computed by an arithmetic circuit of nearly-linear (or even sub-quadratic) size over fields other than finite fields of small characteristic.
3. It would be interesting to have additional applications of these faster algorithms and the ideas therein, beyond the applications already mentioned by Kedlaya and Umans [14].
|
2308.10349 | Electromagnetic radiation at extreme angular velocity | We consider a system rotating at extremely high angular velocity, so that its
matter is found mostly at the light-cylinder. We posit that it can be described
by quantum fields confined to the two-dimensional cylindrical surface rotating
about its symmetry axis. We apply this model to study the electromagnetic
radiation. In particular, we compute the photon spectrum emitted by the
quark-gluon plasma. | Matteo Buzzegoli, Kirill Tuchin | 2023-08-20T19:53:37Z | http://arxiv.org/abs/2308.10349v2 | # Electromagnetic radiation at extreme angular velocity
###### Abstract
We consider a system rotating at extremely high angular velocity, so that its matter is found mostly at the light-cylinder. We posit that it can be described by quantum fields confined to the two-dimensional cylindrical surface rotating about its symmetry axis. We apply this model to study the electromagnetic radiation. In particular, we compute the prompt photon spectrum emitted by the quark-gluon plasma.
+
Footnote †: We are using natural units where \(c=\hbar=k_{B}=1\).
## I Introduction
The interest to the rapidly rotating systems has been recently rekindled thanks to the experimental observation of highly vortical quark-gluon plasma produced in the relativistic heavy-ion collisions [1; 2; 3; 4; 5; 6]. Previous studies discussed the thermodynamics and the hydrodynamics of rotating systems based on the statistical approach [7; 8; 9; 10; 11; 12; 13], developed a quantum kinetic theory with spin degrees of freedom [14; 15; 16; 17; 18; 19; 20; 21; 22], and made predictions for the spin polarization measured in heavy-ion collisions [23; 24; 25; 26; 27; 28; 29; 30; 31; 32], see for instance the reviews [33; 34; 35].
In [36; 37] we initiated a study of the electromagnetic radiation emitted by rapidly rotating systems. The advantage of the electromagnetic radiation is that it is only weakly affected by the plasma evolution. The idea is to observe the impact of rotation on the quantum fields.
The study in [36; 37] discussed "relatively slowly" rotating systems in magnetic field. Namely, we assumed that the magnetic length \(1/\sqrt{eB}\) is much shorter than the inverse angular velocity \(\Omega^{-1}\). Such rotation is slow. On the other hand, the absolute value of the angular velocity satisfying this condition can be enormous, hence the qualifying adverb "relatively". Generally, we can say that a system is relatively slowly rotating if its transverse size \(a\) is much smaller than \(1/\Omega\).
Model simulations show that the vorticity of the quark-gluon plasma can be as high as its inverse transverse size \(a\). This upsets the slow rotation assumption. A system rotating with the angular velocity \(\Omega\) is causally connected only within the lightcone cylinder of radius \(R=1/\Omega\). When \(R<a\) only a part of the rotating plasma is causally connected. This is a genuine fast rotation. Setting the proper boundary conditions on the quantum fields at the causal boundary becomes an essential
procedure. In [38], in the spirit of the MIT bag model, we required that the radial current vanishes on the boundary. However, there may be other possible boundary conditions.
In the regime \(1/\Omega<\ell\), where \(\ell\) is the mean-free-path, the rotation is so fast that it overwhelms all inter-particle forces and pushes the matter towards the light-cylinder wall. Such a medium will break down to a set of rotating cylindrical regions of radii \(R\ll a\). Within each cylindrical region the matter will be concentrated mostly at the boundary at \(R\) due to the centrifugal force. It seems reasonable therefore, that the dynamics of such extremely rapidly rotating system can be described by the quantum fields confined to the cylindrical surface of radius \(R\). We will not be concerned with the statistical properties of the matter within the cylindrical region, since in view of \(\ell>R\), it is simply an ideal rotating gas. Rather we are interested in the electromagnetic radiation it emits. Since the precise nature of the particles that make up the rotating cylinder is not very important -- only the fact that they are found at the boundary is -- we employ the scalar QED for calculations. It is reasonable to expect that the qualitative features of our results should be fairly model-independent.
The three rotation regimes of a system characterized by the radial size \(a\), the light-cylinder radius \(R=1/\Omega\), the mean-free-path \(\ell\) are
* _Slow rotation_\(\ell\ll a\ll R\). This approximation is used in [36; 37].
* _Fast rotation_\(\ell\ll R\sim a\)[38; 39].
* _Extremely fast rotation_\(R\ll\ell\ll a\). This is the scenario we consider in the paper.
In summary, we consider model in which the charged scalar particles can freely move on a thin cylindrical sheet of radius \(R=1/\Omega\) rotating with angular velocity \(\Omega\). In the following sections we will compute the electromagnetic radiation by a single particle and by a system of particles in thermal equilibrium. We believe that this model describes the universal properties of extremely rapidly rotating systems.
## II Radiation by scalar particle on cylindrical sheet
The wave function of a scalar particle of mass \(M\) embedded into a cylindrical surface rotating with the angular velocity \(\Omega\) about its symmetry axis \(z\) is
\[\psi(t,\phi,z)=\frac{1}{\sqrt{2\pi L}}\frac{1}{\sqrt{2E}}e^{-iEt+ip_{z}z+im \phi}\,, \tag{1}\]
where energy spectrum is
\[E=\sqrt{p_{z}^{2}+\frac{m^{2}}{R^{2}}+M^{2}}+m\Omega\,. \tag{2}\]
The magnetic quantum number \(m\) is an integer, while the longitudinal momentum \(p_{z}\) is continuous assuming that the cylinder height \(L\) is very large. We note that \(E>0\) for any \(m\).
The velocity of the quasi-classical motion along the \(z\)-direction is \(v=p_{z}/E\). Clearly, only the states with \(|v|\leq 1\) are causally connected. Inspection of (2) reveals that when \(m\geq 0\), then \(E>|p_{z}|\) for any value of \(p_{z}\). In contrast, when \(m<0\) this condition is satisfied only when
\[|p_{z}|\leq\frac{M^{2}}{2|m|\Omega}\,,\quad\text{if}\quad m<0\,. \tag{3}\]
Fig. 1 shows an example of the dispersion relation (2) with \(m<0\). Vertical lines indicate the allowed range of \(p_{z}\)'s.
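As a numerical illustration (ours; the parameter values are arbitrary, in units with \(\hbar=c=1\)), the spectrum (2) and the causal bound (3) can be checked directly:

```python
import numpy as np

# Spectrum (2) with 1/R = Omega, and the causal bound (3) on p_z for m < 0.
def energy(p_z, m, Omega, M):
    return np.sqrt(p_z**2 + (m * Omega)**2 + M**2) + m * Omega

M, Omega, m = 1.0, 5.0, -2
p_max = M**2 / (2 * abs(m) * Omega)       # bound (3)
for p_z in (0.0, 0.5 * p_max, p_max):
    E = energy(p_z, m, Omega, M)
    assert E >= abs(p_z) - 1e-12          # |v| = |p_z|/E <= 1 inside the allowed range
```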
We are interested to compute the electromagnetic radiation by this particle. The \(S_{fi}\)-matrix element reads
\[S_{fi}=-ie\int dt\int d\phi\int dz\,\mathbf{j}_{fi}(t,\phi,z)\cdot\mathbf{A}^{*}(t,\phi,z,R)\,, \tag{4}\]
where the photon wave function in the Coulomb gauge is
\[\mathbf{A}(t,\phi,z,R)=\frac{1}{\sqrt{2\omega V}}\mathbf{\epsilon}_{\lambda}e^{i\mathbf{k }\cdot\mathbf{r}-i\omega t}\,,\quad\mathbf{\epsilon}_{\lambda}\cdot\mathbf{k}=0\,. \tag{5}\]
The transition current is
\[\mathbf{j}_{fi}=i(\psi\mathbf{\nabla}\psi^{\prime*}-\psi^{\prime*}\mathbf{\nabla}\psi)\,. \tag{6}\]
In our notation: \(\psi_{i}=\psi\), \(\psi_{f}=\psi^{\prime}\).
Substituting (1) and (5) into (4) we arrive at
\[S_{fi}= \frac{-ie(2\pi)^{2}}{2\pi L\sqrt{2\omega V2\sqrt{EE^{\prime}}}}\delta (E-E^{\prime}-\omega)\delta(p_{z}-p_{z}^{\prime}-k_{z})\] \[\times\mathbf{\epsilon}_{\lambda}^{*}\cdot\left[\frac{1}{R}\mathbf{e}_{ \phi}(m+m^{\prime})+\mathbf{e}_{z}(p_{z}+p_{z}^{\prime})\right]2\pi iJ_{m-m^{\prime }}(k_{\perp}R)\,. \tag{7}\]
Two convenient photon polarizations (following [40]):
\[\mathbf{\epsilon}_{1}=-\mathbf{e}_{\phi}\,,\quad\mathbf{\epsilon}_{2}=-\sin\theta\mathbf{e}_{ z}+\cos\theta\mathbf{e}_{\perp}\,. \tag{8}\]
where \(\theta\) is the polar angle defined with respect to the \(z\)-axis, e.g. \(k_{z}=\omega\cos\theta\). The photon transverse momentum is then \(k_{\perp}=\omega\sin\theta\).
The photon emission rate can be computed as
\[\dot{w}_{fi}=\sum_{\lambda}\sum_{m^{\prime}}\frac{|S_{fi}|^{2}}{T}\frac{dk_{z} L}{2\pi}\frac{dk_{\perp}k_{\perp}\pi R^{2}}{2\pi}\frac{dp_{z}L}{2\pi} \tag{9}\]
while the radiation intensity is given by \(W=\dot{w}_{fi}\omega\):
\[W=\sum_{m^{\prime}=-\infty}^{m-1}\frac{e^{2}}{16\pi EE^{\prime}}\delta(E-E^{ \prime}-\omega)\left[\frac{(m+m^{\prime})^{2}}{R^{2}}+\sin^{2}\theta(p_{z}+p_{ z}^{\prime})^{2}\right]J_{m-m^{\prime}}^{2}(k_{\perp}R)dk_{z}dk_{\perp}k_{ \perp}\,. \tag{10}\]
One can express \(dk_{z}\,dk_{\perp}k_{\perp}=\omega^{2}d\omega\,\sin\theta\,d\theta=\omega^{2}d\omega\,do/2\pi\), where \(do\) is the element of the solid angle in the direction of the emitted photon. In the non-relativistic limit, the leading term in the multipole expansion of the intensity is the magnetic dipole one because the electric dipole moment vanishes while the magnetic moment \(\mathbf{\mu}=\frac{1}{2}e\mathbf{r}\times\mathbf{v}\) is finite.
The delta-function in (10) can be re-written as
\[\delta(E-E^{\prime}-\omega)=\frac{\delta(\omega-\omega_{0})(E^{\prime}-m^{ \prime}\Omega)}{E-m^{\prime}\Omega-\omega\sin^{2}\theta-p_{z}\cos\theta}\,. \tag{11}\]
where the characteristic frequency is
\[\omega_{0} =\frac{1}{\sin^{2}\theta}\bigg{\{}E-m^{\prime}\Omega-p_{z}\cos\theta\] \[-\sqrt{(E-m^{\prime}\Omega-p_{z}\cos\theta)^{2}-\sin^{2}\theta \left[(E-m^{\prime}\Omega)^{2}-p_{z}^{2}-m^{\prime 2}/R^{2}-M^{2}\right]}\bigg{\}}\,. \tag{12}\]
Taking the integral over \(\omega\) one is left with the angular spectrum \(dW/do\). Alternatively, we can cast the delta-function in the form
\[\delta(E-E^{\prime}-\omega)=\sum_{\pm}\frac{\delta(\cos\theta-\cos\theta_{\pm} )(E^{\prime}-m^{\prime}\Omega)}{\omega|\omega\cos\theta-p_{z}|}\,, \tag{13}\]
where
\[\cos\theta_{\pm}=\frac{1}{\omega}\left\{p_{z}\pm\sqrt{(E-m^{\prime}\Omega-\omega)^ {2}-m^{\prime 2}/R^{2}-M^{2}}\right\}\,. \tag{14}\]
We note that in order that \(|\cos\theta_{\pm}|\leq 1\), photon energy \(\omega\) must not be too small. For the states with \(m^{\prime}<0\) one has to take into account the requirement (3).
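The frequency (12) can be verified numerically: with \(p_{z}^{\prime}=p_{z}-\omega_{0}\cos\theta\), energy conservation \(E=E^{\prime}+\omega_{0}\) holds identically. The following Python sketch (ours; the parameter values are arbitrary) performs this check:

```python
import numpy as np

# With omega0 from (12) and p_z' = p_z - omega0*cos(theta), energy is conserved.
M, Omega, m, m_p, p_z, theta = 1.0, 3.0, 4, 2, 0.7, 1.1

E = np.sqrt(p_z**2 + (m * Omega)**2 + M**2) + m * Omega
A = E - m_p * Omega - p_z * np.cos(theta)
C = (E - m_p * Omega)**2 - p_z**2 - (m_p * Omega)**2 - M**2
omega0 = (A - np.sqrt(A**2 - np.sin(theta)**2 * C)) / np.sin(theta)**2

p_z_p = p_z - omega0 * np.cos(theta)
E_p = np.sqrt(p_z_p**2 + (m_p * Omega)**2 + M**2) + m_p * Omega
assert abs(E - E_p - omega0) < 1e-9
```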
The radiation intensity is shown in Fig. 2 as a function of \(m\) and \(\Omega\).
## III Radiation intensity by a single particle at rest
To investigate the radiation intensity analytically, consider a reference frame in which the incident fermion is at rest in the axial direction: \(p_{z}=0\), \(p_{z}^{\prime}=-\omega\cos\theta\). In particular, (3) becomes
\[\omega_{0}|\cos\theta|\leq\frac{M^{2}}{2|m^{\prime}|\Omega}\,. \tag{15}\]
In addition, we will focus on the limit \(\Omega\gg M\). In this case the spectrum (2) reads
\[E\approx\left\{\begin{array}{ll}2m\Omega\,,&m>0\,;\\ M\,,&m=0\,;\\ \frac{M^{2}}{2|m|\Omega}\,,&m<0\,.\end{array}\right.\qquad E^{\prime}\approx \left\{\begin{array}{ll}\sqrt{p_{z}^{\prime 2}+m^{\prime 2}\Omega^{2}}+m^{ \prime}\Omega\,,&m^{\prime}>0\,;\\ \sqrt{p_{z}^{\prime 2}+M^{2}}\,,&m^{\prime}=0\,;\\ \frac{p_{z}^{\prime 2}+M^{2}}{2|m^{\prime}|\Omega}\,,&m^{\prime}<0\,.\end{array}\right. \tag{16}\]
In order that \(E>E^{\prime}\), the range of \(m^{\prime}\) must be \(m^{\prime}\leq m-1\).
We distinguish three cases: (A) \(m>0\), (B) \(m=0\) and (C) \(m<0\).
### \(m>0\)
Fig. 3 shows the dependence of \(W\) on \(m^{\prime}\) for \(m>0\). The mode with \(m^{\prime}=0\) is enhanced over those with \(m^{\prime}>0\) which in turn are enhanced as compared to the modes \(m^{\prime}<0\). To understand
the dynamics in each case and find an appropriate approximation we split the summation over \(m^{\prime}\) into three parts: (A1) \(m^{\prime}\leq-1\), (A2) \(m^{\prime}\geq 1\) and (A3) \(m^{\prime}=0\).
#### ii.1.1 \(m^{\prime}<0\)
Using (16),(11),(12) in (10) and keeping the leading terms in \(M/\Omega\) we obtain
\[W^{A1}=\frac{e^{2}}{2\pi}\sum_{m^{\prime}=-\infty}^{-1}\int_{0}^ {\pi}\frac{1}{16\pi EE^{\prime}}\frac{\delta(2m\Omega-\omega)}{\frac{\omega \cos^{2}\theta}{E^{\prime}-m^{\prime}\Omega}+1}\left[\frac{(m+m^{\prime})^{2}} {R^{2}}+\sin^{2}\theta\cos^{2}\theta\omega^{2}\right]\] \[\times J_{m-m^{\prime}}^{2}(k_{\perp}R)\omega^{2}\eta\left(M^{2}+ 2m^{\prime}\Omega\omega|\cos\theta|\right)d\omega\sin\theta d\theta \tag{17}\] \[=\frac{e^{2}}{2\pi}\sum_{m^{\prime}=-\infty}^{-1}2\int_{0}^{1} \frac{E^{\prime}-m^{\prime}\Omega}{16\pi EE^{\prime}(2m\Omega\cos^{2}\theta+E ^{\prime}-m^{\prime}\Omega)}\left[\frac{(m+m^{\prime})^{2}}{R^{2}}+(1-x^{2})x ^{2}(2m\Omega)^{2}\right]\] \[\times J_{m-m^{\prime}}^{2}\left(2m\sqrt{1-x^{2}}\right)(2m \Omega)^{2}\eta\left(M^{2}+2m^{\prime}\Omega\omega x\right)dx\,, \tag{18}\]
where \(\eta\) is the step function accounting for (15), \(x=\cos\theta\) and we took advantage of the fact that the integrals over \(0\leq x\leq 1\) and \(-1\leq x\leq 0\) are equal. In view of (16) and (15), the angular integration is confined to the small region
\[x<\frac{M^{2}}{4\Omega^{2}m|m^{\prime}|}\ll 1\,. \tag{19}\]
Bearing this in mind and using (16) we can then write \(\omega_{0}\cos^{2}\theta+E^{\prime}\approx\frac{M^{2}}{2|m^{\prime}|\Omega}\ll\Omega\). After expanding the remaining terms in \(x\) we obtain
\[W^{A1}=\frac{e^{2}\Omega^{2}}{8\pi}\sum_{j=1}^{\infty}(m-j)^{2}J_{m+j}^{2}(2m) =\frac{e^{2}\Omega^{2}}{8\pi}S_{1}(m)\,, \tag{20}\]
Figure 3: (Color online) Radiation intensity (10) for \(m=5\) vs \(m^{\prime}\).
where \(j=-m^{\prime}\) and we defined
\[S_{1}(m)=\sum_{j=1}^{\infty}(m-j)^{2}J_{m+j}^{2}(2m)\,. \tag{21}\]
This function is shown in Fig. 4.
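The sum (21) converges rapidly because \(J_{m+j}(2m)\) falls off quickly once the order exceeds the argument; a short numerical evaluation (ours, assuming SciPy is available; the truncation \(j_{\max}\) is a choice) is:

```python
import numpy as np
from scipy.special import jv

# S1(m) of (21); the series is truncated at j_max since J_{m+j}(2m) decays rapidly.
def S1(m, j_max=200):
    j = np.arange(1, j_max + 1)
    return np.sum((m - j)**2 * jv(m + j, 2 * m)**2)

print([round(S1(m), 6) for m in (1, 2, 5, 10)])
```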
#### ii.2.2 \(m^{\prime}>0\)
Now consider the final states with \(m^{\prime}>0\). In this case the constraint (15) does not apply. It implies that we can safely neglect \(M\) in (12), assuming that the integral over \(\theta\) is finite at \(\theta=\pi/2\) (which is indeed the case as will be seen shortly). Thus,
\[\omega_{0}\approx\frac{\Omega}{\sin^{2}\theta}\left\{2m-m^{\prime}-\sqrt{(2m- m^{\prime})^{2}-4m(m-m^{\prime})\sin^{2}\theta}\right\}\equiv\Omega z\,, \tag{22}\]
and \(E^{\prime}=2\Omega m-\omega_{0}\). A simple calculation yields
\[W^{A2}=\frac{e^{2}\Omega^{2}}{32\pi}S_{2}\left(m\right)\,, \tag{23}\]
where we defined the function
\[S_{2}(m)=\sum_{m^{\prime}=1}^{m-1}\int_{0}^{\pi}\frac{(2m-m^{\prime}-z)\left[ (m+m^{\prime})^{2}+z^{2}\sin^{2}\theta\cos^{2}\theta\right]}{m(2m-z)(2m-m^{ \prime}-z\sin^{2}\theta)}J_{m-m^{\prime}}^{2}(z\sin\theta)z^{2}\sin\theta d\theta \tag{24}\]
which depends only on \(m\). The numerical calculation of \(S_{2}\) is shown in Fig. 4.
#### ii.2.3 \(m^{\prime}=0\)
In this case the photon frequencies are
\[\omega_{0}=\frac{2m\Omega}{\sin^{2}\theta}\left\{1-\sqrt{\cos^{2}\theta+\sin^{ 2}\theta\frac{M^{2}}{4m^{2}\Omega^{2}}}\right\} \tag{25}\]
Figure 4: Left panel: \(S_{1}(m)/m^{2}\), right panel: \(S_{2}(m)/m^{2}\) appearing in (21) and (24) respectively.
and the intensity is
\[W^{A3}=\frac{e^{2}}{16\pi(2m\Omega)}\int_{0}^{\pi}\frac{m^{2}\Omega^{2}+\omega_{0 }^{2}\sin^{2}\theta\cos^{2}\theta}{\omega_{0}\cos^{2}\theta+\sqrt{\omega_{0}^{2 }\cos^{2}\theta+M^{2}}}J_{m}^{2}(\omega_{0}R\sin\theta)\omega_{0}^{2}\sin \theta d\theta\,. \tag{26}\]
The integrand peaks at small \(x=\cos\theta\) which gives rise to the logarithmically enhanced contribution. In this leading logarithmic approximation we write
\[W^{A3}\approx\frac{e^{2}}{16\pi}2\int_{0}^{1}\frac{m^{2}\Omega^{2}}{x^{2}+ \sqrt{x^{2}+\frac{M^{2}}{4m^{2}\Omega^{2}}}}J_{m}^{2}(2m)dx\approx\frac{e^{2} \Omega^{2}}{8\pi}m^{2}J_{m}^{2}(2m)\log\frac{4m\Omega}{M}\,. \tag{27}\]
### \(m=0\)
In this case all final states have \(m^{\prime}<0\). Using (10)-(14) and replacing \(-m^{\prime}=j\) we get
\[W^{B}=\sum_{j=1}^{\infty}\frac{e^{2}(E^{\prime}-m^{\prime}\Omega )}{16\pi EE^{\prime}(E^{\prime}-m^{\prime}\Omega+\omega_{0}\cos^{2}\theta)} \left[\frac{j^{2}}{R^{2}}+\omega_{0}^{2}\sin^{2}\theta\cos^{2}\theta\right] \eta\left(M^{2}-2j\Omega\omega|\cos\theta|\right)\] \[\times J_{j}^{2}(\omega_{0}R\sin\theta)\omega_{0}^{2}\sin\theta d \theta\,, \tag{28}\]
where, considering that \(x=\cos\theta\ll 1\) (see (19)) we approximate:
\[\omega_{0}\approx M\left(1-\frac{M}{2j\Omega}(1+x^{2})\right)\,. \tag{29}\]
It follows that \(E^{\prime}+\omega_{0}\cos^{2}\theta+j\Omega\approx j\Omega\). We are now left with a trivial integral over \(x\) which yields:
\[W^{B}=\frac{e^{2}\Omega^{2}}{8\pi}\sum_{j=1}^{\infty}j^{2}J_{j}^{2}(MR) \tag{30}\]
Expanding the Bessel function and keeping only the leading term with \(j=1\) we finally obtain
\[W^{B}=\frac{e^{2}M^{2}}{32\pi}\,. \tag{31}\]
### \(m<0\)
This is the most unusual case because the fermion transitions occur between the levels with negative \(m\) and \(m^{\prime}\) which correspond to \(0<E^{\prime}<E<M\). First deal with the delta-function:
\[\delta\left(-\frac{M^{2}}{2m\Omega}+\frac{\omega^{2}\cos^{2}\theta+M^{2}}{2m^{\prime}\Omega}-\omega\right)=\frac{1}{\frac{\omega\cos^{2}\theta}{m^{\prime}\Omega}-1}\delta\left(\omega-\frac{M^{2}(|m^{\prime}|-|m|)}{2|m^{\prime}||m|\Omega}\right)\,. \tag{32}\]
where we used (16). We get for the intensity using (10):
\[W^{C}=\frac{e^{2}}{16\pi}\sum_{m^{\prime}=-\infty}^{m-1}\frac{2| m|\Omega}{M^{2}}\frac{2|m^{\prime}|\Omega}{\omega_{0}^{2}\cos^{2}\theta+M^{2}} \frac{|m^{\prime}|\Omega}{\omega_{0}\cos^{2}\theta+|m^{\prime}|\Omega}\left[ \frac{(m+m^{\prime})^{2}}{R^{2}}+\omega_{0}^{2}\sin^{2}\theta\cos^{2}\theta\right]\] \[\times\eta\left(M^{2}+2m^{\prime}\Omega\omega|\cos\theta|\right)J _{m-m^{\prime}}^{2}(\omega_{0}R\sin\theta)\omega_{0}^{2}\sin\theta d\theta\,. \tag{33}\]
In view of (32), the step function implies that \(|\cos\theta|\leq|m|/\nu\), where \(\nu=|m^{\prime}|-|m|\) is a positive integer. Since \(\omega_{0}\sim M^{2}/\Omega\), we can neglect it in the denominators and in the square brackets in (33). It also allows us to expand the Bessel function \(J_{\nu}(\xi)\approx\frac{1}{\nu!}(\xi/2)^{\nu}\) and retain only the leading \(\nu=1\) term. Since \(|m|>1\) in this section, \(\cos\theta\) is not restricted at all in this approximation. The angular integration becomes trivial and yields
\[W^{C}=\frac{e^{2}M^{4}}{192\pi\Omega^{2}}\frac{(2|m|+1)^{2}}{|m|^{3}(|m|+1)^{3 }}\,. \tag{34}\]
### Summary of \(\Omega\gg M\) approximation.
In summary, the radiation intensity at \(\Omega\gg M\) is given by
\[W=\frac{e^{2}\Omega^{2}}{8\pi}\left\{\begin{array}{ll}S_{1}(m)+\frac{1}{4}S _{2}(m)+m^{2}J_{m}^{2}(2m)\log\frac{4m\Omega}{M}\,,&m>0\,,\\ \frac{M^{2}}{4\Omega^{2}}\,,&m=0\,,\\ \frac{M^{4}}{24\Omega^{4}}\frac{(2|m|+1)^{2}}{|m|^{3}(|m|+1)^{3}}\,,&m<0\,. \end{array}\right. \tag{35}\]
The transitions from \(m>0\), with the corresponding energy \(E\approx 2\Omega m\), give the largest contribution. When \(m\) is not very large, the leading channel is the transition to \(m^{\prime}=0\) (\(E^{\prime}\approx M\ll E\)) given by (27). This is also seen in Fig. 3. For large \(m\), we can use the well-known formula (see e.g. **9.3.15** in [41])
\[J_{m}(2m)\approx\sqrt{\frac{2}{\pi\sqrt{3}m}}\cos\left[m(\sqrt{3}-\pi/3)-\pi/4 \right]\,,\quad m\gg 1\,, \tag{36}\]
to conclude that the contribution from the transitions to \(m^{\prime}>0\) becomes dominant since \(S_{2}(m)\sim m^{2}\). We verified that (35) is an accurate approximation of the exact formula at \(p_{z}=0\) and \(M\ll\Omega\).
## IV Radiation by spinning ideal gas
### Energy spectrum
If an ideal Maxwell-Boltzmann gas rotates extremely rapidly, then its radiation intensity is given by
\[I=\sum_{m=-\infty}^{\infty}\int\frac{dp_{z}L}{2\pi}\int do\int_{0}^{E}d\omega \frac{dW}{dod\omega}e^{-\beta E}\,. \tag{37}\]
Using (10),(13) and (14) we obtain
\[\frac{dI}{Ld\omega}=\frac{e^{2}}{16\pi}\sum_{m=-\infty}^{\infty}\sum_{\pm}\sum_{m^{\prime}=-\infty}^{m-1}\int\frac{dp_{z}}{2\pi}e^{-\beta E}\frac{E^{\prime}-m^{\prime}\Omega}{EE^{\prime}\omega|\omega\cos\theta_{\pm}-p_{z}|}\]
where the integral over \(p_{z}\) is restricted by (3) for \(m<0\). The corresponding energy spectra are exhibited in Fig. 5. We observe that there is the threshold frequency below which photon emission is impossible. We also notice that the spectrum peaks at \(\omega\sim\Omega\).
### Prompt photons in heavy-ion collisions
It is tempting to apply the model developed in this article to describe the prompt photon production by rotating quark-gluon plasma. We certainly realize that the plasma is not yet in the regime of extremely fast rotation \(\Omega>\ell^{-1}\). On the other hand, it is already in the region of fast rotation \(\Omega\sim a^{-1}\), using the notation introduced in Introduction. We therefore consider the calculation of the prompt photon spectrum as the no more than a back-of-the-envelop estimate and the proof of principle that rotation is a relevant effect.
The prompt photon emission from quark-gluon plasma, is described in terms of the following variables: \(k_{T}\), the photon momentum in the plane perpendicular to the collision axis (not to be confused with \(k_{\perp}\) defined with respect to the rotation axis), \(\phi\) its azimuthal angle in that plane and \(y\) its rapidity. They are related to the photon energy \(\omega\) and the emission angle \(\theta\) (see [42] for
Figure 5: (Color online) The energy spectrum of electromagnetic radiation by a thermal system. Right panel zooms into the infrared region of the left one. \(M=1\).
more details):
\[\omega=k_{T}\cosh y,\quad\cos\theta=\sin\phi/\cosh y\,. \tag{39}\]
We now express the prompt photon spectrum as
\[\frac{dN(k_{T},y,\phi)}{k_{T}dk_{T}d\phi dy}=g\Delta t\frac{dI(\omega,\theta)}{ \omega^{2}d\omega do}\,, \tag{40}\]
where \(\Delta t\) is the time interval. In the realistic case photons are emitted by fermions with two possible polarizations and three colors and three flavors which make up the degeneracy factor \(g=18\). Eq. (40) simplifies at the midrapidity region \(y=0\):
\[\frac{dN}{k_{T}dk_{T}dy}\bigg{|}_{y=0}= \frac{g\Delta tL}{k_{T}^{2}}\int_{0}^{2\pi}d\phi\frac{dI}{Ld \omega do}\bigg{|}_{\omega=k_{T},\,\theta=\frac{\pi}{2}-\phi}\,. \tag{41}\]
The intensity \(I\) is given by (37) and \(W\) is given by (10).
It is now advantageous to use the delta-function in (10) to take the integral over \(p_{z}\). To this end we write
\[\delta(E-E^{\prime}-\omega)=\delta\left(\sqrt{p_{z}^{2}+m^{2} \Omega^{2}+M^{2}}-\sqrt{(p_{z}-\omega\cos\theta)^{2}+m^{\prime 2}\Omega^{2}+M^{2}}+ \Delta\right)\] \[=\frac{\delta(p_{z}-p_{z0})(E-m\Omega)(E-\omega-m^{\prime}\Omega )}{|\omega\cos\theta(E-m\Omega)+p_{z}\Delta|}\,, \tag{42}\]
where we introduced a convenient notation
\[\Delta=(m-m^{\prime})\Omega-\omega\,. \tag{43}\]
To compute \(p_{z0}\) we rewrite the equation in the second delta-function in (42) as a quadratic equation for \(p_{z}\). Of course not all its solutions necessarily satisfy the original equation. A careful examination of its two roots reveals that one root satisfies it at \(\Delta>0\), while another one at \(\Delta<0\). These can be combined in a single formula:
\[p_{z0}= \frac{1}{2(\Delta^{2}-\omega^{2}\cos^{2}\theta)}\Big{\{}-\omega \cos\theta(\omega^{2}\cos^{2}\theta-\Delta^{2}+m^{2}\Omega^{2}-m^{\prime 2} \Omega^{2})\] \[+\Delta\sqrt{(\omega^{2}\cos^{2}\theta-\Delta^{2}+m^{2}\Omega^{2 }-m^{\prime 2}\Omega^{2})^{2}+4(m^{2}\Omega^{2}+M^{2})(\omega^{2}\cos^{2} \theta-\Delta^{2})}\Big{\}}\,, \tag{44}\]
provided that
\[\Delta^{2}-\omega^{2}\cos^{2}\theta\leq 0\,. \tag{45}\]
Using (43) in (45) and noting that \(|\cos\theta|\leq 1\) we find that the allowed photon energies are
\[\omega\geq\frac{1}{2}(m-m^{\prime})\Omega\,. \tag{46}\]
In particular, the photon spectrum has an infrared threshold at \(\omega_{\rm min}=\Omega/2\). This is indeed clearly seen in Fig. 5. Eq. (45) is a constraint on the allowed values of the photon emission angle:
\[|\theta|\leq\Theta=\arcsin\sqrt{1-\Delta^{2}/\omega^{2}}\,. \tag{47}\]
Taking all these into account we obtain the final expression for the prompt photon spectrum:
\[\left.\frac{dN}{k_{T}dk_{T}dy}\right|_{y=0}= \frac{g\Delta tLe^{2}}{4(2\pi)^{3}}\sum_{m=-\infty}^{\infty}\sum_ {m^{\prime}=-\infty}^{m-1}\int_{-\Theta}^{+\Theta}d\theta\frac{1}{EE^{\prime} }e^{-E/T}\frac{(E-m\Omega)(E^{\prime}-m^{\prime}\Omega)}{|k_{T}\cos\theta(E-m \Omega)+p_{z}\Delta|}\] \[\times\left[(m+m^{\prime})^{2}\Omega^{2}+\sin^{2}\theta(2p_{z0}- k_{T}\cos\theta)^{2}\right]J_{m-m^{\prime}}^{2}(k_{T}\sin\theta\Omega^{-1})\,, \tag{48}\]
valid for \(k_{T}=\omega\) satisfying (46). \(E\) and \(E^{\prime}\) are the functions of \(p_{z0}\). At negative \(m\) and \(m^{\prime}\) the angular integration is further restricted by (3). However, as we have seen, the negative \(m\) and \(m^{\prime}\) give a negligible contribution. Therefore in practice the sums over \(m\) and \(m^{\prime}\) run only over the non-negative values.
The results of the calculation are shown in Fig. 6. We infer that the effects of plasma rotation might be important for the phenomenology of relativistic heavy-ion collisions.
## V Summary
We posited that systems rotating with extreme angular velocity \(\Omega\sim\ell^{-1}\), where \(\ell\) is the mean-free-path, can be described by matter fields confined to the two dimensional cylindrical surface of
Figure 6: (Color online) Prompt photon spectrum at two temperatures and \(\Omega=0.1\) fm\({}^{-1}\). The data is from [43]. \(\Delta t=10\) fm/\(c\), \(L=10\) fm, \(M=150\) MeV.
radius \(R=1/\Omega\). We developed an application of this idea to the prompt photon production by the rotating quark-gluon plasma and argued that it is consistent with the experimental observations.
The statistical properties inside the light-cylinder are equivalent to those of the two-dimensional ideal gas, as the interactions are screened by \(R<\ell\). However, the statistical properties of the entire plasma are determined by the interaction of the light-cylinders; the development of these ideas is left for another study.
###### Acknowledgements.
This work was supported in part by the U.S. Department of Energy Grants No. DE-SC0023692.
|
2310.05268 | The Debye layer as a transmission line in the 4Hz-100kHz frequency range | We report measurements on the dynamics of the Debye layer at a gold electrode
in several electrolytes. In the experiments, the Debye layer transmits a damped
voltage wave along the electrode, which we use to probe the dynamics. We
compare the measurements with traditional impedance models, which schematize
the Debye layer as a capacitance. We find good agreement for very dilute
electrolytes, but also for an ionic liquid. However, the same model fails for
the concentrated electrolyte, as ion - solvent - ion interactions become
important. | Thanh-Tri Châu, Giovanni Zocchi | 2023-10-08T19:48:38Z | http://arxiv.org/abs/2310.05268v1 | # The Debye layer as a transmission line in the 4 Hz - 100 kHz frequency range
###### Abstract
We report measurements on the dynamics of the Debye layer at a gold electrode in several electrolytes. In the experiments, the Debye layer transmits a damped voltage wave along the electrode, which we use to probe the dynamics. We compare the measurements with traditional impedance models, which schematize the Debye layer as a capacitance. We find good agreement for very dilute electrolytes, but also for an ionic liquid. However, the same model fails for the concentrated electrolyte, as ion - solvent - ion interactions become important.
## 1 Introduction
A charged surface in an electrolyte drives the formation of a cloud of counterions within a thin boundary layer (Debye layer) next to the surface. This electric double layer (EDL) has a profound effect on macromolecular and colloidal interactions. Ionic screening transforms the long range Coulomb interaction into a short range force. The large electric field within the Debye layer (typically of order \(100\,\mathrm{mV/1\,nm}\)) can affect a variety of chemical and physical processes. For example, the binding dynamics of biological macromolecules involves the disruption of the EDL at the surface of contact. Action potential transmission is accompanied by a large perturbation of the EDL at the cell membrane [10]. Another motivation for the study of EDL dynamics lies in the development of capacitive energy storage devices (supercapacitors) and electrochemical processes for energy conversion [4, 6].
Direct experimental measurements of the Debye layer are challenging, due to the \(\mathrm{nm}\) scale. The static properties, such as the concentration profile of ions in the diffuse layer near a charged surface, can be deduced from measurements of the force between such surfaces [7, 15]. The corresponding theoretical framework is based on the Poisson-Boltzmann equation and known as the Gouy-Chapman theory. Measurements of the dynamics are mostly obtained from electrochemical cells. There are three common methods [4]. In Electrochemical Impedance Spectroscopy (EIS) a voltage consisting of a DC component plus a sine wave is imposed; the measured quantity is the current. The measurements are often presented in terms of a complex impedance vs frequency (a Nyquist plot). Cyclic Voltammetry (CV) is essentially the same measurement with a different voltage protocol (a triangular wave). In Galvanometric Cycling the electrode current is imposed and the measured quantity is the voltage. Physical quantities of interest, such as the EDL capacitance, are extracted by comparing to a model [4, 6]. Increasingly, models are compared with MD simulations, which provide quantities (such as the structure of the EDL) only indirectly accessible to experiments [11, 12].
To a first approximation, and for non-Faradaic processes, the EDL behaves like a capacitor, charging and discharging as charges on the two sides of the electrode-electrolyte interface accumulate and disperse without an actual charge transfer across the interface. To introduce the characteristic scales, let us consider a planar electrode in a 1:1 electrolyte (such as NaCl in water). From the linearized Debye-Falkenhagen equation for the charge density \(\rho\):
\[\frac{\partial\rho}{\partial t}-\chi k_{\mathrm{B}}T\,\nabla^{2}\rho+\frac{2 \chi}{\epsilon}|e|^{2}n_{0}\,\rho=0 \tag{1}\]
where \(\chi\) is the mobility of the ions (for simplicity assumed the same for the two species e.g. Na\({}^{+}\) and Cl\({}^{-}\)), \(n_{0}\) the bulk concentration of NaCl, \(|e|\) the proton charge and \(\epsilon\) the dielectric constant of the medium, one obtains the characteristic length scale \(\ell=\sqrt{(\epsilon k_{\mathrm{B}}T)/(2|e|^{2}n_{0})}\) (the Debye length) and time scale \(\tau=\ell^{2}/(\chi k_{\mathrm{B}}T)=\epsilon/(2|e|^{2}n_{0}\chi)=\epsilon/\sigma\) (\(\sigma\) is the ionic conductivity of the solution). Below we write the equations in dimensionless form using these characteristic scales and scaling \(\rho\) by \(2|e|n_{0}\). In an axis-symmetric geometry with the \(z\) axis perpendicular to the electrode, (1) becomes:
\[\frac{\partial\rho}{\partial t}-\frac{\partial^{2}\rho}{\partial z^{2}}+\rho= 0\,. \tag{2}\]
The electrostatic potential \(\phi\) is related to \(\rho\) through the Poisson equation; scaling \(\phi\) by \(k_{\mathrm{B}}T/|e|\), it reads:
\[\frac{\partial^{2}\phi}{\partial z^{2}}=-\rho\,. \tag{3}\]
With sinusoidal forcing \(\phi(z=0,t)=\exp(\mathrm{i}\omega t)\) and \(\phi\), \(\rho\to 0\) for \(z\to\infty\) (in the bulk), the solution of (2) and (3) is:
\[\phi(z,t) = e^{-kz}\mathrm{e}^{\mathrm{i}\omega t}\,, \tag{4}\] \[\rho(z,t) = -k^{2}e^{-kz}\mathrm{e}^{\mathrm{i}\omega t}\,,\] (5) \[\text{with}\qquad k^{2}=1+\mathrm{i}\omega\,.\]
We are concerned only with the regime \(\omega\ll 1\), in which case \(k\approx 1+\mathrm{i}\omega/2\) and (5) describes the formation of a Debye layer of size one (size \(\ell\) in dimensional variables), independent of frequency (apart from a slight phase lag). The reason is that, taking as an example a \(150\,\mathrm{mM}\) ("physiological") NaCl solution, \(\sigma\approx 10\,\mathrm{m}\mathrm{S/cm}\), \(n_{0}\approx 10^{20}\,\mathrm{cm}^{-3}\) so that \(\tau\approx 1\,\mathrm{ns}\) and \(\ell\approx 1\,\mathrm{nm}\).
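For reference, these estimates follow directly from the definitions of \(\ell\) and \(\tau\); a short Python computation (ours; \(\epsilon_{r}=80\) for water and \(\sigma=10\,\mathrm{mS/cm}\) are the assumed inputs) reproduces them:

```python
import numpy as np

# Debye length and charge relaxation time for a 150 mM 1:1 electrolyte (SI units).
e = 1.602e-19                       # C
kT = 1.381e-23 * 298                # J
eps = 80 * 8.854e-12                # F/m, assuming eps_r = 80 for water
n0 = 0.150 * 6.022e23 * 1e3         # ions of each species per m^3 (150 mM)
sigma = 1.0                         # S/m (= 10 mS/cm)

ell = np.sqrt(eps * kT / (2 * e**2 * n0))   # Debye length
tau = eps / sigma                           # charge relaxation time
print(f"ell ~ {ell*1e9:.2f} nm, tau ~ {tau*1e9:.2f} ns")
```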
The capacitive current at the electrode is obtained by considering the relation of the surface charge to the electric field: \(\epsilon E_{n}=Q/A\) where \(Q/A\) is the charge per unit area and \(E_{n}\) the normal component of the electric field at the surface. The capacitive current density \(j\) is then \(j=(\partial/\partial t)(Q/A)=\epsilon(\partial/\partial t)E_{n}=-\epsilon(\partial/\partial t)(\nabla\phi\cdot\mathbf{n})_{z=0}\) with \(\mathbf{n}\) the unit normal. Using (4) in the regime \(\omega\ll 1\) we then find \(|j|\approx\epsilon\,\omega(1+\omega^{2}/8)\), to be compared with a driven RC circuit, where, in the same approximation, the current is \(|I|\approx C\omega(1-\omega^{2}/2)\). We see that the frequency behavior of the capacitive current at the electrode departs from that of an RC circuit only for \(\omega\sim 1\). For this reason one associates to the Debye layer a capacitance per unit area \(c=|j|/|\dot{\phi}(z=0)|\approx\epsilon(1+\omega^{2}/8)\) which, for \(\omega\ll 1\), is the constant \(\epsilon\) (or \(\epsilon/\ell\) in dimensional variables). However, this simple theory neglects hydrodynamic effects, as well as interactions between ions, and between ions and the electrode.
Here we probe Debye layer dynamics through an experimental configuration where the Debye layer forms part of an RC transmission line. We measure voltage transmission at the end of the line, given a sinusoidal input at the other end. The Debye layer at a planar gold electrode provides the capacitive part of the line, while the resistive part is provided by the thin film gold layer which forms the electrode. It is helpful to reason in terms of the continuum limit of the equivalent discrete elements circuit. As a first approximation, Fig. 1 is a schematic of the (1D) transmission line if the EDL can be considered as a distributed capacitance with capacitance per unit length \(c\) ; \(r\) is the resistance per unit length of the gold electrode. The potential along the line satisfies the diffusion equation
\[\frac{\partial V(x,t)}{\partial t}-\frac{1}{rc}\frac{\partial^{2}V(x,t)}{ \partial x^{2}}=0\,. \tag{6}\]
For AC driving voltage \(V(x=0,t)=V_{\mathrm{in}}\mathrm{e}^{\mathrm{i}\omega t}\) with the Ansatz
\[V(x,t)=\exp(-kx)\exp(\mathrm{i}\omega t) \tag{7}\]
one obtains a complex wave vector
\[k=(1+\mathrm{i})\sqrt{\frac{\omega rc}{2}}\,. \tag{8}\]
In this simple model, for increasing frequency the output voltage \(V_{\mathrm{out}}\) has an amplitude going exponentially to zero and a monotonously decreasing phase (modulo \(2\pi\)).
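For illustration (ours), the transfer function of this simple line, \(V_{\mathrm{out}}/V_{\mathrm{in}}=\exp(-kL)\) with \(k\) from (8), can be evaluated numerically; the values of \(r\), \(c\) and \(L\) below are placeholders, not the experimental parameters:

```python
import numpy as np

# V_out/V_in = exp(-k L), with k = (1+i) sqrt(omega r c / 2) from (8).
def transfer(freq_hz, r, c, L):
    omega = 2 * np.pi * freq_hz
    k = (1 + 1j) * np.sqrt(omega * r * c / 2)
    return np.exp(-k * L)

f = np.array([10.0, 1e2, 1e3, 1e4, 1e5])
H = transfer(f, r=100.0, c=1e-6, L=0.02)    # placeholder r [ohm/m], c [F/m], L [m]
print(np.abs(H))        # amplitude ratio, decreasing exponentially in sqrt(f)
print(np.angle(H))      # phase, decreasing linearly in sqrt(f) (mod 2*pi)
```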
## 3 Materials, sample preparation and experimental procedure
The electrolyte solutions under study are NaCl aqueous solutions of \(10\,\mathrm{mM}\), \(100\,\mathrm{mM}\) and \(1\,\mathrm{M}\) concentrations in \(10\,\mathrm{mM}\) NaH\({}_{2}\)PO\({}_{4}\)/Na\({}_{2}\)HPO\({}_{4}\) buffer with pH \(7\) at \(25\) °C. We choose phosphate buffer because its \(\mathrm{p}\mathrm{K}_{\mathrm{a}}\) at the second step is close to the neutral \(\mathrm{pH}\) and has modest variation with temperature.1 The ionic liquid is 1-Butyl-3-methylimidazolium hexafluorophosphate (C\({}_{8}\)H\({}_{15}\)F\({}_{6}\)N\({}_{2}\)P). All chemicals are from Sigma-Aldrich.
Footnote 1: \(\mathrm{p}\mathrm{K}_{\mathrm{a}}2=7.21\) with a variation of approximately \(-0.0028\)/°C.
The ionic liquid or salt solution is sealed with a UV-activated epoxy (Loctite AA 3525) between two gold-coated glass slides arranged perpendicular to one another and separated by two spacer strips \(120\,\mathrm{\mu m}\) thick (see Fig. 2). The top plate provides a resistive path for the voltage signal and is prepared by depositing \(3\,\mathrm{nm}\) of Cr followed by \(20\,\mathrm{nm}\) of Au on a glass slide using electron-beam evaporation method (CHA Mark 40). The bottom plate, having an extra \(10\,\mathrm{nm}\) of Au compared to the top, serves as the ground electrode. The contacts are made by soldering gauge-28 wires on a buffering layer of electrically conductive silver paint (MG Chemicals 842AR-P) applied on the gold layer to prevent the latter from being peeled off.
We use a lock-in amplifier (SR850) for the measurements. The reference signal from the lock-in drives an in-house built voltage clamp circuit which applies the input voltage between \(V_{\mathrm{in}}\) and ground, as in Fig. 2. By switching the input to the detection path of the lock-in between \(V_{\mathrm{in}}\) and \(V_{\mathrm{out}}\), we measure both their amplitude and phase relative to the reference. We deduce from these measurements their relative amplitude and phase difference. For each experimental condition, we keep the temperature of the sample stable within \(\pm 0.01\) °C in an in-house built thermoelectrically controlled chamber.
## 4 Results
Fig. 3 shows the ratio of the rms amplitudes of the voltage at the end of the electrolytic cell, \(V_{\mathrm{out}}\), to the input voltage \(V_{\mathrm{in}}\), as a function of the square root of the driving frequency (blue symbols). Also shown is the phase of \(V_{\mathrm{out}}\) relative to \(V_{\mathrm{in}}\) (red symbols). Measurements are displayed for 3 different NaCl concentrations of the electrolyte: 10, 100, and 1000 \(\mathrm{mM}\). In the text and in the figures we refer to the above
Figure 1: Schematic of the Debye layer at the gold electrode as a distributed RC transmission line in 1D. The resistive path in the gold is of resistance \(r\) per unit length and the electrolyte within the Debye layer is considered as a capacitor with capacitance \(c\) per unit length. \(R\) is a trailing resistance, present in the experiments, between the input AC source and the starting point \(x=0\) of the line. In the experiments, we measure the amplitude and phase of \(V_{\mathrm{out}}\) relative to \(V_{\mathrm{in}}\), at \(x=L\).
NaCl concentrations to identify the samples; however, all these solutions also contain \(10\,\mathrm{mM}\) phosphate buffer, so for example the total ionic strength of the "\(10\,\mathrm{mM}\) salt" samples is actually \(20\,\mathrm{mM}\). We notice immediately that at high enough frequencies the behavior of both the amplitude and phase of the output signal is non-monotonic with salt concentration; this feature cannot be explained by a model based on the mean field theory of the Introduction. Equation (8) predicts that the amplitudes in Fig. 3 should decrease linearly on this log-linear scale above some small frequency2. Similarly the phase should decrease linearly on the same scale with jumps from \(-\pi\) to \(\pi\) due to its periodic nature. Up to some moderate frequency, such as \(110^{2}\,\mathrm{Hz}\approx 12.1\,\mathrm{kHz}\) for \(100\,\mathrm{mM}\), this is indeed the case. However at higher frequency, the amplitude saturates to some constant level while the phase goes back up to zero. The reason is that in the experiment there are _two_ Debye layers, one at the "live" and one at the ground electrode, connected by a resistive path through the electrolyte. A more appropriate transmission line model is therefore as shown in Fig. 4, where the ground electrode is endowed with its own (capacitive) Debye layer and the conductance (per unit length) \(\sigma\) refers to conduction through the electrolyte in the direction orthogonal to the plates.
Footnote 2: The behavior at low frequencies reflects the finite size of the transmission line, among other things.
We now consider this 1D transmission line, of finite length \(L\), and solve analytically for the output voltage \(V_{\mathrm{out}}\equiv V(L)\), under harmonic driving. Ohm's law relates the current \(I(x)\) through the resistive gold layer to the voltage \(V(x)\) at an arbitrary point along the transmission line as
\[I(x)=-\frac{1}{r}\frac{\partial V(x)}{\partial x}\,. \tag{9}\]
We attribute the spatial variation of this current to a capacitive current per unit length \(I_{\mathrm{c}}(x)\), related to the local voltage \(V(x)\) through the impedance of the electrolyte strip of width \(\mathrm{d}x\):
\[-\frac{\partial I(x)}{\partial x}=I_{\mathrm{c}}(x)=V(x)\left(\frac{1}{ \mathrm{i}\omega c}+\frac{1}{\sigma}\right)^{-1}\,. \tag{10}\]
As a result, the voltage \(V(x)\) satisfies
\[\frac{\partial^{2}V(x)}{\partial x^{2}}=\frac{\mathrm{i}\omega rc\sigma}{\mathrm{i}\omega c+\sigma}V(x) \tag{11}\]
where \(\omega\) is the forcing frequency. We solve eq. (11) under boundary conditions that account for a vanishing current at \(x=L\)
\[V^{\prime}(L)\sim I(L)=0\]
and a voltage drop across the trailing resistance \(R\)
\[V_{\mathrm{in}}+\frac{R}{r}V^{\prime}(0)=V(0)\,.\]
The result for the complex output \(V_{\mathrm{out}}\) is
\[\frac{V_{\mathrm{out}}}{V_{\mathrm{in}}}=\frac{1}{\cosh(kL)+\alpha kL\sinh(kL )}, \tag{12}\]
with
\[\alpha=\frac{R}{rL} \tag{13}\]
being the ratio between the trailing resistance \(R\) and the resistance of the metal layer in direct contact with the electrolyte.
The real and imaginary parts of the wave vector \(k\) are:
\[\begin{split} k^{\prime}L&=\left[\frac{\omega/\omega_{rc}\left(\sqrt{1+\omega^{2}/\omega_{c\sigma}^{2}}+\omega/\omega_{c\sigma}\right)}{2\left(1+\omega^{2}/\omega_{c\sigma}^{2}\right)}\right]^{1/2}\\ k^{\prime\prime}L&=\left[\frac{\omega/\omega_{rc}\left(\sqrt{1+\omega^{2}/\omega_{c\sigma}^{2}}-\omega/\omega_{c\sigma}\right)}{2\left(1+\omega^{2}/\omega_{c\sigma}^{2}\right)}\right]^{1/2}\,.\end{split} \tag{14}\]
The frequencies \(\omega_{rc}\) and \(\omega_{c\sigma}\) are set by the charging time of the capacitors, and limited by the (longitudinal) resistance of
Figure 3: Measured amplitude (blue) and phase (red) of \(V_{\mathrm{out}}/V_{\mathrm{in}}\) vs the square root of the driving frequency \(f=\omega/2\pi\) for three concentrations of buffered NaCl solution: \(10\,\mathrm{mM}\) (filled squares), \(100\,\mathrm{mM}\) (empty squares), and \(1\,\mathrm{M}\) (stars). The amplitude is shown on a log scale, the phase on a linear scale. The driving amplitude was \(V_{\mathrm{in}}=24\,\mathrm{mV}\) rms, and the temperature \(25\) °C.
Figure 2: Schematic of an experimental sample. The electrolyte is sealed in a \(120\,\mathrm{\mu m}\) thick cell obtained from two microscope slides separated by spacers. The inner surface of the slides is conductive through a thin layer of gold evaporated on it. One electrode serves as the ground, while the other provides a resistive path for the transmission line.
the gold layer and the (transverse) resistance of the electrolyte, respectively:
\[\omega_{rc} = \frac{1}{(rL)(cL)} \tag{15}\] \[\omega_{c\sigma} = \frac{\sigma}{c}\,. \tag{16}\]
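For readers who wish to reproduce the fits, the sketch below evaluates the transfer function of Eqs. (12)–(16) in Python. The function name, the example parameter values, and the use of NumPy are our own choices, not part of the original analysis.

```python
import numpy as np

def transfer_function(omega, omega_rc, omega_cs, alpha):
    """Complex V_out/V_in of the finite RC transmission line, Eq. (12).

    omega     : angular drive frequency (rad/s), scalar or array
    omega_rc  : 1 / ((rL)(cL)), Eq. (15)
    omega_cs  : sigma / c, Eq. (16)
    alpha     : R / (rL), Eq. (13)
    """
    omega = np.asarray(omega, dtype=float)
    x = omega / omega_cs
    # Real and imaginary parts of kL, Eq. (14)
    common = (omega / omega_rc) / (2.0 * (1.0 + x**2))
    kL_re = np.sqrt(common * (np.sqrt(1.0 + x**2) + x))
    kL_im = np.sqrt(common * (np.sqrt(1.0 + x**2) - x))
    kL = kL_re + 1j * kL_im
    return 1.0 / (np.cosh(kL) + alpha * kL * np.sinh(kL))

# Example: amplitude and phase for illustrative parameter values
f = np.logspace(1, 6, 200)                # drive frequency in Hz
H = transfer_function(2 * np.pi * f, omega_rc=1e4, omega_cs=1e5, alpha=1.69)
amplitude, phase = np.abs(H), np.angle(H)
```

In practice \(\omega_{rc}\) and \(\omega_{c\sigma}\) can then be obtained by least-squares fitting of the measured amplitude and phase to \(|V_{\mathrm{out}}/V_{\mathrm{in}}|\) and its argument, e.g. with scipy.optimize.curve_fit.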
\(\alpha\) is not a fitting parameter as we can deduce it from measurements of the resistive gold layer's geometry and surface resistance. For a rectangular gold layer \(2.98\pm 0.30\,\mathrm{cm}\) long and \(0.93\pm 0.10\,\mathrm{cm}\) wide, we measure a resistance of \(9.3\pm 0.1\,\Omega\). This corresponds to a surface resistance of \(2.9\pm 0.4\,\Omega/\mathrm{sq}\) (\(\mathrm{sq}\) refers to any square patch of surface). We are left with two fitting parameters: \(\omega_{rc}\) and \(\omega_{c\sigma}\).
The two-parameter fit to the model of Eq. (12) is displayed in Fig. 5 for the \(100\,\mathrm{mM}\) salt solution (same data as in Fig. 3). The fit for the \(10\,\mathrm{mM}\) salt is noticeably better than the case shown, while for the \(1\,\mathrm{M}\) case it is noticeably worse (data not shown). The conclusion we draw is that the model of Fig. 4, which schematizes the Debye layer as a fixed (frequency independent) capacitance and the bulk electrolyte as a resistance, describes the frequency dependence fairly well. However, we also see systematic departures from the data, at the level of \(10\,\%\) for the \(100\,\mathrm{mM}\) salt and \(20\,\%\) for the \(1\,\mathrm{M}\) concentration. The discrepancy between model and experiments is significant, as the stability and reproducibility of the experiment (for the same sample) is of order the size of the dots in the plots. It cannot be attributed to nonlinearities arising from voltage dependent processes (such as chemical reactions): the measurements of Fig. 3, obtained for \(V_{\mathrm{in}}=24\,\mathrm{mV}\), were repeated (on the same samples) for \(V_{\mathrm{in}}=50\,\mathrm{mV}\), and identical results (overlapping symbols, not shown) were obtained. Intriguingly, the measurements on the ionic liquid, shown in Fig. 6, show no discrepancy at all with the model (11).
However, the virtues of the RC transmission line model are somewhat tarnished if one tries to directly relate its effective parameters to the physical properties of the electrolytes. Table 1 summarizes the parameters obtained from the fits, for the 3 salt concentrations and the ionic liquid. The \(100\,\mathrm{mM}\) salt condition was measured for two independent samples A and B. The two fitting parameters are \(\omega_{rc}\) and \(\omega_{c\sigma}\) (Eq. (14)); \(\alpha\) is obtained from the measured conductivity of the gold layer and the geometry of the sample. The capacitance per unit area _of one EDL_, \(C\), is obtained from the value of \(\omega_{rc}\) using (15) and the measured geometry (length and width) of the cell. For comparison, the next column in the Table lists the corresponding capacitance value \(\epsilon/\ell\) obtained from the (calculated) Debye length \(\ell\). Note that for the \(1\,\mathrm{M}\) salt and more so for the ionic liquid, the calculated \(\ell\) is smaller than the size of the ions. The last two columns display the bulk conductivity of the electrolyte obtained from \(\omega_{c\sigma}\) using (16) and the cell geometry, and
Figure 4: Schematic of the 1D transmission line model where the electrolyte is considered as two capacitors sandwiching a series resistor. Each capacitor has a value of \(2c\) per unit length, whereas the resistor corresponds to a conductance of \(\sigma\) per unit length. The transmission line electrode has resistance per unit length \(r\), whereas the ground electrode is an equipotential, due to the thicker gold layer on it.
Figure 5: Amplitude and phase of \(V_{\mathrm{out}}/V_{\mathrm{in}}\) for sample A: \(100\,\mathrm{mM}\) NaCl in \(10\,\mathrm{mM}\) phosphate buffer, pH 7 at \(25\) °C (same data as in Fig. 3). The frequency sweep is carried out at \(25\) °C and \(24\,\mathrm{mV}\) rms driving voltage. The solid lines show the global two-parameter fit according to Eq. (12), using a measured \(\alpha=1.69\).
Figure 6: Amplitude (blue) and phase (red) of \(V_{\mathrm{out}}/V_{\mathrm{in}}\) for the ionic liquid C\({}_{8}\)H\({}_{15}\)F\({}_{6}\)N\({}_{2}\)P at \(25\) °C and \(24\,\mathrm{mV}\) rms driving voltage. Solid lines show the two-parameter fit to the model Eq. (12), using a measured \(\alpha=1.29\).
for comparison the literature values. The quantity \(\epsilon/\ell\), which is proportional to \(\sqrt{\epsilon}\), is calculated using \(\epsilon=80\) (relative to the permittivity of free space) for the salt solutions and \(\epsilon=11\) for the ionic liquid [13]. Focusing on the capacitance \(C\), we see that in all cases the EDL capacitance measured in the experiment is more than an order of magnitude lower than \(\epsilon/\ell\), which is the value calculated assuming a parallel plates capacitor of thickness \(\ell\) (the Debye length). This well known phenomenon is usually attributed to the existence of a Stern layer of immobilized water molecules and counterions at the metal surface. In terms of the model of Fig. 4 the effect is to add a "Stern layer capacitance" \(C_{\rm S}=\epsilon_{\rm S}/\delta_{\rm S}\) in series to the Debye layer capacitance \(C\); \(\epsilon_{\rm S}\) is the dielectric constant of the Stern layer, \(\delta_{\rm S}\) its thickness [4]. The composite EDL capacitance is then \(C_{\rm EDL}=(CC_{\rm S})/(C+C_{\rm S})<C_{\rm S}\) i.e. it is bounded by \(C_{\rm S}\). With representative values \(\epsilon_{\rm S}\approx 2\) and \(\delta_{\rm S}\approx 4\rm\AA\) for the Stern layer (see for example the detailed analysis in [12]) one obtains \(C_{\rm S}\approx 4.4\,\mu{\rm F/cm^{2}}\), not inconsistent with our measured values for the NaCl electrolytes.
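The Stern-layer estimate quoted above is easy to verify numerically. The short sketch below uses the representative values \(\epsilon_{\rm S}\approx 2\) and \(\delta_{\rm S}\approx 4\,\)Å from the text, together with an illustrative diffuse-layer capacitance \(\epsilon/\ell\) of our own choosing, to show that the series combination is indeed bounded by \(C_{\rm S}\).

```python
eps0 = 8.854e-12                             # vacuum permittivity, F/m

# Stern layer: representative values quoted in the text
eps_S, delta_S = 2.0, 4.0e-10                # relative permittivity, thickness (m)
C_S = eps_S * eps0 / delta_S                 # F/m^2
print(f"C_S   ~ {C_S * 1e2:.1f} uF/cm^2")    # 1 F/m^2 = 100 uF/cm^2  ->  ~4.4 uF/cm^2

# Series combination with an illustrative diffuse-layer capacitance eps/ell
C_diffuse = 80.0 * eps0 / 0.8e-9             # F/m^2, for ell ~ 0.8 nm
C_EDL = C_diffuse * C_S / (C_diffuse + C_S)
print(f"C_EDL ~ {C_EDL * 1e2:.1f} uF/cm^2")  # bounded by C_S
```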
For the conductivity \(\sigma\), at low ionic strength (the \(10\,{\rm mM}\) sample) there is rough agreement between the value obtained from the experiment (which assumes that the resistors labeled \(1/(\sigma\Delta x)\) in Fig. 4 correspond to the conductivity of the bulk electrolyte) and the actual bulk conductivity of the electrolyte. But for high ionic strength there is a discrepancy of more than an order of magnitude. On the other hand, the ionic liquid again seems to display "ideal" behavior in that there is rough agreement between the experimental value and the actual bulk conductivity.
Next to ionic strength, temperature is another thermodynamic control parameter in the experiments, affecting the Debye layer, the Stern layer, and the bulk conductivity, among other factors. Without providing a comprehensive analysis in this Letter, we show in Fig. 7 a series of measurements at different temperatures, for the \(100\,{\rm mM}\) salt. The data for \(25\), \(15\), \(5\), and \(0\,{}^{\circ}{\rm C}\) are from the same sample, while \(-10\) and \(-15\)\({}^{\circ}{\rm C}\) are from another. The monotonic rise of the high frequency amplitude plateau with decreasing temperature, and corresponding behavior of the phase, are due to the decrease in the conductivity of the electrolyte (due to the decreased ion mobility) with decreasing temperature. At \(-15\,{}^{\circ}{\rm C}\) the sample is frozen; there is no Debye layer (the conductivity \(\sigma\) and capacitance per unit length \(c\) are essentially zero) and we obtain a featureless response (\(V_{\rm out}/V_{\rm in}\approx 1\) and \(\Delta\theta\approx 0\)). Note that under our conditions the \(100\,{\rm mM}\) samples freeze at about \(-10\)\({}^{\circ}{\rm C}\), because the cell keeps the sample at approximately constant volume and therefore under pressure when frozen.
## 5 Summary and Discussion
We have introduced a setup in which the Debye layer at the electrode-electrolyte interface forms part of a transmission line along the electrode. Measurements of the voltage along the electrode, alternative to traditional current measurements with equipotential electrodes, reflect the dynamics of the Debye layer. We present measurements for buffered NaCl solutions of different ionic strengths, in the low voltage regime where hydrolysis and other chemical reactions are unimportant. We also present a set of measurements for an ionic liquid. We find that the traditional view of the EDL as a (frequency independent) capacitance describes the dynamics fairly well in some cases, but not others. Specifically, for the low salt concentration electrolyte (\(10\,{\rm mM}\) NaCl), and, at the other end, for the ionic liquid, the frequency dependence of the measurements is well described by the model of Fig. 4. However, for the more concentrated \(100\,{\rm mM}\) salt electrolyte, and more so for the \(1\,{\rm M}\) concentration, there are large discrepancies between the measurements and the model. The two parameters measured from the fits, \(\omega_{rc}\) and \(\omega_{c\sigma}\), can be converted, within this model, into an effective capacitance for the EDL and an effective ionic conductivity for the electrolyte. In all cases, the capacitance values thus measured are an order of magnitude or more smaller than the values calculated for a parallel plates capacitor of thickness a Debye length \(\ell\), and using the bulk static dielectric constant of the medium. This behavior is attributed to the existence of a Stern layer of immobilized electrolyte molecules at the electrode [12, 14]. The conductivity values measured from the fits are roughly consistent with the actual bulk ionic conductivity of the electrolyte for the very dilute salt solution (\(10\,{\rm mM}\)) and for the ionic liquid. However for the more concentrated salt solutions the conductivity from the fits clearly does not represent the bulk conductivity. Phenomenologically one could possibly invoke a reduced mobility of the ions in the region of the Stern / Debye layer. The more fundamental conclusion is however that the impedance model Fig. 4 misses part of the physics.
The hydrodynamics of interacting ions close to a surface forms indeed an interesting mathematical problem, due to the range of scales: the Debye layer at the \(\rm nm\) scale, the bulk
Figure 7: Effect of temperature on the amplitude and phase of \(V_{\rm out}/V_{\rm in}\), measured for the \(100\,{\rm mM}\) NaCl solution; drive amplitude \(V_{\rm in}=24\,{\rm mV}\). The behavior is monotonic with temperature; the different plots correspond to \(T=25^{\circ}{\rm C}\) (filled squares), and \(15,5,0,-10,-15\,{}^{\circ}{\rm C}\). At \(-15\,{}^{\circ}{\rm C}\) (open diamonds) the sample is frozen (whereas at \(-10\,{}^{\circ}{\rm C}\) (open circles) it is still liquid, due to the increased pressure).
electrolyte at the \(\mu\)m or \(\rm mm\) scale. The method of matched asymptotic expansions has been used in this context [1, 5]. The impedance elements models (such as Fig. 4) circumvent the mathematical difficulties by placing a resistance (\(1/\sigma\) in Fig. 4) for the bulk electrolyte, but this is unsatisfactory in general. The transmission line model based on impedances was introduced in the 60's to describe electrochemistry at a porous electrode [2], a subject of renewed interest today [3]. Remarkably, this simple approach seems to work well for the ionic liquid. Ion - solvent interactions are absent in this case, a situation analogous to the ideal behavior of a polymer melt. Moreover, measurements with the surface force apparatus indicate that ionic liquids behave as dilute electrolytes in terms of the static properties of the diffuse layer [16, 17], apparently for the reason that only a small fraction of the charges are effectively dissociated.
From a purely experimental point of view, looking at the columns \(C\) and \(\sigma\) in Table 1, one would think that what is measured are properties of the electrode rather than the different electrolytes. In fact it is well known that even in the static case, the structure of the EDL is more complicated than the result of the mean field Gouy-Chapman-Stern theory suggests [6]. For high ionic strength, the ion density profile away from the electrode is oscillatory rather than monotonic [12], while the Debye length may increase with salt at high enough concentrations [8]. The non-monotonicity with increasing salt concentration visible in the plots of Fig. 3 may be related to this latter phenomenon; further measurements with this setup at higher salt concentrations could be informative. Similarly it should be interesting to compare our measurements with the predictions from continuum theories which take into account the finite size of the ions and ion-ion interactions [5, 18].
Measurements with this setup may be extended in a number of ways. A DC bias can be added to the drive, in order to probe the dynamics with different electrode potentials. This is routinely done in EIS, where a third (reference) electrode is normally used to standardize the measurements. The high voltage regime (\(|e|V/k_{\rm B}T>1\)) will introduce nonlinearities and eventually new processes (water hydrolysis, potentially surface remodelling), and remains to be studied in this system. Coupling this transmission line configuration to redox chemical reactions in the electrolyte will generate reaction-diffusion systems where voltage is one phase space coordinate. These should be interesting as pattern forming systems.
###### Acknowledgements.
We thank Anastassia Alexandrova for suggesting the ionic liquid measurements.
|
2302.12505 | Spatial Bias for Attention-free Non-local Neural Networks | In this paper, we introduce the spatial bias to learn global knowledge
without self-attention in convolutional neural networks. Owing to the limited
receptive field, conventional convolutional neural networks suffer from
learning long-range dependencies. Non-local neural networks have struggled to
learn global knowledge, but unavoidably have too heavy a network design due to
the self-attention operation. Therefore, we propose a fast and lightweight
spatial bias that efficiently encodes global knowledge without self-attention
on convolutional neural networks. Spatial bias is stacked on the feature map
and convolved together to adjust the spatial structure of the convolutional
features. Therefore, we learn the global knowledge on the convolution layer
directly with very few additional resources. Our method is very fast and
lightweight due to the attention-free non-local method while improving the
performance of neural networks considerably. Compared to non-local neural
networks, the spatial bias uses about 10 times fewer parameters while achieving
comparable performance with 1.6 ~ 3.3 times more throughput on a very little
budget. Furthermore, the spatial bias can be used with conventional non-local
neural networks to further improve the performance of the backbone model. We
show that the spatial bias achieves competitive performance that improves the
classification accuracy by +0.79% and +1.5% on ImageNet-1K and cifar100
datasets. Additionally, we validate our method on the MS-COCO and ADE20K
datasets for downstream tasks involving object detection and semantic
segmentation. | Junhyung Go, Jongbin Ryu | 2023-02-24T08:16:16Z | http://arxiv.org/abs/2302.12505v1 | # Spatial Bias for Attention-free Non-local Neural Networks
###### Abstract
In this paper, we introduce the spatial bias to learn global knowledge without self-attention in convolutional neural networks. Owing to the limited receptive field, conventional convolutional neural networks suffer from learning long-range dependencies. Non-local neural networks have struggled to learn global knowledge, but unavoidably have too heavy a network design due to the self-attention operation. Therefore, we propose a fast and lightweight spatial bias that efficiently encodes global knowledge without self-attention on convolutional neural networks. Spatial bias is stacked on the feature map and convolved together to adjust the spatial structure of the convolutional features. Therefore, we learn the global knowledge on the convolution layer directly with very few additional resources. Our method is very fast and lightweight due to the attention-free non-local method while improving the performance of neural networks considerably. Compared to non-local neural networks, the spatial bias uses about \(\times 10\) fewer parameters while achieving comparable performance with \(1.6\sim 3.3\) times more throughput on a very small budget. Furthermore, the spatial bias can be used with conventional non-local neural networks to further improve the performance of the backbone model. We show that the spatial bias achieves competitive performance that improves the classification accuracy by \(+0.79\%\) and \(+1.5\%\) on the ImageNet-1K and CIFAR-100 datasets. Additionally, we validate our method on the MS-COCO and ADE20K datasets for downstream tasks involving object detection and semantic segmentation.
keywords: Non-local operation, Long-range dependency, Spatial bias, Global context, Image classification, Convolutional neural networks +
Footnote †: journal: Expert Systems with Applications
## 1 Introduction
Convolutional neural networks (CNNs) excel in extracting nuanced local information. Thanks to this advantage, CNNs are utilized for a variety of visual recognition tasks. However, their inability to effectively capture the global context has been noted repeatedly. Due to the limited receptive field size, the convolution focuses on a small region and learns only local information; to overcome this, several approaches to increase the receptive field size have been extensively studied, such as building deeper layers He et al. (2016), employing different kernel sizes Szegedy et al. (2015); Li et al. (2019); Li and Zhang (2022), and learning non-local pixel-level pairwise relationships Wang et al. (2018); Cao et al. (2019); Fang et al. (2021); You et al. (2022); Ding et al. (2023); Cho et al. (2022); Chi et al. (2020); Xie et al. (2022). Among these methods, self-attention based non-local neural networks Wang et al. (2018) have been a major approach to capture long-range information. However, they exploit an excessive amount of resources because of the self-attention operation. Therefore, this paper presents a novel lightweight method that directly learns the long-range dependency during the convolution operation. The proposed method acquires global knowledge by
Figure 1: Performance comparison of the proposed method and conventional non-local neural networks on the ImageNet-1K dataset. \(\bullet\) denotes the naive ResNet backbone and conventional non-local neural networks, and \(\star\) denotes the performance improvement of the conventional networks with our spatial bias. In all cases, the proposed spatial bias enhances the networks with minimal computational complexity.
incorporating a spatial bias term into the convolution operation. A spatial bias with long-range correlation is added to the position in which convolution is performed.
The proposed method allows for the simultaneous learning of local information from the convolution term and global knowledge from the spatial bias term. In addition, a minimal amount of resources are used for the proposed spatial bias term, and thus our method is very fast and lightweight compared to the conventional self-attention based non-local neural networks. We extensively carry out experiments on the number of parameters, FLOPs, throughput, and accuracy to show the efficacy of the proposed spatial bias method. As shown in Fig. 1, the inference time overhead of our spatial bias is **1.6** to **3.3** times less than that of non-local neural network while achieving competitive performance compared to the non-local networks Wang et al. (2018); Cao et al. (2019); Chi et al. (2020). The proposed spatial bias further improves the performance of backbone networks in conjunction with existing self-attention operations. The following is a summary of the contributions regarding our spatial bias.
* We introduce a spatial bias that takes into account both local and global knowledge in a convolution operation. Thanks to the proposed lightweight architecture, the spatial bias term is computed very quickly and with a small amount of overhead.
* We show that the proposed spatial bias term significantly improves performance while incurring very modest overheads: in the case of the ResNet-50 backbone, the parameter overhead of the spatial bias amounts to only 6.4%, and it is \(\times 3.3\) faster, compared to the non-local neural network (NLNet) Wang et al. (2018).
* We verify the generalizability of the proposed spatial bias by combining it with other non-local methods and backbones. We also confirm that spatial bias improves performance in downstream tasks.
## 2 Related Work
Non-local Neural Network with Self-attentionThe non-local neural networks Wang et al. (2018); Cao et al. (2019); Chi et al. (2020) using self-attention operation that learns long-range dependency has been most widely studied. Unlike
convolution operation, self-attention learns global knowledge in a single layer, which alleviates the narrow receptive field problem of CNNs. This approach performs well when applied to a variety of visual tasks. In particular, NLNet Wang et al. (2018) is the first study to exploit the self-attention operation for learning the pairwise relationship at the global pixel-level. However, since NLNet computes an attention map independently for each query location, its computational complexity is very high. For different query locations, GCNet Cao et al. (2019) generates similar attention maps, thereby modeling an efficient non-local block. In order to create lighter non-local networks, CCNet Huang et al. (2019), A2Net Chen et al. (2018), and BAT Chi et al. (2020) have been introduced by using an iterative cross-attention module Huang et al. (2019), dual-attention method Chen et al. (2018), and data-adaptive bilinear attentional transformation Chi et al. (2020). Fang et al. (2021) proposes a location-based attention method that distills positive information in a global context. Recently, the non-local neural networks are
Figure 2: Grad-Cam Selvaraju et al. (2017) visualization for spatial bias and convolution feature map. Notably, the grad-cam on the spatial bias exposes wider regions, which aids in learning global knowledge.
applied to various tasks such as histopathological image classification Ding et al. (2023) and hand detection Xie et al. (2022). These methods have contributed to the design paradigm of non-local neural networks by reducing the overhead of the self-attention operation. However, we argue that they still suffer from a fatal flaw in that their computational cost is quadratic, \(O(n^{2})\)1, which causes a slowdown of inference time.
Footnote 1: \(n\) indicates the number of all positions in the feature map
Due to the heavy design of self-attention, conventional non-local methods are inserted only at specific layers of a convolutional neural network in consideration of throughput and network size. Additionally, since traditional non-local operations only consider spatial correlation by merging channels, they are blind to any channel correlation. Therefore, to overcome these limitations, we propose spatial bias, an attention-free non-local neural network with a fast and lightweight architecture. In comparison to self-attention based non-local neural networks, the proposed spatial bias achieves comparable performance with \(1.6\sim 3.3\) times more throughput on a very small budget. Additionally, the lightweight spatial bias can be employed across all network levels, complementing the existing heavy self-attention that can only be applied to particular layers. Thus, our spatial bias enables a network to learn more global knowledge due to its fast and lightweight properties.
_Architecture Modeling._ Recently, effective neural networks have shown advances across many visual recognition tasks. The modern CNN architecture conceived by LeNet LeCun et al. (1998) was realized a decade ago by AlexNet Krizhevsky et al. (2012), and various studies have been conducted to improve its performance and usefulness. Since then, CNNs with small filter sizes Simonyan and Zisserman (2014) have been developed to encode more precise regional data. Consequently, skip connections have made it possible to build deeper networks He et al. (2016), and several studies have been done to increase expressiveness by varying the width, multi-path block design, or grouped feature flow of neural networks Zagoruyko and Komodakis (2016); Szegedy et al. (2015); Xie et al. (2017); Gao and Zhou (2023); Schwarz Schuler et al. (2022). Through a multi-branch design, recent CNN architectures Szegedy et al. (2015); Li et al. (2019); Zhang et al. (2020); Gao et al. (2019); Guo et al. (2021) communicate information between branches. Additionally, several methods Wang et al. (2018); Chi et al. (2020); Cao et al. (2019) capture long-range dependencies by taking advantage of the self-attention operation that guarantees a better understanding of global knowledge for the visual recognition task.
## 3 Proposed Method
The convolution utilizes a shared weight within a limited local window, allowing CNN to have the property of translational equivalence Zhang (2019). Recently, this property has been identified as the inductive bias Baxter (2000), and it has been stated repeatedly that convolution is not particularly effective at learning the relationship between long-range objects Wang et al. (2018). To address this problem, we propose a spatial bias term to compensate for these shortcomings in the convolution. Different from the existing method Wang et al. (2018); Cao et al. (2019); Chi et al. (2020) using the heavy self-attention module, the proposed method learns the global knowledge by adding a few spatial bias channels to the convolution feature map. Inspired by parallel network designs Szegedy et al. (2015); Li et al. (2019); Zhang et al. (2020); Gao et al. (2019), we devise the parallel branches that could complement long-range dependencies of a network as shown in Fig. 3. To generate the spatial bias, we encode long-range dependencies by compressing feature maps in channel and spatial directions. Then, we extend it to concatenate the spatial bias map and the convolutional feature map in the channel direction. Global knowledge from spatial bias is aggregated with the local features of the convolutional feature map, so the network learns both
Figure 3: Design of Spatial Bias term. The figure on the left shows the overall workflow and the right one shows its detail. In the right figure, to capture the global dependency, the channel and spatial size of the feature map are reduced through the \(1\times 1\) convolution and average pooling operations. We aggregate spatial bias on a reduced feature map using a simple 1D convolution operation.
local and global information. As shown in Fig. 2, the spatial bias learns a wider region while convolution focuses on the local features in an image. Therefore, the CNN with the spatial bias learns richer information through our concatenated feature map. The following section introduces the specific process of aggregating the convolutional feature map and the spatial bias.
Generating Spatial Bias Map. Let the input feature map \(X\) of a convolution layer be \(X\in\mathbb{R}^{H\times W\times C}\). We compress this feature map in the channel and spatial directions. Specifically, we use a \(1\times 1\) convolution for channel reduction, so that the feature map has \(C^{\prime}\) channels. Then, we adopt an average pooling layer for the spatial compression, which yields \(P\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times C^{\prime}}\). We get a transformed feature map by flattening each channel of the feature map \(P\) into a 1D vector, \(Q\in\mathbb{R}^{1\times C^{\prime}\times H^{\prime}W^{\prime}}\).
To aggregate global knowledge on this transformed feature map, we exploit a \(1\times N\) convolution in the channel dimension, where \(N\) is larger than 1, so that the inter-channel relationships are encoded into the global knowledge. The spatial bias map is then upsampled to the same size as the convolutional feature map using bilinear interpolation. The upsampled spatial bias map is concatenated with the convolutional feature map as in Eq. 1.
\[Output=ReLU(BN[Conv(X),SB]), \tag{1}\]
where \(X\) denotes an input feature map, \(Conv()\) and \(SB\) denote a standard convolution and a spatial bias, and [,] is the concatenate operation. After concatenation, the resultant feature map is sent through batch normalization and nonlinear activation layers.
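As an illustration, a minimal PyTorch sketch of the branch described above is given below. The hyper-parameter names, the exact layout of the \(1\times N\) convolution on the flattened map, and the module structure are our reading of this section and Eq. (1), not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialBias(nn.Module):
    """Sketch of the spatial-bias branch: channel reduction, spatial pooling,
    lightweight 1xN aggregation, and bilinear upsampling back to (H, W)."""

    def __init__(self, in_channels, reduced_channels=64,
                 pooled_size=10, bias_channels=3, kernel_size=3):
        super().__init__()
        self.pooled_size = pooled_size
        self.reduce = nn.Conv2d(in_channels, reduced_channels, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(pooled_size)
        # 1xN convolution over the compressed map, producing K bias channels.
        self.aggregate = nn.Conv1d(reduced_channels, bias_channels,
                                   kernel_size=kernel_size,
                                   padding=kernel_size // 2)

    def forward(self, x):
        b, _, h, w = x.shape
        q = self.pool(self.reduce(x))          # (B, C', H', W')
        q = q.flatten(2)                       # (B, C', H'W')
        sb = self.aggregate(q)                 # (B, K, H'W')
        sb = sb.view(b, -1, self.pooled_size, self.pooled_size)
        # Upsample the bias map back to the spatial size of the conv features.
        return F.interpolate(sb, size=(h, w), mode='bilinear',
                             align_corners=False)

class SpatiallyBiasedConv(nn.Module):
    """Convolution whose output is concatenated with the spatial bias (Eq. 1)."""

    def __init__(self, in_channels, out_channels, bias_channels=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                              padding=1, bias=False)
        self.spatial_bias = SpatialBias(in_channels, bias_channels=bias_channels)
        self.bn = nn.BatchNorm2d(out_channels + bias_channels)

    def forward(self, x):
        out = torch.cat([self.conv(x), self.spatial_bias(x)], dim=1)
        return F.relu(self.bn(out))

# Quick shape check
x = torch.randn(2, 256, 56, 56)
y = SpatiallyBiasedConv(256, 256)(x)
print(y.shape)   # torch.Size([2, 259, 56, 56])
```

Note that the output of such a block carries a few extra channels (the bias channels), which is why the overall parameter overhead reported later remains small.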
Convolution with Spatial Bias. In general, naive convolution employs modest kernel sizes (_e.g._, \(3\times 3\)). To compensate for the limited kernel size, a self-attention module is added after specific convolution layers to learn global knowledge. In other words, the heavy self-attention operation is applied independently, which increases parameters and the computational budget extensively. The proposed spatial bias and convolutional features are complementary to each other. The spatial bias extracts long-range dependencies, which complement the short-range dependencies of the convolution operation. The proposed spatially biased convolution needs only minimal overhead over the convolution operation due to our self-attention free approach. We aim to learn both local and global knowledge in the convolution layer directly, without an additional module.
Complexity of Spatial Bias. In this paragraph, we discuss the complexity of the proposed spatial bias in comparison with the self-attention operation. Suppose the input is \(X\in\mathbb{R}^{H\times W\times C}\); the self-attention mechanism has a computational complexity of \(O((HW)^{2}C)\approx O((HW)^{2})\), because the self-attention operation applies three projections to obtain queries, keys and values and computes dot products between queries and keys. On the other hand, the proposed spatial bias reduces the feature map \(X\) to a fixed size. Therefore, the complexity of the spatial bias is \(O(H^{\prime}W^{\prime}sf)\approx O(H^{\prime}W^{\prime}f)\), where \(s\) and \(f\) denote the kernel size and the number of filters. Since the number of filters is the same as \(H^{\prime}W^{\prime}\), the spatial bias has a complexity of \(O((H^{\prime}W^{\prime})^{2})\). The reduced height \(H^{\prime}\) and width \(W^{\prime}\) are fixed constants, so the computational complexity is ideally \(O(1)\).
Therefore, we get very fast and lightweight operation that can be inserted into any convolutional layers. In the experiment section, we show its effectiveness with regard to the throughput, parameters, and computational overhead as well as performance improvement of CNNs.
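To give a feeling for what this asymptotic argument means in practice, the rough multiply-accumulate counts below compare a single self-attention block with the spatial-bias branch on a typical early-stage feature map. The constants (reduced channel width, pooled size, kernel size) are illustrative choices on our part, and the counts are deliberately coarse rather than a reproduction of the FLOP numbers reported in the tables.

```python
def self_attention_ops(h, w, c):
    """Very rough multiply-accumulate count of one self-attention block."""
    n = h * w
    return 3 * n * c * c + 2 * n * n * c            # Q/K/V projections + QK^T and AV

def spatial_bias_ops(h, w, c, c_red=64, hp=10, wp=10, n_kernel=3):
    """Very rough count of the spatial-bias branch (1x1 reduction + 1xN aggregation)."""
    reduction = h * w * c * c_red                   # 1x1 convolution to C' channels
    aggregation = (hp * wp) * n_kernel * (hp * wp)  # paper's O(H'W' * s * f) with f = H'W'
    return reduction + aggregation

h, w, c = 56, 56, 256
print(f"self-attention : ~{self_attention_ops(h, w, c):.2e} MACs")
print(f"spatial bias   : ~{spatial_bias_ops(h, w, c):.2e} MACs")
```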
On the Comparison with Squeeze and Excitation. The general channel attention methods (_i.e._, SENet, SKNet) Hu et al. (2018); Li et al. (2019) capture the channel-wise summarized information of the feature map and then learn the relationship between the channels to adjust the feature map, so the spatial correlation is not learned. On the other hand, the proposed spatial bias extracts spatially compressed information and then expands it toward the channel direction. That is, dependence in the spatial-channel direction can be aggregated at once with only a general convolution operation. In addition, while SE-like methods refine the channel importance of the output feature map of the convolution layer, the proposed method learns different information in that it learns the channel and spatial information together in the convolution process directly, by adding a bias to the feature map. Therefore, as shown in Table 8, the proposed spatial bias is more efficient than SE-like methods and additionally improves the performance of the backbone when combined with them.
## 4 Experiments
In this section, we first perform ablation studies on the proposed spatial bias, then compare it with the conventional non-local neural networks with self-attention operation. We, then, provide the experimental analysis
of OOD and shape bias to support the effectiveness of the spatial bias. Finally, we show the experimental result on the object detection and semantic segmentation tasks.
### Experimental result on CIFAR-100
Setup. We report the mean accuracy of three experiments with different random seeds on the CIFAR-100 dataset for a reliable comparison. We set the training recipe with reference to Yun et al. (2019). We use a \(32\times 32\) image size and 64 samples per mini-batch for 300 epochs. We initially set the learning rate to 0.25 and decay it by a factor of 0.1 every 75 epochs.
\begin{table}
\begin{tabular}{|c|c|c c c c|} \hline \hline Network & Stage & Param. & MFLOPs & Top-1 Error (\%) & Top-5 Error (\%) \\ \hline \multirow{4}{*}{ResNet-38} & - & 0.43M & 62.2 & 23.98 \(\pm\)0.23 & 5.73 \(\pm\)0.20 \\ & \(s_{1}\) & 0.45M & 63.6 & 23.86 \(\pm\)0.08 & 5.60 \(\pm\)0.13 \\ \cline{2-6} & \(s_{1},s_{2}\) & 0.46M & 64.2 & 22.44 \(\pm\)0.10 & 4.99 \(\pm\)0.05 \\ & \(s_{1},s_{2},s_{3}\) & 0.48M & 64.6 & 22.46 \(\pm\)0.25 & 5.21 \(\pm\)0.30 \\ \hline \multirow{4}{*}{ResNet-65} & - & 0.71M & 103.3 & 21.87 \(\pm\)0.35 & 5.33 \(\pm\)0.18 \\ & \(s_{1}\) & 0.74M & 105.8 & 20.81 \(\pm\)0.41 & 4.73 \(\pm\)0.09 \\ \cline{2-6} & \(s_{1},s_{2}\) & 0.77M & 106.9 & 20.77 \(\pm\)0.14 & 4.67 \(\pm\)0.29 \\ \cline{2-6} & \(s_{1},s_{2},s_{3}\) & 0.80M & 107.5 & 20.37 \(\pm\)0.20 & 4.61 \(\pm\)0.09 \\ \hline \multirow{4}{*}{ResNet-110} & - & 1.17M & 171.9 & 20.59 \(\pm\)0.38 & 4.96 \(\pm\)0.10 \\ & \(s_{1}\) & 1.22M & 176.1 & 19.97 \(\pm\)0.08 & 4.55 \(\pm\)0.06 \\ \cline{1-1} \cline{2-6} & \(s_{1}\),\(s_{2}\) & 1.28M & 178.0 & 19.42 \(\pm\)0.20 & 4.38 \(\pm\)0.06 \\ \cline{1-1} \cline{2-6} & \(s_{1},s_{2},s_{3}\) & 1.34M & 179.1 & 19.65 \(\pm\)0.68 & 4.80 \(\pm\)0.21 \\ \hline \end{tabular}
\end{table}
Table 1: Experimental result on CIFAR-100 through the proposed Spatial Bias(**3-bias channels**). Here, \(s_{\#}\) denotes the stage index of the ResNet architecture after the stem cell.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Position & Stage & Top-1 Error (\%) & Param. \\ \hline \multirow{2}{*}{**Conv1**} & \(s_{1},s_{2}\) & 20.57 & 0.78M \\ & \(s_{1},s_{2},s_{3}\) & 20.56 & 0.83M \\ \hline \multirow{2}{*}{**Conv2**} & \(s_{1},s_{2}\) & 20.77 & 0.77M \\ & \(s_{1},s_{2},s_{3}\) & 20.37 & 0.80M \\ \hline \end{tabular}
\end{table}
Table 2: Experimental result on the comparison of insertion position in a bottleneck. We add spatial bias in parallel after **Conv1** or **Conv2**. The out channels of **Conv2** is reduced so that the spatial bias after **Conv2** has less parameters.
Position of Spatial Bias. Table 1 summarizes the performance comparison of spatial bias positions in ResNet stages. Since the spatial bias compresses the spatial resolution to a fixed size (_i.e._, 6 for CIFAR-100, 10 for ImageNet, Table 3), the overhead of parameters and computational budget is very small regardless of the stage. When we apply the spatial bias to stage 1 (\(s1\)) and stage 2 (\(s2\)), the performance of the ResNet backbone is improved considerably. However, in the last stage (\(s3\)), there is no performance improvement with the spatial bias. We assume that the spatial size of the last stage is too small, so that the global knowledge disappears.
Further, we validate the position of the spatial bias within a residual bottleneck. Table 2 shows the performance comparison for the insertion position of the spatial bias after **Conv1** or **Conv2** in a bottleneck. We confirm that the spatial bias after **Conv2** reduces the parameters while achieving performance similar to that after **Conv1**.
Number of Spatial Bias Channels. We compare the performance for different numbers of spatial bias channels. The channels of the spatial bias represent its importance in the concatenated output, and thus the more channels are used, the more global knowledge will be learned from the spatial bias. As shown in Table 4, we confirm that the performance and overhead are optimal when three to four channels are used (**Bias-3** and **Bias-4**), but the performance is degraded beyond that (_i.e._, **Bias-5** and **Bias-6**). It is inferred that if too much spatial bias is used, the convolution features are damaged, which causes the performance of the entire network to deteriorate.
Component analysis. We perform an ablation study on the components of the spatial bias. **Add** in Table 5 denotes that the spatial bias is added to the feature map. In addition, the average pooling layer is replaced by a maximum pooling layer (**Maxpool**). Lastly, the global context is aggregated
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Network} & \multirow{2}{*}{Param.} & \multirow{2}{*}{Top-1 Error (\%)} & Throughput (sample/sec) \\ \hline ResNet-65 & 0.71M & 21.87 & 12,816 \\ \(SB_{6}\) & 0.77M & 20.77 & 10,276 \\ \(SB_{10}\) & 1.13M & 20.48 & 10,221 \\ \(SB_{14}\) & 2.33M & 20.84 & 10,267 \\ \(SB_{16}\) & 3.47M & 20.75 & 10,467 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Various size for Spatial bias on CIFAR-100 (_i.e._, \(SB_{6}\) denotes compression size of 6).
with only the average pooling layer (**Pool only**) to verify the contribution of the key operations.
### Experimental results on ImageNet
Setup. In this section, we present experimental results on ImageNet-1K, which includes more than 1.2 million images with 1,000 class labels. We validate our method on the ImageNet dataset using two training recipes. First, we use the training recipe of Wightman et al. (2021) to demonstrate the effectiveness of the proposed spatial bias. In this recipe, the training mini-batch size is set to 512 with 100 epochs using a \(160\times 160\) input size. We initialize the learning rate to 8e-3 with a cosine learning rate scheduler. For optimization, we use LAMB You et al. (2019), which is suitable for training with a large batch size. Second, we follow the training recipe of NLNet Chi et al. (2020) for a fair comparison with state-of-the-art non-local neural networks. Specifically, we use input images cropped to \(224\times 224\). The initial learning rate of 0.1 is reduced by a factor of 0.1 after 30, 60, and 80 epochs. We use a batch size of 256
\begin{table}
\begin{tabular}{c|c c} \hline \hline Method & Param. & Top-1 Error (\%) \\ \hline Bias-0 & 0.71M & 21.87 \\ \hline Bias-1 & 0.76M & 20.91 \\ \hline Bias-2 & 0.77M & 20.99 \\ \hline Bias-3 & 0.77M & **20.77** \\ \hline Bias-4 & 0.77M & **20.60** \\ \hline Bias-5 & 0.77M & 21.06 \\ \hline Bias-6 & 0.77M & 20.82 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Experimental results on the number of spatial bias channels in CIFAR-100. Bias-# indicates the number of channels. We use ResNet-65 as the backbone networks. We add the spatial bias to the stage 1 and 2.
\begin{table}
\begin{tabular}{c c c} \hline \hline Network & Param. & Top-1 Error (\%) \\ \hline Add & 0.76M & 20.93 \\ Maxpool & 0.77M & 21.03 \\ Pool only & 0.71M & 22.34 \\ \hline \(SB_{6}\) & 0.77M & 20.77 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Component analysis for Spatial bias on CIFAR-100.
and the stochastic gradient descent (SGD) optimizer.
Result. We perform experiments on the position of the spatial bias and its channel size. We compare the performance of adding the spatial bias at different stages of the ResNet backbone. As shown in Table 6, the spatial bias has the best performance when adding it from stage 1 to stage 3 (\(s1,s2,s3\)). This is the same as the CIFAR-100 result, where the spatial bias does not work on the small spatial size of the last stage. In addition, when the spatial bias is not used at the first stage (\(s1\)), the performance increase is insignificant. This result means that, as in previous studies Wang et al. (2018); Chi et al. (2020), much global knowledge exists in the earlier layers with high resolution, and thus the effect of the spatial bias is greater there.
Table 7 shows the performance comparison for the number of spatial bias channels on the ImageNet dataset. When 3\(\sim\)4 channels of the spatial bias are added, the performance is improved by +0.58% compared to the baseline, while wider channels (5\(\sim\)6) degrade the performance. This result also confirms that an appropriate number of channels should be used to avoid damaging the convolution feature map.
In Table 9, we compare the performance of the spatial bias and conventional non-local neural networks Wang et al. (2018); Cao et al. (2019); Chi et al. (2020). Our spatial bias needs a minimal parameter overhead compared to
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Method & Stage & Param. & Top-1 (\%) & Top-5 (\%) \\ \hline \hline Bias-0 & - & 25.56M & 76.42 & 92.87 \\ \hline \multirow{4}{*}{Bias-3} & \(s_{1},s_{3}\) & 25.86M & 76.68 & 93.13 \\ & \(s_{2},s_{3}\) & 25.89M & 77.00 & 93.00 \\ \cline{1-1} & \(s_{1},s_{2},s_{3}\) & 25.99M & **77.11** & 93.19 \\ \cline{1-1} & \(s_{1},s_{2},s_{3},s_{4}\) & 26.02M & 76.70 & 93.08 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Experimental results on the position of spatial bias in ImageNet-1K.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Bias-\# & 1 & 2 & 3 & 4 & 5 \\ \hline \hline Top-1 (\%) & 76.70 & 76.84 & **77.00** & **77.00** & 76.66 \\ \hline Param. & 25.87M & 25.88M & **25.89M** & **25.90M** & 25.91M \\ \hline \hline \end{tabular}
\end{table}
Table 7: Experimental result on the number of spatial bias channels in ImageNet-1K.
NLNet Wang et al. (2018), so that ours is faster by 3.3 times. Compared with improved versions of non-local neural networks (_i.e._, GCNet and BAT) Cao et al. (2019); Chi et al. (2020), the computational budget of the spatial bias is much cheaper while achieving comparable performance. In particular, existing non-local methods (NLNet, GCNet, and BAT) apply the self-attention module only in specific layers due to their heavy design, but the proposed spatial bias can be used in all layers with minimal overhead. Therefore, our spatial bias can be combined with existing non-local methods in a network to further improve their performance. We also compare the methods by visualizing the attention maps of the spatial bias and other non-local neural networks. As shown in Fig. 4, although the proposed spatial bias is simple and straightforward, its visualization results are comparable to those of complex self-attention-based non-local neural networks.
### Comparison with Compressed Self-attention
We conduct further experiments on compressed non-local neural networks. We construct a compressed NLNet-50 by compressing the features to \(10\times 10\) with the same average pooling used in our spatial bias. As shown in Table 10, we confirm that the compressed NLNet-50 gains little in parameters and latency, while its performance deteriorates.
### OOD distortion Robustness and Shape bias
In this section, we verify the performance of the proposed spatial bias on out-of-distribution (OOD) datasets, i.e., datasets whose statistics differ from those of the training data.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Network & Param. & GFLOPs & Top-1 (\%) & Top-5 (\%) & Throughput (sample/sec) \\ \hline ResNet-50 & 25.56M & 4.11 & 76.05 & 92.80 & 2042 \\ SRM-ResNet-50\({}^{4}\) & 25.62M & 4.15(\(\Delta\)0.04) & 77.10 & - & 1243(\(\Delta\)799) \\ GE-ResNet-50\({}^{4}\) & 31.12M & 4.14(\(\Delta\)0.03) & 76.80 & - & 1365(\(\Delta\)677) \\ SE-ResNet-50 & 28.09M & 4.14(\(\Delta\)0.03) & 76.84 & 93.45 & 1787(\(\Delta\)255) \\ SK-ResNet-50 & 27.49M & 4.47(\(\Delta\)0.36) & 77.56 & 93.62 & 1557(\(\Delta\)485) \\ SB-ResNet-50 & 25.99M & 4.13(\(\Delta\)0.02) & 76.86 & 93.33 & 1836(\(\Delta\)206) \\ \hline SB-SE-ResNet-50 & 28.52M & 4.16(\(\Delta\)0.05) & 77.10 & 93.59 & 1626(\(\Delta\)416) \\ SB-SK-ResNet-50 & 27.94M & 4.49(\(\Delta\)0.38) & 77.93 & 93.54 & 1440(\(\Delta\)602) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Experimental results on the standard attention operation. Unlike channel attention operation, spatial bias learns channel and spatial-wise dependencies to readjust the feature map. In addition, our spatial bias is lighter than channel attention operation, and has faster inference speed. Therefore, the channel attention network to which spatial bias is added improves performance with only a very small additional budget.
We conduct an OOD distortion robustness experiment on a total of 17 test datasets which have statistically different distributions. These include five datasets (sketches Wang et al. (2019), edge-filtered images, silhouettes, texture-shape cue conflict, and stylized images Geirhos et al. (2018a)) and 12 test datasets from Geirhos et al. (2018b). We compare the OOD robustness of the spatial bias and non-local neural networks using two different metrics (accuracy difference and observed consistency). In Table 11, the proposed spatial bias is more robust to OOD datasets compared to the conventional non-local neural networks. This result proves that the proposed spatial bias works well regardless of the data domain.
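For completeness, a small sketch of how such robustness metrics can be computed is given below. We assume the standard definitions used in the model-vs-human style benchmark of Geirhos et al., namely the absolute accuracy difference between model and human observers and the fraction of trials on which both are simultaneously correct or simultaneously wrong; the exact definitions used here are an assumption on our part, since the corresponding footnotes are not reproduced in this text.

```python
import numpy as np

def accuracy_difference(model_correct, human_correct):
    """Absolute difference between model and human accuracy (lower is better)."""
    return abs(np.mean(model_correct) - np.mean(human_correct))

def observed_consistency(model_correct, human_correct):
    """Fraction of trials on which model and human are both correct or both wrong."""
    model_correct = np.asarray(model_correct, dtype=bool)
    human_correct = np.asarray(human_correct, dtype=bool)
    return np.mean(model_correct == human_correct)

# Toy example with binary correctness indicators on one OOD test set
model = [1, 0, 1, 1, 0, 1]
human = [1, 1, 1, 0, 0, 1]
print(accuracy_difference(model, human), observed_consistency(model, human))
```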
\begin{table}
\begin{tabular}{l c c c} \hline \hline Network & Param. & Top-1 (\%) & Latency (step/sec) \\ \hline \hline NLNet-50 & 32.9M & 77.2 & \(\mathbf{\Delta}\)**689** \\ Reduced NLN & 32.9M & 77.0 & \(\mathbf{\Delta}\)**630** \\ SB (Ours) & 26.0M & 76.9 & \(\mathbf{\Delta}\)**206** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Comparison between non-local neural networks and spatial bias on ImageNet-1K.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Network & Param. & GFLOPs & Top-1 (\%) & Top-5 (\%) & Throughput (sample/sec) \\ \hline ResNet-50 & 25.56M & 4.11 & 76.05 & 92.80 & 2042 \\ NLNet-50 & 32.92M & 7.66(\(\Delta\)3.55) & 77.25 & 93.66 & 1353(\(\Delta\)689) \\ GCet-50\({}_{+all}\) & 28.08M & 4.12(\(\Delta\)0.01) & 76.93 & 93.25 & 1719(\(\Delta\)323) \\ BAT-50 & 30.23M & 5.41(\(\Delta\)1.30) & 77.78 & 94.01 & 1232(\(\Delta\)810) \\ \hline SB-ResNet-50 & 25.99M & 4.13(\(\Delta\)0.02) & 76.86 & 93.33 & 1836(\(\Delta\)206) \\ \hline SB-NLNet-50 & 33.35M & 7.68(\(\Delta\)3.57) & 77.59 & 93.74 & 1276(\(\Delta\)766) \\ SB-GCNet-50\({}_{+all}\) & 28.51M & 4.14(\(\Delta\)0.03) & 77.00 & 93.27 & 1613(\(\Delta\)429) \\ SB-BAT-50 & 30.66M & 5.43(\(\Delta\)1.32) & 78.06 & 93.97 & 1153(\(\Delta\)889) \\ \hline \hline \end{tabular}
\end{table}
Table 9: Experimental result on the comparison with state-of-the-art (SoTA) non-local neural networks. We compare the proposed attention-free spatial bias method with the self-attention based non-local neural networks. Our spatial bias (SB) improve the performance with very few additional resources compared to the SoTA methods. Additionally, due to our lightweight architecture, SB further improve the networks when combining with self-attention based non-local methods.
### Object Detection
In this subsection, we validate the performance of the spatial bias on the object detection task. In this experiment, we use Faster R-CNN Ren et al. (2015) and Cascade R-CNN Cai and Vasconcelos (2018) with FPN Lin et al. (2017), using 118k training and 5k validation images from the MS COCO-2017 dataset Lin et al. (2014). We use ResNet-50 backbones pretrained on the ImageNet dataset. Following the standard protocol Chen et al. (2019), we use a \(1\times\) learning rate schedule with 12 epochs. We exploit the SGD optimizer with a weight decay of 1e-4, a momentum of 0.9, and an initial learning rate of 0.02. Networks are trained on two A5000 GPUs with 8 samples per GPU. We resize images to a width of 800 while keeping the height below 1,333 pixels. As shown in Table 12, the networks with our spatial bias improve the performance of the detection models on all metrics.
Figure 4: Grad-CAM Selvaraju et al. (2017) visualization of our spatial bias (SB) and non-local methods. Our spatial bias focuses more on the discriminant parts of an object.
### Semantic Segmentation
We perform the evaluation of the semantic segmentation task using the ADE20K dataset Zhou et al. (2019). The FPN Lin et al. (2017) architecture is utilized as the baseline model to which the spatial bias is applied5. Segmentation networks are trained on two GPUs with 14 samples per GPU for 40K iterations. As for the detection networks, we use a ResNet-50 backbone model trained on ImageNet with a \(512\times 512\) input image size. We employ AdamW Loshchilov and Hutter (2017) as the optimization algorithm and set the initial learning rate to \(2\times 10^{-4}\) and the weight decay to \(10^{-4}\) with polynomial learning rate decay. As shown in Table 13, networks with our spatial bias outperform the baseline networks by \(+1.27\) aAcc, \(+2.16\) mIoU, and \(+3.31\) mAcc.
Footnote 5: We adopt the implementation of FPN model from Contributors (2020).
## 5 Conclusion
In this paper, we propose the spatial bias that learns global knowledge with a fast and lightweight architecture. The proposed method adds only a
\begin{table}
\begin{tabular}{c|c c} \hline \hline Network & Acc diff. \(\downarrow\) & Obs.consistency \(\uparrow\) \\ \hline BAT-50 & 0.069 & 0.677 \\ \hline SBRNet-50 & 0.078 & 0.668 \\ \hline GCNet-50 & 0.081 & 0.668 \\ \hline NLNet-50 & 0.086 & 0.661 \\ \hline ResNet-50 & 0.087 & 0.657 \\ \hline \end{tabular}
\end{table}
Table 11: Experimental results on OOD datasets. We compare the OOD robustness using three metrics. Spatial bias shows better OOD robustness compared to non-local neural networks.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline Method & Backbone & \(\mathcal{AP}\) & \(\mathcal{AP}_{50}\) & \(\mathcal{AP}_{75}\) & \(\mathcal{AP}_{S}\) & \(\mathcal{AP}_{M}\) & \(\mathcal{AP}_{L}\) \\ \hline \multirow{2}{*}{Faster-RCNN} & ResNet-50 & 39.0 & 60.3 & 42.4 & 23.0 & 43.2 & 50.0 \\ & ResNet-50 + ours & **40.0** & **61.5** & **43.7** & **24.0** & **44.1** & **51.6** \\ \hline \multirow{2}{*}{Cascade-RCNN} & ResNet-50 & 41.9 & 60.5 & 45.7 & 24.2 & 45.8 & 54.8 \\ & ResNet-50 + ours & **42.8** & **61.9** & **46.8** & **25.2** & **46.2** & **55.5** \\ \hline \hline \end{tabular}
\end{table}
Table 12: Experimental results on object detection using MS-COCO dataset.
few additional spatial bias channels to a convolutional feature map so that the convolution layer itself learns global knowledge without the self-attention operation. That is, the spatial bias is a kind of non-local method that allows convolution to learn long-range dependencies. Spatial bias incurs far fewer parameters, FLOPs, and throughput overhead than existing non-local methods Wang et al. (2018); Chi et al. (2020); Huang et al. (2019); Chen et al. (2018). Our design choice is simple and straightforward, which we believe is why it can be applied to various network architectures. We argue that the computational cost of existing non-local neural networks with the self-attention operation has increased considerably because of their rather complex design choices. Also, the proposed spatial bias can be used together with existing self-attention based non-local methods. We believe that our new approach, which does not rely on self-attention based non-local operations, will inspire future studies.
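To make the idea concrete, the following is a minimal PyTorch-style sketch of appending a few globally pooled bias channels to a feature map before an ordinary convolution. The compression size, channel counts, and module names are illustrative assumptions and do not reproduce the exact aggregation of the proposed spatial bias.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialBiasBlock(nn.Module):
    """Generic illustration: compress the input to a small global view, project it
    to a handful of bias channels, broadcast them back, and let an ordinary
    convolution see local + global information together."""
    def __init__(self, in_ch, out_ch, bias_ch=8, ctx_size=7):
        super().__init__()
        self.ctx_size = ctx_size
        self.to_bias = nn.Conv2d(in_ch, bias_ch, kernel_size=1)
        self.conv = nn.Conv2d(in_ch + bias_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        ctx = F.adaptive_avg_pool2d(x, self.ctx_size)          # compressed global view
        bias = self.to_bias(ctx)                               # a few bias channels
        bias = F.interpolate(bias, size=(h, w), mode="nearest")
        return self.conv(torch.cat([x, bias], dim=1))

x = torch.randn(2, 64, 56, 56)
print(SpatialBiasBlock(64, 64)(x).shape)   # torch.Size([2, 64, 56, 56])
```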
|
2305.13864 | MIANet: Aggregating Unbiased Instance and General Information for
Few-Shot Semantic Segmentation | Existing few-shot segmentation methods are based on the meta-learning
strategy and extract instance knowledge from a support set and then apply the
knowledge to segment target objects in a query set. However, the extracted
knowledge is insufficient to cope with the variable intra-class differences
since the knowledge is obtained from a few samples in the support set. To
address the problem, we propose a multi-information aggregation network
(MIANet) that effectively leverages the general knowledge, i.e., semantic word
embeddings, and instance information for accurate segmentation. Specifically,
in MIANet, a general information module (GIM) is proposed to extract a general
class prototype from word embeddings as a supplement to instance information.
To this end, we design a triplet loss that treats the general class prototype
as an anchor and samples positive-negative pairs from local features in the
support set. The calculated triplet loss can transfer semantic similarities
among language identities from a word embedding space to a visual
representation space. To alleviate the model biasing towards the seen training
classes and to obtain multi-scale information, we then introduce a
non-parametric hierarchical prior module (HPM) to generate unbiased
instance-level information via calculating the pixel-level similarity between
the support and query image features. Finally, an information fusion module
(IFM) combines the general and instance information to make predictions for the
query image. Extensive experiments on PASCAL-5i and COCO-20i show that MIANet
yields superior performance and set a new state-of-the-art. Code is available
at https://github.com/Aldrich2y/MIANet. | Yong Yang, Qiong Chen, Yuan Feng, Tianlin Huang | 2023-05-23T09:36:27Z | http://arxiv.org/abs/2305.13864v1 | # MIANet: Aggregating Unbiased Instance and General Information for Few-Shot Semantic Segmentation
###### Abstract
Existing few-shot segmentation methods are based on the meta-learning strategy and extract instance knowledge from a support set and then apply the knowledge to segment target objects in a query set. However, the extracted knowledge is insufficient to cope with the variable intra-class differences since the knowledge is obtained from a few samples in the support set. To address the problem, we propose a multi-information aggregation network (MIANet) that effectively leverages the general knowledge, i.e., semantic word embeddings, and instance information for accurate segmentation. Specifically, in MIANet, a general information module (GIM) is proposed to extract a general class prototype from word embeddings as a supplement to instance information. To this end, we design a triplet loss that treats the general class prototype as an anchor and samples positive-negative pairs from local features in the support set. The calculated triplet loss can transfer semantic similarities among language identities from a word embedding space to a visual representation space. To alleviate the model biasing towards the seen training classes and to obtain multi-scale information, we then introduce a non-parametric hierarchical prior module (HPM) to generate unbiased instance-level information via calculating the pixel-level similarity between the support and query image features. Finally, an information fusion module (IFM) combines the general and instance information to make predictions for the query image. Extensive experiments on PASCAL-\(5\)1 and COCO-\(20\)1 show that MIANet yields superior performance and set a new state-of-the-art. Code is available at github.com/Aldrich2y/MIANet.
Footnote 1: Corresponding author ([email protected]).
## 1 Introduction
The challenge of few-shot semantic segmentation (FSS) is how to effectively use one or five labeled samples to segment a novel class. Existing few-shot segmentation methods [28, 30, 33, 37] adopt the metric-based meta-learning strategy [26, 29]. The strategy is typically composed of two stages: meta-training and meta-testing. In the meta-training stage, models are trained by plenty of independent few-shot segmentation tasks. In meta-testing, models can thus quickly adapt and extrapolate to new few-shot tasks of unseen classes and segment the novel categories since each training task involves a different seen class.
As shown in Figure 2, natural images of same categories have semantic differences and perspective distortion, which leads to intra-class differences. Current FSS approaches segment a query image by matching the guidance information from the support set with the query features (Figure 1 (a)). Unfortunately, the correlation between the support image and the query image is not enough to support the match
Figure 1: Comparison between (a) existing FSS methods and (b) proposed MIANet. (a) Existing methods extract instance-level knowledge from the support images, which is not able to cope with large intra-class variation. (b) our MIANet extracts instance-level knowledge from the support images and obtains general class information from word embeddings. These two types of information benefit the final segmentation.
ing strategy in some support-query pairs due to the diversity of intra-class differences, which affects the generalization performance of the models. On the other hand, modules with numerous learnable parameters are devised by FSS methods to better use the limited instance information, and many few-shot segmentation tasks of seen classes are used to train the models in the meta-training stage. Although current methods freeze the backbone, the remaining parameters will inevitably fit the feature distribution of the training data and make the trained models misclassify seen training classes as unseen testing classes.
To address the above issues, a multi-information aggregation network is proposed for accurate segmentation. Specifically, we first design a general information module (GIM) to produce a general class prototype by leveraging class-based word embeddings. This prototype represents general information for the class, which is beyond the support information and can supplement some missing class information due to intra-class differences. As shown in Figure 1 (b), the semantic word vectors for each class can be obtained by a pre-trained language model, i.e., _word2vec_. Then, GIM takes the word vector and a support prototype as input to get the general prototype. Next, a well-designed triplet loss [25] is applied to achieve the alignment between the semantic prototype and the visual features. The triplet loss extracts positive-negative pairs from local features which distinguishes our method from other improved triplets [3, 4, 11]. The semantic similarity between the word embeddings in a word embedding space can therefore be transferred to a visual embedding space. Finally, the projected prototype is supplemented into the main branch as the general information of the category for information fusion to alleviate the intra-class variance problem.
Moreover, to capture the instance-level details and alleviate the model biasing towards the seen classes, we propose a non-parametric hierarchical prior module (HPM). HPM works in two aspects. (1) HPM is class-agnostic since it does not require training. (2) HPM can generate hierarchical activation maps for the query image by digging out the relationship between high-level features for accurate segmentation of unseen classes. In addition, we build information channels between different scales to preserve discriminative information in query features. Finally, the unbiased instance-level information and the general information are aggregated by an information fusion module (IFM) to segment the query image. Our main contributions are summarized as follows:
1. We propose a multi-information aggregation network (MIANet) to aggregate general information and unbiased instance-level information for accurate segmentation.
2. To the best of our knowledge, this is the first time to use word embeddings in FSS, and we design a general information module (GIM) to obtain the general class information from word embeddings for each class. The module is optimized through a well-designed triplet loss and can provide general class information to alleviate intra-class differences.
3. A non-parametric hierarchical prior module (HPM) is proposed to supply MIANet with unbiased instance-level segmentation knowledge, which provides the prior information of the query image on multi-scales and alleviates the bias problem in testing.
4. Our MIANet achieves state-of-the-art results on two few-shot segmentation benchmarks, i.e., PASCAL-5\({}^{i}\) and COCO-20\({}^{i}\). Extensive experiments validate the effectiveness of each component in our MIANet.
## 2 Related work
Few-Shot Semantic Segmentation.Few-shot semantic segmentation (FSS) is proposed to address the dependence of semantic segmentation models on a large amount of annotated data. Current FSS methods are based on metric-based meta-learning and can be largely grouped into two types: prototype-based methods [5, 15, 30, 34, 39, 40] and parameter-based methods [14, 18, 31, 32, 36, 38]. The prototype-based methods use a non-parametric metric tool, e.g., cosine similarity or euclidean distance, to calculate segmentation guidance. And non-parametric metric tools alleviate overfitting. The parameter-based FSS methods employ learnable metric tools to explore the relationship between the support and query features. For instance, BAM [14] proposes a base learner to avoid the interference of base classes in testing and achieve the state-of-the-art performance. Current methods can effectively segment the target area of novel classes when samples of the classes are lim
Figure 2: We define two types of intra-class variation. (a) The object in each column has the same semantic label but belongs to different fine-grained categories. (b) The object belonging to the same category differs greatly in appearance due to the existence of perspective distortion.
ited. However, these methods only extract instance knowledge from the limited support set, and cannot segment some support-query pairs with large intra-class differences as detailed in Figure 2. For this problem, we propose a multi-information aggregation network, which extracts instance information and learns general class prototypes from word embeddings to alleviate the intra-class differences.
**Intra-Class Differences.** The intra-class differences problem is a key factor affecting the performance of the few-shot segmentation. Previous methods try to mine more support information to alleviate this issue. [21] dynamically transforms a classifier trained on the support set to each query image. [7, 20] produce a pseudo query mask based on the support information to capture more self-attention information of the query image. But the performance gain is restricted since the support set is limited. In zero-shot learning (ZSL), semantic information is used to generate visual features for unseen classes [1, 2, 8, 12, 35], so that the models recognize the unseen classes. The achievement in ZSL demonstrates that word embeddings contain the general semantic information of categories, which inspires us to integrate class-based semantic information [13, 22] to supplement the missing information when the features in the support set and in the query set don't match.
## 3 Methodology
### Problem Definition
We define two datasets, \(D_{train}\) and \(D_{test}\), with the category set \(C_{train}\) and \(C_{test}\) respectively, where \(C_{train}\cap C_{test}=\emptyset\). The model trained on \(D_{train}\) is directly transferred to evaluate on \(D_{test}\) for testing. Besides, each category \(c\in C_{train}\cup C_{test}\) is mapped through the word embedding to a vector representation \(W[c]\in R^{d}\), where d is the dimension of \(W[c]\). In line with previous works [28], we train the model in an episode manner. Each episode contains a support set \(S\), a query set \(Q\) and a word embedding map \(W\). Under the K-shot setting, each support set \(S=\left\{X_{s}^{i},M_{s}^{i}\right\}_{i=1}^{K}\), includes K support images \(X_{s}\) and corresponding masks \(M_{s}\), and each query set \(Q=\left\{X_{q},M_{q}\right\}\), includes a query image \(X_{q}\) and a corresponding mask \(M_{q}\). The training set \(D_{train}\) and test set \(D_{test}\) are represented by \(D_{train}=\left\{(S_{i},Q_{i},W)\right\}_{i=1}^{N_{train}}\) and \(D_{test}=\left\{(S_{i},Q_{i},W)\right\}_{i=1}^{N_{test}}\), where \(N_{train}\) and \(N_{test}\) is the number of episodes for training and test set. During training, the support masks \(M_{s}\) and query masks \(M_{q}\) are available, and the \(M_{q}\) is not accessible during testing.
### Method Overview
As shown in Figure 3, our multi-information aggregation network includes three modules, i.e., hierarchical prior module (HPM), general information module (GIM), and information fusion module (IFM). Specifically, given the support and query images \(X_{s}\) and \(X_{q}\), a common backbone with shared weights is used to extract both middle-level [37] and high-level features [28]. We then employ HPM whose task is to produce unbiased instance-level information \(M_{ins}\) of the query image by using labeled support instances. Meanwhile, GIM is introduced to generate general class information which aims to make up for the insufficiency of instance information. At last, we pass the instance information and general information to an information fusion module to aggregate into the final guidance information and then make predictions for the query image.
### Hierarchical Prior Module
Few-shot semantic segmentation models are trained on labeled data of seen classes, which makes it inclined for trained models to misjudge seen training categories as unseen target categories. Moreover, current approaches usually resort to well-designed modules with numerous learnable parameters in order to maximize the use of limited support information. Inspired by [28], we propose a non-parametric hierarchical prior module (HPM) to capture the unbiased instance information from a few labeled samples in an efficient way. HPM leverages the high-level features (e.g., layer 4 of ResNet50) from the support set and query set to generate prior information, which is a rough localization map of the target object in the query image. Moreover, we compute prior information at multiple different scales that provide rich guidance for objects of varying sizes and shapes. In order to avoid the loss of discriminative information when the query features are extended to different scales, we establish information channels between different scales.
Specifically, HPM takes as input the high-level support features \(f_{s}^{h}\in R^{c\times h\times w}\), the corresponding binary mask \(M_{s}\in R^{H\times W}\), and the high-level query features \(f_{q}^{h}\in R^{c\times h\times w}\), where c is the channel dimension, h (H), w (W) are the height and width of the features and the mask. Empirically [28], we define the instance-level information as \(M_{ins}=\left\{m_{ins}^{i}\right\}_{i=1}^{4}\), \(m_{ins}^{i}\in R^{c\times h_{i}\times w_{i}}\), and \(h_{i}>h_{j},w_{i}>w_{j}\), when \(i<j\), \(h_{1}=h,w_{1}=w\).
To obtain the \(m_{ins}^{i}\), we first filter out the background elements in the support features via
\[f_{s}^{h}=f_{s}^{h}\otimes\mathcal{I}(M_{s},f_{s}^{h}) \tag{1}\]
where \(\mathcal{I}(M_{s},f_{s}^{h})\) down- or up-samples the \(M_{s}\) to a spatial size as the \(f_{s}^{h}\) by interpolation, \(\otimes\) means the Hadamard product. Next, we reshape the \(f_{s}^{h}\) and \(f_{q}^{h}\) to a size of (\(c\times hw\)). The pixel-wise cosine similarity \(A_{q}\) between \(f_{s}^{h}\) and \(f_{q}^{h}\) is calculated as
\[A_{q}=\frac{(f_{q}^{h})^{T}f_{s}^{h}}{||f_{q}^{h}||\;||f_{s}^{h}||}\in R^{h_{1 }w_{1}\times h_{1}w_{1}} \tag{2}\]
We then take the mean similarity in the support (second) dimension as the activation value and pass the \(A_{q}\) into a min-max normalization (\(\mathcal{F}_{norm}\)) to get the \(m^{1}_{ins}\).
\[m^{1}_{ins}=\mathcal{F}_{norm}(mean(A_{q}))\in R^{h_{1}\times w_{1}} \tag{3}\]
In order to extend to the next scale, i.e., \((h_{2},w_{2})\), the pooling operation is needed to down-sample the \(f^{h}_{q}\). We use the weighted average pooling to add information channels between different scales since discriminative details are prone to be ignored by the average pooling
\[f^{h}_{q}=\mathcal{F}_{pool}(f^{h}_{q}\otimes m^{1}_{ins})\in R^{c\times h_{2 }\times w_{2}} \tag{4}\]
where \(\mathcal{F}_{pool}\) is the average pooling. Then the high-level support features in the next stage can be computed by
\[f^{h}_{s}=\mathcal{I}(f^{h}_{s},f^{h}_{q})\in R^{c\times h_{2}\times w_{2}} \tag{5}\]
Finally, prior information \(m^{2}_{ins}\) can be obtained by using equation 1 - 3, and \(\left\{m^{i}_{ins}\right\}_{i=1}^{4}\) can be calculated after four stages.
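A minimal PyTorch-style sketch of one HPM stage under Eqs. (1)-(5), assuming unbatched feature maps; the bilinear resizing and the use of adaptive pooling to reach the target size are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def prior_map(fq, fs, ms, eps=1e-7):
    """One HPM stage (Eqs. 1-3): fq, fs are (c, h, w) high-level query/support
    features, ms is the (H, W) binary support mask. Returns the min-max
    normalized activation map of shape (h, w)."""
    c, h, w = fq.shape
    ms = F.interpolate(ms[None, None].float(), size=(h, w), mode="bilinear",
                       align_corners=False)[0, 0]
    fs = fs * ms                                   # Eq. (1): keep foreground support features
    q = F.normalize(fq.reshape(c, h * w), dim=0, eps=eps)
    s = F.normalize(fs.reshape(c, h * w), dim=0, eps=eps)
    A = q.t() @ s                                  # Eq. (2): pixel-wise cosine similarity
    a = A.mean(dim=1)                              # mean over the support (second) dimension
    a = (a - a.min()) / (a.max() - a.min() + eps)  # Eq. (3): min-max normalization
    return a.reshape(h, w)

def next_scale(fq, fs, m_ins, out_hw):
    """Eqs. (4)-(5): weighted average pooling of the query features and resizing
    of the support features before moving to the next, smaller scale."""
    fq = F.adaptive_avg_pool2d((fq * m_ins[None]).unsqueeze(0), out_hw)[0]
    fs = F.interpolate(fs.unsqueeze(0), size=out_hw, mode="bilinear",
                       align_corners=False)[0]
    return fq, fs
```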
### General Information Module
One of the main challenges of few-shot semantic segmentation is the intra-class differences as shown in Figure 2. Current methods aim to address this problem by thoroughly excavating the relationship between instance samples and the query image, i.e., digging out the instance-level information. But this can only solve some highly correlated support-query pairs. For instance, in the case of Figure 2 (1st and 2nd columns), objects in the support image and the query image have similar local features despite belonging to different fine-grained categories, such as the legs of the chair, the feathers, and the body of the bird. But in Figure 2 (b), due to the existence of perspective distortion, some local features (the part in the red box) are lost, and it is difficult for the model to segment the query image according to the incomplete support sample.
To counter this, a general information module (GIM) is used to extract language information from word embeddings to generate a general class prototype, and a triplet loss is designed to optimize this module. GIM contains two components: general information generator (GIG) and local feature generator (LFG). GIG takes the foreground prototype obtained from the support set and the category semantic vector obtained from the semantic label as input, and generates a general class prototype. LFG takes the mid-level support features as input and generates region-related local features to collect positive-negative pairs to form triplets.
Specifically, we input the category word (e.g., _aeroplane_) to the pre-trained _word2vec_ to obtain a vector representation \(w\in R^{1\times d}\).
\[w=\mathcal{F}_{word2vec}(word) \tag{6}\]
where \(\mathcal{F}_{word2vec}(.)\) represents generating vector representation from the word embeddings according to \(word\).
Next, masked average pooling is applied on the support features \(f_{s}\in R^{c\times h\times w}\) to get a foreground class prototype \(p\in R^{1\times c}\) as
\[p=\mathcal{F}_{pool}(f_{s}\otimes\mathcal{I}(M_{s},f_{s})) \tag{7}\]
Then, we input the foreground class prototype \(p\) and the word vector \(w\) into GIG to produce a general class prototype \(p_{gen}\in R^{1\times c}\)
\[p_{gen}=\mathcal{F}_{GIG}(w\oplus p) \tag{8}\]
Figure 3: The overall architecture of our proposed multi-information aggregation network.
where \(\oplus\) is the concatenation operation in channel dimension, \(\mathcal{F}_{GIG}(.)\) means producing the general information, GIG consists of two fully connected layers.
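A hedged sketch of the masked average pooling of Eq. (7) and the two-layer GIG of Eq. (8); the hidden width and the mask-area normalization are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def masked_avg_pool(fs, ms, eps=1e-7):
    """Eq. (7): foreground prototype from support features fs (c, h, w) and the
    binary mask ms (H, W); normalizing by the mask area is an assumption."""
    c, h, w = fs.shape
    ms = F.interpolate(ms[None, None].float(), size=(h, w), mode="bilinear",
                       align_corners=False)[0, 0]
    return (fs * ms).sum(dim=(1, 2)) / (ms.sum() + eps)        # shape (c,)

class GeneralInfoGenerator(nn.Module):
    """Eq. (8): two fully connected layers mapping [word vector w ; prototype p]
    to a general class prototype p_gen; the hidden width is illustrative."""
    def __init__(self, d=300, c=256, hidden=512):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(d + c, hidden),
                                nn.ReLU(inplace=True),
                                nn.Linear(hidden, c))

    def forward(self, w, p):
        return self.fc(torch.cat([w, p], dim=-1))
```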
The obtained prototype \(p_{gen}\) represents the general and complete information for a specific category, which is expected to distinguish whether a local feature belongs to the category. To achieve this, we set \(p_{gen}\) as the _anchor_, and then sample pairs of _positive_ and _negative_ from local features to calculate the triplet loss. Different from pixel-level features, local features are region-related and represent part of the semantic information of categories, such as the tail, head, torso, and other features. We design a local feature generator (LFG) which consists of three convolutional blocks and reduces the size of the support features by a factor of 4 to obtain regional features. A regional vector \(v\in R^{1\times c}\) in the regional features \(f_{reg}\) can represent an area in the original image, i.e., a local feature representation.
\[f_{reg}=\mathcal{F}_{reshape}^{hw\times c}(\mathcal{F}_{LFG}(f_{s}))\in R^{hw \times c} \tag{9}\]
where \(\mathcal{F}_{LFG}(.)\) indicates generating the local information, and \(\mathcal{F}_{reshape}^{hw\times c}(.)\) means reshaping the input to a spatial size of \((hw\times c)\). We then use support mask \(M_{s}\in R^{H\times W}\) for feature selection, which separates the foreground and background regional vectors into two different sets, i.e., \(V_{fg}=\left\{v_{fg}^{i}\right\}_{i=1}^{n_{1}},V_{bg}=\left\{v_{bg}^{i} \right\}_{i=1}^{n_{2}},v_{bg},v_{fg}\in R^{1\times c},n1+n2=hw\).
\[\hat{M_{s}}=\mathcal{F}_{reshape}^{hw\times 1}(\mathcal{I}(M_{s},f_{reg}))\in R^{hw\times 1} \tag{10}\] \[V_{fg}=\mathcal{F}_{index}(\hat{M_{s}}^{k}==1,f_{reg}^{k})\;\;k\in\{1,2,...,hw\} \tag{11}\] \[V_{bg}=\mathcal{F}_{index}(\hat{M_{s}}^{k}==0,f_{reg}^{k})\;\;k\in\{1,2,...,hw\} \tag{12}\]
where \(\mathcal{F}_{index}(\hat{M_{s}}^{k},f_{reg}^{k})\) indicates that when \(\hat{M_{s}}^{k}\) is 1, we add the corresponding vector \(f_{reg}^{k}\) to \(V_{fg}\); otherwise, we add it to \(V_{bg}\). Next, we average the \(V_{bg}\) to get the negative sample since the elements in the background of the support images are very complex and are hard to use [30].
\[negative=\frac{\sum_{i}^{n_{2}}(v_{bg}^{i})}{n_{2}},\;\;v_{bg}^{i}\in V_{bg} \tag{13}\]
The positive samples are the foreground regional vectors in \(V_{fg}\). Similar to [11], we calculate the hardest sample, which has the farthest distance from the _anchor_, to obtain the positive vector for better optimization.
\[positive=\operatorname*{arg\,max}_{v_{fg}^{i}}(\mathcal{F}_{d}(p_{gen},v_{fg}^ {i})),\;\;v_{fg}^{i}\in V_{fg} \tag{14}\]
where \(\mathcal{F}_{d}\) is the \(l_{2}\) distance function. The triplet loss \(\mathcal{L}_{triplet}\) is
\[\begin{split}\mathcal{L}_{triplet}=\max(\mathcal{F}_{d}(p_{gen}, positive)+margin\\ -\mathcal{F}_{d}(p_{gen},negative),0)\end{split} \tag{15}\]
where margin is a fixed value (0.5) to keep negative samples far apart.
By calculating the distance among triplets (anchor, foreground local features, background local features), the semantic information of the anchor and the visual information of local features are aligned, and the relationship among different word vectors can also be converted to visual embedding space to provide additional general information to alleviate the intra-class differences even some features are lost due to perspective distortion in Figure 2 (b). In addition, the triplet loss encourages the GIG to learn better general prototypes (_anchor_) to distinguish fine-grained local features (_positive_) of the same category from background features (_negative_).
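The triplet construction of Eqs. (10)-(15) can be sketched as follows; the nearest-neighbor mask resizing is an assumption, and the sketch presumes that both foreground and background regions are non-empty.

```python
import torch
import torch.nn.functional as F

def triplet_from_support(p_gen, f_reg, ms, margin=0.5):
    """Sketch of Eqs. (10)-(15): p_gen (c,) is the anchor, f_reg (c, h, w) are
    region-level support features from the LFG, ms (H, W) is the support mask.
    Assumes both foreground and background regions are non-empty."""
    c, h, w = f_reg.shape
    m = F.interpolate(ms[None, None].float(), size=(h, w), mode="nearest")[0, 0]
    feats = f_reg.reshape(c, h * w).t()              # (hw, c) regional vectors
    fg = m.reshape(h * w) > 0.5
    v_fg, v_bg = feats[fg], feats[~fg]               # Eqs. (11)-(12)
    negative = v_bg.mean(dim=0)                      # Eq. (13): averaged background
    d_pos = torch.norm(v_fg - p_gen[None], dim=1)    # l2 distances to the anchor
    positive = v_fg[d_pos.argmax()]                  # Eq. (14): hardest positive
    return torch.clamp(torch.norm(p_gen - positive) + margin
                       - torch.norm(p_gen - negative), min=0.0)   # Eq. (15)
```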
### Prediction and Training Loss
The instance-level information \(M_{ins}\) and general information \(p_{gen}\) are aggregated as guidance information through the information fusion module (IFM) to supervise the segmentation of query images. In order to seek more contextual cues, we utilize the FEM [28] structure as our information fusion module. As shown in Figure 3, the mid-level query feature \(f_{q}\), instance information \(M_{ins}\) and general class information \(p_{gen}\) are input to IFM. The \(f_{q}\) and \(p_{gen}\) are first expanded to four scales\(\left\{p_{gen}^{i}\right\}_{i=1}^{4}\),\(\left\{f_{q}^{i}\right\}_{i=1}^{4}\), according to the size of \(M_{ins}\).
\[f_{q}^{i}=\mathcal{I}(f_{q},m_{ins}^{i})\in R^{c\times h_{i}\times w_{i}},i=\{ 1,2,3,4\} \tag{16}\]
\[p_{gen}^{i}=\mathcal{F}_{expand}(\mathcal{I}(p_{gen},m_{ins}^{i}))\in R^{c \times h_{i}\times w_{i}} \tag{17}\]
where \(\mathcal{F}_{expand}(.)\) means expanding the input in channel dimension. We then input the \(\left\{m_{ins}^{i}\right\}_{i=1}^{4}\),\(\left\{p_{gen}^{i}\right\}_{i=1}^{4}\),\(\left\{f_{q}^{i}\right\}_{i=1}^{4}\) to FEM to compute the binary intermediate predictions \(Y_{inter}=\left\{y^{i}\right\}_{i=1}^{4}\) and final prediction \(Y\), where \(Y,y^{i}\in R^{H\times W}\).
The training loss has two parts, namely the segmentation loss and the triplet loss. The segmentation loss is calculated using multiple cross-entropy functions, with \(L_{seg1}\) on the intermediate predictions \(Y_{inter}\) and \(L_{seg2}\) on the final prediction \(Y\). The triplet loss is computed from the hardest triplet, as shown in equation 15. The final loss is
\[\mathcal{L}=\mathcal{L}_{seg1}+\mathcal{L}_{seg2}+\mathcal{L}_{triplet} \tag{18}\]
### Extending to K-Shot Setting
The above discussions focus on the 1-shot setting. For the K-shot setting, K support samples \(\left\{X_{s}^{i},M_{s}^{i}\right\}_{i=1}^{K}\) are available. Our method can be easily extended to the K-shot setting. First, K sets of instance information \(\left\{M_{ins}^{i}\right\}_{i=1}^{K}\) are computed respectively using the K samples. We then average the instance information separately at different scales to get \(\hat{M}_{ins}=\left\{\hat{m}_{ins}^{j}\right\}_{j=1}^{4}\) for the subsequent process.
\[\hat{m}_{ins}^{j}=\frac{1}{K}\sum_{i=1}^{K}m_{ins}^{j:i} \tag{19}\]
In addition, the K prototypes obtained by Equation 7 are also averaged. Finally, the local feature \(f_{reg}\) will be obtained from the union of K support features through equation 9.
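A small sketch of this K-shot averaging; the list-of-pyramids data layout is an illustrative assumption.

```python
import torch

def average_kshot(instance_infos, prototypes):
    """Eq. (19) sketch: instance_infos is a list of K per-shot pyramids (each a
    list of 4 prior maps); prototypes is a list of K support prototypes."""
    K = len(instance_infos)
    merged = [sum(shot[j] for shot in instance_infos) / K for j in range(4)]
    p = torch.stack(prototypes, dim=0).mean(dim=0)
    return merged, p
```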
## 4 Experiments
### Experimental Settings
**Datasets.** Experiments are conducted on two commonly used few-shot segmentation datasets, PASCAL-5\({}^{i}\) and COCO-20\({}^{i}\), to evaluate our method. PASCAL-5\({}^{i}\) is created from PASCAL VOC 2012 [6] with additional annotations from SBD [9]. The total 20 classes in the dataset are evenly divided into 4 folds \(i\in\{0,1,2,3\}\) and each fold contains 5 classes. The COCO-20\({}^{i}\) is proposed by [24], which is conducted from MSCOCO [16]. Similar to PASCAL-5\({}^{i}\), 80 classes in COCO-20\({}^{i}\) are partitioned into 4 folds and each fold contains 20 classes.
**Metric and Evaluation.** We follow the previous methods and adopt the mean intersection-over-union (mIoU) and foreground-background IoU (FB-IoU) as the evaluation metrics. The FB-IoU results are listed in the supplementary material. During testing, we follow the settings of PFENet to make the experimental results more accurate. Specifically, five different random seeds are set for five tests in each experiment. In each test, 1000 and 5000 support-query pairs are sampled for PASCAL-5\({}^{i}\) and COCO-20\({}^{i}\) respectively. We then average the results of five tests for each experiment.
**Implementation Details.** Following [14, 21], we first train the PSPNet [40] to obtain a feature extractor (backbone) based on the seen training classes for each fold, i.e., 16/61 training classes (including background) for PASCAL-5\({}^{i}\)/COCO-20\({}^{i}\). Next, we fix the parameters of the trained feature extractor and use a meta-learning strategy to train the remaining structures. These structures are optimized using the SGD optimizer, trained for 200 epochs on PASCAL-5\({}^{i}\) and 50 on COCO-20\({}^{i}\). The learning rate and batch size are 5e-3 and 4, respectively. And we use the _word2vec_ model learned on google news to obtain d (300) dimensional word vector representations. The word embeddings of categories that contain multiple words are obtained by averaging the embeddings of each individual word.
**Baseline.** As shown in Figure 3, we first remove the HPM and GIM from the MIANet. Then we replace the general class information \(p_{gen}\) in the information fusion module with the instance prototype \(p\) to establish the baseline. The rest of the experimental settings are consistent with MIANet.
### Comparison with State-of-the-Arts
**PASCAL-5\({}^{i}\).** Table 1 shows the mIoU performance comparison on PASCAL-5\({}^{i}\) between our method and several representative models. It can be seen that (1) MIANet achieves state-of-the-art performance under the 1-shot and 5-shot settings. Especially for the VGG16 [27] backbone, we surpass BAM [14], which holds the previous state-of-the-art results, by 2.69% and 3.23%. (2) MIANet outperforms the baseline with a large margin. For example, when VGG16 is the backbone, MIANet and the baseline model achieve 67.10% and 61.11% respectively. Compared with ResNet50 [10], VGG16 provides less information that is useful for segmentation, so the extra information is more valuable. After adding the detailed general and instance information generated by the GIM and HPM to the baseline model, better performance improvement occurs than ResNet50.
**COCO-20\({}^{i}\).** COCO-20\({}^{i}\) is a more challenging dataset that contains multiple objects and shows greater variance. Table 2 shows the mIoU performance comparison. Overall, MIANet surpasses all the previous methods under 1-shot and 5-shot settings. Under the 1-shot setting, MIANet leads BAM by 2.19% and 1.43% on VGG16 and ResNet50. Meanwhile, our method outperforms the baseline by 9.45% and 7.76%, which demonstrates the superiority of our method despite the challenging scenarios.
**Qualitative Results.** We report some qualitative results generated from our MIANet and baseline model on the PASCAL-5\({}^{i}\) and COCO-20\({}^{i}\) benchmarks. Compared with the baseline, MIANet exhibits the following advantages as shown in Figure 4. (1) MIANet can more accurately segment the target class, while the baseline incorrectly segments the seen classes as the target classes (1st to 3rd columns). (2) MIANet can mine similar local features for different fine-grained categories to address the intra-class variance problem caused by semantic differences, i.e., sailboat/small boat, chair/sofa chair, and eagle/owl in the 4th, 5th and 6th columns respectively. (3) MIANet can provide general information that is missing in the support image (7th to 9th columns), i.e., the intra-class variance caused by perspective distortion.
line by 5.99%. In the second row, HPM mines the multi-scale instance-level information and improves the baseline by 3.44%. Meanwhile, replacing the support prototype \(p\) with the general prototype \(p_{gen}\), the baseline yields a 1.35% performance gain. This is because GIM produces general information, while HPM can discover pixel-level information of instances, which is more helpful for the improvement of segmentation performance. After the combination of GIM and HPM, the instance information and general information are aggregated by IFM so that the model can alleviate the problem of intra-class differences, and effectively improve the performance by 2.55% compared to the second row.
**Hierarchical Prior Module.** HPM uses multi-scale prior information and establishes information channels with weighted average pooling between different scales, which provides instance-level prior information for MIANet. Table 4 shows the impact of each element in HPM on the
\begin{table}
\begin{tabular}{c|c|c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Backbone} & \multirow{2}{*}{Methods} & \multicolumn{6}{c|}{1-shot} & \multicolumn{6}{c}{5-shot} \\ \cline{3-13} & & Fold-0 & Fold-1 & Fold-2 & Fold-3 & Mean & Fold-0 & Fold-1 & Fold-2 & Fold-3 & Mean \\ \hline \multirow{8}{*}{VGG16} & PFENet(TPAMI’20) [28] & 56.90 & 68.20 & 54.40 & 52.40 & 58.00 & 59.00 & 69.10 & 54.80 & 52.90 & 59.00 \\ & HSNet(ICCV’21) [23] & 59.60 & 65.70 & 59.60 & 54.00 & 59.70 & 64.90 & 69.00 & 64.10 & 58.60 & 64.10 \\ & DPCN(CVPR’22) [17] & 58.90 & 69.10 & 63.20 & 55.70 & 61.70 & 63.40 & 70.70 & 68.10 & 59.00 & 65.30 \\ & BAM(CVPR’22) [14] & 63.18 & 70.77 & 66.14 & 57.53 & 64.41 & 67.36 & 73.05 & 70.61 & 64.00 & 68.76 \\ & NURENet(CVPR’22) [19] & 57.70 & 67.60 & 57.10 & 53.70 & 59.00 & 60.30 & 68.00 & 55.20 & 57.10 & 60.20 \\ & Baseline & 56.12 & 70.86 & 63.10 & 54.36 & 61.11 & 59.92 & 72.03 & 64.69 & 57.16 & 63.45 \\ & MIANet & **65.42** & **73.58** & **67.76** & **61.65** & **67.10** & **69.01** & **76.14** & **73.24** & **69.55** & **71.99** \\ \hline \hline \multirow{8}{*}{ResNet50} & PFENet(TPAMI’20) [28] & 61.70 & 69.50 & 55.40 & 56.30 & 60.80 & 63.10 & 70.70 & 55.80 & 57.90 & 61.90 \\ & HSNet(ICCV’21) [23] & 64.30 & 70.70 & 60.30 & 60.50 & 64.00 & 70.30 & 73.20 & 67.40 & 67.10 & 69.50 \\ \cline{1-1} & DPCN(CVPR’22) [17] & 65.70 & 71.60 & **69.10** & 60.60 & 66.70 & 70.00 & 73.20 & 70.90 & 65.50 & 69.90 \\ \cline{1-1} & BAM(CVPR’22) [14] & **68.97** & 73.59 & 67.55 & 61.13 & 67.81 & **70.59** & 75.05 & 70.79 & 67.20 & 70.91 \\ \cline{1-1} & NURENet(CVPR’22) [19] & 65.40 & 72.30 & 59.40 & 59.80 & 64.20 & 66.20 & 72.80 & 61.70 & 62.20 & 65.70 \\ \cline{1-1} & SSP(ECCV’22) [7] & 60.50 & 67.80 & 66.40 & 51.00 & 61.40 & 67.50 & 72.30 & **75.20** & 62.10 & 69.30 \\ \cline{1-1} & Baseline & 61.87 & 72.78 & 64.10 & 55.17 & 63.48 & 63.36 & 73.87 & 66.50 & 59.34 & 65.77 \\ \cline{1-1} & MIANet & 68.51 & **75.76** & 67.46 & **63.15** & **68.72** & 70.20 & **77.38** & 70.02 & **68.77** & **71.59** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparison on PASCAL-\(5^{i}\) in terms of mIoU. The **best** and _second_ best results are highlighted with **bold** and underline, respectively.
\begin{table}
\begin{tabular}{c|c c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Backbone} & \multirow{2}{*}{Methods} & \multicolumn{6}{c|}{1-shot} & \multicolumn{6}{c}{5-shot} \\ \cline{3-13} & & Fold-0 & Fold-1 & Fold-2 & Fold-3 & Mean & Fold-0 & Fold-1 & Fold-2 & Fold-3 & Mean \\ \hline \multirow{8}{*}{VGG16} & PFENet(TPAMI’20) [28] & 35.40 & 38.10 & 36.80 & 34.70 & 36.30 & 38.20 & 42.50 & 41.80 & 38.90 & 40.40 \\ & DPCN(CVPR’22) [17] & 38.50 & 43.70 & 38.20 & 37.70 & 39.50 & 42.70 & 51.60 & 45.70 & 44.60 & 46.20 \\ & BAM(CVPR’22) [14] & 38.96 & 47.04 & 46.41 & 41.57 & 43.50 & **47.02** & 52.62 & 48.59 & 49.11 & 49.34 \\ & Baseline & 33.55 & 41.45 & 35.49 & 34.46 & 36.24 & 38.11 & 49.57 & 41.94 & 41.53 & 42.79 \\ & MIANet & **40.56** & **50.53** & **46.50** & **45.18** & **45.69** & 46.18 & **56.09** & **52.33** & **49.54** & **51.03** \\ \hline \hline \multirow{8}{*}{ResNet50} & HSNet(ICCV’21) [23] & 36.30 & 43.10 & 38.70 & 38.70 & 39.20 & 43.30 & 51.30 & 48.20 & 45.00 & 46.90 \\ & DPCN(CVPR’22) [17] & 42.00 & 47.00 & 43.20 & 39.70 & 43.00 & 46.00 & 54.90 & 50.80 & 47.40 & 49.80 \\ \cline{1-1} & BAM(CVPR’22) [14] & **43.41** & 50.59 & 47.49 & 43.42 & 46.23 & **49.26** & 54.20 & **51.63** & 49.55 & 51.16 \\ \cline{1-1} & NTRENet(CVPR’22) [19] & 36.80 & 42.60 & 39.90 & 37.90 & 39.30 & 38.20 & 44.10 & 40.40 & 38.40 & 40.30 \\ \cline{1-1} & SSP(ECCV’22) [7] & 35.50 & 39.60 & 37.90 & 36.70 & 37.40 & 40.60 & 47.00 & 45.10 & 43.90 & 44.10 \\ \cline{1-1} & Baseline & 36.07 & 43.97 & 40.23 & 39.34 & 39.90 & 42.79 & 49.42 & 47.41 & 46.08 & 46.43 \\ \cline{1-1} & MIANet & 42.49 & **52.95** & **47.77** & **47.42** & **47.66** & 45.84 & **58.18** & 51.29 & **51.90** & **51.65** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison on COCO-\(20^{i}\) in terms of mIoU. The **best** and _second_ best results are highlighted with **bold** and underline, respectively.
\begin{table}
\begin{tabular}{c c|c c c c|c c} \hline \hline HPM & GIM & Fold-0 & Fold-1 & Fold-2 & Fold-3 & mIoU \\ \hline \multirow{2}{*
model performance. We can see that using the proposed multi-scale prior outperforms the one-scale method by 1.69%. This is because multi-scale instance information can adapt to input objects of different sizes. In addition, by establishing information paths between different scales, the proposed weighted pooling method can also avoid losing discriminative features and achieve a performance improvement of 0.48%.
**General Information Module.** Table 5 shows the impact of main components in GIM, namely triplet loss, and word embeddings. After removing the triplet loss, the performance drops by 0.61%. This is because the triplet loss pulls together similar local features and pushes away dissimilar ones in \(l_{2}\) metric space, and learns better general information representations for MIANet. Second, when we directly remove the word embedding in Figure 3 and only use the instance class prototype as the input of the general information generator, the performance drops by 1.34%.
## 5 Conclusion
We propose a multi-information aggregation network (MIANet) with three major parts (i.e., HPM, GIM and IFM) for the few-shot semantic segmentation. The non-parametric HPM generates unbiased multi-scale instance information at the pixel level while alleviating the prediction bias problem of the model. The GIM obtains additional general class prototypes from word embeddings, as a supplement to the instance information. A triplet loss is designed to optimize the GIM to make the prototypes better alleviate the intra-class variance problem. The instance-level information and general information are aggregated in IFM, which is beneficial to more accurate segmentation results. Comprehensive experiments show that MIANet achieves state-of-the-art performance under all settings.
|
2302.09643 | Triple birthday matches in the Senate: Lies, damned lies and chatGPT | Our question is ``What is the probability that at least three members of the
senate share the same birthday?'' Before the pandemic, I asked this question in
several popular math talks I gave at universities across the country. Inspired
by ChatGPT's abysmal failure to answer the question, I have recently come back
to this problem and now have a more satisfactory answer, thanks in no small
part to what I learned form a page of Wolfram's Math World, which I located by
a Google search. | Rick Durrett | 2023-02-19T18:18:21Z | http://arxiv.org/abs/2302.09643v1 | # Triple birthday matches in the Senate:
###### Abstract
Our question is "What is the probability that at least three members of the senate share the same birthday?" Before the pandemic, I asked this question in several popular math talks I gave at universities across the country. Inspired by ChatGPT's abysmal failure to answer the question, I have recently come back to this problem and now have a more satisfactory answer, thanks in no small part to what I learned form a page of Wolfram's Math World, which I located by a Google search.
## 1 Introduction
I begin with the answer I gave in a _Math for Everyone_ talk at Notre Dame in April 2018
* The number of senators born on a given day is Binomial(100,1/365) and hence approximately Poisson with mean 100/365.
* Probability three senators born on a given day \[e^{-100/365}(100/365)^{3}/3!=0.002606\] The expected number of triple birthdays in 365 days = 0.9512. If we assume that the events \(B_{j}^{3}=\) exactly three senators were born on day \(j\) are independent then the number would be Poisson with mean 0.9512 and the probability that this occurs for at least one day is \(1-\exp(-0.9512)=0.6137\)
* Probability two senators born on a given day \[e^{-100/365}(100/365)^{2}/2!=0.028536\] expected number among 365 days = 10.415. Again if we assume that the events \(B_{j}^{2}=\) exactly two senators were born on day \(j\) are independent then the number would be Poisson with mean 10.415. (These back-of-the-envelope numbers are reproduced in a short computational sketch after this list.)
* Answers for the 2018 Senate: One triple birthday, May 3: Jim Risch (Idaho), David Vitter (Louisiana), Ron Wyder (Oregon). Ten double birthdays: Jan 7, March 31, June 22, August 24, Sept 29, October 20 and 24, November 17, December 7 and 10
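For readers who want to reproduce these back-of-the-envelope numbers, here is a short Python sketch of the Poisson approximation used above (standard library only).

```python
import math

n, days = 100, 365
lam = n / days                      # Poisson approximation to Binomial(100, 1/365)

p3 = math.exp(-lam) * lam**3 / math.factorial(3)   # P(exactly 3 born on a given day)
p2 = math.exp(-lam) * lam**2 / math.factorial(2)   # P(exactly 2 born on a given day)

mean_triples = 365 * p3             # expected number of days with a triple birthday
mean_doubles = 365 * p2             # expected number of days with a double birthday

print(round(p3, 6), round(mean_triples, 4))   # ~0.002606, ~0.9512
print(round(1 - math.exp(-mean_triples), 4))  # ~0.6137 under the heuristic independence assumption
print(round(p2, 6), round(mean_doubles, 3))   # ~0.028536, ~10.415
```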
Curious about what chatGPT would say, I typed in:
What is the probability that at least three members of the senate share the same birthday?
The answer it gave was:
The probability that at least three members of a senate of size n share the same birthday is approximately
\[1-(364/365)^{\binom{n}{3}}-(364/365)^{n}+(364/365)^{\binom{n}{2}}.\]
This is because there are 365 possible birthdays, and the chance of any one senator not sharing a birthday with any other is 364/365. The chance of no two senators sharing a birthday is \((364/365)^{\binom{n}{2}}\), and the chance of no three senators sharing a birthday is \((364/365)^{\binom{n}{3}}\). So, to get the chance of at least three senators sharing a birthday, we subtract the chance of no two sharing a birthday, no three sharing a birthday and add the chance of all of them sharing a birthday. For example, if there are 100 members in a senate, the probability that at least three members share the same birthday is approximately 0.527 or 52.7%.
To start to understand this answer we consider the claim: "The chance of no two senators sharing a birthday is \((364/365)^{\binom{n}{2}}\)." This would be true if the events \(A_{i,j}\) that the \(i\)th and \(j\)th senators have the same birthday were independent but they are not. If \(\{i,j\}\neq\{k,\ell\}\) are distinct integers then
\[P(A_{i,j}\cap A_{k,\ell})=\frac{1}{365^{2}}=P(A_{i,j})P(A_{k,\ell})\]
This shows that the events are **pairwise independent**, but for true independence we would need
\[P(A_{1,2}\cap A_{2,3}\cap A_{3,1})=P(A_{1,2})P(A_{2,3})P(A_{3,1})\]
Unfortunately the left-hand side is \(1/(365)^{2}\) while the right is \(1/(365)^{3}\). A second problem with this part of chatGPT's answer is that it is a well-known fact that the chance of no two senators sharing a birthday is
\[\frac{365\cdot 364\cdots 266}{(365)^{100}}=3.072\times 10^{-7} \tag{1}\]
versus \((364/365)^{\binom{100}{2}}=1.265\times 10^{-6}\)
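Both quantities are easy to verify with exact rational arithmetic; a small Python sketch:

```python
from fractions import Fraction
from math import comb

n, d = 100, 365

# exact probability that all n birthdays are distinct: 365*364*...*266 / 365^100
p_distinct = Fraction(1)
for j in range(n):
    p_distinct *= Fraction(d - j, d)

# the (incorrect) "independent pairs" value (364/365)^C(100,2)
p_pairs = Fraction(d - 1, d) ** comb(n, 2)

print(float(p_distinct))   # ~3.07e-07
print(float(p_pairs))      # ~1.27e-06
```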
The second claim "the chance of no three senators sharing a birthday is \((364/365)^{\binom{n}{3}}\)" is more mysterious. The try to explain this we turn to Suraj Regmi's blog post from January 26, 2019, which, as this was being written, was top answer in response to the Google search
for "birthday triples math formulas." Regmi reasons: There are \(C(n,3)\) triplets and for all of the triplets to not have the same birth date probability becomes
\[(1-1/(365)^{2})^{C(n,3)} \tag{2}\]
as the triples are INDEPENDENT EVENTS.
While the situation with double birthdays is subtle, involving the distinction between pairwise and fully independent events, the situation for triples is not. If \(A_{i,j,k}\) is the probability that senators \(i,j,k\) share the same birthday then
\[P(A_{1,2,3}\cap A_{2,3,4})=\frac{1}{(365)^{3}}<\frac{1}{(365)^{4}}=P(A_{1,2,3 })\cdot P(A_{2,3,4})\]
When \(n=100\) the formula in (2) evaluates to \(0.29708\), which gives an answer of
\[1-0.29708=0.70292 \tag{3}\]
To be fair to the bot and the blogger, I should confess that I sinned at Notre Dame: my events are not independent
\[P(B_{1}^{2})=0.037155\qquad P(B_{2}^{2}|B_{1}^{2})=0.035676\] \[P(B_{1}^{3})=0.003325\qquad P(B_{2}^{3}|B_{1}^{3})=0.003032\]
However, I did not claim them to be. My goal was to give a simple approximation for the probability that there is a day on which exactly three senators were born and it turns out to be quite accurate. When you put 100 into Wolfram's birthday problem calculator [4] you get the answers for birthdays in the Senate.
\[\begin{array}{ll}\mbox{at least 2 the same}&1-3.072\times 10^{-7}\\ \mbox{at least 3 the same}&0.6459\end{array}\]
The first result is 1 minus (1). The second answer shows that (2) and (3) are wrong.
Note that here \(0.6459\) is the probability for at least 3 senators sharing a birthday, compared to the earlier estimate of \(0.6137\) for exactly 3. At the end of Section 2.1 we will see that our answer using the Poisson approximation is very close to the answer of \(0.6140\) for exactly 3 senators sharing a birthday computed by using the first six terms of the inclusion-exclusion formula. Our approximation for the expected number of double birthdays \(10.415\) compares very well with the true expected value of \(10.3645\), which should not be surprising since expected values are not sensitive to dependence. That fact is fortunate for us, since our argument implies that the number of double birthdays has a Poisson distribution, but as results below will show (see Figure 1) this statement is not very accurate.
Calculations
### The probability of a triple birthday
Let \(T\) be the number of triple birthdays, i.e., the days of the year that are the birthdays of exactly three senators. If we use \(i\) as shorthand for \((i_{1},i_{2},i_{3})\) with \(1\leq i_{1}<i_{2}<i_{3}\leq 100\) and let \(A_{i}\) be the event that senators \(i_{1},i_{2},i_{3}\) have the same birthday and no other senator has this birthday then
\[q_{1}=\sum_{i}P(A_{i}) =\binom{100}{3}\left(\frac{1}{365}\right)^{2}\left(\frac{364}{365 }\right)^{97}\cdot\frac{365}{365}\] \[=\binom{100}{3}\frac{365(364)^{97}}{(365)^{100}}=0.9301\]
which gives \(ET\) and an upper bound on the probability of \(\cup_{i}A_{i}\). To get a lower bound using the Bonferroni inequalities, we need to subtract
\[q_{2}=\sum_{i<j}P(A_{i}\cap A_{j}) =\frac{1}{2}\cdot\binom{100}{3}\left(\frac{1}{365}\right)^{2} \binom{97}{3}\frac{364}{365}\left(\frac{1}{365}\right)^{2}\left(\frac{363}{365 }\right)^{94}\] \[=\frac{1}{2}\cdot\binom{100}{3}\binom{97}{3}\frac{365\cdot 364 \cdot(363)^{94}}{(365)^{100}}=0.3996\]
(here \(<\) is lexicographic or dictionary order on triples \((i_{1},i_{2},i_{3})\)).
To get a second upper bound we need to add
\[q_{3} =\sum_{i<j<k}P(A_{i}\cap A_{j}\cap A_{k})\] \[=\frac{1}{3!}\cdot\binom{100}{3}\binom{97}{3}\binom{94}{3}\left( \frac{1}{365}\right)^{6}\frac{364}{365}\cdot\frac{363}{365}\left(\frac{362}{36 5}\right)^{91}\] \[=\frac{1}{3!}\cdot\binom{100}{3}\binom{97}{3}\binom{94}{3}\frac{ P_{365,3}\cdot(362)^{100-9}}{(365)^{100}}=0.1054\]
To get a second lower bound, we need to subtract
\[q_{4}=\sum_{i<j<k<\ell}P(A_{i}\cap A_{j}\cap A_{k}\cap A_{\ell})\]
To do this we use the general formula
\[q_{k}=\frac{1}{k!}\prod_{j=0}^{k-1}\binom{100-3j}{3}\cdot\frac{P_{365,k}\cdot(365-k)^{100-3k}}{(365)^{100}}\]
which gives \(q_{4}=0.019153181\), \(q_{5}=2.548039\times 10^{-3}\), and \(q_{6}=2.57641\times 10^{-4}\)
To compute the answer
\begin{tabular}{l r} upper bound \(u_{1}\) & \(q_{1}=0.931045\) \\ lower bound \(v_{1}\) & \(q_{1}-q_{2}=0.530545\) \\ upper bound \(u_{2}\) & \(v_{1}+q_{3}=0.635962\) \\ lower bound \(v_{2}\) & \(u_{2}-q_{4}=0.616809\) \\ upper bound \(u_{3}\) & \(v_{2}+q_{5}=0.614261\) \\ lower bound \(v_{3}\) & \(u_{3}-q_{6}=0.614004\) \\ \end{tabular}
Later we will need to do this calculation for \(n\) people and a calendar with \(d\) days. In this case the \(k\)th term in inclusion-exclusion is
\[q_{k}(n,d)=\frac{1}{k!}\prod_{j=0}^{k-1}\binom{n-3j}{3}\cdot\frac{P_{d,k}\cdot(d-k)^{n-3k}}{d^{n}}\]
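The terms \(q_{k}\) and the alternating Bonferroni partial sums can be computed exactly; the following Python sketch reproduces the calculation of this subsection (the values quoted in the text are rounded).

```python
from fractions import Fraction
from math import comb, perm, factorial

def q(k, n=100, d=365):
    """k-th Bonferroni term: k specified days each shared by exactly three of the
    n people, with the remaining n - 3k people avoiding those k days."""
    people = 1
    for j in range(k):
        people *= comb(n - 3 * j, 3)
    num = Fraction(people, factorial(k)) * perm(d, k) * (d - k) ** (n - 3 * k)
    return num / Fraction(d) ** n

qs = [q(k) for k in range(1, 7)]
print([float(x) for x in qs])       # q_1, ..., q_6

s, bounds = Fraction(0), []
for k, qk in enumerate(qs, start=1):
    s += qk if k % 2 == 1 else -qk  # alternating partial sums: upper, lower, upper, ...
    bounds.append(float(s))
print(bounds)
```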
### The number of double birthdays
Write \(k\) as shorthand for \((k_{1},k_{2})\) with \(1\leq k_{1}<k_{2}\leq 100\). Let \(C_{k}\) be the event that senators \(k_{1}\) and \(k_{2}\) have the same birthday which is not shared by any of the other members of the Senate, and let \(D\) be the number of double birthdays. Using \(\binom{100}{2}=4950\)
\[ED=\sum_{k}P(C_{k})=\binom{100}{2}\left(\frac{1}{365}\right)\left(\frac{364}{3 65}\right)^{98}=10.3645 \tag{4}\]
in contrast to the approximate value of \(10.415\). The error comes from the Poisson approximation of the binomial.
\[365\cdot P(\text{binomial}(100,1/365)=2)=365\cdot 0.028396=10.3645\]
If \(D\) had a Poisson(\(\lambda\)) distribution then \(ED(D-1)=\lambda^{2}\). Writing \(j<k\) for the lexicographic order on \(\mathbb{Z}^{2}\) and noting that birthday coincidences are pairwise independent
\[ED(D-1) =\sum_{j<k}P(C_{j}\cap C_{k})\] \[=\binom{100}{2}\left(\frac{1}{365}\right)\binom{98}{2}\left( \frac{364}{365}\cdot\frac{1}{365}\right)\left(\frac{363}{365}\right)^{96}\] \[=ED\cdot\binom{98}{2}\left(\frac{1}{364}\right)\left(\frac{363}{3 64}\right)^{96}=ED\cdot 10.027<(ED)^{2}\]
where in the last step we have multiplied and divided by \((364/365)^{98}\)
Hocking and Schweterman [1] have derived a formula for the probability \(p_{k}\) of \(k\) double birthdays and no triple (or higher) coincidences. As we have already noted in (1)
\[p_{0}=\frac{P_{365,100}}{(365)^{100}}=3.072\times 10^{-7}\]
Arguing as in our treatment of triple birthdays
\[p_{1} =\binom{100}{2}\frac{1}{365}\cdot\frac{P_{364,98}}{(365)^{98}}\cdot \frac{365}{365}=\binom{100}{2}\cdot\frac{P_{365,99}}{(365)^{100}}\] \[p_{2} =\frac{1}{2}\binom{100}{2}\frac{1}{365}\cdot\binom{98}{2}\frac{3 64}{365}\cdot\frac{1}{365}\cdot\frac{P_{363,96}}{(365)^{96}}\] \[=\frac{1}{2}\binom{100}{2}\binom{98}{2}\cdot\frac{P_{365,98}}{(3 65)^{100}}\] \[p_{3} =\frac{1}{3!}\binom{100}{2}\frac{1}{365}\cdot\binom{98}{2}\frac{3 64}{365}\cdot\frac{1}{365}\cdot\binom{96}{2}\frac{363}{365}\cdot\frac{1}{365} \cdot\frac{P_{362,94}}{(365)^{94}}\] \[=\frac{1}{3!}\binom{100}{2}\binom{98}{2}\binom{96}{2}\cdot\frac{ P_{365,97}}{(365)^{100}}\]
Referring to (1) in [1] and doing some algebra, we see that the formula for general \(k\) and \(n\) is
\[p_{k}=\frac{1}{k!}\prod_{j=0}^{k-1}\binom{n-2j}{2}\cdot\frac{P_{365,n-k}}{(365 )^{n}}\]
Using this formula we see that
\[p_{k}=p_{k-1}\cdot\frac{1}{k}\cdot\binom{n-2(k-1)}{2}\cdot\frac{1}{365-n+k}\]
If \(p_{k}\) were \(\alpha\) times the Poisson(\(\lambda\)) distribution (recall that \(\sum_{k}p_{k}=P(T=0)\)) then we would have \(p_{k}=(\lambda/k)\,p_{k-1}\), which the recursion above does not satisfy, so \(p_{k}/P(T=0)\) is not a Poisson distribution. To compare with the Poisson (see Figure 1) we note that
\[\sum_{k}kp_{k}=3.87454\qquad\sum_{k}p_{k}=P(T=0)=0.354135 \tag{5}\]
which agrees with the Wolfram Alpha result \(P(T=0)=1-0.6549\). So we have \(E(D|T=0)=10.941\).
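A short Python sketch of the closed form for \(p_{k}\), reproducing the sums in (5) and the conditional mean \(E(D|T=0)\) with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, perm, factorial

def p(k, n=100, d=365):
    """Probability of exactly k double birthdays and no triple (or higher)
    coincidence, using the closed form derived above."""
    pairs = 1
    for j in range(k):
        pairs *= comb(n - 2 * j, 2)
    return Fraction(pairs, factorial(k)) * perm(d, n - k) / Fraction(d) ** n

pk = [p(k) for k in range(51)]                      # k can be at most 50 when n = 100
prob_no_triple = sum(pk)
mean_doubles_no_triple = sum(k * v for k, v in enumerate(pk))
print(float(prob_no_triple))                        # ~0.354135, i.e. P(T = 0)
print(float(mean_doubles_no_triple))                # ~3.87454
print(float(mean_doubles_no_triple / prob_no_triple))  # ~10.941 = E(D | T = 0)
```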
### The number of triple birthdays
McKinney [2]was perhaps the first person to try to determine the probability that in \(n\) people selected at random \(r\) will have the same birthday. To do this he let \(X_{i}\), \(1\leq i\leq n\) be i.i.d. uniform on \(\{1,2,\ldots M\}\). Let \(n_{i}\) be the number of values that appear \(i\) times in the sample. His main result is
\[P_{n}(n_{1},n_{2},\ldots n_{r-1})=\frac{n!}{\prod_{i=1}^{r-1}n_{i}!(j!)^{n_{j }}}\frac{P_{M,\sum_{i=1}^{r-1}n_{i}}}{M^{n}} \tag{6}\]
Proof.: The second factor in (6) is the probability than \(n\) independent uniforms will have \(n_{1}\) nonrepeated values, \(n_{2}\) pairs, \(n_{3}\) triples in a specified order. The first factor representa the number of distinguishable ways that this particular oder can be permuted.
If we let \(G_{r}^{c}=\) no value is repeated \(r\) or more times in the sample then
\[P(G_{r}^{c})=\sum\{P_{n}((n_{1},n_{2},\ldots n_{r-1}):\sum_{i}in_{i}=n\}\]
He used this to compute the following values of \(P(G_{r})=1-P(G_{r}^{c})\). (His \(E=G_{r}^{c}\).)
\[\begin{array}{llll}r=2&n=22&0.4758&n=23&0.5074\\ r=3&n=87&0.4998&n=88&0.5114\\ r=4&n=186&0.4758&n=187&0.5033\end{array}\]
To compute the distribution of the number of triple birthdays, we write recursions that are inspired by those given in Wolfram's Math World [5] for \(Q_{i}(n,d)=\) the probability that in a group of size \(n\) with \(d\) possible birthdays, a birthday is shared by exactly \(i\) (and no more) people. Here we let \(\tau_{k}(n,d)\) be the probability that there are exactly \(k\) triple birthdays in a group of size \(n\) with \(d\) possible birthdays.
We have computed \(\tau_{0}(100,365)=0.386\).
\[\tau_{1}(100,365)={100\choose 3}{1\over 365^{2}}\left({364\over 365}\right)^{ 97}\times\tau_{0}(97,364)\]
The term \((364/365)^{97}\) is the probability that the \(97\) remaining people do not have birthdays that match the triple birthday. If we condition on this event then their birthdays are uniform
Figure 1: Graph of \(p_{k}=\) the probability of \(k\) double birthdays conditioned on no triple birthday (line with longer dashes), compared to Poisson with mean \(10.941\) (shorter dashes). Solid line gives results from simulation of \(1\) million instances of the unconditioned distribution. The mean is \(10.36\) in agreement with (4).
over the remaining 364 possibilities. Similarly
\[\tau_{2}(100,365) =\frac{1}{2!}{100\choose 3}\frac{1}{365^{2}}{97\choose 3}\frac{364}{3 65}\frac{1}{365^{2}}\left(\frac{363}{365}\right)^{94}\times\tau_{0}(94,363)\] \[=\frac{1}{2!}{100\choose 3}{97\choose 3}\cdot\frac{P_{365,2}\cdot(3 63)^{94}}{(365)^{100}}\times\tau_{0}(94,363)\]
Following the pattern we can see
\[\tau_{3}(100,365) =\frac{1}{3!}{100\choose 3}{97\choose 3}{94\choose 3}\cdot\frac{P_{3 65,3}\cdot(362)^{91}}{(365)^{100}}\times\tau_{0}(91,362)\] \[\tau_{4}(100,365) =\frac{1}{4!}\prod_{j=0}^{3}{100-3j\choose 3}\cdot\frac{P_{3 65,4}\cdot(361)^{88}}{(365)^{100}}\times\tau_{0}(88,361)\]
These probabilities are easier to compute than one might expect. The quantities to the left of the \(\times\) signs are the \(q_{k}\) computed in Section 2.1. So it remains to compute \(\tau_{0}(100-3k,365-k)\) using the Bonferroni inequalities (and stopping with the fourth bound). The next table gives the results and compares them with values computed from 1 million simulations.
\[\begin{array}{ccccc}k&q_{k}&1-\tau_{0}(100-3k,365-k)&\text{calculation}&\text{simulation}\\ 0&&&0.386&0.380921\\ 1&0.93014&0.58796&0.38325&0.381977\\ 2&0.39960&0.55777&0.17672&0.176321\\ 3&0.10542&0.52719&0.049843&0.049634\\ 4&0.019153&0.49585&0.009656&0.009604\\ 5&2.548\times 10^{-3}&0.46415&0.001365&0.001375\\ \end{array}\]
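The recursion for \(\tau_{k}\) is straightforward to code once \(q_{k}(n,d)\) and a Bonferroni estimate of \(\tau_{0}\) are available; the sketch below mirrors the table (the truncation level of the alternating series is a practical choice, and a Monte Carlo check like the one reported above is easy to add).

```python
from fractions import Fraction
from math import comb, perm, factorial

def q(k, n, d):
    """k-th inclusion-exclusion term for triple matches among n people, d days."""
    people = 1
    for j in range(k):
        people *= comb(n - 3 * j, 3)
    num = Fraction(people, factorial(k)) * perm(d, k) * (d - k) ** (n - 3 * k)
    return num / Fraction(d) ** n

def tau0(n, d, terms=6):
    """Bonferroni estimate of the probability of no triple birthday; the number
    of terms kept in the alternating series is a practical choice."""
    s = Fraction(0)
    for k in range(1, terms + 1):
        s += q(k, n, d) if k % 2 == 1 else -q(k, n, d)
    return 1 - s

def tau(k, n=100, d=365):
    """Probability of exactly k triple birthdays, via the recursion in the text."""
    if k == 0:
        return float(tau0(n, d))
    return float(q(k, n, d) * tau0(n - 3 * k, d - k))

print([round(tau(k), 6) for k in range(6)])   # compare with the 'calculation' column above
```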
## Acknowledgement
This work was partially supported by NSF grant DMS 2153429 from the probability program. The views expressed here are those of the author and do not necessarily reflect the views of the National Science Foundation. Computations were performed by my student Hwai-Ray Tung, who will graduate from Duke in May 2023 and go to a postdoctoral position in the Utah math department on July 1. On that date his adviser will become a James B. Duke Emeritus Professor of Mathematics. |
2310.06761 | Symmetric Semi-invariants for some Inonu-Wigner contractions | Let $\mathfrak p$ be a proper parabolic subalgebra of a simple Lie algebra
$\mathfrak g$. Writing $\mathfrak p=\mathfrak r\oplus \mathfrak m$, with
$\mathfrak r$ being the Levi factor of $\mathfrak p$ and $\mathfrak m$ the
nilpotent radical of $\mathfrak p$, we may consider the semi-direct product
$\tilde\mathfrak p=\mathfrak r\ltimes(\mathfrak m)^a$ where $(\mathfrak m)^a$
is an abelian ideal of $\tilde\mathfrak p$, isomorphic to $\mathfrak m$ as an
$\mathfrak r$-module. Then $\tilde\mathfrak p$ is a Lie algebra, which is a
special case of In\"on\"u-Wigner contraction and may be considered as a
degeneration of the parabolic subalgebra $\mathfrak p$. Let $S(\tilde\mathfrak
p)$ be the symmetric algebra of $\tilde\mathfrak p$ (it is equal to the
symmetric algebra $S(\mathfrak p)$ of $\mathfrak p$) and consider the algebra
of semi-invariants $Sy(\tilde\mathfrak p)\subset S(\tilde\mathfrak p)$ under
the adjoint action of $\tilde\mathfrak p$. Using what we call a generalized PBW
filtration on a highest weight irreducible representation $V(\lambda)$ of
$\mathfrak g$, induced by the standard degree filtration on the enveloping
algebra $U(\mathfrak m^-)$ of $\mathfrak m^-$, the nilpotent radical of the
opposite parabolic subalgebra $\mathfrak p^-$ of $\mathfrak p$, one obtains a
lower bound for the formal character of the algebra $Sy(\tilde\mathfrak
when the latter is well defined. | Florence Fauquant-Millet | 2023-10-10T16:37:48Z | http://arxiv.org/abs/2310.06761v1 | # Symmetric semi-invariants for some
###### Abstract.
Let \(\mathfrak{p}\) be a proper parabolic subalgebra of a simple Lie algebra \(\mathfrak{g}\). Writing \(\mathfrak{p}=\mathfrak{r}\oplus\mathfrak{m}\) with \(\mathfrak{r}\) being the Levi factor of \(\mathfrak{p}\) and \(\mathfrak{m}\) the nilpotent radical of \(\mathfrak{p}\), we may consider the semi-direct product \(\tilde{\mathfrak{p}}=\mathfrak{r}\ltimes(\mathfrak{m})^{a}\), where \((\mathfrak{m})^{a}\) is an abelian ideal of \(\tilde{\mathfrak{p}}\), isomorphic to \(\mathfrak{m}\) as an \(\mathfrak{r}\)-module. Then \(\tilde{\mathfrak{p}}\) is a Lie algebra, which is a special case of Inonu-Wigner contraction and may be considered as a degeneration of the parabolic subalgebra \(\mathfrak{p}\). Let \(S(\tilde{\mathfrak{p}})\) be the symmetric algebra of \(\tilde{\mathfrak{p}}\) (it is equal to the symmetric algebra \(S(\mathfrak{p})\) of \(\mathfrak{p}\)) and consider the algebra of semi-invariants \(Sy(\tilde{\mathfrak{p}})\subset S(\tilde{\mathfrak{p}})\) under the adjoint action of \(\tilde{\mathfrak{p}}\). Using what we call a generalized PBW filtration on a highest weight irreducible representation \(V(\lambda)\) of \(\mathfrak{g}\), induced by the standard degree filtration on \(U(\mathfrak{m}^{-})\) (where \(\mathfrak{m}^{-}\) is the nilpotent radical of the opposite subalgebra \(\mathfrak{p}^{-}\) of \(\mathfrak{p}\)) one obtains a lower bound for the formal character of the algebra \(Sy(\tilde{\mathfrak{p}})\), when the latter is well defined.
_Mathematics Subject Classification_ : 16 W 22, 17 B 22, 17 B 35.
_Key words_ : Inonu-Wigner contraction, parabolic subalgebra, symmetric invariants, semi-invariants.
## 1. Introduction.
The base field \(\Bbbk\) is algebraically closed of characteristic zero.
### The aim of the paper
Let \(\mathfrak{g}\) be a simple Lie algebra over \(\Bbbk\) and fix a Cartan subalgebra \(\mathfrak{h}\) of \(\mathfrak{g}\). Then choose a set \(\pi\) of simple roots for \((\mathfrak{g},\,\mathfrak{h})\) and denote by \(\mathfrak{b}\) the Borel subalgebra of \(\mathfrak{g}\) associated with it. Let \(\mathfrak{p}\supset\mathfrak{b}\) be a proper parabolic subalgebra of \(\mathfrak{g}\). Denote by \(\mathfrak{n}\), resp. \(\mathfrak{n}^{-}\), the maximal nilpotent subalgebra of \(\mathfrak{g}\) generated by all positive, resp. negative, root vectors, so that \(\mathfrak{g}=\mathfrak{n}^{-}\oplus\mathfrak{h}\oplus\mathfrak{n}\) and \(\mathfrak{b}=\mathfrak{h}\oplus\mathfrak{n}\). Let \(\mathfrak{r}\) denote the Levi factor of \(\mathfrak{p}\) (so that \(\mathfrak{r}\) is a reductive Lie algebra) and \(\mathfrak{m}\) the nilpotent radical of \(\mathfrak{p}\). Then one has that \(\mathfrak{p}=\mathfrak{r}\oplus\mathfrak{m}\). Now consider the semi-direct product \(\tilde{\mathfrak{p}}=\mathfrak{r}\ltimes(\mathfrak{m})^{a}\) where \((\mathfrak{m})^{a}\) is isomorphic to \(\mathfrak{m}\) as an \(\mathfrak{r}\)-module, the superscript \(a\) meaning that \((\mathfrak{m})^{a}\) is an abelian ideal of \(\tilde{\mathfrak{p}}\). The semi-direct product \(\tilde{\mathfrak{p}}\) is still a Lie algebra which may be viewed as a _degeneration_ of the parabolic subalgebra \(\mathfrak{p}\). It is called an Inonu-Wigner contraction, or a one-parameter contraction of \(\mathfrak{p}\) (see [46, Sect. 4]). Denoting by \(\mathfrak{a}^{\prime}\) the derived subalgebra of any Lie algebra \(\mathfrak{a}\), one has that \(\tilde{\mathfrak{p}}^{\prime}=\mathfrak{r}^{\prime}\ltimes(\mathfrak{m})^{a}\).
In this paper we are interested in the algebra \(Sy(\tilde{\mathfrak{p}})\) of symmetric semi-invariants in the symmetric algebra \(S(\tilde{\mathfrak{p}})\) of \(\tilde{\mathfrak{p}}\) under the adjoint action of \(\tilde{\mathfrak{p}}\)
which is also equal to the algebra \(S(\tilde{\mathfrak{p}})^{\tilde{\mathfrak{p}}^{\prime}}\) of symmetric invariants under the adjoint action of \(\tilde{\mathfrak{p}}^{\prime}\). In some cases (especially when \(\mathfrak{p}\) is a maximal parabolic subalgebra), we have that \(Sy(\tilde{\mathfrak{p}})=S(\tilde{\mathfrak{p}}^{\prime})^{\tilde{\mathfrak{p} }^{\prime}}\). For the natural Poisson structure on \(S(\tilde{\mathfrak{p}})\), the algebra \(Sy(\tilde{\mathfrak{p}})\) is also equal to the Poisson semicentre of \(S(\tilde{\mathfrak{p}})\). Roughly speaking, we may view the algebra \(Sy(\tilde{\mathfrak{p}})\) as a _degeneration_ of the algebra of symmetric semi-invariants \(Sy(\mathfrak{p})=S(\mathfrak{p})^{\mathfrak{p}^{\prime}}\) in \(S(\mathfrak{p})\). The aim of the present paper is to construct a lower bound for the formal character of \(Sy(\tilde{\mathfrak{p}})\) (when the latter is well defined), which will be shown to be equal to the lower bound for the formal character of \(Sy(\mathfrak{p})\), as computed in [15, Sect. 6] (see also [14, Prop. 3.1]). We hope then, when \(\mathfrak{p}\) is a maximal parabolic subalgebra of \(\mathfrak{g}\), to compare this lower bound with an upper bound, given by an adapted pair of \(\tilde{\mathfrak{p}}^{\prime}\) and to show that both bounds coincide : this will imply that in this case, the algebra \(Sy(\tilde{\mathfrak{p}})\) is a polynomial algebra, for which we can give the number of algebraically independent generators, their weight and degree.
Similar semi-direct products were studied extensively by Panyushev and Yakimova in [35], [37], [38], [39], [46], [47]. In particular these authors studied the polynomiality of the algebra of symmetric invariants \(S(\mathfrak{q})^{\mathfrak{q}}\) for semi-direct products \(\mathfrak{q}=\mathfrak{a}\ltimes V\) where \(\mathfrak{a}\) is a **simple** Lie algebra and \(V\) is a finite-dimensional representation of \(\mathfrak{a}\). For any type of simple Lie algebra \(\mathfrak{a}\) (except in type A where their study is partial) they established a list of all representations \(V\), up to isomorphism, of \(\mathfrak{a}\) for which the algebra \(S(\mathfrak{q})^{\mathfrak{q}}\) is polynomial and they gave the number of algebraically independent generators. Observe that in our paper we deal with a semi-direct product \(\mathfrak{q}=\mathfrak{a}\ltimes V\) with \(\mathfrak{a}=\mathfrak{r}^{\prime}\) being **semisimple** (and not necessarily simple in general) and \(V=\mathfrak{m}\). Note that it is shown in [37, Th. 1.1] that the bi-homogeneous components of highest degree relative to \(\mathfrak{m}\) of homogeneous elements in \(S(\mathfrak{p}^{\prime})^{\mathfrak{p}^{\prime}}\) lie in \(S(\tilde{\mathfrak{p}}^{\prime})^{\tilde{\mathfrak{p}}^{\prime}}=S(\mathfrak{ p}^{\prime})^{\tilde{\mathfrak{p}}^{\prime}}\). Moreover by [45, Th. 3.8], if \(S(\mathfrak{p}^{\prime})^{\mathfrak{p}^{\prime}}\) is polynomial, generated by a set of algebraically independent homogeneous generators satisfying further conditions, one may know whether \(S(\mathfrak{p}^{\prime})^{\tilde{\mathfrak{p}}^{\prime}}\) is also polynomial : it happens if and only if the sum of their degrees relative to \(\mathfrak{m}\) is equal to \(\dim\mathfrak{m}\). Unfortunately, even when the degree is known for each generator of \(S(\mathfrak{p}^{\prime})^{\mathfrak{p}^{\prime}}\), it does not seem to be easy to compute its degree relative to \(\mathfrak{m}\).
### The method
The method we use in this paper is completely different from that of Panyushev and Yakimova. Our method is partly inspired by the one used in [14], [15] to study the polynomiality of the algebra of symmetric semi-invariants \(Sy(\mathfrak{p})\). The study of the latter algebra will be called _the nondegenerate case_, while we will call the study of \(Sy(\tilde{\mathfrak{p}})\)_the degenerate case_.
Our aim is to construct a lower bound for the algebra \(Sy(\tilde{\mathfrak{p}})\) of semi-invariants. This bound will be given by the algebra of matrix coefficients on some degenerate module built from the irreducible highest weight \(\mathfrak{g}\)-module \(V(\lambda)\) of highest weight \(\lambda\), for \(\lambda\in P^{+}(\pi)\), where \(P^{+}(\pi)\) is the set of dominant integral weights of \((\mathfrak{g},\,\mathfrak{h})\).
Let us describe our method and our main result more precisely.
* In subsections 3.1 and 3.2 we fix \(\lambda\in P^{+}(\pi)\) and denote by \(\mathfrak{p}^{-}=\mathfrak{r}\oplus\mathfrak{m}^{-}\supset\mathfrak{b}^{-}= \mathfrak{h}\oplus\mathfrak{n}^{-}\) the opposite parabolic subalgebra of \(\mathfrak{p}\), where \(\mathfrak{m}^{-}\) is the nilpotent radical of \(\mathfrak{p}^{-}\) and by \(\tilde{\mathfrak{p}}^{-}=\mathfrak{r}\ltimes(\mathfrak{m}^{-})^{a}\) the one-parameter contraction of \(\mathfrak{p}^{-}\). Then, inspired by the construction in [20], we define what we call a _generalized PBW filtration_\((\mathscr{F}_{k}(V(\lambda))_{k\in\mathbb{N}}\) on \(V(\lambda)\), which is an increasing and exhaustive filtration on \(V(\lambda)\), induced by the canonical (or standard degree) filtration \((U_{k}(\mathfrak{m}^{-}))_{k\in\mathbb{N}}\) on the enveloping algebra \(U(\mathfrak{m}^{-})\) of \(\mathfrak{m}^{-}\). The associated graded space, that we call the _degenerate highest weight module associated with \(\lambda\)_, is denoted by \[\widetilde{V}(\lambda):=gr_{\mathscr{F}}(V(\lambda))=\bigoplus_{k\in\mathbb{N }}gr_{k}(V(\lambda))\] where \(gr_{k}(V(\lambda))=\frac{\mathscr{F}_{k}(V(\lambda))}{\mathscr{F}_{k-1}(V( \lambda))}\) for all \(k\in\mathbb{N}\) with \(\mathscr{F}_{-1}(V(\lambda)):=\{0\}\). If \(v_{\lambda}\) is a nonzero vector of highest weight \(\lambda\) in \(V(\lambda)\), we denote by \(V^{\prime}(\lambda)\) the irreducible \(U(\mathfrak{r})\)-submodule of \(V(\lambda)\) generated by \(v_{\lambda}\) and by \(\widetilde{V^{\prime}}(\lambda)\subset\widetilde{V}(\lambda)\) the canonical image of \(V^{\prime}(\lambda)\) in \(\widetilde{V}(\lambda)\). We will observe that, as \(U(\mathfrak{r})\)-modules, we have \(\widetilde{V}(\lambda)\simeq V(\lambda)\). Set \(\tilde{v}_{\lambda}\) the canonical image of \(v_{\lambda}\) in \(\widetilde{V}(\lambda)\). We define a left \(U(\tilde{\mathfrak{p}}^{-})\)-module structure on \(\widetilde{V}(\lambda)\), for which we have that \(\widetilde{V^{\prime}}(\lambda)=U(\mathfrak{r}).\tilde{v}_{\lambda}\) and that \[\widetilde{V}(\lambda)=U(\tilde{\mathfrak{p}}^{-}).\tilde{v}_{\lambda}=S( \mathfrak{m}^{-}).\widetilde{V^{\prime}}(\lambda)=U(\tilde{\mathfrak{p}}^{-} ).\widetilde{V^{\prime}}(\lambda).\]
* In subsections 4.1, 4.2, 4.4, denoting by \(T(\mathfrak{m})\) the tensor algebra of \(\mathfrak{m}\), we consider the associative algebra \(A=T(\mathfrak{m})\#U(\mathfrak{r})\), which is the Hopf smash product of the left \(U(\mathfrak{r})\)-algebra \(T(\mathfrak{m})\) by the Hopf algebra \(U(\mathfrak{r})\), as defined for example in [24, 1.1.8]. As \(T(\mathfrak{m})\) is also equipped with a coproduct, we obtain that this smash product \(A\) also inherits a structure of a bialgebra. We then consider the coadjoint action, which we denote by \(ad^{*}\), of \(U(\tilde{\mathfrak{p}})\) on \(\mathfrak{p}^{-}\simeq\tilde{\mathfrak{p}}^{*}\) (as vector spaces). Then \(ad^{*}\) extends uniquely by derivation to an action of \(U(\tilde{\mathfrak{p}})\) on \(S(\mathfrak{p}^{-})\). From this action \(ad^{*}\), we define what we call a _generalized adjoint action_\(ad^{**}\) of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})\), which coincides with the adjoint action on \(U(\tilde{\mathfrak{p}}^{-})\), when restricted to \(U(\mathfrak{r})\).
* In subsections 5.1 and 5.3, we consider spaces of matrix coefficients. For \(\lambda\in P^{+}(\pi)\), we set \(\tilde{v}_{w_{0}\lambda}\) the canonical image in \(\widetilde{V}(\lambda)\) of a chosen nonzero lowest weight vector in \(V(\lambda)\) and by \(\widetilde{V^{\prime\prime}}(\lambda)\) the \(U(\mathfrak{r})\)-submodule of \(\widetilde{V}(\lambda)\) generated by \(\tilde{v}_{w_{0}\lambda}\). We denote by \(\widetilde{V}(\lambda)^{*}\) the dual space of \(\widetilde{V}(\lambda)\). For all \(\xi\in\widetilde{V}(\lambda)^{*}\) and \(v\in\widetilde{V^{\prime}}(\lambda)\), the matrix coefficient \(c_{\xi,\,v}\in U(\tilde{\mathfrak{p}}^{-})^{*}\) is defined by : \[c_{\xi,\,v}(u)=\xi(u.\,v)\,\,\,\text{for all}\,\,u\in U(\tilde{\mathfrak{p}}^{ -}).\]
Then we define \(\widetilde{C}_{\mathfrak{p}}(\lambda)\) to be the subspace of \(U(\tilde{\mathfrak{p}}^{-})^{*}\) generated by \[\{c_{\xi,\,v}\mid\xi\in\widetilde{V}(\lambda)^{*},\,v\in\widetilde{V^{\prime}} (\lambda)\}\] and \(\widetilde{C}_{\mathfrak{r}}(\lambda)\) to be the subspace of \(\widetilde{C}_{\mathfrak{p}}(\lambda)\) generated by \[\{c_{\xi,\,v}\mid\xi\in\widetilde{V^{\prime\prime}}(\lambda)^{*},\,v\in \widetilde{V^{\prime}}(\lambda)\}.\] We set \(\widetilde{C}_{\mathfrak{p}}=\sum_{\lambda\in P^{+}(\pi)}\widetilde{C}_{ \mathfrak{p}}(\lambda)\) and \(\widetilde{C}_{\mathfrak{r}}=\sum_{\lambda\in P^{+}(\pi)}\widetilde{C}_{ \mathfrak{r}}(\lambda)\). We show that these are direct sums and that \(\widetilde{C}_{\mathfrak{r}}\) is a subalgebra of \(U(\tilde{\mathfrak{p}}^{-})^{*}\).
* In subsection 5.5, we consider the dual representation of \(ad^{**}\), which defines a left \(A\)-module structure on \(U(\tilde{\mathfrak{p}}^{-})^{*}\). When restricted to \(U(\mathfrak{r})\), the dual representation of \(ad^{**}\) defines a left \(U(\mathfrak{r})\)-module structure on every \(\widetilde{C}_{\mathfrak{r}}(\lambda)\), \(\lambda\in P^{+}(\pi)\), and then on \(\widetilde{C}_{\mathfrak{r}}\), which coincides with the coadjoint representation.
* In subsections 6.1 and 6.2, for all \(\lambda\in P^{+}(\pi)\), we denote by \(\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime})}\), resp. \(\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\) the vector space, resp. the algebra, of invariants in \(\widetilde{C}_{\mathfrak{r}}(\lambda)\), resp. in \(\widetilde{C}_{\mathfrak{r}}\), by the coadjoint representation of \(U(\mathfrak{r}^{\prime})\). We have that \[\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}=\bigoplus_{\lambda \in P^{+}(\pi)}\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime})}.\] Denote by \(\pi^{\prime}\subset\pi\) the subset of simple roots of \((\mathfrak{g},\mathfrak{h})\) associated with the parabolic subalgebra \(\mathfrak{p}\), set \(\mathfrak{h}_{\pi^{\prime}}=\mathfrak{h}\cap\mathfrak{p}^{\prime}\), and denote by \((\,\ )\) the non degenerate symmetric bilinear form on \(\mathfrak{h}^{*}\times\mathfrak{h}^{*}\) induced by the Killing form on \(\mathfrak{g}\). Since, for all \(\lambda\in P^{+}(\pi)\), \(\widetilde{V^{\prime}}(\lambda)\) is an irreducible \(U(\mathfrak{r})\)-module, the Jacobson density theorem implies that the \(U(\mathfrak{r})\)-module \(\widetilde{C}_{\mathfrak{r}}(\lambda)\) is isomorphic to the \(U(\mathfrak{r})\)-module \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\otimes\widetilde{V^{\prime}}(\lambda)\) where the latter is endowed with the diagonal action of \(U(\mathfrak{r})\). It follows that, for all \(\lambda\in P^{+}(\pi)\), \(\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime})}\) is of dimension less or equal to one, and equal to one if and only if \[(w_{0}^{\prime}\lambda-w_{0}\lambda,\,\pi^{\prime})=0\] where \(w_{0}^{\prime}\), resp. \(w_{0}\), is the longest element in the Weyl group of \((\mathfrak{r}^{\prime},\,\mathfrak{h}_{\pi^{\prime}})\), resp. of \((\mathfrak{g},\,\mathfrak{h})\). As a consequence we show (as in [14, prop. 3.1]) that \(\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\) is a polynomial algebra, for which we can compute the weight of each vector of a set of algebraically independent generators.
* In subsections 7.1, 7.3, 7.4 and 7.5, inspired by [15, 6.1], one defines on the algebra \(U(\tilde{\mathfrak{p}}^{-})^{*}\) what we call _the generalized Kostant filtration_\((\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*}))_{k\in\mathbb{N}}\), which is a decreasing, exhaustive and separated ring filtration. This filtration is invariant under the action of \(A\) given by the dual representation of \(ad^{**}\).
One denotes by \(gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})=\bigoplus_{k\in\mathbb{N}}gr_{K}^{k}(U( \tilde{\mathfrak{p}}^{-})^{*})\) the graded algebra associated with this filtration where, for all \(k\in\mathbb{N}\), \[gr_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})=\frac{\mathscr{F}_{K}^{k}(U(\tilde {\mathfrak{p}}^{-})^{*})}{\mathscr{F}_{K}^{k+1}(U(\tilde{\mathfrak{p}}^{-})^{* })}.\] The dual representation of \(ad^{**}\) induces a left action of \(A\) on this graded algebra and one checks that, for all \(x\in\mathfrak{m}\), for all \(f\in\widetilde{C}_{\mathfrak{r}}\cap\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p} }^{-})^{*})\), one has for this action \[x.f\in\mathscr{F}_{K}^{k+1}(U(\tilde{\mathfrak{p}}^{-})^{*})\] that is, that \[x.gr_{K}^{k}(f)=0\] (\(\diamond\)) where \(gr_{K}^{k}(f)\) denotes the canonical image of \(f\) in \(gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\). Then for all \(k\in\mathbb{N}\) and all vector space \(V\), denoting by \(S_{k}(V)\) the vector subspace of the symmetric algebra \(S(V)\) of \(V\) formed by all homogeneous polynomials of degree \(k\), one defines a morphism \(\psi_{k}:gr_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\longrightarrow S_{k}( \mathfrak{p}^{-})^{*}\). It is easily checked that actually \(\psi_{k}\) is an isomorphism of left \(U(\tilde{\mathfrak{p}})\)-modules, where the left structure on \(gr_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\) is induced by the dual representation of \(ad^{**}\) and where the left structure on \(S_{k}(\mathfrak{p}^{-})^{*}\) is given by the dual representation of \(ad^{*}\). With this structure, it is easily checked that \(S_{k}(\mathfrak{p}^{-})^{*}\) is isomorphic to the \(U(\tilde{\mathfrak{p}})\)-module \(S_{k}(\tilde{\mathfrak{p}})=S_{k}(\mathfrak{p})\) where the action of \(\tilde{\mathfrak{p}}\) is the adjoint action which extends by derivation the Lie bracket in \(\tilde{\mathfrak{p}}\). Thus we obtain an isomorphism of \(U(\tilde{\mathfrak{p}})\)-modules and of algebras from \(gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\) to \(S(\tilde{\mathfrak{p}})\).
* Denote by \(gr_{K}(\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})})\) the graded algebra associated with the induced generalized Kostant filtration on \(\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\). The former may be viewed as a subalgebra of \(gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\), which by equation (\(\diamond\)) is invariant under the action of \(U(\tilde{\mathfrak{p}}^{\prime})\) induced by the action of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})^{*}\) given by the dual representation of \(ad^{**}\). Finally one can establish the main result of our paper (see subsection 7.6). **Theorem**.: _There is an injection of algebras and of \(U(\mathfrak{h})\)-modules from \(gr_{K}(\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})})\) into the Poisson semicentre \(Sy(\tilde{\mathfrak{p}})=S(\tilde{\mathfrak{p}})^{\tilde{\mathfrak{p}}^{ \prime}}\). This implies a lower bound for the formal character of \(Sy(\tilde{\mathfrak{p}})\), when the latter is well defined._
## 2. Notation.
### General notation
Let \(\mathfrak{g}\) be a simple Lie algebra over \(\Bbbk\), \(\mathfrak{h}\) be a Cartan subalgebra of \(\mathfrak{g}\) and choose a set \(\pi\) of simple roots for \((\mathfrak{g},\,\mathfrak{h})\). Denote by \(\Delta^{\pm}\) the set of positive, resp. negative, roots of \((\mathfrak{g},\,\mathfrak{h})\) with respect to \(\pi\) and \(\Delta=\Delta^{+}\sqcup\Delta^{-}\) the set of roots of \((\mathfrak{g},\,\mathfrak{h})\). Denote by \([\,,\,]\) the Lie bracket in \(\mathfrak{g}\) and by \(\langle\,,\,\rangle\) the natural duality between \(\mathfrak{h}\) and \(\mathfrak{h}^{*}\). Then for all root \(\alpha\in\Delta\)
set \(\mathfrak{g}_{\alpha}=\{x\in\mathfrak{g}\mid\forall h\in\mathfrak{h}\), \([h,\,x]=\langle h,\,\alpha\rangle x\}\) and fix a nonzero root vector \(x_{\alpha}\) in \(\mathfrak{g}_{\alpha}\).
Denote by \(\mathfrak{n}=\bigoplus_{\alpha\in\Delta^{+}}\mathfrak{g}_{\alpha}\), resp. \(\mathfrak{n}^{-}=\bigoplus_{\alpha\in\Delta^{-}}\mathfrak{g}_{\alpha}\), the maximal nilpotent subalgebra of \(\mathfrak{g}\) generated by positive, resp. negative, root vectors, so that \(\mathfrak{g}=\mathfrak{n}\oplus\mathfrak{h}\oplus\mathfrak{n}^{-}\). Let \(\mathfrak{b}=\mathfrak{n}\oplus\mathfrak{h}\) be the Borel subalgebra of \(\mathfrak{g}\).
For each subset \(\pi^{\prime}\) of \(\pi\), we denote by \(\Delta^{\pm}_{\pi^{\prime}}\) the subset of \(\Delta^{\pm}\) generated by \(\pi^{\prime}\) that is, \(\Delta^{\pm}_{\pi^{\prime}}=(\pm\mathbb{N}\pi^{\prime})\cap\Delta^{\pm}\). Set \(\mathfrak{n}_{\pi^{\prime}}=\bigoplus_{\alpha\in\Delta^{+}_{\pi^{\prime}}} \mathfrak{g}_{\alpha}\), resp. \(\mathfrak{n}^{-}_{\pi^{\prime}}=\bigoplus_{\alpha\in\Delta^{-}_{\pi^{\prime}} }\mathfrak{g}_{\alpha}\). Then the (standard) parabolic subalgebra \(\mathfrak{p}\supset\mathfrak{b}\) of \(\mathfrak{g}\) associated with \(\pi^{\prime}\) is
\[\mathfrak{p}=\mathfrak{n}\oplus\mathfrak{h}\oplus\mathfrak{n}^{-}_{\pi^{ \prime}}.\]
The Levi factor \(\mathfrak{r}\) of \(\mathfrak{p}\) is
\[\mathfrak{r}=\mathfrak{n}_{\pi^{\prime}}\oplus\mathfrak{h}\oplus\mathfrak{n} ^{-}_{\pi^{\prime}}\]
and its derived subalgebra (which is semisimple) is \(\mathfrak{r}^{\prime}=\mathfrak{n}_{\pi^{\prime}}\oplus\mathfrak{h}_{\pi^{\prime}}\oplus\mathfrak{n}^{-}_{\pi^{\prime}}\), where \(\mathfrak{h}_{\pi^{\prime}}=\mathfrak{h}\cap\mathfrak{p}^{\prime}\), with \(\mathfrak{p}^{\prime}=[\mathfrak{p},\,\mathfrak{p}]\) being the derived subalgebra of \(\mathfrak{p}\). If for all \(\alpha\in\pi\), \(\alpha^{\vee}\) denotes the coroot associated with \(\alpha\), we have that \(\mathfrak{h}_{\pi^{\prime}}\) is the \(\Bbbk\)-vector space generated by the coroots \(\alpha^{\vee}\), with \(\alpha\in\pi^{\prime}\).
The longest element of the Weyl group \(W\), resp. \(W^{\prime}\), of \((\mathfrak{g},\,\mathfrak{h})\), resp. of \((\mathfrak{r}^{\prime},\,\mathfrak{h}_{\pi^{\prime}})\), is denoted by \(w_{0}\), resp. \(w^{\prime}_{0}\).
Set \(\mathfrak{h}^{\pi\setminus\pi^{\prime}}=\{h\in\mathfrak{h}\mid\langle h,\, \pi^{\prime}\rangle=0\}\) so that \(\mathfrak{h}=\mathfrak{h}_{\pi^{\prime}}\oplus\mathfrak{h}^{\pi\setminus\pi^{ \prime}}\). Denote by \(\mathfrak{m}\) the nilpotent radical of \(\mathfrak{p}\), so that \(\mathfrak{p}=\mathfrak{r}\oplus\mathfrak{m}\). We have that \(\mathfrak{n}=\mathfrak{n}_{\pi^{\prime}}\oplus\mathfrak{m}\) and that \(\mathfrak{m}=\bigoplus_{\alpha\in\Delta^{+}\setminus\Delta^{+}_{\pi^{\prime}}} \mathfrak{g}_{\alpha}\). The opposite subalgebra \(\mathfrak{p}^{-}\) of \(\mathfrak{p}\) is the parabolic subalgebra of \(\mathfrak{g}\) defined by
\[\mathfrak{p}^{-}=\mathfrak{n}^{-}\oplus\mathfrak{h}\oplus\mathfrak{n}_{\pi^{ \prime}}.\]
We denote by \(\mathfrak{m}^{-}\) the nilpotent radical of \(\mathfrak{p}^{-}\) (so that \(\mathfrak{p}^{-}=\mathfrak{r}\oplus\mathfrak{m}^{-}\)). The Killing form \(K\) on \(\mathfrak{g}\times\mathfrak{g}\) induces an isomorphism between the dual space \(\mathfrak{p}^{*}\) of \(\mathfrak{p}\) and the vector space \(\mathfrak{p}^{-}\), since \(K\) is non degenerate on \(\mathfrak{p}\times\mathfrak{p}^{-}\). Moreover since \(K\) is also non degenerate on \(\mathfrak{h}\times\mathfrak{h}\), it induces a non degenerate symmetric bilinear form \((\,\ )\) on \(\mathfrak{h}^{*}\times\mathfrak{h}^{*}\) which is invariant under the action of \(W\) (see for instance [16, 5.2.2]).
For all \(\alpha\in\pi\), resp. \(\alpha\in\pi^{\prime}\), let \(\varpi_{\alpha}\), resp. \(\varpi^{\prime}_{\alpha}\), be the fundamental weight associated with \(\alpha\) with respect to \((\mathfrak{g},\,\mathfrak{h})\), resp. with respect to \((\mathfrak{r}^{\prime},\,\mathfrak{h}_{\pi^{\prime}})\). Then \(P(\pi)=\sum_{\alpha\in\pi}\mathbb{Z}\varpi_{\alpha}\), resp. \(P(\pi^{\prime})=\sum_{\alpha\in\pi^{\prime}}\mathbb{Z}\varpi^{\prime}_{\alpha}\), is the weight lattice of \((\mathfrak{g},\,\mathfrak{h})\), resp. \((\mathfrak{r}^{\prime},\,\mathfrak{h}_{\pi^{\prime}})\). Moreover \(P^{+}(\pi)=\sum_{\alpha\in\pi}\mathbb{N}\varpi_{\alpha}\), resp. \(P^{+}(\pi^{\prime})=\sum_{\alpha\in\pi^{\prime}}\mathbb{N}\varpi^{\prime}_{\alpha}\), is the set of dominant integral weights of \((\mathfrak{g},\,\mathfrak{h})\), resp. \((\mathfrak{r}^{\prime},\,\mathfrak{h}_{\pi^{\prime}})\). By [15, 2.5], there exists some positive integer \(r\) such that
\[P(\pi)\subset P(\pi^{\prime})\oplus\frac{1}{r}\sum_{\alpha\in\pi\setminus\pi^{ \prime}}\mathbb{Z}\varpi_{\alpha} \tag{1}\]
and for all \(\alpha\in\pi^{\prime}\), the projection of \(\varpi_{\alpha}\) in \(P(\pi^{\prime})\) with respect to this decomposition (1) is \(\varpi^{\prime}_{\alpha}\). For \(\lambda=\sum_{\alpha\in\pi}m_{\alpha}\varpi_{\alpha}\in P(\pi)\) (\(m_{\alpha}\in\mathbb{Z}\) for each \(\alpha\in\pi\)), we denote by \(\lambda^{\prime}=\sum_{\alpha\in\pi^{\prime}}m_{\alpha}\varpi^{\prime}_{\alpha}\) its projection in \(P(\pi^{\prime})\) with respect to the decomposition (1).
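For instance (a small worked example, with the standard assumption that each \(\varpi^{\prime}_{\alpha}\), \(\alpha\in\pi^{\prime}\), is realized inside \(\mathfrak{h}^{*}\) as the fundamental weight of the root subsystem generated by \(\pi^{\prime}\)), take \(\mathfrak{g}=\mathfrak{sl}_{3}\), \(\pi=\{\alpha_{1},\,\alpha_{2}\}\) and \(\pi^{\prime}=\{\alpha_{1}\}\). Then \(\varpi^{\prime}_{\alpha_{1}}=\frac{1}{2}\alpha_{1}\) and, since \(\alpha_{1}=2\varpi_{\alpha_{1}}-\varpi_{\alpha_{2}}\),

\[\varpi_{\alpha_{1}}=\varpi^{\prime}_{\alpha_{1}}+\frac{1}{2}\varpi_{\alpha_{2}},\qquad\varpi_{\alpha_{2}}\in\frac{1}{2}\mathbb{Z}\varpi_{\alpha_{2}},\]

so that the decomposition (1) holds with \(r=2\) and the projection of \(\varpi_{\alpha_{1}}\) in \(P(\pi^{\prime})\) is indeed \(\varpi^{\prime}_{\alpha_{1}}\).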
For any finite-dimensional Lie algebra \(\mathfrak{a}\), we denote by \(U(\mathfrak{a})\) its universal enveloping algebra and by \(S(\mathfrak{a})\) its symmetric algebra, which may be viewed as the (commutative) graded algebra associated with the canonical filtration \((U_{k}(\mathfrak{a}))_{k\in\mathbb{N}}\) on \(U(\mathfrak{a})\) (see [9, 2.3]). We may also identify \(S(\mathfrak{a})\) with the algebra \(\Bbbk[\mathfrak{a}^{*}]\) of polynomial functions on the dual space \(\mathfrak{a}^{*}\) of \(\mathfrak{a}\). For all \(k\in\mathbb{N}\), we denote by \(S_{k}(\mathfrak{a})\) the vector subspace of \(S(\mathfrak{a})\) formed by all homogeneous polynomials of degree \(k\).
For all \(\lambda\in P^{+}(\pi)\), the irreducible highest weight \(\mathfrak{g}\)-module of highest weight \(\lambda\) (which is obtained by quotienting the corresponding Verma module by its largest proper sub-\(\mathfrak{g}\)-module, as defined for example in [9, 7.1.11]) is denoted by \(V(\lambda)\) : recall ([9, 7.2.6]) that this is a finite-dimensional \(U(\mathfrak{g})\)-module. Note that (unlike the notation in [9, 7.1.4, 7.1.12]) the highest weight of \(V(\lambda)\) in our paper is \(\lambda\) and not \(\lambda-\rho\), where \(\rho\) is the sum of all fundamental weights of \((\mathfrak{g},\,\mathfrak{h})\).
### Semi-direct product
Recall the parabolic subalgebra \(\mathfrak{p}=\mathfrak{r}\oplus\mathfrak{m}\) and its opposite parabolic subalgebra \(\mathfrak{p}^{-}=\mathfrak{r}\oplus\mathfrak{m}^{-}\), with \(\mathfrak{m}\), resp. \(\mathfrak{m}^{-}\), the nilpotent radical of \(\mathfrak{p}\), resp. \(\mathfrak{p}^{-}\).
We now consider the semi-direct product \(\tilde{\mathfrak{p}}=\mathfrak{r}\ltimes(\mathfrak{m})^{a}\), resp. \(\tilde{\mathfrak{p}}^{-}=\mathfrak{r}\ltimes(\mathfrak{m}^{-})^{a}\), where \((\mathfrak{m})^{a}\), resp. \((\mathfrak{m}^{-})^{a}\), is isomorphic to \(\mathfrak{m}\), resp. \(\mathfrak{m}^{-}\), as an \(\mathfrak{r}\)-module, but where the superscript \(a\) means that \((\mathfrak{m})^{a}\), resp. \((\mathfrak{m}^{-})^{a}\), is an abelian ideal of \(\tilde{\mathfrak{p}}\), resp. of \(\tilde{\mathfrak{p}}^{-}\). Such a semi-direct product is still a Lie algebra by [46, Sect. 4] for example, called an Inonu-Wigner contraction, or a one-parameter contraction of \(\mathfrak{p}\), resp. of \(\mathfrak{p}^{-}\). The \(\Bbbk\)-vector space \(\tilde{\mathfrak{p}}\), resp. \(\tilde{\mathfrak{p}}^{-}\), is equal to \(\mathfrak{p}\), resp. \(\mathfrak{p}^{-}\), as a vector space and if we denote by \([\,,\,]_{\tilde{\mathfrak{p}}}\), resp. \([\,,\,]_{\tilde{\mathfrak{p}}^{-}}\) the Lie bracket in \(\tilde{\mathfrak{p}}\), resp. \(\tilde{\mathfrak{p}}^{-}\), and by \([\,,\,]\) the Lie bracket in \(\mathfrak{g}\), then one has that
\[\forall z,\,z^{\prime}\in\mathfrak{r},\,\forall x,\,x^{\prime}\in\mathfrak{m}, \ [z,\,x]_{\tilde{\mathfrak{p}}}=[z,\,x],\ \ [z,\,z^{\prime}]_{\tilde{\mathfrak{p}}}=[z,\,z^{\prime}],\ \ [x,\,x^{\prime}]_{\tilde{\mathfrak{p}}}=0 \tag{2}\]
\[\forall z,\,z^{\prime}\in\mathfrak{r},\,\forall y,\,y^{\prime}\in\mathfrak{m}^ {-},\ [z,\,y]_{\tilde{\mathfrak{p}}^{-}}=[z,\,y],\ \ [z,\,z^{\prime}]_{\tilde{\mathfrak{p}}^{-}}=[z,\,z^{\prime}],\ \ [y,\,y^{\prime}]_{\tilde{ \mathfrak{p}}^{-}}=0. \tag{3}\]
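As a minimal illustration of this contraction (added here only for orientation), take \(\mathfrak{g}=\mathfrak{sl}_{3}\) and \(\pi^{\prime}=\emptyset\), so that \(\mathfrak{p}=\mathfrak{b}\), \(\mathfrak{r}=\mathfrak{h}\) and \(\mathfrak{m}=\mathfrak{n}=\Bbbk x_{\alpha_{1}}\oplus\Bbbk x_{\alpha_{2}}\oplus\Bbbk x_{\alpha_{1}+\alpha_{2}}\). In \(\mathfrak{b}\) one has \([x_{\alpha_{1}},\,x_{\alpha_{2}}]=c\,x_{\alpha_{1}+\alpha_{2}}\) with \(c\neq 0\), whereas in \(\tilde{\mathfrak{b}}=\mathfrak{h}\ltimes(\mathfrak{n})^{a}\) equation (2) gives \([x_{\alpha_{1}},\,x_{\alpha_{2}}]_{\tilde{\mathfrak{p}}}=0\), while the brackets \([h,\,x_{\alpha}]_{\tilde{\mathfrak{p}}}=\langle h,\,\alpha\rangle x_{\alpha}\), for \(h\in\mathfrak{h}\) and \(\alpha\in\Delta^{+}\), are unchanged : only the bracket of \(\mathfrak{m}\) with itself degenerates to zero.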
## 3. The degenerate highest weight module.
In this section, we fix \(\lambda\in P^{+}(\pi)\) and we will define, from the irreducible highest weight module \(V(\lambda)\) of highest weight \(\lambda\), some vector space denoted by \(\widetilde{V}(\lambda)\) which can be endowed with a left \(U(\tilde{\mathfrak{p}}^{-})\)-module structure, so that it is isomorphic to \(V(\lambda)\) as a left \(U(\mathfrak{r})\)-module.
### The generalized PBW filtration and the degenerate highest weight module \(\widetilde{V}(\lambda)\)
Consider \(V(\lambda)\) the irreducible highest weight \(\mathfrak{g}\)-module of highest weight \(\lambda\) as defined in subsection 2.1.
Generalizing the PBW filtration on a highest weight irreducible \(\mathfrak{g}\)-module introduced in [20], when \(\mathfrak{p}=\mathfrak{b}\) is a Borel subalgebra of \(\mathfrak{g}\) (that is, when \(\pi^{\prime}=\emptyset\)), we define what we call _the generalized Poincare-Birkhoff-Witt filtration_ on \(V(\lambda)\) as follows.
Choose \(v_{\lambda}\) a nonzero weight vector in \(V(\lambda)\) of highest weight \(\lambda\) and \(v_{w_{0}\lambda}\) a nonzero weight vector in \(V(\lambda)\) of lowest weight \(w_{0}\lambda\). Since \(\mathfrak{n}^{-}=\mathfrak{n}^{-}_{\pi^{\prime}}\oplus\mathfrak{m}^{-}\), the multiplication in the enveloping algebra gives, by the Poincare-Birkhoff-Witt theorem [9, 2.1.11], an isomorphism of vector spaces \(U(\mathfrak{n}^{-}_{\pi^{\prime}})\otimes U(\mathfrak{m}^{-})\simeq U( \mathfrak{n}^{-})\). Then we have that
\[V(\lambda)=U(\mathfrak{n}^{-}_{\pi^{\prime}}).(U(\mathfrak{m}^{-}).v_{\lambda} )=U(\mathfrak{r}).(U(\mathfrak{m}^{-}).v_{\lambda})=U(\mathfrak{m}^{-}).(U( \mathfrak{n}^{-}_{\pi^{\prime}}).v_{\lambda})\]
since \(\mathfrak{m}^{-}\) is an ideal of \(\mathfrak{p}^{-}\). Set \(V^{\prime}(\lambda)=U(\mathfrak{n}^{-}_{\pi^{\prime}}).v_{\lambda}\). The latter is an irreducible \(U(\mathfrak{r})\)-module.
Recall \((U_{k}(\mathfrak{m}^{-}))_{k\in\mathbb{N}}\) the canonical filtration (also called standard degree filtration in [20]) on the enveloping algebra \(U(\mathfrak{m}^{-})\) of \(\mathfrak{m}^{-}\). More precisely \(U_{k}(\mathfrak{m}^{-})\) is the vector subspace of \(U(\mathfrak{m}^{-})\) generated by the products \(y_{1}\cdots y_{p}\) where \(y_{i}\in\mathfrak{m}^{-}\) for all \(i\), \(1\leq i\leq p\), and \(p\leq k\).
For all \(k\in\mathbb{N}\), let \(\mathscr{F}_{k}(V(\lambda))\) be the vector subspace of \(V(\lambda)\) generated by
\[\begin{array}{c}\{v\in V(\lambda)\mid\exists p\in\mathbb{N},\,p\leq k,\, \exists y_{1},\,\ldots,\,y_{p}\in\mathfrak{m}^{-},\,\exists u^{\prime}\in U( \mathfrak{r});\\ v=u^{\prime}\,y_{1}\cdots y_{p}.v_{\lambda}\}.\end{array}\]
where \(u^{\prime}\,y_{1}\cdots y_{p}\) denotes an element in \(U(\mathfrak{p}^{-})\). Observe that we also have that \(\mathscr{F}_{k}(V(\lambda))\) is the vector subspace of \(V(\lambda)\) generated by
\[\begin{array}{c}\{v\in V(\lambda)\mid\exists p\in\mathbb{N},\,p\leq k,\, \exists y_{1},\,\ldots,\,y_{p}\in\mathfrak{m}^{-},\,\exists u^{\prime}\in U( \mathfrak{r});\\ v=y_{1}\cdots y_{p}\,u^{\prime}.v_{\lambda}\}\end{array}\]
since \([\mathfrak{r},\,\mathfrak{m}^{-}]\subset\mathfrak{m}^{-}\).
In other words, one has that \(\mathscr{F}_{0}(V(\lambda))=U(\mathfrak{r}).v_{\lambda}=U(\mathfrak{n}^{-}_{ \pi^{\prime}}).v_{\lambda}=V^{\prime}(\lambda)\) and for all \(k\in\mathbb{N}\), \(\mathscr{F}_{k}(V(\lambda))=U_{k}(\mathfrak{m}^{-}).V^{\prime}(\lambda)\) is a left \(U(\mathfrak{r})\)-module. Then \(\mathscr{F}:=(\mathscr{F}_{k}(V(\lambda)))_{k\in\mathbb{N}}\) is an increasing and exhaustive filtration on \(V(\lambda)\). We call it the generalized Poincare-Birkhoff-Witt filtration on \(V(\lambda)\) since when \(\pi^{\prime}=\emptyset\), it coincides with the PBW filtration on \(V(\lambda)\) introduced in [20]. The associated graded space is denoted by
\[\widetilde{V}(\lambda):=gr_{\mathscr{F}}(V(\lambda))=\bigoplus_{k\in\mathbb{N }}\frac{\mathscr{F}_{k}(V(\lambda))}{\mathscr{F}_{k-1}(V(\lambda))}\]
where \(\mathscr{F}_{-1}(V(\lambda)):=\{0\}\) and we call \(\widetilde{V}(\lambda)\) the _degenerate highest weight module associated with \(\lambda\)_. For all \(v\in\mathscr{F}_{k}(V(\lambda))\), we denote by \(gr_{k}(v)\) its canonical image in \(gr_{k}(V(\lambda)):=\frac{\mathscr{F}_{k}(V(\lambda))}{\mathscr{F}_{k-1}(V( \lambda))}\). Denote by \(\widetilde{V^{\prime}}(\lambda)\) the canonical image of \(V^{\prime}(\lambda)\) in \(\widetilde{V}(\lambda)\) that is, \(\widetilde{V^{\prime}}(\lambda)=gr_{0}(V^{\prime}(\lambda))=gr_{0}(V(\lambda)) \subset\widetilde{V}(\lambda)\).
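For instance (a minimal example, taking \(\mathfrak{g}=\mathfrak{sl}_{2}\) with simple root \(\alpha\) and \(\pi^{\prime}=\emptyset\), so that \(\mathfrak{p}=\mathfrak{b}\), \(\mathfrak{r}=\mathfrak{h}\) and \(\mathfrak{m}^{-}=\mathfrak{n}^{-}=\Bbbk x_{-\alpha}\)), let \(\lambda=m\varpi_{\alpha}\) with \(m\in\mathbb{N}\). Then \(V^{\prime}(\lambda)=\Bbbk v_{\lambda}\) and

\[\mathscr{F}_{k}(V(\lambda))=U_{k}(\mathfrak{m}^{-}).v_{\lambda}=\bigoplus_{0\leq p\leq\min(k,\,m)}\Bbbk\,x_{-\alpha}^{p}.v_{\lambda},\]

so that \(gr_{k}(V(\lambda))\) is one-dimensional for \(0\leq k\leq m\) and zero for \(k>m\); in this case the generalized PBW filtration is exactly the PBW filtration of [20].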
### Left \(U(\tilde{\mathfrak{p}}^{-})\)-module structure on \(\widetilde{V}(\lambda)\)
Recall that, for all \(k\in\mathbb{N}\), \(\mathscr{F}_{k}(V(\lambda))\) is a finite-dimensional left \(U(\mathfrak{r})\)-module and that the Lie algebra \(\mathfrak{r}\) is reductive and the elements of its centre act reductively in \(\mathscr{F}_{k}(V(\lambda))\). Then by [9, 1.6.4] one has that \(\mathscr{F}_{k}(V(\lambda))\) is a semisimple \(U(\mathfrak{r})\)-module. Moreover \(\mathscr{F}_{k-1}(V(\lambda))\) is a submodule of \(\mathscr{F}_{k}(V(\lambda))\). Then there exists a left
\(U(\mathfrak{r})\)-submodule \(\mathscr{F}^{k}(V(\lambda))\) of \(\mathscr{F}_{k}(V(\lambda))\) such that \(\mathscr{F}_{k}(V(\lambda))=\mathscr{F}^{k}(V(\lambda))\oplus\mathscr{F}_{k-1}(V (\lambda))\) and we have that
\[\mathscr{F}_{k}(V(\lambda))=\bigoplus_{i=0}^{k}\mathscr{F}^{i}(V(\lambda))\]
where \(\mathscr{F}^{0}(V(\lambda))=\mathscr{F}_{0}(V(\lambda))\). One deduces that
\[V(\lambda)=\bigoplus_{k\in\mathbb{N}}\mathscr{F}^{k}(V(\lambda)).\]
It allows us to define, for all \(k\in\mathbb{N}\), an isomorphism of vector spaces
\[\beta_{\lambda}^{k}:gr_{k}(V(\lambda))\longrightarrow\mathscr{F}^{k}(V( \lambda))\]
such that, for all \(v\in\mathscr{F}_{k}(V(\lambda))\), \(v=\sum_{i=0}^{k}v_{i}\) with \(v_{i}\in\mathscr{F}^{i}(V(\lambda))\), for all \(0\leq i\leq k\),
\[\beta_{\lambda}^{k}(gr_{k}(v))=v_{k}.\]
Then the direct sum \(\beta_{\lambda}=\bigoplus_{k\in\mathbb{N}}\beta_{\lambda}^{k}\) is an isomorphism between the vector spaces \(\widetilde{V}(\lambda)\) and \(V(\lambda)\).
Set, for all \(y\in\mathfrak{m}^{-}\), \(z\in\mathfrak{r}\) and \(v\in\mathscr{F}_{k}(V(\lambda))\),
\[y.gr_{k}(v)=gr_{k+1}(y.v) \tag{4}\]
and
\[z.gr_{k}(v)=gr_{k}(z.v). \tag{5}\]
We will see below that equations (4) and (5) extend to a left \(U(\tilde{\mathfrak{p}}^{-})\)-module structure on \(\widetilde{V}(\lambda)\) and that \(\beta_{\lambda}\) is an isomorphism of \(U(\mathfrak{r})\)-modules.
Set \(\tilde{\mathfrak{n}}^{-}=\mathfrak{n}^{-}_{\pi^{\prime}}\ltimes(\mathfrak{m}^ {-})^{a}\) : it is a Lie subalgebra of \(\tilde{\mathfrak{p}}^{-}\). Set also \(\tilde{v}_{\lambda}=gr_{0}(v_{\lambda})\).
Denote by \(\theta:S(\mathfrak{p}^{-})\longrightarrow U(\mathfrak{p}^{-})\) the symmetrisation, as defined in [9, 2.4.6]. More precisely for \(k\in\mathbb{N}^{*}\), and for all \(y_{1},\dots,\)\(y_{k}\in\mathfrak{p}^{-}\),
\[\theta(y_{1}\cdots y_{k})=\frac{1}{k!}\sum_{\sigma\in\mathfrak{S}_{k}}y_{ \sigma(1)}\cdots y_{\sigma(k)}\]
where \(\mathfrak{S}_{k}\) is the set of permutations of \(k\) elements, the product in the left hand side lying in \(S_{k}(\mathfrak{p}^{-})\) and the product in the right hand side lying in \(U_{k}(\mathfrak{p}^{-})\). Endow the symmetric algebra \(S(\mathfrak{p}^{-})\), resp. the enveloping algebra \(U(\mathfrak{p}^{-})\), with the adjoint action of \(U(\mathfrak{r})\), denoted by \(ad\), which extends uniquely by derivation the adjoint action of \(\mathfrak{r}\) on \(\mathfrak{p}^{-}\) given by Lie bracket. By [9, 2.4.10] the map \(\theta\) is an isomorphism of \(ad\,U(\mathfrak{r})\)-modules. For all \(k\in\mathbb{N}\), set \(U^{k}(\mathfrak{m}^{-})=\theta(S_{k}(\mathfrak{m}^{-}))\). Then \(U^{k}(\mathfrak{m}^{-})\) is a left \(ad\,U(\mathfrak{r})\)-submodule of \(U_{k}(\mathfrak{m}^{-})\) and actually one has that \(U_{k}(\mathfrak{m}^{-})=U^{k}(\mathfrak{m}^{-})\oplus U_{k-1}(\mathfrak{m}^{ -})\) by [9, 2.4.4, 2.4.5]. Denote by \(pr_{U^{k}(\mathfrak{m}^{-})}\) the projection onto \(U^{k}(\mathfrak{m}^{-})\) with respect to the above decomposition. We have the following.
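For instance, for \(k=2\) and \(y_{1},\,y_{2}\in\mathfrak{m}^{-}\), one has \(\theta(y_{1}y_{2})=\frac{1}{2}(y_{1}y_{2}+y_{2}y_{1})\) and, in \(U(\mathfrak{m}^{-})\),

\[y_{1}y_{2}=\theta(y_{1}y_{2})+\frac{1}{2}[y_{1},\,y_{2}]\]

with \(\frac{1}{2}[y_{1},\,y_{2}]\in\mathfrak{m}^{-}\subset U_{1}(\mathfrak{m}^{-})\), which illustrates the decomposition \(U_{2}(\mathfrak{m}^{-})=U^{2}(\mathfrak{m}^{-})\oplus U_{1}(\mathfrak{m}^{-})\) and gives \(pr_{U^{2}(\mathfrak{m}^{-})}(y_{1}y_{2})=\theta(y_{1}y_{2})\).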
**Lemma**.: _Let \(\lambda\in P^{+}(\pi)\) and \(k\in\mathbb{N}\)._
1. _Equations (_4_) and (_5_) extend to a left_ \(U(\tilde{\mathfrak{p}}^{-})\)_-action on the vector space_ \(\widetilde{V}(\lambda)\) _and for this structure we have the following equalities :_ (6) \[\widetilde{V^{\prime}}(\lambda)=U(\mathfrak{r}).\tilde{v}_{\lambda}\] (7) \[\widetilde{V}(\lambda)=U(\tilde{\mathfrak{p}}^{-}).\tilde{v}_{\lambda}=U(\tilde{\mathfrak{n}}^{-}).\tilde{v}_{\lambda}=S(\mathfrak{m}^{-}).\widetilde{V^{\prime}}(\lambda)=U(\tilde{\mathfrak{p}}^{-}).\widetilde{V^{\prime}}(\lambda).\]
2. _For all_ \(s\in S_{k}(\mathfrak{m}^{-})\)_,_ \(u^{\prime}\in U(\mathfrak{r})\) _and_ \(u\in U_{k}(\mathfrak{m}^{-})\) _one has :_ (8) \[su^{\prime}.\tilde{v}_{\lambda}=gr_{k}(\theta(s)u^{\prime}.v_{\lambda})\] (9) \[gr_{k}(uu^{\prime}.v_{\lambda})=gr_{k}(pr_{U^{k}(\mathfrak{m}^{-})}(u)u^{\prime}.v_{\lambda})\] (10) \[gr_{k}(V(\lambda))=S_{k}(\mathfrak{m}^{-}).\widetilde{V^{\prime}}(\lambda).\]
3. _The map_ \(\beta_{\lambda}\) _is an isomorphism of_ \(U(\mathfrak{r})\)_-modules between_ \(\widetilde{V}(\lambda)\) _and_ \(V(\lambda)\)_. Then_ \(\widetilde{V^{\prime}}(\lambda)\) _is a left irreducible_ \(U(\mathfrak{r})\)_-module and_ \(\widetilde{V}(\lambda)\) _has the same set of weights as_ \(V(\lambda)\)_, especially_ \(\lambda\) _is the highest weight of_ \(\widetilde{V}(\lambda)\) _and_ \(w_{0}\lambda\) _is its lowest weight._
4. _One may choose_ \(\mathscr{F}^{k}(V(\lambda))\) _to be included in_ \(U^{k}(\mathfrak{m}^{-}).V^{\prime}(\lambda)\)_._
Proof.: By [9, 2.1.1] and (3) of subsection 2.2, one may observe that the algebra \(U(\tilde{\mathfrak{p}}^{-})\) is the quotient of the tensor algebra \(T(\tilde{\mathfrak{p}}^{-})=T(\mathfrak{p}^{-})\) of the vector space \(\tilde{\mathfrak{p}}^{-}=\mathfrak{p}^{-}\) by the two-sided ideal generated by the set
\[\{z\otimes z^{\prime}-z^{\prime}\otimes z-[z,\,z^{\prime}],\,\,z\otimes y-y \otimes z-[z,\,y],\,\,y\otimes y^{\prime}-y^{\prime}\otimes y;\,\,z,\,z^{ \prime}\in\mathfrak{r},\,y,\,y^{\prime}\in\mathfrak{m}^{-}\}\]
and that, by the Poincare-Birkhoff-Witt theorem [9, 2.1.11], the multiplication is an isomorphism between the \(\Bbbk\)-vector spaces \(U(\mathfrak{r})\otimes S(\mathfrak{m}^{-})\) and \(U(\tilde{\mathfrak{p}}^{-})\).
Fix \(k\in\mathbb{N}\). For all \(x\in\mathfrak{m}^{-}\oplus\mathfrak{r}=\mathfrak{p}^{-}\), denote by \(x.\mathscr{F}_{k}(V(\lambda))\) the vector subspace of \(V(\lambda)\) formed by all the vectors \(x.v\), with \(v\in\mathscr{F}_{k}(V(\lambda))\) (where \(x.v\) denotes the action of \(x\) on \(v\) by the left \(U(\mathfrak{g})\)-module structure on \(V(\lambda)\)).
Then for all \(y\in\mathfrak{m}^{-}\), one has that \(y.\mathscr{F}_{k}(V(\lambda))\subset\mathscr{F}_{k+1}(V(\lambda))\), and for all \(z\in\mathfrak{r}\), one has that \(z.\mathscr{F}_{k}(V(\lambda))\subset\mathscr{F}_{k}(V(\lambda))\). It follows that equation (4) extends to a left action of \(S(\mathfrak{m}^{-})\) on \(\widetilde{V}(\lambda)\) since moreover, for \(y,\,y^{\prime}\in\mathfrak{m}^{-}\) and \(v\in\mathscr{F}_{k}(V(\lambda))\), we have:
\[y.(y^{\prime}.gr_{k}(v))-y^{\prime}.(y.gr_{k}(v))=gr_{k+2}((yy^{\prime}-y^{\prime}y).v)=gr_{k+2}([y,\,y^{\prime}].v)=0.\]
Similarly equation (5) extends to a left action of \(U(\mathfrak{r})\) on \(\widetilde{V}(\lambda)\) induced by the left action of \(U(\mathfrak{r})\) on \(V(\lambda)\). Finally both equations (4) and (5) extend to a left action of \(U(\tilde{\mathfrak{p}}^{-})\) on \(\widetilde{V}(\lambda)\) (by say, [9, 2.2.1, 2.2.2]). Equation (6) follows since \(\widetilde{V^{\prime}}(\lambda)=gr_{0}(V^{\prime}(\lambda))=gr_{0}(U(\mathfrak{ r}).v_{\lambda})\).
Let \(\tilde{v}\in\widetilde{V}(\lambda)\). There exists \(k\in\mathbb{N}\) and \(v_{i}\in\mathscr{F}_{i}(V(\lambda))\), for \(0\leq i\leq k\), such that \(\tilde{v}=\sum_{i=0}^{k}gr_{i}(v_{i})\) with, for all \(i\), \(v_{i}=\sum_{j=1}^{n_{i}}u^{\prime}_{ij}u_{ij}.v_{\lambda}\) where \(u^{\prime}_{ij}\in U(\mathfrak{r})\) and \(u_{ij}\in U_{i}(\mathfrak{m}^{-})\). Then by equation (5), one has
\[gr_{i}(v_{i})=\sum_{j=1}^{n_{i}}u^{\prime}_{ij}.gr_{i}(u_{ij}.v_{\lambda})\]
and by equation (4),
\[gr_{i}(u_{ij}.v_{\lambda})\in S_{i}(\mathfrak{m}^{-}).gr_{0}(v_{\lambda}).\]
Actually we may take the \(u^{\prime}_{ij}\) in \(U(\mathfrak{n}^{-}_{\pi^{\prime}})\), since
\[V(\lambda)=U(\mathfrak{n}^{-}).v_{\lambda}=U(\mathfrak{n}^{-}_{\pi^{\prime}}).( U(\mathfrak{m}^{-}).v_{\lambda}).\]
We then have \(\widetilde{V}(\lambda)=U(\tilde{\mathfrak{p}}^{-}).\tilde{v}_{\lambda}=U( \tilde{\mathfrak{n}}^{-}).\tilde{v}_{\lambda}\). Since \(\widetilde{V^{\prime}}(\lambda)=U(\mathfrak{r}).\tilde{v}_{\lambda}\) and since the multiplication gives the isomorphism of vector spaces \(U(\tilde{\mathfrak{p}}^{-})\simeq S(\mathfrak{m}^{-})\otimes U(\mathfrak{r})\), we also have that \(\widetilde{V}(\lambda)=S(\mathfrak{m}^{-}).\widetilde{V^{\prime}}(\lambda)=U (\tilde{\mathfrak{p}}^{-}).\widetilde{V^{\prime}}(\lambda)\). Hence equation (7).
Let \(k\in\mathbb{N}^{*}\) and set \(s=y_{1}\cdots y_{k}\in S_{k}(\mathfrak{m}^{-})\) with \(y_{i}\in\mathfrak{m}^{-}\) for all \(1\leq i\leq k\). Then \(\theta(s)=y_{1}\cdots y_{k}+u\in U^{k}(\mathfrak{m}^{-})\) with \(u\in U_{k-1}(\mathfrak{m}^{-})\) and \(y_{1}\cdots y_{k}\in U_{k}(\mathfrak{m}^{-})\). Then equations (4) and (5) and the fact that \(U_{k-1}(\mathfrak{m}^{-})U(\mathfrak{r}).v_{\lambda}=\mathscr{F}_{k-1}(V( \lambda))\) give equation (8). Equation (9) is obvious by the decomposition \(U_{k}(\mathfrak{m}^{-})=U^{k}(\mathfrak{m}^{-})\oplus U_{k-1}(\mathfrak{m}^{-})\). Both equations imply equation (10).
By equation (5), we have that \(gr_{k}(V(\lambda))\) is an \(U(\mathfrak{r})\)-module. Moreover if \(\tilde{v}\in gr_{k}(V(\lambda))\) is such that \(\tilde{v}=gr_{k}(v_{k})\) with \(v_{k}\in\mathscr{F}^{k}(V(\lambda))\), we have that \(\beta_{\lambda}^{k}(\tilde{v})=v_{k}\). Let \(z\in\mathfrak{r}\). Then by equation (5), one has that \(z.\tilde{v}=gr_{k}(z.v_{k})\) which implies that
\[\beta_{\lambda}^{k}(z.\tilde{v})=z.v_{k}=z.\beta_{\lambda}^{k}(\tilde{v}),\]
since \(z.v_{k}\in\mathscr{F}^{k}(V(\lambda))\) because \(\mathscr{F}^{k}(V(\lambda))\) is an \(U(\mathfrak{r})\)-module. This shows \((iii)\). Finally to prove \((iv)\) it suffices to observe that \(U^{k}(\mathfrak{m}^{-}).V^{\prime}(\lambda)\) is a finite dimensional \(U(\mathfrak{r})\)-module. Set \(W_{k}=U^{k}(\mathfrak{m}^{-}).V^{\prime}(\lambda)\cap\mathscr{F}_{k-1}(V( \lambda))\). Then \(W_{k}\) is a left \(U(\mathfrak{r})\)-submodule of \(U^{k}(\mathfrak{m}^{-}).V^{\prime}(\lambda)\) and then there exists a left \(U(\mathfrak{r})\)-submodule \(W^{\prime}_{k}\) such that \(U^{k}(\mathfrak{m}^{-}).V^{\prime}(\lambda)=W_{k}\oplus W^{\prime}_{k}\). Now \(W^{\prime}_{k}\cap\mathscr{F}_{k-1}(V(\lambda))=\{0\}\) and then one may choose the \(U(\mathfrak{r})\)-module \(\mathscr{F}^{k}(V(\lambda))\) to contain \(W^{\prime}_{k}\). But \(\mathscr{F}_{k}(V(\lambda))=\mathscr{F}_{k-1}(V(\lambda))\oplus\mathscr{F}^{k} (V(\lambda))=U_{k}(\mathfrak{m}^{-}).V^{\prime}(\lambda)\subset U^{k}( \mathfrak{m}^{-}).V^{\prime}(\lambda)\) since \(U_{k}(\mathfrak{m}^{-})=U^{k}(\mathfrak{m}^{-})\oplus U_{k-1}(\mathfrak{m}^{-})\). It follows that \(W^{\prime}_{k}=\mathscr{F}^{k}(V(\lambda))\), which completes the proof.
### Left \(U(\mathfrak{p})\)-module structure on \(\widetilde{V}(\lambda)\)
Recall the isomorphism \(\beta_{\lambda}\) of \(U(\mathfrak{r})\)-modules from \(\widetilde{V}(\lambda)\) into \(V(\lambda)\) (lemma 3.2\((iii)\)). For all \(\tilde{v}\in\widetilde{V}(\lambda)\) and all \(x\in\mathfrak{p}\), one sets
\[\rho_{\lambda}(x)(\tilde{v})=\beta_{\lambda}^{-1}(x.\beta_{\lambda}(\tilde{v})) \tag{11}\]
where \(x.\beta_{\lambda}(\tilde{v})\) stands for the left action of \(x\in\mathfrak{g}\) on \(\beta_{\lambda}(\tilde{v})\in V(\lambda)\).
It is easily checked that \(\rho_{\lambda}\) is a morphism of Lie algebras from \(\mathfrak{p}\) to \(\mathfrak{gl}(\widetilde{V}(\lambda))\), hence that it extends to a left action of \(U(\mathfrak{p})\) on \(\widetilde{V}(\lambda)\) (again by [9, 2.2.1, 2.2.2]). Moreover for all \(x\in\mathfrak{r}\) and \(\tilde{v}\in\widetilde{V}(\lambda)\), since \(\beta_{\lambda}\) is a morphism of \(U(\mathfrak{r})\)-modules, one has that \(\rho_{\lambda}(x)(\tilde{v})=x.\tilde{v}\) where the right hand side denotes the left action of \(\mathfrak{r}\) on \(\widetilde{V}(\lambda)\) defined in subsection 3.2.
**Remark.** By [15, 2.7], one has that \(V^{\prime}(\lambda)=\{v\in V(\lambda)\mid\mathfrak{m}.v=0\}\). Hence
\[\widetilde{V^{\prime}}(\lambda)=\{\tilde{v}\in\widetilde{V}(\lambda)\mid\rho_{ \lambda}(\mathfrak{m})(\tilde{v})=0\} \tag{12}\]
since \(\widetilde{V^{\prime}}(\lambda)=\beta_{\lambda}^{-1}(V^{\prime}(\lambda))\).
## 4. Action of a smash product on \(U(\tilde{\mathfrak{p}}^{-})\)
In this section, we will define a smash product \(A=T(\mathfrak{m})\#U(\mathfrak{r})\), containing the enveloping algebra \(U(\mathfrak{r})\) and the tensor algebra \(T(\mathfrak{m})\), where the action of \(U(\mathfrak{r})\) on \(T(\mathfrak{m})\) derives from the adjoint action of \(\mathfrak{r}\) in \(T(\mathfrak{m})\) which extends uniquely by derivation the adjoint action given by Lie bracket. This algebra \(A\) is an associative algebra, which is actually a Hopf algebra. We will define what we call a generalized adjoint action (denoted by \(ad^{**}\)) of the algebra \(A\) on the enveloping algebra \(U(\tilde{\mathfrak{p}}^{-})\) and another left action of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})\), where the latter is simply left multiplication when restricted to \(U(\mathfrak{r})\). The action \(ad^{**}\) derives from the coadjoint action, denoted by \(ad^{*}\), of \(\tilde{\mathfrak{p}}\) on \(\mathfrak{p}^{-}\) (note that, as vector spaces, one has \(\tilde{\mathfrak{p}}^{*}\simeq\mathfrak{p}^{-}\)). We will see in subsection 7.3 why we need to take this coadjoint action \(ad^{*}\).
### A smash product
Recall that \(\mathfrak{m}\) denotes the nilpotent radical of \(\mathfrak{p}\) and that \(T(\mathfrak{m})\) denotes the tensor algebra of \(\mathfrak{m}\). Since \([\mathfrak{r},\,\mathfrak{m}]\subset\mathfrak{m}\), the algebra \(T(\mathfrak{m})\) is an \(U(\mathfrak{r})\)-algebra (in the sense of [24, 1.1.6]) with the adjoint action of \(\mathfrak{r}\) on \(T(\mathfrak{m})\) (denoted by \(ad\)) extending by derivation the adjoint action of \(\mathfrak{r}\) on \(\mathfrak{m}\) given by the Lie bracket in \(\mathfrak{g}\). Then we may consider the Hopf smash product \(A=T(\mathfrak{m})\#U(\mathfrak{r})\) in the sense of [24, 1.1.8]. More precisely \(A\) is equal as a vector space to the tensor product \(T(\mathfrak{m})\otimes U(\mathfrak{r})\), with multiplication given by \((s\otimes u)(s^{\prime}\otimes u^{\prime})=s\,ad\,u_{1}(s^{\prime})\otimes u_{ 2}u^{\prime}\) where \(\Delta(u)=u_{1}\otimes u_{2}\) (Sweedler notation), \(\Delta\) being the coproduct in \(U(\mathfrak{r})\), \(s,\,s^{\prime}\in T(\mathfrak{m})\) and \(u,\,u^{\prime}\in U(\mathfrak{r})\).
For example for all \(z\in\mathfrak{r}\), \(s,\,s^{\prime}\in T(\mathfrak{m})\) and \(u\in U(\mathfrak{r})\), one has that \((s^{\prime}\otimes z)(s\otimes u)=s^{\prime}\,ad\,z(s)\otimes u+s^{\prime}s \otimes zu\). By setting \(s\otimes 1=s\) and \(1\otimes u=u\) we may view \(T(\mathfrak{m})\) and \(U(\mathfrak{r})\) as subalgebras of \(A\). Then one has in \(A\) that \(s\otimes u=(s\otimes 1)(1\otimes u)=su\) and that
\[\forall z\in\mathfrak{r},\,\,\forall s\in T(\mathfrak{m}),\,ad\,z(s)=zs-sz \tag{13}\]
and in particular
\[\forall z\in\mathfrak{r},\,\,\forall x\in\mathfrak{m},\,\,[z,\,x]=zx-xz. \tag{14}\]
Observe that \(A\) is an associative unitary algebra (see [24, 1.1.8]) which is also a bialgebra thanks to the coproducts in \(T(\mathfrak{m})\) and in \(U(\mathfrak{r})\). More precisely denoting also by \(\Delta\) the coproduct in \(T(\mathfrak{m})\), and by \(\Delta_{A}\) the coproduct in \(A\), we set for \(s\in T(\mathfrak{m})\) and \(u\in U(\mathfrak{r})\), \(\Delta_{A}(s\otimes u)=(s_{1}\otimes u_{1})\otimes(s_{2}\otimes u_{2})\) if \(\Delta(s)=s_{1}\otimes s_{2}\) and \(\Delta(u)=u_{1}\otimes u_{2}\) with Sweedler notation. We then have that \(\Delta_{A}((s\otimes 1)(1\otimes u))=\Delta_{A}(s\otimes 1)\Delta_{A}(1\otimes u)\) and more generally for \(s\), \(s^{\prime}\in T(\mathfrak{m})\) and \(u\), \(u^{\prime}\in U(\mathfrak{r})\), \(\Delta_{A}((s\otimes u)(s^{\prime}\otimes u^{\prime}))=\Delta_{A}(s\otimes u) \Delta_{A}(s^{\prime}\otimes u^{\prime})\) by the cocommutativity of \(\Delta\). Note that the coproduct \(\Delta_{A}\) extends the coproduct \(\Delta\) in \(T(\mathfrak{m})\) and in \(U(\mathfrak{r})\). Actually the bialgebra \(A\) is a Hopf algebra with the coidentity \(\varepsilon\) given by \(\varepsilon(x)=0\) for all \(x\in\mathfrak{p}\) and the antipode given by \(a\in A\mapsto a^{\top}\in A\), where
\[a^{\top}=(-1)^{r}x_{r}\cdots x_{1}\in A \tag{15}\]
if \(a=x_{1}\cdots x_{r}\in A\) (product in \(A\)) with \(x_{1},\,\ldots,\,x_{r}\in\mathfrak{p}\) extended by linearity to every element in \(A\). One checks easily that the coidentity and the antipode (which coincide respectively with the coidentity and the antipode on \(T(\mathfrak{m})\) and on \(U(\mathfrak{r})\), see for instance [24, 1.2.5]) are compatible with equation (14) which defines the smash product \(A\).
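For instance (a minimal check of this compatibility), apply the antipode to both sides of equation (14) for \(z\in\mathfrak{r}\) and \(x\in\mathfrak{m}\) : by (15) one has \((zx)^{\top}=xz\), \((xz)^{\top}=zx\) and \([z,\,x]^{\top}=-[z,\,x]\), so that

\[(zx-xz)^{\top}=xz-zx=-[z,\,x]=[z,\,x]^{\top},\]

as required.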
Roughly speaking, the Hopf algebra \(A\) coincides with the enveloping algebra \(U(\mathfrak{p})\) or even \(U(\tilde{\mathfrak{p}})\), except that no relations are required for the associative product of elements in \(\mathfrak{m}\).
### The coadjoint action of \(\tilde{\mathfrak{p}}\) on \(\mathfrak{p}^{-}\)
Recall the opposite parabolic subalgebra \(\mathfrak{p}^{-}\) of \(\mathfrak{p}\). Thanks to the Killing form on \(\mathfrak{g}\), we have the isomorphism of vector spaces \(\tilde{\mathfrak{p}}^{*}\simeq\mathfrak{p}^{-}\). As already mentioned in [35, 2], \(\mathfrak{p}^{-}\) is a \(\tilde{\mathfrak{p}}\)-module, by the so-called coadjoint representation (denoted by \(ad^{*}\)) of \(\tilde{\mathfrak{p}}=\mathfrak{r}\ltimes(\mathfrak{m})^{a}\) in \(\mathfrak{p}^{-}\) defined as follows.
\[\forall x\in\mathfrak{r},\;\forall\;y\in\mathfrak{p}^{-},\;ad^{*}x(y)=[x,\,y]. \tag{16}\]
\[\forall x\in\mathfrak{m},\;\forall\;y\in\mathfrak{p}^{-},\;ad^{*}x(y)=pr_{ \mathfrak{r}}([x,\,y]) \tag{17}\]
where \(pr_{\mathfrak{r}}\) is the projection of \(\mathfrak{g}=\mathfrak{r}\oplus\mathfrak{m}\oplus\mathfrak{m}^{-}\) onto \(\mathfrak{r}\). In particular
\[\forall x\in\mathfrak{m},\;\forall\;y\in\mathfrak{r},\;ad^{*}x(y)=0. \tag{18}\]
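As a small worked example (only for illustration), take \(\mathfrak{g}=\mathfrak{sl}_{2}\) with its standard basis \(e,\,h,\,f\) and \(\pi^{\prime}=\emptyset\), so that \(\mathfrak{r}=\mathfrak{h}=\Bbbk h\), \(\mathfrak{m}=\Bbbk e\) and \(\mathfrak{p}^{-}=\Bbbk h\oplus\Bbbk f\). Then equations (16), (17) and (18) give

\[ad^{*}h(f)=[h,\,f]=-2f,\qquad ad^{*}e(f)=pr_{\mathfrak{r}}([e,\,f])=h,\qquad ad^{*}e(h)=0,\]

whereas the genuine adjoint action would give \([e,\,h]=-2e\notin\mathfrak{p}^{-}\) : the projection \(pr_{\mathfrak{r}}\) is precisely what makes \(\mathfrak{p}^{-}\) stable under the action of \(\mathfrak{m}\).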
**Lemma**.: _The map \(ad^{*}:\tilde{\mathfrak{p}}\longrightarrow\mathfrak{gl}(\mathfrak{p}^{-})\) is a morphism between the Lie algebras \(\tilde{\mathfrak{p}}\) and \(\mathfrak{gl}(\mathfrak{p}^{-})\). In other words it gives a representation of \(\tilde{\mathfrak{p}}\) in \(\mathfrak{p}^{-}\), which extends uniquely to a representation of \(U(\tilde{\mathfrak{p}})\) in \(\mathfrak{p}^{-}\). This representation also extends uniquely by derivation to a representation of \(U(\tilde{\mathfrak{p}})\) in the symmetric algebra \(S(\mathfrak{p}^{-})\), which we still denote by \(ad^{*}\)._
Proof.: We give a proof of the lemma for the reader's convenience. It suffices to prove that, for all \(x,\,x^{\prime}\in\tilde{\mathfrak{p}}\), and for all \(y\in\mathfrak{p}^{-}\), we have
\[(ad^{*}x\circ ad^{*}x^{\prime})(y)-(ad^{*}x^{\prime}\circ ad^{*}x)(y)-ad^{*}[x,\,x^{\prime}]_{\tilde{\mathfrak{p}}}(y)=0. \tag{$\star$}\]
Assume that \(x,\,x^{\prime}\in\mathfrak{m}\). Then \([x,\,x^{\prime}]_{\tilde{\mathfrak{p}}}=0\) by equation (2) in subsection 2.2. Moreover for all \(y\in\mathfrak{p}^{-}\),
\[(ad^{*}x\circ ad^{*}x^{\prime})(y)=ad^{*}x(pr_{\mathfrak{r}}([x^{\prime},\,y]) )=0\]
by (17) and (18). Then equality (\(\star\)) follows in this case.
Assume that \(x,\,x^{\prime}\in\mathfrak{r}\). Then equality (\(\star\)) follows from equations (2) and (16).
It remains to prove equality (\(\star\)) for \(x\in\mathfrak{m}\) and \(x^{\prime}\in\mathfrak{r}\). By equations (2), (16) and (17) one has that, for all \(y\in\mathfrak{p}^{-}\),
\[(ad^{*}x\circ ad^{*}x^{\prime})(y)-(ad^{*}x^{\prime}\circ ad^{*}x) (y)-ad^{*}[x,\,x^{\prime}]_{\tilde{\mathfrak{p}}}(y)\] \[=ad^{*}x([x^{\prime},\,y])-ad^{*}x^{\prime}(pr_{\mathfrak{r}}([x, \,y]))-pr_{\mathfrak{r}}([[x,\,x^{\prime}],\,y])\] \[=pr_{\mathfrak{r}}([x,\,[x^{\prime},\,y]])-[x^{\prime},\,pr_{ \mathfrak{r}}([x,\,y])]-pr_{\mathfrak{r}}([[x,\,x^{\prime}],\,y]).\]
Denote by \(pr_{\mathfrak{m}}\), resp. \(pr_{\mathfrak{m}^{-}}\), the projection of \(\mathfrak{g}=\mathfrak{r}\oplus\mathfrak{m}\oplus\mathfrak{m}^{-}\) onto \(\mathfrak{m}\), resp. onto \(\mathfrak{m}^{-}\).
Then
\[[x^{\prime},\,pr_{\mathfrak{r}}([x,\,y])]=\big{[}x^{\prime},\,[x,\,y]-pr_{\mathfrak{m}^{-}}([x,\,y])-pr_{\mathfrak{m}}([x,\,y])\big{]} \tag{$\star\star$}\]
and we have
\[[x^{\prime},\,pr_{\mathfrak{m}^{-}}([x,\,y])]\in\mathfrak{m}^{-},\;[x^{\prime},\,pr_{\mathfrak{m}}([x,\,y])]\in\mathfrak{m},\;[x^{\prime},\,pr_{\mathfrak{r}}([x,\,y])]\in\mathfrak{r}. \tag{$\star\star\star$}\]
Then by \((\star\star)\) and \((\star\star\star)\) we have that
\[[x^{\prime},\,pr_{\mathfrak{r}}([x,\,y])]=pr_{\mathfrak{r}}([x^{\prime},\,[x,\,y]]). \tag{$\star\star\star\star$}\]
It follows by \((\star\star\star\star)\) that
\[(ad^{*}x\circ ad^{*}x^{\prime})(y)-(ad^{*}x^{\prime}\circ ad^{*}x )(y)-ad^{*}[x,\,x^{\prime}]_{\tilde{\mathfrak{p}}}(y)\] \[=pr_{\mathfrak{r}}\big{(}[x,\,[x^{\prime},\,y]]-[x^{\prime},\,[x, \,y]]-[[x,\,x^{\prime}],\,y]\big{)}=0\]
by Jacobi identity in \(\mathfrak{g}\).
Applying [9, 2.2.1] and [9, 1.2.14] completes the proof of the lemma.
### Partial symmetrisation
Recall the symmetrisation \(\theta:S(\mathfrak{p}^{-})\longrightarrow U(\mathfrak{p}^{-})\) which is an isomorphism of \(ad\,U(\mathfrak{r})\)-modules, when \(S(\mathfrak{p}^{-})\) and \(U(\mathfrak{p}^{-})\) are endowed with the adjoint action \(ad\) (see 3.2). Denote also by \(ad\) the adjoint action of \(U(\mathfrak{r})\) on \(U(\tilde{\mathfrak{p}}^{-})\), extending by derivation the Lie bracket of \(\mathfrak{r}\) on \(\tilde{\mathfrak{p}}^{-}\) (see equation (3)).
Set
\[\tilde{\theta}=Id_{S(\mathfrak{m}^{-})}\otimes\theta_{|S(\mathfrak{r})}:S( \mathfrak{p}^{-})\simeq S(\mathfrak{m}^{-})\otimes S(\mathfrak{r}) \longrightarrow U(\tilde{\mathfrak{p}}^{-})\simeq S(\mathfrak{m}^{-})\otimes U (\mathfrak{r}),\]
that is,
\[\forall s\in S(\mathfrak{m}^{-}),\;\forall s^{\prime}\in S(\mathfrak{r}),\;\; \tilde{\theta}(ss^{\prime})=s\,\theta(s^{\prime}). \tag{19}\]
We call the map \(\tilde{\theta}\) a _partial symmetrisation_. Observe that \(\tilde{\theta}\) does not coincide with the symmetrisation \(\tilde{\tilde{\theta}}\) of \(S(\tilde{\mathfrak{p}}^{-})=S(\mathfrak{p}^{-})\) in \(U(\tilde{\mathfrak{p}}^{-})\). For instance, for \(y\in\mathfrak{m}^{-}\) and \(z\in\mathfrak{r}\), one has that \(\tilde{\theta}(yz)=yz\), while \(\tilde{\tilde{\theta}}(yz)=\frac{1}{2}(yz+zy)=yz+\frac{1}{2}[z,\,y]\).
**Lemma**.: _The map \(\tilde{\theta}\) is an isomorphism of \(ad\,U(\mathfrak{r})\)-modules, when \(S(\mathfrak{p}^{-})\) and \(U(\tilde{\mathfrak{p}}^{-})\) are endowed with the adjoint action._
Proof.: Since \(Id_{S(\mathfrak{m}^{-})}\) and \(\theta_{|S(\mathfrak{r})}:S(\mathfrak{r})\longrightarrow U(\mathfrak{r})\) are isomorphisms, it follows that \(\tilde{\theta}\) is an isomorphism too.
Let \(z\in\mathfrak{r}\), \(s\in S(\mathfrak{m}^{-})\) and \(s^{\prime}\in S(\mathfrak{r})\). Observe that one has that \(ad\,z(s)\in S(\mathfrak{m}^{-})\), and this element may be viewed equally as an element in \(S(\mathfrak{p}^{-})\) or in \(U(\tilde{\mathfrak{p}}^{-})\). Moreover \(ad\,z(s^{\prime})\in S(\mathfrak{r})\) and \(ad\,z(\theta(s^{\prime}))\in U(\mathfrak{r})\). Since \(ad\,z\)
is a derivation and since \(\theta\) is a morphism of \(ad\,U(\mathfrak{r})\)-modules we have, by equation (19):
\[\tilde{\theta}(ad\,z(ss^{\prime})) =\tilde{\theta}(ad\,z(s)s^{\prime}+s\,ad\,z(s^{\prime}))\] \[=ad\,z(s)\theta(s^{\prime})+s\,\theta(ad\,z(s^{\prime}))\] \[=ad\,z(s)\theta(s^{\prime})+s\,ad\,z(\theta(s^{\prime}))\] \[=ad\,z\,(s\,\theta(s^{\prime}))\] \[=ad\,z(\tilde{\theta}(ss^{\prime}))\]
This proves the lemma.
### Generalized adjoint action of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})\)
Recall the isomorphism of vector spaces \(U(\tilde{\mathfrak{p}}^{-})\simeq S(\mathfrak{m}^{-})\otimes U(\mathfrak{r})\). Then one has that
\[U(\tilde{\mathfrak{p}}^{-})\simeq\bigoplus_{k\in\mathbb{N}}S_{k}(\mathfrak{m} ^{-})\otimes U(\mathfrak{r}) \tag{20}\]
and for all \(k\in\mathbb{N}\),
\[U_{k}(\tilde{\mathfrak{p}}^{-})\simeq\bigoplus_{0\leq j\leq k}S_{j}(\mathfrak{ m}^{-})\otimes U_{k-j}(\mathfrak{r}) \tag{21}\]
as vector spaces. Recall also the coadjoint representation of \(\tilde{\mathfrak{p}}\) in the symmetric algebra \(S(\mathfrak{p}^{-})\), which we have denoted by \(ad^{*}\) (subsection 4.2). Fix \(k\) and \(j\) in \(\mathbb{N}\) and set \(S_{-1}(\mathfrak{m}^{-})=\{0\}\). One has that
\[\forall x\in\mathfrak{m},\ \forall s\in S_{k}(\mathfrak{m}^{-}),\ ad^{*}x(s) \in S_{k-1}(\mathfrak{m}^{-})\mathfrak{r}\subset S_{k}(\mathfrak{p}^{-}) \tag{22}\]
by equation (17). Then one has that
\[\forall x\in\mathfrak{m},\ \forall s\in S_{k}(\mathfrak{m}^{-}), \ \forall u^{\prime}\in U_{j}(\mathfrak{r}),\\ \tilde{\theta}(ad^{*}x(s))u^{\prime}\in S_{k-1}(\mathfrak{m}^{-} )U_{j+1}(\mathfrak{r})\subset U_{k+j}(\tilde{\mathfrak{p}}^{-}) \tag{23}\]
and that
\[\forall z\in\mathfrak{r},\ \forall s\in S_{k}(\mathfrak{m}^{-}),\ \forall u^{\prime}\in U( \mathfrak{r}),ad\,z(su^{\prime})\in S_{k}(\mathfrak{m}^{-})U(\mathfrak{r}) \subset U(\tilde{\mathfrak{p}}^{-}). \tag{24}\]
We set
\[\forall x\in\mathfrak{m},\,\forall s\in S_{k}(\mathfrak{m}^{-}),\,\forall u^{ \prime}\in U(\mathfrak{r}),\ \ ad^{**}x(su^{\prime})=\tilde{\theta}(ad^{*}x(s))u^{\prime}\in U(\tilde{ \mathfrak{p}}^{-}) \tag{25}\]
and
\[\forall z\in\mathfrak{r},\,\forall s\in S_{k}(\mathfrak{m}^{-}),\,\forall u^{ \prime}\in U(\mathfrak{r}),\ \ ad^{**}z(su^{\prime})=ad\,z(su^{\prime})\in U(\tilde{ \mathfrak{p}}^{-}). \tag{26}\]
Observe that
\[\forall x\in\mathfrak{m},\,\forall u^{\prime}\in U(\mathfrak{r}),\,ad^{**}x(u^ {\prime})=0 \tag{27}\]
and
\[\forall x\in\mathfrak{m},\,\forall s\in S_{k}(\mathfrak{m}^{-}),\,\forall u ^{\prime}\in U(\mathfrak{r}),\ ad^{**}x(su^{\prime})=ad^{**}x(s)u^{\prime}. \tag{28}\]
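For instance, using the explicit expression of \(ad^{*}\) on monomials recalled in equation (67) below, one finds, for root vectors \(x\in\mathfrak{m}\), \(y\in\mathfrak{m}^{-}\) and for \(u^{\prime}\in U(\mathfrak{r})\), that
\[ad^{**}x(yu^{\prime})=\tilde{\theta}(ad^{*}x(y))u^{\prime}=\begin{cases}[x,\,y]\,u^{\prime}&\text{if }[x,\,y]\in\mathfrak{r},\\ 0&\text{otherwise},\end{cases}\]
in accordance with equations (22) and (23) for \(k=1\).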
**Lemma**.: _Equations (25) and (26) extend to a left action of \(A\) on the enveloping algebra \(U(\tilde{\mathfrak{p}}^{-})\). We call this action the generalized adjoint action of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})\)._
Proof.: Since equation (26) is just the adjoint action, it extends to a left action of \(U(\mathfrak{r})\) on \(U(\tilde{\mathfrak{p}}^{-})\) by [9, 2.2.1, 2.4.9].
Now consider \(x\in\mathfrak{m}\). One can extend equation (25) by linearity so that \(ad^{**}x\in End(U(\tilde{\mathfrak{p}}^{-}))\). Let us explain why this is well defined.
Note first that, for \(y\in\mathfrak{m}^{-}\) and \(z\in\mathfrak{r}\), one sets
\[ad^{**}x(zy)=ad^{**}x(yz)+ad^{**}x([z,\,y]),\]
since \(zy=yz+[z,\,y]\) in \(U(\tilde{\mathfrak{p}}^{-})\).
More generally by equation (20) every element in \(U(\tilde{\mathfrak{p}}^{-})\) may be written in the form \(\sum_{i\in I}s_{i}u^{\prime}_{i}\) with \(I\) a finite set and for all \(i\in I\), \(s_{i}\in S(\mathfrak{m}^{-})\) and \(u^{\prime}_{i}\in U(\mathfrak{r})\), with the \(s_{i}\), \(i\in I\), linearly independent. Then if such an element is zero, we have that \(u^{\prime}_{i}=0\) for all \(i\in I\), and then \(ad^{**}x(\sum_{i\in I}s_{i}u^{\prime}_{i})=\sum_{i\in I}\tilde{\theta}(ad^{*}x (s_{i}))u^{\prime}_{i}=0\).
Moreover for \(s\), \(s^{\prime}\in S(\mathfrak{m}^{-})\), one has
\[ad^{**}x(ss^{\prime})=\tilde{\theta}(ad^{*}x(ss^{\prime}))=\tilde{\theta}(ad^ {*}x(s^{\prime}s))=ad^{**}x(s^{\prime}s)\]
since \(ad^{*}x\) is an endomorphism of \(S(\mathfrak{p}^{-})\). Then \(ad^{**}x\) is well defined on \(U(\tilde{\mathfrak{p}}^{-})\).
Finally \(ad^{**}\) is a linear map from \(\mathfrak{m}\) to \(End(U(\tilde{\mathfrak{p}}^{-}))\), which extends naturally to a \(\Bbbk\)-algebra morphism from \(T(\mathfrak{m})\) to \(End(U(\tilde{\mathfrak{p}}^{-}))\). We then obtain left \(U(\mathfrak{r})\)- and \(T(\mathfrak{m})\)-module structures on \(U(\tilde{\mathfrak{p}}^{-})\).
It remains to check that these two structures combine into a left \(A\)-module structure on \(U(\tilde{\mathfrak{p}}^{-})\), that is, that they are compatible with equation (13), which defines the smash product \(A\). For this it suffices to prove that
\[\forall x\in\mathfrak{m},\;\forall z\in\mathfrak{r},\;ad^{**}z\circ ad^{**}x- ad^{**}x\circ ad^{**}z=ad^{**}[z,\,x]. \tag{29}\]
Let \(x\in\mathfrak{m}\), \(z\in\mathfrak{r}\), \(s\in S(\mathfrak{m}^{-})\) and \(u^{\prime}\in U(\mathfrak{r})\). Since \(ad\,z\) is a derivation in \(U(\tilde{\mathfrak{p}}^{-})\), one has that
\[ad^{**}(zx-xz)(su^{\prime}) =(ad^{**}z\circ ad^{**}x-ad^{**}x\circ ad^{**}z)(su^{\prime})\] \[=ad\,z(\tilde{\theta}(ad^{*}x(s))u^{\prime})\] \[-ad^{**}x(ad\,z(s)u^{\prime}+s\,ad\,z(u^{\prime}))\] \[=ad\,z(\tilde{\theta}(ad^{*}x(s)))u^{\prime}+\tilde{\theta}(ad^{ *}x(s))ad\,z(u^{\prime})\] \[-\tilde{\theta}(ad^{*}x(ad\,z(s)))u^{\prime}-\tilde{\theta}(ad^{ *}x(s))ad\,z(u^{\prime})\]
since moreover \(ad\,z(s)\in S(\mathfrak{m}^{-})\) and \(ad\,z(u^{\prime})\in U(\mathfrak{r})\). Then
\[ad^{**}(zx-xz)(su^{\prime})=\tilde{\theta}((ad\,z\circ ad^{*}x)(s))u^{\prime}- \tilde{\theta}(ad^{*}x(ad\,z(s)))u^{\prime}\]
by lemma 4.3 and then
\[ad^{**}(zx-xz)(su^{\prime})=\tilde{\theta}((ad^{*}z\circ ad^{*}x)(s))u^{\prime }-\tilde{\theta}((ad^{*}x\circ ad^{*}z)(s))u^{\prime}\]
since, for all \(t\in S(\mathfrak{p}^{-})\), one has that \(ad\,z(t)=ad^{*}z(t)\) by equation (16).
Since
\[ad^{*}z\circ ad^{*}x-ad^{*}x\circ ad^{*}z=ad^{*}[z,\,x]\]
in \(End(S(\mathfrak{p}^{-}))\) by the proof of Lemma 4.2, the required equation (29) follows.
**Remark**.: _We will see in subsection 4.5 why we call this left action \(ad^{**}\) of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})\) a generalized adjoint action._
### Left and right actions of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})\)
Here we will define a right, resp. a left action, of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})\) as follows.
\[\forall u^{\prime}\in U(\mathfrak{r}),\;\forall u\in U(\tilde{\mathfrak{p}}^{ -}),\;R(u^{\prime})(u)=uu^{\prime} \tag{30}\]
(product in \(U(\tilde{\mathfrak{p}}^{-})\)). Then \(R_{|U(\mathfrak{r})}\) is a right action of \(U(\mathfrak{r})\) on \(U(\tilde{\mathfrak{p}}^{-})\) called the regular right action (see [9, 2.2.21]). We extend this right action by setting
\[\forall x\in\mathfrak{m},\;R(x)=0. \tag{31}\]
One checks immediately that the map \(R\) induces a right action of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})\) (still denoted by \(R\)). It follows that the map \(a\in A\mapsto R(a^{\top})\) is a left action of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})\).
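Let us make the immediate verification explicit: since \(R\) must be an anti-homomorphism (being a right action) and \(R\) vanishes on \(\mathfrak{m}\), one has, for all \(z\in\mathfrak{r}\) and \(x\in\mathfrak{m}\),
\[R(zx-xz)=R(x)\circ R(z)-R(z)\circ R(x)=0=R([z,\,x])\]
because \([z,\,x]\in\mathfrak{m}\); hence \(R\) is indeed compatible with equation (14), which defines the smash product \(A\).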
One also sets:
\[\forall u^{\prime}\in U(\mathfrak{r}),\;\forall u\in U(\tilde{\mathfrak{p}}^{ -}),\;L(u^{\prime})(u)=u^{\prime}u \tag{32}\]
(product in \(U(\tilde{\mathfrak{p}}^{-})\)).
Then \(L_{|U(\mathfrak{r})}\) is a left action of \(U(\mathfrak{r})\) on \(U(\tilde{\mathfrak{p}}^{-})\) called the regular left action (see [9, 2.2.21]). We extend this left action by setting
\[\forall x\in\mathfrak{m},\;L(x)=ad^{**}x \tag{33}\]
(see equation (25)), which extends by the proof of lemma 4.4 to a left action of \(T(\mathfrak{m})\) on \(U(\tilde{\mathfrak{p}}^{-})\).
**Lemma**.: _The map \(L\) extends to a left action of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})\) (still denoted by \(L\)). Note that this is not in general a left action of \(U(\mathfrak{p})\) nor of \(U(\tilde{\mathfrak{p}})\) on \(U(\tilde{\mathfrak{p}}^{-})\)._
Proof.: We have to check that the map \(L\) preserves equation (13) which defines the smash product \(A\) and for this it suffices to check that \(L\) preserves equation (14). In other words we have to check that
\[\forall z\in\mathfrak{r},\;\forall x\in\mathfrak{m},\;L([z,\,x])=L(z)\circ L (x)-L(x)\circ L(z). \tag{34}\]
Let \(x\in\mathfrak{m}\), \(z\in\mathfrak{r}\), \(s\in S_{k}(\mathfrak{m}^{-})\) and \(u^{\prime}\in U(\mathfrak{r})\). One has that
\[(L(z)\circ L(x)-L(x)\circ L(z)-L([z,\,x]))(su^{\prime})\] \[=z\,ad^{**}x(su^{\prime})-ad^{**}x(zsu^{\prime})-ad^{**}[z,\,x](su ^{\prime})\] \[=z\,ad^{**}x(su^{\prime})-(ad^{**}x\circ ad^{**}z)(su^{\prime})-ad^ {**}x(su^{\prime}z)\] \[-ad^{**}[z,\,x](su^{\prime})\]
since \(ad^{**}z(su^{\prime})=ad\,z(su^{\prime})=zsu^{\prime}-su^{\prime}z\) in \(U(\tilde{\mathfrak{p}}^{-})\).
Recall that \(ad^{**}x(su^{\prime}z)=\tilde{\theta}(ad^{*}x(s))u^{\prime}z=ad^{**}x(su^{\prime})z\). Then
\[(L(z)\circ L(x)-L(x)\circ L(z)-L([z,\,x]))(su^{\prime})\] \[=z\,ad^{**}x(su^{\prime})-(ad^{**}x\circ ad^{**}z)(su^{\prime})-ad^ {**}x(su^{\prime})z\] \[-ad^{**}[z,\,x](su^{\prime})\] \[=(ad^{**}z\circ ad^{**}x)(su^{\prime})-(ad^{**}x\circ ad^{**}z)(su^ {\prime})-ad^{**}[z,\,x](su^{\prime})\]
since in \(U(\tilde{\mathfrak{p}}^{-})\) we have \((ad^{**}z\circ ad^{**}x)(su^{\prime})=z\,ad^{**}x(su^{\prime})-ad^{**}x(su^{ \prime})\,z\). Now equation (29) gives equation (34).
Recall (see [24, 1.3.1] for instance) that the adjoint action in a Hopf algebra \(A\) may be expressed by using the right action \(R\) and the left action \(L\) of \(A\), as in the following proposition. Hence we may view \(ad^{**}\) as a generalized adjoint action (here the Hopf algebra \(A\) does not act on itself but on \(U(\tilde{\mathfrak{p}}^{-})\)).
**Proposition**.: _One has the following._
\[\forall a\in A,\,\,\,ad^{**}a=L(a_{1})\circ R(a_{2}^{\top}) \tag{35}\]
_where \(\Delta_{A}(a)=a_{1}\otimes a_{2}\) (with Sweedler notation)._
Proof.: Observe that, as vector spaces, one has
\[A\simeq U(\mathfrak{r})\oplus\mathfrak{m}\otimes T(\mathfrak{m})\otimes U( \mathfrak{r}).\]
Let \(a\in U(\mathfrak{r})\). Then in this case, \(ad^{**}a=ad\,a\) and for all \(u\in U(\tilde{\mathfrak{p}}^{-})\) one has \((L(a_{1})\circ R(a_{2}^{\top}))(u)=a_{1}ua_{2}^{\top}\) (product in \(U(\tilde{\mathfrak{p}}^{-})\)). Hence the required equation (35) holds in this case, by [24, 1.3.1].
Assume now that \(a=uu^{\prime}\) with \(u\in\mathfrak{m}\otimes T(\mathfrak{m})\) and \(u^{\prime}\in U(\mathfrak{r})\). Set \(\Delta_{A}(u)=u_{1}\otimes u_{2}\) and \(\Delta_{A}(u^{\prime})=u^{\prime}_{1}\otimes u^{\prime}_{2}\). We have that \(u^{\prime}_{1},\,u^{\prime}_{2}\in U(\mathfrak{r})\) and \(u_{1},\,u_{2}\in T(\mathfrak{m})\) and \(\Delta_{A}(a)=u_{1}u^{\prime}_{1}\otimes u_{2}u^{\prime}_{2}\). Moreover one has that \(\Delta_{A}(u)=u\otimes 1+\sum_{i\in I}u_{1i}\otimes u_{2i}\) with for all \(i\in I\), \(u_{2i}\in\mathfrak{m}\otimes T(\mathfrak{m})\). But if \(u_{2i}\in\mathfrak{m}\otimes T(\mathfrak{m})\) then \(R(u^{\top}_{2i})=0\). It follows that, for all \(v\in U(\tilde{\mathfrak{p}}^{-})\), one has
\[(L(a_{1})\circ R(a_{2}^{\top}))(v) =(L(uu^{\prime}_{1})\circ R({u^{\prime}}_{2}^{\top}))(v)\] \[=(L(u)\circ L(u^{\prime}_{1})\circ R({u^{\prime}}_{2}^{\top}))(v)\] \[=(L(u)\circ ad^{**}u^{\prime})(v)\] \[=(ad^{**}u\circ ad^{**}u^{\prime})(v)\] \[=ad^{**}a(v)\]
which completes the proof.
**Remark**.: _Actually one also has that_
\[\forall a\in A,\,\forall b\in A,\,R(b)\circ L(a)=L(a)\circ R(b). \tag{36}\]
Indeed if \(a,\,b\in U(\mathfrak{r})\) or if \(b\in\mathfrak{m}\otimes T(\mathfrak{m})\otimes U(\mathfrak{r})\) then equation (36) is immediate, by the associativity of the product in \(U(\tilde{\mathfrak{p}}^{-})\) or because, in the second case, \(R(b)=0\). Finally when \(b\in U(\mathfrak{r})\) and \(a\in\mathfrak{m}\otimes T(\mathfrak{m})\otimes U(\mathfrak{r})\), equation (36) follows from equation (28).
## 5. Matrix coefficients of \(\widetilde{V}(\lambda)\).
### Definitions and further notation
Let \(\lambda\in P^{+}(\pi)\). Here we use the notation and results of subsection 3.2. By Lemma 3.2 the degenerate highest weight module \(\widetilde{V}(\lambda)\) is endowed with a left \(U(\tilde{\mathfrak{p}}^{-})\)-module structure. Denote by \(\widetilde{V}(\lambda)^{*}\) its dual vector space. Let \(\tilde{v}\in\widetilde{V}(\lambda)\), \(\tilde{\xi}\in\widetilde{V}(\lambda)^{*}\), \(u\in U(\tilde{\mathfrak{p}}^{-})\). Denoting by \(u.\tilde{v}\) the action of \(u\) on \(\tilde{v}\) for this left \(U(\tilde{\mathfrak{p}}^{-})\)-module structure on \(\widetilde{V}(\lambda)\), we denote by \(\tilde{\xi}.u\) the right action it implies on \(\widetilde{V}(\lambda)^{*}\), namely \((\tilde{\xi}.u)(\tilde{v})=\tilde{\xi}(u.\tilde{v})\). Recall \(\tilde{v}_{\lambda}=gr_{0}(v_{\lambda})\) and the isomorphism \(\beta_{\lambda}:\widetilde{V}(\lambda)\longrightarrow V(\lambda)\) which is an isomorphism of left \(U(\mathfrak{r})\)-modules. In particular this isomorphism preserves the weights. Its dual map \({}^{t}\beta_{\lambda}:V(\lambda)^{*}\longrightarrow\widetilde{V}(\lambda)^{*}\) is also an isomorphism of right \(U(\mathfrak{r})\)-modules. By definition of \(\beta_{\lambda}\), one has that \(\beta_{\lambda}(\tilde{v}_{\lambda})=v_{\lambda}\) and then \(\beta_{\lambda}(\widetilde{V}^{\prime}(\lambda))=\beta(U(\mathfrak{r}).\tilde {v}_{\lambda})=U(\mathfrak{r}).v_{\lambda}=V^{\prime}(\lambda)\).
Recall \(\tilde{\mathfrak{n}}^{-}=\mathfrak{n}_{\pi^{\prime}}^{-}\ltimes(\mathfrak{m}^{-})^{a}\subset\tilde{\mathfrak{p}}^{-}\). Set \(\tilde{\xi}_{\lambda}={}^{t}\beta_{\lambda}(\xi_{\lambda})\) where \(\xi_{\lambda}\) is the unique vector in \(V(\lambda)^{*}\) of (right) weight \(\lambda\) such that \(\xi_{\lambda}(v_{\lambda})=1\). Then \(\tilde{\xi}_{\lambda}\) is the unique vector in \(\widetilde{V}(\lambda)^{*}\) of weight \(\lambda\) such that \(\tilde{\xi}_{\lambda}(\tilde{v}_{\lambda})=1\) and by weight considerations one has that, for all \(y\in\tilde{\mathfrak{n}}^{-}\), \(\tilde{\xi}_{\lambda}.y=0\) since \(\xi_{\lambda}\) is a vector of lowest weight in \(V(\lambda)^{*}\). Recall that \(v_{w_{0}\lambda}\) is a chosen nonzero vector in \(V(\lambda)\) of weight \(w_{0}\lambda\) and that it is a lowest weight vector in \(V(\lambda)\). Then let \(\xi_{w_{0}\lambda}\in V(\lambda)^{*}\) be the vector of (right) weight \(w_{0}\lambda\) such that \(\xi_{w_{0}\lambda}(v_{w_{0}\lambda})=1\); it is a highest weight vector in \(V(\lambda)^{*}\). Set also \(\tilde{v}_{w_{0}\lambda}=\beta_{\lambda}^{-1}(v_{w_{0}\lambda})\in\widetilde{V}(\lambda)\). By weight considerations one has that \(y.\tilde{v}_{w_{0}\lambda}=0\) for all \(y\in\tilde{\mathfrak{n}}^{-}\). Set also \(\tilde{\xi}_{w_{0}\lambda}={}^{t}\beta_{\lambda}(\xi_{w_{0}\lambda})\in\widetilde{V}(\lambda)^{*}\). Then \(\tilde{\xi}_{w_{0}\lambda}(\tilde{v}_{w_{0}\lambda})=1\). Since all vectors in \(V(\lambda)\) of weight \(w_{0}\lambda\) are proportional, one may observe that there exists \(k_{0}\in\mathbb{N}\) such that
\[v_{w_{0}\lambda}\in\mathscr{F}^{k_{0}}(V(\lambda)). \tag{37}\]
Then one has that
\[\tilde{v}_{w_{0}\lambda}=gr_{k_{0}}(v_{w_{0}\lambda}). \tag{38}\]
Set \(V^{\prime\prime}(\lambda)=U(\mathfrak{r}).v_{w_{0}\lambda}=U(\mathfrak{n}_{\pi^{\prime}}).v_{w_{0}\lambda}\); it is an irreducible \(U(\mathfrak{r})\)-module of lowest weight \(w_{0}\lambda\). Setting \(\widetilde{V^{\prime\prime}}(\lambda)=\beta_{\lambda}^{-1}(V^{\prime\prime}(\lambda))\subset\widetilde{V}(\lambda)\), we have that \(\widetilde{V^{\prime\prime}}(\lambda)=U(\mathfrak{r}).\tilde{v}_{w_{0}\lambda}\). Then its dual space \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\) is such that \(\widetilde{V^{\prime\prime}}(\lambda)^{*}={}^{t}\beta_{\lambda}(V^{\prime\prime}(\lambda)^{*})=\tilde{\xi}_{w_{0}\lambda}.U(\mathfrak{n}_{\pi^{\prime}}^{-})=\tilde{\xi}_{w_{0}\lambda}.U(\mathfrak{r})\subset\widetilde{V}(\lambda)^{*}\).
Since \(U(\tilde{\mathfrak{p}}^{-})\) has a representation in \(\widetilde{V}(\lambda)\) by \((i)\) of Lemma 3.2, we may consider by [9, 2.7.8] the space \(C(\widetilde{V}(\lambda))\) of matrix coefficients of \(\widetilde{V}(\lambda)\), which is the \(\Bbbk\)-vector subspace of \(U(\tilde{\mathfrak{p}}^{-})^{*}\) generated by the linear forms \(c_{\xi,\,v}^{\lambda}\), or simply \(c_{\xi,\,v}\), defined by
\[c_{\xi,\,v}:\;u\in U(\tilde{\mathfrak{p}}^{-})\mapsto\xi(u.v)\in\Bbbk\]
for all \(\xi\in\widetilde{V}(\lambda)^{*}\) and \(v\in\widetilde{V}(\lambda)\). By equation (7) of Lemma 3.2, we may also define the \(\Bbbk\)-vector subspace of \(C(\widetilde{V}(\lambda))\) generated by the matrix coefficients \(c_{\xi,\,v^{\prime}}\) with \(\xi\in\widetilde{V}(\lambda)^{*}\) and \(v^{\prime}\in\widetilde{V^{\prime}}(\lambda)\subset\widetilde{V}(\lambda)\), which we will denote by \(\widetilde{C}_{\mathfrak{p}}(\lambda)\).
Finally denote by \(\widetilde{C}_{\mathfrak{r}}(\lambda)\) the subspace of \(\widetilde{C}_{\mathfrak{p}}(\lambda)\) generated by the matrix coefficients \(c_{\xi,\,v}\) where \(\xi\in\widetilde{V^{\prime\prime}}(\lambda)^{*}\) and \(v\in\widetilde{V^{\prime}}(\lambda)\).
### Tensor decomposition
**Lemma**.: _Let \(\lambda,\,\mu\in P^{+}(\pi)\). Then \(\widetilde{V^{\prime}}(\lambda)\otimes\widetilde{V^{\prime}}(\mu)\), resp. \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\otimes\widetilde{V^{\prime\prime}}(\mu)^{*}\), is a direct sum of some copies of \(\widetilde{V^{\prime}}(\nu)\), resp. \(\widetilde{V^{\prime\prime}}(\nu)^{*}\), for \(\nu\in P^{+}(\pi)\). Each of these tensor products contains a unique copy of \(\widetilde{V^{\prime}}(\lambda+\mu)\), resp. of \(\widetilde{V^{\prime\prime}}(\lambda+\mu)^{*}\)._
Proof.: The proof is similar to the proof of [14, lemma 2.2]. We give it for the reader's convenience. Let \(\nu\) be an \(\mathfrak{h}\)-weight of \(\widetilde{V^{\prime}}(\lambda)\). Then \(\nu\in\lambda-\mathbb{N}\pi^{\prime}\). Since \(\lambda\in P^{+}(\pi)\) we have that, for all \(\alpha\in\pi\setminus\pi^{\prime}\), \(\langle\alpha\!,\,\nu\rangle\in\mathbb{N}\). Moreover every vector in \(\widetilde{V^{\prime}}(\lambda)\) is annihilated by \(\rho_{\lambda}(\mathfrak{m})\) by remark 12. Since \(\mathfrak{r}^{\prime}\) is a semisimple Lie algebra, the finite dimensional \(U(\mathfrak{r}^{\prime})\)-module (for diagonal action) \(\widetilde{V^{\prime}}(\lambda)\otimes\widetilde{V^{\prime}}(\mu)\) decomposes into a direct sum of irreducible \(U(\mathfrak{r}^{\prime})\)-modules, each of them being generated by a highest weight nonzero vector whose \(\mathfrak{h}\)-weight actually belongs to \(P^{+}(\pi)\): indeed this highest weight nonzero vector \(\tilde{v}\) is such that \((\rho_{\lambda}\otimes\rho_{\mu})(x)(\tilde{v})=0\) for all \(x\in\mathfrak{n}\), where \(\rho_{\lambda}\otimes\rho_{\mu}\) is the tensor product of the representations \(\rho_{\lambda}\) and \(\rho_{\mu}\) as defined for instance in [9, 1.2.14], and its \(\mathfrak{h}\)-weight belongs to \(\lambda+\mu-\mathbb{N}\pi^{\prime}\). Thus the tensor product \(\widetilde{V^{\prime}}(\lambda)\otimes\widetilde{V^{\prime}}(\mu)\) is a direct sum of some copies of \(\widetilde{V^{\prime}}(\nu)\), for \(\nu\in P^{+}(\pi)\) such that \(\nu\in\lambda+\mu-\mathbb{N}\pi^{\prime}\). Moreover \(U(\mathfrak{r}).(\widetilde{v}_{\lambda}\otimes\widetilde{v}_{\mu})\) is the unique copy of \(\widetilde{V^{\prime}}(\lambda+\mu)\) which occurs in this tensor product.
Observe that \(\widetilde{V}(\lambda)^{*}\) may be viewed as a left \(U(\mathfrak{r})\)-module, by setting for all \(u\in U(\mathfrak{r})\), for all \(\xi\in\widetilde{V}(\lambda)^{*}\), \(u.\xi=\xi.u^{\top}\). Then \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\simeq\widetilde{V^{\prime}}(-w_{0}\lambda)\) as left \(U(\mathfrak{r})\)-modules. Then one obtains similarly the second part of the lemma, \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\otimes\widetilde{V^{\prime\prime}}( \mu)^{*}\) being a direct sum of some copies of \(\widetilde{V^{\prime\prime}}(\nu)^{*}\), with \(\nu\in P^{+}(\pi)\) such that \(w_{0}\nu\in w_{0}\lambda+w_{0}\mu+\mathbb{N}\pi^{\prime}\). Finally \((\widetilde{\xi}_{w_{0}\lambda}\otimes\widetilde{\xi}_{w_{0}\mu}).U(\mathfrak{r})\) is the unique copy of \(\widetilde{V^{\prime\prime}}(\lambda+\mu)^{*}\) which occurs in the tensor product \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\otimes\widetilde{V^{\prime\prime}}( \mu)^{*}\).
### Direct sums
Recall that the dual vector space \(U(\tilde{\mathfrak{p}}^{-})^{*}\) of \(U(\tilde{\mathfrak{p}}^{-})\) is an associative algebra with product given by the dual map of the coproduct in \(U(\tilde{\mathfrak{p}}^{-})\) (see for instance [9, 2.7.4]).
**Lemma**.: _The sum \(\widetilde{C}_{\mathfrak{p}}=\sum_{\lambda\in P^{+}(\pi)}\widetilde{C}_{ \mathfrak{p}}(\lambda)\) is a direct sum. The same holds for \(\widetilde{C}_{\mathfrak{r}}=\sum_{\lambda\in P^{+}(\pi)}\widetilde{C}_{ \mathfrak{r}}(\lambda)\). Moreover the latter is a subalgebra of \(U(\tilde{\mathfrak{p}}^{-})^{*}\)._
Proof.: Let \(\lambda\in P^{+}(\pi)\) and for all \(\xi\in\widetilde{V}(\lambda)^{*}\), denote by \(h_{\xi}:\widetilde{V^{\prime}}(\lambda)\longrightarrow U(\tilde{\mathfrak{p}}^{-})^{*}\) the map such that \(h_{\xi}(v)=c_{\xi,\,v}\) for all \(v\in\widetilde{V^{\prime}}(\lambda)\). For all \(u^{\prime}\in U(\mathfrak{r})\), recall \(R(u^{\prime})\) the (regular) right action of \(u^{\prime}\) on \(U(\tilde{\mathfrak{p}}^{-})\) defined by \(R(u^{\prime})(u)=uu^{\prime}\) for all \(u\in U(\tilde{\mathfrak{p}}^{-})\), where \(uu^{\prime}\) is the product in \(U(\tilde{\mathfrak{p}}^{-})\) (see subsection 4.5). Then its dual map \({}^{t}R(u^{\prime}):U(\tilde{\mathfrak{p}}^{-})^{*}\longrightarrow U(\tilde{\mathfrak{p}}^{-})^{*}\) defines a left action on \(U(\tilde{\mathfrak{p}}^{-})^{*}\), called the coregular right representation of \(U(\mathfrak{r})\) on \(U(\tilde{\mathfrak{p}}^{-})^{*}\) (see [9, 2.7.7]). One sees easily that \(h_{\xi}\) is a morphism of \(U(\mathfrak{r})\)-modules, when \(U(\tilde{\mathfrak{p}}^{-})^{*}\) is endowed with the coregular right representation of \(U(\mathfrak{r})\) (see also [9,
2.7.11]). When \(\xi\neq 0\), one checks that \(h_{\xi}(\widetilde{V^{\prime}}(\lambda))\neq\{0\}\) and then \(h_{\xi}(\widetilde{V^{\prime}}(\lambda))\) is an irreducible \(U(\mathfrak{r})\)-module for the coregular right representation. Then \(\widetilde{C}_{\mathfrak{p}}(\lambda)=\sum_{\xi\in\widetilde{V}(\lambda)^{*} \setminus\{0\}}h_{\xi}(\widetilde{V^{\prime}}(\lambda))\) is a sum of irreducible \(U(\mathfrak{r})\)-modules all isomorphic to \(\widetilde{V^{\prime}}(\lambda)\). Since, for \(\lambda\neq\,\mu\in P^{+}(\pi)\), \(\widetilde{V^{\prime}}(\lambda)\) and \(\widetilde{V^{\prime}}(\mu)\) are not isomorphic as \(U(\mathfrak{r})\)-modules, it follows that \(\sum_{\lambda\in P^{+}(\pi)}\widetilde{C}_{\mathfrak{p}}(\lambda)\) is a direct sum. Obviously \(\widetilde{C}_{\mathfrak{r}}=\sum_{\lambda\in P^{+}(\pi)}\widetilde{C}_{ \mathfrak{r}}(\lambda)\) is also a direct sum, since \(\widetilde{C}_{\mathfrak{r}}(\lambda)\subset\widetilde{C}_{\mathfrak{p}}(\lambda)\) for all \(\lambda\in P^{+}(\pi)\). Finally \(\widetilde{C}_{\mathfrak{r}}\) is an algebra by [9, 2.7.10], as a consequence of lemma 5.2.
### Isomorphisms
Let \(\lambda\in P^{+}(\pi)\) and set \(\Phi^{\lambda}_{\mathfrak{p}}:\widetilde{V}(\lambda)^{*}\otimes\widetilde{V^ {\prime}}(\lambda)\longrightarrow\widetilde{C}_{\mathfrak{p}}(\lambda)\) defined by \(\xi\otimes v^{\prime}\mapsto c_{\xi,\,v^{\prime}}\) and extended by linearity. Similarly one sets \(\Phi^{\lambda}_{\mathfrak{r}}:\widetilde{V^{\prime\prime}}(\lambda)^{*} \otimes\widetilde{V^{\prime}}(\lambda)\longrightarrow\widetilde{C}_{\mathfrak{ r}}(\lambda)\) defined by \(\xi\otimes v^{\prime}\mapsto c_{\xi,\,v^{\prime}}\) extended by linearity.
**Lemma**.: _The maps \(\Phi^{\lambda}_{\mathfrak{p}}\) and \(\Phi^{\lambda}_{\mathfrak{r}}\) are isomorphisms of vector spaces._
Proof.: Firstly these maps are obviously well defined and onto. It remains to verify the injectivity. Assume that there exist a finite set \(I\) and, for all \(i\in I\), \(\xi_{i}\in\widetilde{V}(\lambda)^{*}\) and \(v^{\prime}_{i}\in\widetilde{V^{\prime}}(\lambda)=U(\mathfrak{r}).\tilde{v}_{\lambda}\) such that \(\sum_{i\in I}c_{\xi_{i},\,v^{\prime}_{i}}=0\). We can also assume that the \(v^{\prime}_{i}\), \(i\in I\), are linearly independent. We want to show that for all \(i\in I\), \(\xi_{i}=0\). Assume that there exists \(i_{0}\in I\) such that \(\xi_{i_{0}}\neq 0\) and complete \(\xi_{i_{0}}\) into a basis of \(\widetilde{V}(\lambda)^{*}\). By taking the dual basis, there exists \(v_{i_{0}}\in\widetilde{V}(\lambda)\) such that \(\xi_{i_{0}}(v_{i_{0}})=1\). By \((i)\) of lemma 3.2 there exists \(u_{0}\in U(\tilde{\mathfrak{p}}^{-})\) such that \(v_{i_{0}}=u_{0}.\tilde{v}_{\lambda}\). Now recall that \(\widetilde{V^{\prime}}(\lambda)\) is a left irreducible \(U(\mathfrak{r})\)-module. Then by the Jacobson density theorem (see [42, Chap. 3, § 3, 2]), there exists \(a\in U(\mathfrak{r})\) such that for all \(i\in I\setminus\{i_{0}\}\), \(a.v^{\prime}_{i}=0\) and \(a.v^{\prime}_{i_{0}}=\tilde{v}_{\lambda}\). Since \(u_{0}a\in U(\tilde{\mathfrak{p}}^{-})\) we obtain that \(0=\sum_{i\in I}c_{\xi_{i},\,v^{\prime}_{i}}(u_{0}a)=\xi_{i_{0}}(u_{0}.(a.v^{\prime}_{i_{0}}))=\xi_{i_{0}}(u_{0}.\tilde{v}_{\lambda})=\xi_{i_{0}}(v_{i_{0}})=1\), which is a contradiction. Hence the lemma holds for the map \(\Phi^{\lambda}_{\mathfrak{p}}\), and of course also for \(\Phi^{\lambda}_{\mathfrak{r}}\).
### The dual representation of \(ad^{**}\)
Recall the left representation of \(A\) in \(U(\tilde{\mathfrak{p}}^{-})\) defined in subsection 4.4, which we have denoted by \(ad^{**}\). Then the dual representation of \(A\) in \(U(\tilde{\mathfrak{p}}^{-})^{*}\) is defined as follows.
\[\forall a\in A,\,\forall f\in U(\tilde{\mathfrak{p}}^{-})^{*},\,\,\,a.f=f\circ ad ^{**}a^{\top} \tag{39}\]
(where recall \(a^{\top}\) was defined in equation (15)).
This defines a left action of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})^{*}\) (see for instance [9, 2.2.19]) and by proposition 4.5, we deduce that one has
\[\forall a\in A,\,\forall f\in U(\tilde{\mathfrak{p}}^{-})^{*},\,\,\,a.f=f\circ L (a_{2}^{\top})\circ R(a_{1})=({}^{t}R(a_{1})\circ{}^{t}L(a_{2}^{\top}))(f) \tag{40}\]
where \(\Delta_{A}(a)=a_{1}\otimes a_{2}\).
In particular one has that
\[\forall x\in\mathfrak{p},\,\,\forall f\in U(\tilde{\mathfrak{p}}^{-})^{*},\,\,x. f={}^{t}R(x)(f)-{}^{t}L(x)(f). \tag{41}\]
Then one deduces the following lemma.
**Lemma**.: _Let \(\lambda\in P^{+}(\pi)\). One has that_
\[\forall x\in\mathfrak{r},\;\forall\xi\in\widetilde{V}(\lambda)^{*},\;\forall v \in\widetilde{V^{\prime}}(\lambda),\;x.c_{\xi,\,v}=c_{\xi,\,x.v}-c_{\xi.x,\,v}. \tag{42}\]
Proof.: Let \(x\in\mathfrak{r},\;\xi\in\widetilde{V}(\lambda)^{*},\;v\in\widetilde{V^{ \prime}}(\lambda)\). One checks easily that \({}^{t}R(x)(c_{\xi,\,v})=c_{\xi,\,x.v}\), resp. that \({}^{t}L(x)(c_{\xi,\,v})=c_{\xi.x,\,v}\), by definition of \(R\), resp. of \(L\), given in equation (30), resp. in equation (32). Then the lemma follows from equation (41).
Let \(\lambda\in P^{+}(\pi)\). Endow \(U(\tilde{\mathfrak{p}}^{-})^{*}\) with the dual representation of \(A\) given by equation (39) and in particular with the dual representation of \(U(\mathfrak{r})\subset A\), which coincides in the latter case with the coadjoint representation of \(U(\mathfrak{r})\). By equation (42) every \(\widetilde{C}_{\mathfrak{p}}(\lambda)\), resp. \(\widetilde{C}_{\mathfrak{r}}(\lambda)\), is a left \(U(\mathfrak{r})\)-module for the coadjoint representation.
On the other hand, endow \(\widetilde{V}(\lambda)\) with the left action of \(U(\mathfrak{r})\) described in subsection 3.2 and \(\widetilde{V}(\lambda)^{*}\) with the left action of \(U(\mathfrak{r})\) corresponding with its dual representation, namely for all \(\xi\in\widetilde{V}(\lambda)^{*}\), for all \(u\in U(\mathfrak{r})\), \(u.\xi=\xi.u^{\top}\). Then endow the tensor product \(\widetilde{V}(\lambda)^{*}\otimes\widetilde{V^{\prime}}(\lambda)\) with the diagonal action of \(U(\mathfrak{r})\), namely for \(u\in U(\mathfrak{r})\) such that \(\Delta(u)=u_{1}\otimes u_{2}\), for all \(\xi\in\widetilde{V}(\lambda)^{*}\), for all \(v^{\prime}\in\widetilde{V^{\prime}}(\lambda)\),
\[u.(\xi\otimes v^{\prime})=u_{1}.\xi\otimes u_{2}.v^{\prime}=\xi.u_{1}^{\top} \otimes u_{2}.v^{\prime}. \tag{43}\]
In particular, one has that
\[\forall x\in\mathfrak{r},\,\forall\xi\in\widetilde{V}(\lambda)^{*},\,\forall v \in\widetilde{V^{\prime}}(\lambda),\,x.(\xi\otimes v)=-\xi.x\otimes v+\xi \otimes x.v. \tag{44}\]
Recall the isomorphisms of vector spaces \(\Phi_{\mathfrak{p}}^{\lambda}\) and \(\Phi_{\mathfrak{r}}^{\lambda}\) defined in subsection 5.4.
**Proposition**.: _Let \(\lambda\in P^{+}(\pi)\). With the left actions of \(U(\mathfrak{r})\) given by equation (43) and equation (39) respectively, the isomorphisms of vector spaces \(\Phi_{\mathfrak{p}}^{\lambda}\) and \(\Phi_{\mathfrak{r}}^{\lambda}\) are isomorphisms of left \(U(\mathfrak{r})\)-modules._
Proof.: It is immediate by equations (42) and (44).
## 6. \(\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\) is a polynomial algebra.
### The semigroup \(\mathscr{D}\)
Recall that \(\mathfrak{r}^{\prime}\), the derived subalgebra of \(\mathfrak{r}\), is a semisimple Lie algebra. Denote by \((U(\tilde{\mathfrak{p}}^{-})^{*})^{U(\mathfrak{r}^{\prime})}\) the set of elements in \(U(\tilde{\mathfrak{p}}^{-})^{*}\) which are invariant under the coadjoint action of \(U(\mathfrak{r}^{\prime})\). Since for all \(z\in\mathfrak{r}^{\prime}\) and all \(u\in U(\tilde{\mathfrak{p}}^{-})\) such that \(\Delta(u)=u_{1}\otimes u_{2}\) we have that \(\Delta(ad\,z(u))=ad\,z(u_{1})\otimes u_{2}+u_{1}\otimes ad\,z(u_{2})\), the set \((U(\tilde{\mathfrak{p}}^{-})^{*})^{U(\mathfrak{r}^{\prime})}\) is an algebra.
For all \(\lambda\in P^{+}(\pi)\), recall that \(\widetilde{C}_{\mathfrak{r}}(\lambda)\) is a left \(U(\mathfrak{r}^{\prime})\)-module (for the coadjoint representation of \(U(\mathfrak{r}^{\prime})\)) by equation (42). Then define \(\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime})}\) as the set of elements in \(\widetilde{C}_{\mathfrak{r}}(\lambda)\) which are invariant under the coadjoint action of \(U(\mathfrak{r}^{\prime})\) : this is of course a vector space.
Denote by \(\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\subset(U(\tilde{\mathfrak{p }}^{-})^{*})^{U(\mathfrak{r}^{\prime})}\) the set of elements in \(\widetilde{C}_{\mathfrak{r}}\) which are invariant under the coadjoint action of \(U(\mathfrak{r}^{\prime})\). Since \(\widetilde{C}_{\mathfrak{r}}\) is an algebra by lemma 5.3 and by what we said above, \(\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\) is an algebra.
Since moreover the sum of the \(\widetilde{C}_{\mathfrak{r}}(\lambda)\)'s is a direct sum by lemma 5.3, we have that
\[\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}=\bigoplus_{\lambda\in P ^{+}(\pi)}\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime})}. \tag{45}\]
Let \(\mathscr{D}\) be the set of all \(\lambda\in P^{+}(\pi)\) such that
\[(w_{0}^{\prime}\lambda-w_{0}\lambda,\,\pi^{\prime})=0. \tag{46}\]
**Proposition**.: _One has that, for all \(\lambda\in P^{+}(\pi)\), \(\dim\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime})}\leq 1\) with equality if and only if \(\lambda\in\mathscr{D}\) and then_
\[\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}=\bigoplus_{\lambda\in \mathscr{D}}\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime})}. \tag{47}\]
Proof.: The proof is quite similar to the proof in [13, Thm. § 3]. We give it for the reader's convenience. Fix \(\lambda\in P^{+}(\pi)\). Denote by \(\operatorname{Hom}(\widetilde{V^{\prime}}(\lambda)^{*},\,\widetilde{V^{\prime\prime}}(\lambda)^{*})\) the set of all morphisms between the vector spaces \(\widetilde{V^{\prime}}(\lambda)^{*}\) and \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\), endowed with the \(U(\mathfrak{r}^{\prime})\)-module structure given by
\[\forall u\in U(\mathfrak{r}^{\prime}),\,\forall\varphi\in \operatorname{Hom}(\widetilde{V^{\prime}}(\lambda)^{*},\,\widetilde{V^{{}^{ \prime\prime}}}(\lambda)^{*}),\,\forall\xi\in\widetilde{V^{\prime}}(\lambda)^ {*},\\ (u.\varphi)(\xi)=u_{2}.\varphi(u_{1}^{\top}.\xi) \tag{48}\]
where \(\Delta(u)=u_{1}\otimes u_{2}\). Then (see for instance [24, A.2.16]) the morphism
\[\Phi:\;\xi\otimes v^{\prime}\in\widetilde{V^{\prime\prime}}(\lambda)^{*} \otimes\widetilde{V^{\prime}}(\lambda)\mapsto(\xi^{\prime}\in\widetilde{V^{ \prime}}(\lambda)^{*}\mapsto\xi^{\prime}(v^{\prime})\xi) \tag{49}\]
is an isomorphism of \(U(\mathfrak{r}^{\prime})\)-modules between \(\widetilde{V^{{}^{\prime\prime}}}(\lambda)^{*}\otimes\widetilde{V^{\prime}}(\lambda)\), endowed with the diagonal action of \(U(\mathfrak{r}^{\prime})\) given by equation (43) and \(\operatorname{Hom}(\widetilde{V^{\prime}}(\lambda)^{*},\,\widetilde{V^{{}^{ \prime\prime}}}(\lambda)^{*})\), endowed with the action given by equation (48).
Denote by \(\operatorname{Hom}_{U(\mathfrak{r}^{\prime})}(\widetilde{V^{\prime}}(\lambda)^ {*},\,\widetilde{V^{{}^{\prime\prime}}}(\lambda)^{*})\) the set of all \(U(\mathfrak{r}^{\prime})\)-morphisms between \(\widetilde{V^{\prime}}(\lambda)^{*}\) and \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\) and by \(\big{(}\widetilde{V^{{}^{\prime\prime}}}(\lambda)^{*}\otimes\widetilde{V^{ \prime}}(\lambda)\big{)}^{U(\mathfrak{r}^{\prime})}\) the set of elements in the tensor product \(\widetilde{V^{{}^{\prime\prime}}}(\lambda)^{*}\otimes\widetilde{V^{\prime}}(\lambda)\) which are invariant under the diagonal action of \(U(\mathfrak{r}^{\prime})\) given by equation (43). Then we have
\[\Phi(\big{(}\widetilde{V^{{}^{\prime\prime}}}(\lambda)^{*}\otimes\widetilde{V^ {\prime}}(\lambda)\big{)}^{U(\mathfrak{r}^{\prime})})=\operatorname{Hom}_{U( \mathfrak{r}^{\prime})}(\widetilde{V^{\prime}}(\lambda)^{*},\,\widetilde{V^{{}^ {\prime\prime}}}(\lambda)^{*}). \tag{50}\]
Moreover the \(U(\mathfrak{r}^{\prime})\)-modules \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\) and \(\widetilde{V^{\prime}}(\lambda)\) (and also \(\widetilde{V^{\prime}}(\lambda)^{*}\)) are irreducible. Then by Schur's lemma (see for instance [42, Chap. 3, § 3, 1]),
\[\dim\operatorname{Hom}_{U(\mathfrak{r}^{\prime})}(\widetilde{V^{\prime}}( \lambda)^{*},\,\widetilde{V^{{}^{\prime\prime}}}(\lambda)^{*})\leq 1 \tag{51}\]
with equality if and only if the irreducible \(U(\mathfrak{r}^{\prime})\)-modules \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\) and \(\widetilde{V^{\prime}}(\lambda)^{*}\) are isomorphic, that is, if and only if
\[w^{\prime}_{0}\lambda-w_{0}\lambda=\sum_{\alpha\in\pi\setminus\pi^{\prime}}m_{ \alpha}\varpi_{\alpha}\text{ with }m_{\alpha}\in\mathbb{N},\,\forall\alpha\in\pi \setminus\pi^{\prime} \tag{52}\]
or equivalently if and only if \(\lambda\) satisfies equation (46), that is, if and only if \(\lambda\in\mathscr{D}\).
Indeed we have that \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\simeq\widetilde{V^{\prime}}(-w_{0}\lambda)\) as left \(U(\mathfrak{r})\)-modules by what we already said in the proof of Lemma 5.2. Similarly since \(\widetilde{V^{\prime}}(\lambda)=U(\mathfrak{n}_{\pi^{\prime}}^{-}).\tilde{v}_ {\lambda}=U(\mathfrak{n}_{\pi^{\prime}}).\tilde{v}_{w^{\prime}_{0}\lambda}\) where \(\tilde{v}_{w^{\prime}_{0}\lambda}\) is a chosen nonzero weight vector in \(\widetilde{V^{\prime}}(\lambda)\) of weight \(w^{\prime}_{0}\lambda\), we have that \(\widetilde{V^{\prime}}(\lambda)^{*}\simeq\widetilde{V^{\prime}}(-w^{\prime}_ {0}\lambda)\) as left \(U(\mathfrak{r})\)-modules. Then the irreducible \(U(\mathfrak{r}^{\prime})\)-modules \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\) and \(\widetilde{V^{\prime}}(\lambda)^{*}\) are isomorphic if and only if \((-w_{0}\lambda)^{\prime}=(-w^{\prime}_{0}\lambda)^{\prime}\) where recall that the superscript "prime" denotes the projection in \(P(\pi^{\prime})\) of an element in \(P(\pi)\) with respect to the decomposition (1).
By proposition 5.5 one has that
\[\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime})}=\Phi_{ \mathfrak{r}}^{\lambda}\Big{(}\big{(}\widetilde{V^{\prime\prime}}(\lambda)^{* }\otimes\widetilde{V^{\prime}}(\lambda)\big{)}^{U(\mathfrak{r}^{\prime})} \Big{)}. \tag{53}\]
Then by equations (50) and (51) we have that \(\dim\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime})}\leq 1\) with equality if and only if \(\lambda\in\mathscr{D}\). For all \(\lambda\in\mathscr{D}\) set \(c_{\lambda}\in\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime}) }\setminus\{0\}\) so that \(\widetilde{C}_{\mathfrak{r}}(\lambda)^{U(\mathfrak{r}^{\prime})}=\Bbbk c_{\lambda}\). By equation (45) we also have
\[\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}=\bigoplus_{\lambda\in \mathscr{D}}\Bbbk c_{\lambda}. \tag{54}\]
This completes the proof.
Let \(\lambda\in\mathscr{D}\). Choose \(\varphi_{\lambda}\in\operatorname{Hom}_{U(\mathfrak{r}^{\prime})}(\widetilde{ V^{\prime}}(\lambda)^{*},\,\widetilde{V^{\prime\prime}}(\lambda)^{*})\setminus\{0\}\) and denote by \(U(\mathfrak{r}^{\prime})_{+}\) the kernel of the coidentity in the enveloping algebra \(U(\mathfrak{r}^{\prime})\). By [24, 7.1.16] we have that
\[\Phi^{-1}:\,\operatorname{Hom}(\widetilde{V^{\prime}}(\lambda)^{*},\, \widetilde{V^{\prime\prime}}(\lambda)^{*})\overset{\sim}{\longrightarrow}U( \mathfrak{r}^{\prime})_{+}.(\tilde{\xi}_{w_{0}\lambda}\otimes\tilde{v}_{w^{ \prime}_{0}\lambda})\oplus\Bbbk\Phi^{-1}(\varphi_{\lambda}). \tag{55}\]
It follows that we have, up to a nonzero scalar
\[(\Phi_{\mathfrak{r}}^{\lambda})^{-1}(c_{\lambda})=\tilde{\xi}_{w_{0}\lambda} \otimes\tilde{v}_{w^{\prime}_{0}\lambda}+\sum_{i\in I}u_{i}^{-}.\tilde{\xi}_{ w_{0}\lambda}\otimes u_{i}^{+}.\tilde{v}_{w^{\prime}_{0}\lambda} \tag{56}\]
where \(u_{i}^{\pm}\in\mathfrak{n}_{\pi^{\prime}}^{\pm}U(\mathfrak{n}_{\pi^{\prime}}^{ \pm})\) for all \(i\in I\), \(I\) a finite set, since moreover \(\tilde{\xi}_{w_{0}\lambda}\otimes\tilde{v}_{w^{\prime}_{0}\lambda}\) is a cyclic vector for the \(U(\mathfrak{r}^{\prime})\)-module \(\widetilde{V^{\prime\prime}}(\lambda)^{*}\otimes\widetilde{V^{\prime}}(\lambda)\) endowed with the diagonal action. Hence the \(\mathfrak{h}\)-weight of \(c_{\lambda}\) is equal to
\[w^{\prime}_{0}\lambda-w_{0}\lambda. \tag{57}\]
By equation (52) this weight belongs to \(P^{+}(\pi)\) and by equation (46) it is orthogonal to \(\pi^{\prime}\).
Let \(i\) and \(j\) denote the permutations of \(\pi\) defined below.
\[\forall\alpha\in\pi,\,j(\alpha)=-w_{0}(\alpha) \tag{58}\]
\[\forall\alpha\in\pi^{\prime},\,i(\alpha)=-w_{0}^{\prime}(\alpha) \tag{59}\]
\[\forall\alpha\in\pi\setminus\pi^{\prime},\,\begin{cases}i(\alpha)=j(\alpha)& \text{if }j(\alpha)\not\in\pi^{\prime}\\ i(\alpha)=j(ij)^{r_{\alpha}}(\alpha)&\text{otherwise}\end{cases} \tag{60}\]
where \(r_{\alpha}\) is the smallest integer such that \(j(ij)^{r_{\alpha}}(\alpha)\not\in\pi^{\prime}\). Let \(E(\pi^{\prime})\) be the set of \(\langle ij\rangle\)-orbits in \(\pi\), where \(\langle ij\rangle\) denotes the subgroup generated by the composition map \(ij\).
For instance, if \(\mathfrak{p}\) is a maximal parabolic subalgebra of \(\mathfrak{g}\), that is, if \(\pi\setminus\pi^{\prime}=\{\alpha\}\), then \(i(\alpha)=\alpha\) and the \(\langle ij\rangle\)-orbit of \(\alpha\) is \(\Gamma_{\alpha}=\{(ji)^{s}j(\alpha),\;0\leq s\leq r_{\alpha}\}\).
Recall [14, Thm. 1] (see also [10, 4.1]):
**Theorem**.: _The set \(\mathscr{D}\) is a free additive semigroup generated by the \(\mathbb{Z}\)-linearly independent elements \(d_{\Gamma}=\sum_{\gamma\in\Gamma}\varpi_{\gamma}\), \(\Gamma\in E(\pi^{\prime})\)._
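For instance, in the extreme case where \(\pi^{\prime}=\emptyset\), that is, when \(\mathfrak{p}\) is a Borel subalgebra of \(\mathfrak{g}\), condition (46) is vacuous and \(\mathscr{D}=P^{+}(\pi)\). This agrees with the above theorem: in this case \(i=j\) by equation (60) and, since \(j\) is an involution, \(ij\) is the identity of \(\pi\), so that the \(\langle ij\rangle\)-orbits are the singletons \(\{\gamma\}\), \(\gamma\in\pi\), and the generators \(d_{\Gamma}\) are simply the fundamental weights \(\varpi_{\gamma}\).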
### A filtration on the algebra \(\widetilde{C}_{\mathfrak{r}}\)
Assume that \(\pi=\{\alpha_{1},\,\ldots,\,\alpha_{n}\}\). Then for all \(\lambda\in P^{+}(\pi)\), there exist \(k_{i}\in\mathbb{Q}_{+}\) for all \(i\), \(1\leq i\leq n\), such that \(\lambda=\sum_{i=1}^{n}k_{i}\alpha_{i}\). Set \(deg(\lambda)=2\sum_{i=1}^{n}k_{i}\). By [24, 7.1.25], \(deg(\lambda)\in\mathbb{N}\). For all \(m\in\mathbb{N}\), we set \(\mathscr{F}^{\prime}_{m}(\widetilde{C}_{\mathfrak{r}})=\bigoplus_{\lambda\in P ^{+}(\pi)|deg(\lambda)\leq m}\widetilde{C}_{\mathfrak{r}}(\lambda)\), which is a left \(U(\mathfrak{r})\)-submodule of \(\widetilde{C}_{\mathfrak{r}}\) for coadjoint action. Then \((\mathscr{F}^{\prime}_{m}(\widetilde{C}_{\mathfrak{r}}))_{m\in\mathbb{N}}\) is an increasing filtration \(\mathscr{F}^{\prime}\) of the algebra \(\widetilde{C}_{\mathfrak{r}}\) since for all \(\lambda\), \(\mu\in P^{+}(\pi)\),
\[\widetilde{C}_{\mathfrak{r}}(\lambda)\widetilde{C}_{\mathfrak{r}}(\mu)\subset \bigoplus_{\nu\in\mathbb{N}\pi^{\prime}|\lambda+\mu-\nu\in P^{+}(\pi)} \widetilde{C}_{\mathfrak{r}}(\lambda+\mu-\nu) \tag{61}\]
by the proof of lemma 5.2. Then denote by \(gr_{\mathscr{F}^{\prime}}(\widetilde{C}_{\mathfrak{r}})\) the associated graded algebra and for all \(c\in\mathscr{F}^{\prime}_{m}(\widetilde{C}_{\mathfrak{r}})\), denote by \(gr_{m,\,\mathscr{F}^{\prime}}(c)\) its canonical image in \(gr_{\mathscr{F}^{\prime}}(\widetilde{C}_{\mathfrak{r}})\). Recall the notation in subsection 6.1.
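Let us make explicit why equation (61) is compatible with the filtration \(\mathscr{F}^{\prime}\): since the map \(deg\) extends linearly to \(\mathbb{Q}\pi\) and \(deg(\nu)\in 2\mathbb{N}\) for all \(\nu\in\mathbb{N}\pi^{\prime}\), one has, for \(\nu\in\mathbb{N}\pi^{\prime}\) with \(\lambda+\mu-\nu\in P^{+}(\pi)\),
\[deg(\lambda+\mu-\nu)=deg(\lambda)+deg(\mu)-deg(\nu)\leq deg(\lambda)+deg(\mu),\]
which together with equation (61) gives \(\mathscr{F}^{\prime}_{m}(\widetilde{C}_{\mathfrak{r}})\mathscr{F}^{\prime}_{m^{\prime}}(\widetilde{C}_{\mathfrak{r}})\subset\mathscr{F}^{\prime}_{m+m^{\prime}}(\widetilde{C}_{\mathfrak{r}})\) for all \(m,\,m^{\prime}\in\mathbb{N}\), as asserted above.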
**Lemma**.: _Let \(\lambda\), \(\mu\in\mathscr{D}\). Set \(m=deg(\lambda)\) and \(m^{\prime}=deg(\mu)\)._
_Then \(gr_{m,\,\mathscr{F}^{\prime}}(c_{\lambda})gr_{m^{\prime},\,\mathscr{F}^{ \prime}}(c_{\mu})\) is a nonzero multiple of \(gr_{m+m^{\prime},\,\mathscr{F}^{\prime}}(c_{\lambda+\mu})\)._
Proof.: By definition of the multiplication in the graded algebra \(gr_{\mathscr{F}^{\prime}}(\widetilde{C}_{\mathfrak{r}})\), one has that
\[gr_{m,\,\mathscr{F}^{\prime}}(c_{\lambda})gr_{m^{\prime},\,\mathscr{F}^{ \prime}}(c_{\mu})=gr_{m+m^{\prime},\,\mathscr{F}^{\prime}}(c_{\lambda}c_{\mu}).\]
Now by equation (56), in the product \(c_{\lambda}c_{\mu}\) there appears, up to a nonzero scalar, the term
\[c_{\tilde{\xi}_{w_{0}\lambda}\otimes\tilde{\xi}_{w_{0}\mu},\,\tilde{v}_{w_{0 }^{\prime}\lambda}\otimes\tilde{v}_{w_{0}^{\prime}\mu}}=c_{\tilde{\xi}_{w_{0}( \lambda+\mu)},\,\tilde{v}_{w_{0}^{\prime}(\lambda+\mu)}}\in\widetilde{C}_{ \mathfrak{r}}(\lambda+\mu).\]
Indeed this term cannot be cancelled by the other terms, by lemmas 5.3 and 5.4.
Since moreover \(c_{\lambda}c_{\mu}\in\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\), equations (54), (56) and (61) imply that
\[\begin{array}{rcl}gr_{m+m^{\prime},\,\mathscr{F}^{\prime}}(c_{\lambda}c_{\mu} )&=&\sum_{\nu\in\mathscr{D},\,deg(\nu)=m+m^{\prime},\,\nu\in\lambda+\mu- \mathbb{N}\pi^{\prime}}gr_{m+m^{\prime},\,\mathscr{F}^{\prime}}(c_{\nu})\\ &=&gr_{m+m^{\prime},\,\mathscr{F}^{\prime}}(c_{\lambda+\mu})\end{array}\]
up to multiplication by a nonzero scalar.
Now we can conclude the following.
**Proposition**.: _The algebra of invariants \(\widetilde{C}^{U(\mathfrak{r}^{\prime})}_{\mathfrak{r}}\) is a polynomial algebra over \(\Bbbk\), whose number of algebraically independent generators is equal to the cardinality of the set \(E(\pi^{\prime})\)._
Proof.: It follows as in the proof of [14, Thm. 1] (see also [14, Prop. 3.1]). Let \(\lambda_{i},\,i\in I\), be a set of \(\mathbb{Z}\)-linearly independent generators of \(\mathscr{D}\) and set \(m_{i}=deg(\lambda_{i})\) for all \(i\in I\) (one has that \(|I|=|E(\pi^{\prime})|\) by Theorem 6.1). Denote by \(gr_{\mathscr{F}^{\prime}}(\widetilde{C}^{U(\mathfrak{r}^{\prime})}_{\mathfrak{r}})\) the graded algebra of the algebra \(\widetilde{C}^{U(\mathfrak{r}^{\prime})}_{\mathfrak{r}}\) associated with the induced filtration. Note that the above lemma also holds in this graded algebra. Then equation (54) and the above lemma imply that \(gr_{m_{i},\,\mathscr{F}^{\prime}}(c_{\lambda_{i}})\), \(i\in I\), are \(\Bbbk\)-algebraically independent and generate \(gr_{\mathscr{F}^{\prime}}(\widetilde{C}^{U(\mathfrak{r}^{\prime})}_{\mathfrak{r}})\). Hence \(gr_{\mathscr{F}^{\prime}}(\widetilde{C}^{U(\mathfrak{r}^{\prime})}_{\mathfrak{r}})\) is a polynomial algebra over \(\Bbbk\) in \(|E(\pi^{\prime})|\) generators and it follows (see [5, Chap. III, § 2, n\({}^{\circ}\) 9, Prop. 10]) that the algebra \(\widetilde{C}^{U(\mathfrak{r}^{\prime})}_{\mathfrak{r}}\) is also a polynomial algebra over \(\Bbbk\) in \(|E(\pi^{\prime})|\) generators \(c_{\lambda_{i}},\,i\in I\), whose \(\mathfrak{h}\)-weights are equal to \(\delta_{i}=w^{\prime}_{0}\lambda_{i}-w_{0}\lambda_{i}\) by equation (57).
## 7. Generalized Kostant filtration and morphism.
### Generalized Kostant filtration
In [15, 6.1] we have defined what we called the Kostant filtration (denoted by \(\mathscr{F}_{K}\)) on the Hopf dual of the enveloping algebra of the simple Lie algebra \(\mathfrak{g}\). Here we will consider what we call _the generalized Kostant filtration_ on the dual algebra \(U(\tilde{\mathfrak{p}}^{-})^{*}\) of \(U(\tilde{\mathfrak{p}}^{-})\). More precisely, we set
\[\forall k\in\mathbb{N},\,\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*}) =\{f\in U(\tilde{\mathfrak{p}}^{-})^{*}\mid f(U_{k-1}(\tilde{\mathfrak{p}}^{- }))=0\} \tag{62}\]
where \((U_{k}(\tilde{\mathfrak{p}}^{-}))_{k\in\mathbb{N}\cup\{-1\}}\) is the canonical filtration on \(U(\tilde{\mathfrak{p}}^{-})\), with \(U_{-1}(\tilde{\mathfrak{p}}^{-})=\{0\}\).
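For instance, since \(U_{-1}(\tilde{\mathfrak{p}}^{-})=\{0\}\) and \(U_{0}(\tilde{\mathfrak{p}}^{-})=\Bbbk\), one has
\[\mathscr{F}_{K}^{0}(U(\tilde{\mathfrak{p}}^{-})^{*})=U(\tilde{\mathfrak{p}}^{-})^{*}\quad\text{and}\quad\mathscr{F}_{K}^{1}(U(\tilde{\mathfrak{p}}^{-})^{*})=\{f\in U(\tilde{\mathfrak{p}}^{-})^{*}\mid f(1)=0\},\]
and more generally \(\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\) is simply the annihilator of \(U_{k-1}(\tilde{\mathfrak{p}}^{-})\) in \(U(\tilde{\mathfrak{p}}^{-})^{*}\).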
**Lemma**.: _The generalized Kostant filtration \((\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*}))_{k\in\mathbb{N}}\) is a decreasing, exhaustive and separated filtration on the algebra \(U(\tilde{\mathfrak{p}}^{-})^{*}\). Moreover this filtration is invariant by the left action of \(A\) defined by equation (39)._
Proof.: The first assertions are obvious.
Let us show the invariance by the left action of \(A\). Let \(a\in A\), \(k\in\mathbb{N}\) and \(f\in\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\).
If \(a=z\in\mathfrak{r}\), then \(ad^{**}z=ad\,z\) by equation (26) and for all \(u\in U_{k-1}(\tilde{\mathfrak{p}}^{-})\), \(ad\,z(u)\in U_{k-1}(\tilde{\mathfrak{p}}^{-})\). Then \(z.f\in\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\).
Now assume that \(a=x\in\mathfrak{m}\), and let \(u\in U_{k-1}(\tilde{\mathfrak{p}}^{-})\). Recall equation (21). Then \(u=\sum_{j=0}^{k-1}s_{j}u^{\prime}_{j}\) with \(s_{j}\in S_{j}(\mathfrak{m}^{-})\) and \(u^{\prime}_{j}\in U_{k-1-j}(\mathfrak{r})\), for all \(0\leq j\leq k-1\).
Then by equation (25) one has :
\[ad^{**}x(u)=\sum_{j=0}^{k-1}\tilde{\theta}(ad^{*}x(s_{j}))u^{\prime}_{j}\in \sum_{j=0}^{k-1}S_{j-1}(\mathfrak{m}^{-})U_{k-j}(\mathfrak{r})\subset U_{k-1}( \tilde{\mathfrak{p}}^{-})\]
by equation (23).
It follows that \(x.f(u)=0\), which proves the lemma.
### The graded algebra associated with the generalized Kostant filtration
Set
\[gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})=\bigoplus_{k\in\mathbb{N}}gr_{K}^{k}(U( \tilde{\mathfrak{p}}^{-})^{*}) \tag{63}\]
where, for all \(k\in\mathbb{N}\),
\[gr_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})=\mathscr{F}_{K}^{k}(U(\tilde{ \mathfrak{p}}^{-})^{*})/\mathscr{F}_{K}^{k+1}(U(\tilde{\mathfrak{p}}^{-})^{*}). \tag{64}\]
Then \(gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\) is the graded algebra associated with the generalized Kostant filtration \(\mathscr{F}_{K}\) on the algebra \(U(\tilde{\mathfrak{p}}^{-})^{*}\). For all \(f\in\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\), one denotes by \(gr_{K}^{k}(f)\) its canonical image in \(gr_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\).
By lemma 7.1 the dual representation of \(A\) on \(U(\tilde{\mathfrak{p}}^{-})^{*}\) (given by equation (39)) induces a left action on \(gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\) defined by
\[\forall a\in A,\forall f\in\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{* }),\,a.gr_{K}^{k}(f)=gr_{K}^{k}(a.f). \tag{65}\]
**Proposition**.: _Let \(x\in\mathfrak{m}\) and \(f\in\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\cap\widetilde{C}_{ \mathfrak{r}}\), \(k\in\mathbb{N}\). Then \(x.f\in\mathscr{F}_{K}^{k+1}(U(\tilde{\mathfrak{p}}^{-})^{*})\) and therefore_
( \[\diamond\] ) \[x.gr_{K}^{k}(f)=0\]
_where recall \(gr_{K}^{k}(g)=g+\mathscr{F}_{K}^{k+1}(U(\tilde{\mathfrak{p}}^{-})^{*})\), for all \(g\in\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\)._
Proof.: Take
\[f=\sum_{\lambda\in\Lambda}\sum_{i\in I_{\lambda}}c^{\lambda}_{\tilde{\xi}_{w_{ 0}\lambda}.u^{\prime}_{i},u^{\prime\prime}_{i}.\tilde{v}_{\lambda}}\in \mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\cap\widetilde{C}_{ \mathfrak{r}}\]
with \(\Lambda\subset P^{+}(\pi)\) a finite set and for all \(\lambda\in\Lambda\), \(I_{\lambda}\) a finite set, \(u^{\prime}_{i}\), \(u^{\prime\prime}_{i}\in U(\mathfrak{n}^{-}_{\pi^{\prime}})\), for all \(i\in I_{\lambda}\). Moreover one may assume, if \(f\neq 0\), that \(u^{\prime}_{i}\), \(u^{\prime\prime}_{i}\), for all \(i\in I_{\lambda}\), are nonzero weight vectors. We need the lemma below.
**Lemma**.: _Let \(k\in\mathbb{N}\) and \(0\leq j\leq k\). With the above hypotheses, we have that_
\[\forall u\in U_{j-1}(\mathfrak{g}),\,\forall u^{\prime}\in U_{k-j}(\mathfrak{ r}),\,\sum_{\lambda\in\Lambda}\sum_{i\in I_{\lambda}}\xi_{w_{0}\lambda}(u^{\prime}_{i}uu^{ \prime}u^{\prime\prime}_{i}.v_{\lambda})=0. \tag{66}\]
Proof.: The lemma is obvious for \(j=0\) since \(U_{-1}(\mathfrak{g})=\{0\}\). Assume that \(j\in\mathbb{N}^{*}\). Take \(u\in U_{j-1}(\mathfrak{g})\) and \(u^{\prime}\in U_{k-j}(\mathfrak{r})\). Since \(\mathfrak{m}.V^{\prime}(\lambda)=\{0\}\) and since \(U_{j-1}(\mathfrak{g})=U_{j-1}(\mathfrak{p}^{-})\oplus U_{j-2}(\mathfrak{g}) \mathfrak{m}\), one may assume that \(u\in U_{j-1}(\mathfrak{p}^{-})\). One also may assume that \(u\) and \(u^{\prime}\) are nonzero weight vectors.
Since \(\tilde{\xi}_{w_{0}\lambda}\) vanishes on the weight vectors \(\beta_{\lambda}^{-1}(u^{\prime}_{i}uu^{\prime}u^{\prime\prime}_{i}.v_{\lambda})\) which are not of weight \(w_{0}\lambda\) and by equation (38), one has that
\[\sum_{\lambda\in\Lambda}\sum_{i\in I_{\lambda}}\xi_{w_{0}\lambda} (u^{\prime}_{i}uu^{\prime}u^{\prime\prime}_{i}.v_{\lambda}) =\sum_{\lambda\in\Lambda}\sum_{i\in I_{\lambda}}\tilde{\xi}_{w_{ 0}\lambda}(\beta_{\lambda}^{-1}(u^{\prime}_{i}uu^{\prime}u^{\prime\prime}_{i}. v_{\lambda}))\] \[=\sum_{\lambda\in\Lambda}\sum_{i\in I_{\lambda}^{\prime}}\tilde{ \xi}_{w_{0}\lambda}(gr_{k_{0}}(u^{\prime}_{i}uu^{\prime}u^{\prime\prime}_{i}. v_{\lambda}))\]
where for all \(\lambda\in\Lambda\), \(I^{\prime}_{\lambda}\subset I_{\lambda}\) is the set of indices \(i\in I_{\lambda}\) such that \(\beta_{\lambda}^{-1}(u^{\prime}_{i}uu^{\prime}u^{\prime\prime}_{i}.v_{\lambda})\) is a vector of weight \(w_{0}\lambda\).
Now write \(u=\sum_{t=0}^{j-1}u_{t}v_{t}\) with \(u_{t}=\theta(s_{t})\in U^{t}(\mathfrak{m}^{-})=\theta(S_{t}(\mathfrak{m}^{-}))\), \(s_{t}\in S_{t}(\mathfrak{m}^{-})\) and \(v_{t}\in U_{j-1-t}(\mathfrak{r})\) for all \(0\leq t\leq j-1\).
Then
\[gr_{k_{0}}(u^{\prime}_{i}uu^{\prime}u^{\prime\prime}_{i}.v_{\lambda})=u^{ \prime}_{i}\Bigl{(}\sum_{t=0}^{j-1}s_{t}v_{t}\Bigr{)}u^{\prime}u^{\prime\prime }_{i}.\tilde{v}_{\lambda}\]
by equation (8).
But \(\sum_{t=0}^{j-1}s_{t}v_{t}\in U_{j-1}(\tilde{\mathfrak{p}}^{-})\) and recall that \(u^{\prime}\in U_{k-j}(\mathfrak{r})\). Then
\[\Bigl{(}\sum_{t=0}^{j-1}s_{t}v_{t}\Bigr{)}u^{\prime}\in U_{k-1}(\tilde{ \mathfrak{p}}^{-})\]
and we obtain the required equation (66) since \(f(U_{k-1}(\tilde{\mathfrak{p}}^{-}))=0\).
Fix a nonzero weight vector \(x\in\mathfrak{m}\) and, for all \(0\leq j\leq k\), take weight vectors \(s\in S_{j}(\mathfrak{m}^{-})\) and \(u^{\prime}\in U_{k-j}(\mathfrak{r})\). If \(j\geq 1\), one may assume that \(s=y_{1}\cdots y_{j}\in S_{j}(\mathfrak{m}^{-})\) with \(y_{t}\in\mathfrak{m}^{-}\) a weight vector for all \(1\leq t\leq j\).
Recall that
\[ad^{*}x(s)=\sum_{1\leq t\leq j|[x,\,y_{t}]\in\mathfrak{r}}y_{1}\cdots y_{t-1}[ x,\,y_{t}]y_{t+1}\cdots y_{j}\in S_{j}(\mathfrak{p}^{-}). \tag{67}\]
Set
\[ad_{\mathfrak{m}^{-}}x(s)=\sum_{1\leq t\leq j|[x,\,y_{t}]\in\mathfrak{m}^{-}} y_{1}\cdots y_{t-1}[x,\,y_{t}]y_{t+1}\cdots y_{j}\in S_{j}(\mathfrak{m}^{-}) \tag{68}\]
and
\[ad_{\mathfrak{m}}x(s)=\sum_{1\leq t\leq j|[x,\,y_{t}]\in\mathfrak{m}}y_{1} \cdots y_{t-1}[x,\,y_{t}]y_{t+1}\cdots y_{j}\in S_{j}(\mathfrak{g}). \tag{69}\]
By equations (39) and (25), one has that
\[-x.f(su^{\prime})=f(ad^{**}x(su^{\prime}))=f(\tilde{\theta}(ad^{*}x(s))u^{ \prime}).\]
Then if \(j=0\) one has obviously that \(x.f(su^{\prime})=0\), by equation (27).
From now on, assume that \(j\geq 1\). By the above and by what we said in the proof of the previous lemma we have that
\[-x.f(su^{\prime})=\sum_{\lambda\in\Lambda}\sum_{i\in I^{\prime}_{\lambda}} \tilde{\xi}_{w_{0}\lambda}\Bigl{(}u^{\prime}_{i}\tilde{\theta}(ad^{*}x(s))u^{ \prime}u^{\prime\prime}_{i}.\tilde{v}_{\lambda}\Bigr{)} \tag{70}\]
where for all \(\lambda\in\Lambda\), \(I^{\prime}_{\lambda}=\{i\in I_{\lambda}\mid\exists c_{i}\in\Bbbk^{*};\ u^{ \prime}_{i}\tilde{\theta}(ad^{*}x(s))u^{\prime}u^{\prime\prime}_{i}.\tilde{v} _{\lambda}=c_{i}\tilde{v}_{w_{0}\lambda}\}\).
Fix \(\lambda\in\Lambda\) and \(i\in I^{\prime}_{\lambda}\). One has that \(u^{\prime}_{i}\tilde{\theta}(ad^{*}x(s))u^{\prime}u^{\prime\prime}_{i}.\tilde {v}_{\lambda}\in gr_{j-1}(V(\lambda))\) by equations (23) and (10), and by equation (38) we have that \(j-1=k_{0}\).
Consider \(\theta:S(\mathfrak{g})\longrightarrow U(\mathfrak{g})\) the symmetrization (which is an isomorphism of \(ad\,U(\mathfrak{g})\)-modules) and the adjoint action (denoted by \(ad\)) of \(\mathfrak{m}\) on \(S(\mathfrak{g})\)
which extends uniquely by derivation the adjoint action of \(\mathfrak{m}\) on \(\mathfrak{g}\) given by Lie bracket. Observe that
\[ad\,x(s)=ad^{*}x(s)+ad_{\mathfrak{m}^{-}}x(s)+ad_{\mathfrak{m}}x(s). \tag{71}\]
Moreover for all \(1\leq t\leq j\), one has that
\[\theta(y_{1}\cdots y_{t-1}[x,\,y_{t}]y_{t+1}\cdots y_{j})= \tag{72}\] \[\theta(y_{1}\cdots y_{t-1}y_{t+1}\cdots y_{j})[x,\,y_{t}]\mod U_{ j-1}(\mathfrak{g}).\]
By equations (70), (72) and the previous lemma, it follows that
\[-x.f(su^{\prime})=\sum_{\lambda\in\Lambda}\sum_{i\in I^{\prime}_{\lambda}}\xi _{w_{0}\lambda}\Big{(}u^{\prime}_{i}\theta(ad^{*}x(s))u^{\prime}u^{\prime \prime}_{i}.v_{\lambda}\Big{)} \tag{73}\]
By equation (72) and the previous lemma, and since \(\mathfrak{m}.V^{\prime}(\lambda)=\{0\}\) one has that
\[\sum_{\lambda\in\Lambda}\sum_{i\in I^{\prime}_{\lambda}}\xi_{w_{0}\lambda}(u^ {\prime}_{i}\theta(ad_{\mathfrak{m}}x(s))u^{\prime}u^{\prime\prime}_{i}.v_{ \lambda})=0. \tag{74}\]
We claim that
\[-x.f(su^{\prime})=\sum_{\lambda\in\Lambda}\sum_{i\in I^{\prime}_{\lambda}}\xi _{w_{0}\lambda}\Big{(}u^{\prime}_{i}\theta(ad\,x(s))u^{\prime}u^{\prime\prime} _{i}.v_{\lambda}\Big{)}. \tag{75}\]
By equation (71) it remains to show that
\[\sum_{\lambda\in\Lambda}\sum_{i\in I^{\prime}_{\lambda}}\xi_{w_{0}\lambda}(u^ {\prime}_{i}\theta(ad_{\mathfrak{m}^{-}}x(s))u^{\prime}u^{\prime\prime}_{i}.v_ {\lambda})=0. \tag{76}\]
Fix an index \(t\) with \([x,\,y_{t}]\in\mathfrak{m}^{-}\). Take \(\lambda\in\Lambda\) and \(i\in I^{\prime}_{\lambda}\) (if such exist) such that
\[\xi_{w_{0}\lambda}(u^{\prime}_{i}\theta(y_{1}\cdots y_{t-1}[x,\,y_{t}]y_{t+1} \cdots y_{j})u^{\prime}u^{\prime\prime}_{i}.v_{\lambda})\neq 0.\]
Since every weight vector on which \(\xi_{w_{0}\lambda}\) does not vanish is proportional to \(v_{w_{0}\lambda}\), one has that \(u^{\prime}_{i}\theta(y_{1}\cdots y_{t-1}[x,\,y_{t}]y_{t+1}\cdots y_{j})u^{\prime}u^{\prime\prime}_{i}.v_{\lambda}\) is proportional to \(v_{w_{0}\lambda}\).
On the other hand, one knows that \(v_{w_{0}\lambda}\in\mathscr{F}^{k_{0}}(V(\lambda))\subset U^{k_{0}}(\mathfrak{ m}^{-}).V^{\prime}(\lambda)\) by \((iv)\) of Lemma 3.2 and that \(k_{0}=j-1\) (otherwise \(I^{\prime}_{\lambda}=\emptyset\)). Then by the irreducibility of the \(U(\mathfrak{r})\)-module \(V^{\prime}(\lambda)\), there exists a nonzero weight vector \(u^{\prime}_{i,\,t}\in U^{j-1}(\mathfrak{m}^{-})U(\mathfrak{n}^{-}_{\pi^{\prime }})U(\mathfrak{n}_{\pi^{\prime}})\) such that
\[(u^{\prime}_{i}\theta(y_{1}\cdots y_{t-1}[x,\,y_{t}]y_{t+1}\cdots y_{j})-u^{ \prime}_{i,\,t}).(u^{\prime}u^{\prime\prime}_{i}.v_{\lambda})=0. \tag{77}\]
Set \(u_{t}=\theta(y_{1}\cdots y_{t-1}[x,\,y_{t}]y_{t+1}\cdots y_{j})\). Then \(u_{t}\in U^{j}(\mathfrak{m}^{-})\) is such that
\[u^{\prime}_{i}u_{t}-u^{\prime}_{i,\,t}\in\operatorname{Ann}_{U(\mathfrak{m}^{- })U(\mathfrak{n}^{-}_{\pi^{\prime}})U(\mathfrak{n}_{\pi^{\prime}})}(u^{\prime} u^{\prime\prime}_{i}.v_{\lambda}) \tag{78}\]
For all \(\gamma\in\Delta^{+}\setminus\Delta^{+}_{\pi^{\prime}}\), denote by \(r_{\gamma,\,i}\) the smallest positive integer such that \(x^{r_{\gamma,\,i}}_{-\gamma}.(u^{\prime}u^{\prime\prime}_{i}.v_{\lambda})=0\). If we denote by \(\mu_{i}\) the weight of the vector \(u^{\prime}u^{\prime\prime}_{i}.v_{\lambda}\), we have that \(r_{\gamma,\,i}=\langle\gamma^{\vee},\,\mu_{i}\rangle+1\), since \(x_{\gamma}.(u^{\prime}u^{\prime\prime}_{i}.v_{\lambda})=0\).
Similarly for all \(\beta\in\Delta^{+}_{\pi^{\prime}}\) denote by \(r^{\pm}_{\beta,\,i}\) the smallest positive integer such that \(x^{r^{\pm}_{\beta,\,i}}_{\pm\beta}.(u^{\prime}u^{\prime\prime}_{i}.v_{\lambda})=0\) (see [22, 21]).
Then one has that
\[u^{\prime}_{i}u_{t}-u^{\prime}_{i,\,t} \in\sum_{\gamma\in\Delta^{+}\setminus\Delta^{+}_{\pi^{\prime}}}U( \mathfrak{m}^{-})U(\mathfrak{n}^{-}_{\pi^{\prime}})U(\mathfrak{n}_{\pi^{ \prime}})x^{r_{\gamma,\,i}}_{-\gamma}\] \[+\sum_{\beta\in\Delta^{+}_{\pi^{\prime}}}U(\mathfrak{m}^{-})U( \mathfrak{n}^{-}_{\pi^{\prime}})U(\mathfrak{n}_{\pi^{\prime}})x^{r^{\pm}_{ \beta,\,i}}_{\pm\beta}. \tag{79}\]
By the Poincaré–Birkhoff–Witt theorem ([9, 2.1.11]), setting \(\Delta^{+}\setminus\Delta^{+}_{\pi^{\prime}}=\{\gamma_{1},\,\dots,\,\gamma_{r}\}\), we have that
\[u_{t}=\sum_{\vec{\nu}}c_{\vec{\nu}}\,x^{\nu_{1}}_{-\gamma_{1}}\cdots x^{\nu_{ r}}_{-\gamma_{r}} \tag{80}\]
where the sum over the \(\vec{\nu}=(\nu_{1},\,\dots,\,\nu_{r})\in\mathbb{N}^{r}\) is finite and with, for all \(\vec{\nu}\), \(c_{\vec{\nu}}\in\Bbbk\).
One has \(U(\mathfrak{r})U^{j}(\mathfrak{m}^{-})=U^{j}(\mathfrak{m}^{-})U(\mathfrak{r})\) and \(U^{j}(\mathfrak{m}^{-})U(\mathfrak{r})\cap U^{j-1}(\mathfrak{m}^{-})U( \mathfrak{r})=\{0\}\).
Comparing equations (79) and (80) one deduces that there exists \(w_{\gamma,\,i}\in U(\mathfrak{m}^{-})\), for all \(\gamma\in\Delta^{+}\setminus\Delta^{+}_{\pi^{\prime}}\), and that there exists \(w_{\pm\beta,\,i}\in U^{j}(\mathfrak{m}^{-})U(\mathfrak{n}^{-}_{\pi^{\prime}}) U(\mathfrak{n}_{\pi^{\prime}})\), for all \(\beta\in\Delta^{+}_{\pi^{\prime}}\), such that
\[u^{\prime}_{i}u_{t}=\sum_{\gamma\in\Delta^{+}\setminus\Delta^{+}_{\pi^{\prime }}}u^{\prime}_{i}w_{\gamma,\,i}x^{r_{\gamma,\,i}}_{-\gamma}+\sum_{\beta\in \Delta^{+}_{\pi^{\prime}}}w_{\pm\beta,\,i}x^{r^{\pm}_{\beta,\,i}}_{\pm\beta}. \tag{81}\]
Then one has that
\[\sum_{\lambda\in\Lambda}\sum_{i\in I^{\prime}_{\lambda}}\xi_{w_{0}\lambda}(u^ {\prime}_{i}u_{t}u^{\prime}u^{\prime\prime}_{i}.v_{\lambda})=0 \tag{82}\]
and
\[\sum_{\lambda\in\Lambda}\sum_{i\in I^{\prime}_{\lambda}}\xi_{w_{0}\lambda}(u^{\prime}_{i}\theta(ad_{\mathfrak{m}^{-}}x(s))u^{\prime}u^{\prime\prime}_{i}.v_{\lambda})=\sum_{1\leq t\leq j|[x,\,y_{t}]\in\mathfrak{m}^{-}}\sum_{\lambda\in\Lambda}\sum_{i\in I^{\prime}_{\lambda}}\xi_{w_{0}\lambda}(u^{\prime}_{i}u_{t}u^{\prime}u^{\prime\prime}_{i}.v_{\lambda})=0\]
which is the required equation (76). Hence we obtain equation (75) and therefore, since \(\theta\) is a morphism of \(ad\,U(\mathfrak{g})\)-modules,
\[-x.f(su^{\prime}) =\sum_{\lambda\in\Lambda}\sum_{i\in I^{\prime}_{\lambda}}\xi_{w_{ 0}\lambda}\Big{(}u^{\prime}_{i}ad\,x(\theta(s))u^{\prime}u^{\prime\prime}_{i}.v_{\lambda}\Big{)}\] \[=\sum_{\lambda\in\Lambda}\sum_{i\in I^{\prime}_{\lambda}}\xi_{w_ {0}\lambda}\Big{(}u^{\prime}_{i}(x\theta(s)-\theta(s)x)u^{\prime}u^{\prime \prime}_{i}.v_{\lambda}\Big{)}\] \[=0\]
since moreover \(\mathfrak{m}.V^{\prime}(\lambda)=\{0\}\) and \(V^{\prime\prime}(\lambda)^{*}.\mathfrak{m}=\{0\}\).
### The dual representation of \(U(\tilde{\mathfrak{p}})\) in \(S(\mathfrak{p}^{-})^{*}\)
Recall subsection 4.2 and in particular the representation, denoted by \(ad^{*}\), of \(U(\tilde{\mathfrak{p}})\) in \(S(\mathfrak{p}^{-})\) (see lemma 4.2) and also in every \(S_{k}(\mathfrak{p}^{-})\) (\(k\in\mathbb{N}\)). We can then endow \(S_{k}(\mathfrak{p}^{-})^{*}\) with the dual representation of \(U(\tilde{\mathfrak{p}})\) given by
\[\forall u\in U(\tilde{\mathfrak{p}}),\,\forall f\in S_{k}(\mathfrak{p}^{-})^ {*},\ u.f=f\circ ad^{*}u^{\top}, \tag{83}\]
where \(u^{\top}\) denotes the image of \(u\) by the antipode defined similarly as in equation (15).
We have the following.
**Lemma**.: _Let \(k\in\mathbb{N}\). Then the \(U(\tilde{\mathfrak{p}})\)-module \(S_{k}(\mathfrak{p}^{-})^{*}\) is isomorphic to the \(U(\tilde{\mathfrak{p}})\)-module \(S_{k}(\tilde{\mathfrak{p}})=S_{k}(\mathfrak{p})\) when the latter is endowed with the adjoint action of \(\tilde{\mathfrak{p}}\) which extends by derivation the Lie bracket in \(\tilde{\mathfrak{p}}\)._
Proof.: For \(k=0\), the assertion is obvious. Recall (subsection 2.1) that we denote by \(K\) the Killing form on \(\mathfrak{g}\times\mathfrak{g}\). Then the vector space \(\tilde{\mathfrak{p}}\simeq\mathfrak{p}\) is isomorphic to the dual space \((\mathfrak{p}^{-})^{*}\) through the map
\[f:x\in\tilde{\mathfrak{p}}\mapsto K(x,\,-)_{|\mathfrak{p}^{-}}.\]
When \((\mathfrak{p}^{-})^{*}\) is endowed with the action of \(U(\tilde{\mathfrak{p}})\) given by equation (83) and \(\tilde{\mathfrak{p}}\) with the adjoint action of \(\tilde{\mathfrak{p}}\), the map \(f\) is an isomorphism of \(U(\tilde{\mathfrak{p}})\)-modules.
Indeed assume firstly that \(x^{\prime}\in\mathfrak{m}\). Then for all \(x\in\tilde{\mathfrak{p}}\) and for all \(y\in\mathfrak{p}^{-}\), \(x^{\prime}.f(x)(y)=-K(x,\,pr_{\mathfrak{r}}([x^{\prime},\,y]))\) by equations (17) and (83). If moreover \(x\in\mathfrak{m}\), then
\[K(x,\,pr_{\mathfrak{r}}([x^{\prime},\,y]))=0.\]
But one also has \([x^{\prime},\,x]_{\tilde{\mathfrak{p}}}=0\) by equation (2). Then
\[x^{\prime}.f(x)=f([x^{\prime},\,x]_{\tilde{\mathfrak{p}}})\]
in this case. If \(x\in\mathfrak{r}\), then
\[x^{\prime}.f(x)(y)=-K(x,\,pr_{\mathfrak{r}}([x^{\prime},\,y]))=-K(x,\,[x^{ \prime},\,y])\]
since \([x^{\prime},\,y]=pr_{\mathfrak{r}}([x^{\prime},\,y])+pr_{\mathfrak{m}}([x^{ \prime},\,y])+pr_{\mathfrak{m}^{-}}([x^{\prime},\,y])\). Then by the invariance of the Killing form, we obtain that
\[x^{\prime}.f(x)(y)=K([x^{\prime},\,x],\,y)=f([x^{\prime},\,x]_{\tilde{ \mathfrak{p}}})(y)\]
by equation (2). Now if \(x^{\prime}\in\mathfrak{r}\), then the assertion follows immediately from the invariance of the Killing form and equation (16). This proves the lemma for \(k=1\).
Let now \(k\in\mathbb{N}^{*}\). Consider the map \(f_{k}:S_{k}(\tilde{\mathfrak{p}})\longrightarrow S_{k}(\mathfrak{p}^{-})^{*}\) defined by
\[f_{k}(x_{1}\cdots x_{k})=K_{k}(x_{1}\cdots x_{k},\,-)_{|S_{k}(\mathfrak{p}^{-})}\]
for all \(x_{1},\,\ldots,\,x_{k}\in\tilde{\mathfrak{p}}\) where \(K_{k}\) is defined as in [16, 2.7], namely : for \(y_{1},\,\ldots,\,y_{k}\in\mathfrak{p}^{-}\),
\[K_{k}(x_{1}\cdots x_{k},\,y_{1}\cdots y_{k})=\frac{1}{k!}\sum_{\sigma\in \mathfrak{S}_{k}}\prod_{i=1}^{k}K(x_{i},\,y_{\sigma(i)})\]
where we denote by \(\mathfrak{S}_{k}\) the group of permutations of \(k\) elements.
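For instance, for \(k=2\) the formula above reads
\[K_{2}(x_{1}x_{2},\,y_{1}y_{2})=\frac{1}{2}\Big{(}K(x_{1},\,y_{1})K(x_{2},\,y_{2})+K(x_{1},\,y_{2})K(x_{2},\,y_{1})\Big{)},\]
which is symmetric in \(x_{1},\,x_{2}\) and in \(y_{1},\,y_{2}\), as a pairing between \(S_{2}(\tilde{\mathfrak{p}})\) and \(S_{2}(\mathfrak{p}^{-})\) must be.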
By [16, 2.7] we have that \(f_{k}\) is an isomorphism of vector spaces. It remains to show that \(f_{k}\) is an isomorphism of \(U(\tilde{\mathfrak{p}})\)-modules.
Let \(x_{1}\),..., \(x_{k}\in\tilde{\mathfrak{p}}\), \(y_{1}\),..., \(y_{k}\in\mathfrak{p}^{-}\) and \(x\in\tilde{\mathfrak{p}}\). Then one has
\[x.f_{k}(x_{1}\cdots x_{k})(y_{1}\cdots y_{k}) =-K_{k}(x_{1}\cdots x_{k},\,ad^{*}x(y_{1}\cdots y_{k}))\] \[=-\sum_{i=1}^{k}K_{k}(x_{1}\cdots x_{k},\,y_{1}\cdots y_{i-1}ad^{* }x(y_{i})y_{i+1}\cdots y_{k})\] \[=-\frac{1}{k!}\sum_{i=1}^{k}\sum_{\sigma\in\mathfrak{S}_{k}}\prod _{t\neq\sigma^{-1}(i)}K(x_{t},\,y_{\sigma(t)})K(x_{\sigma^{-1}(i)},\,ad^{*}x( y_{i}))\] \[=-\frac{1}{k!}\sum_{i=1}^{k}\sum_{\sigma\in\mathfrak{S}_{k}}\prod _{t\neq i}K(x_{t},\,y_{\sigma(t)})K(x_{i},\,ad^{*}x(y_{\sigma(i)}))\]
On the other hand, one has
\[f_{k}(ad_{\mathfrak{p}}x(x_{1}\cdots x_{k}))(y_{1}\cdots y_{k}) =\sum_{i=1}^{k}f_{k}(x_{1}\cdots x_{i-1}[x,\,x_{i}]_{\mathfrak{p }}x_{i+1}\cdots x_{k})(y_{1}\cdots y_{k})\] \[=\sum_{i=1}^{k}\frac{1}{k!}\sum_{\sigma\in\mathfrak{S}_{k}}\prod _{t\neq i}K(x_{t},\,y_{\sigma(t)})K([x,\,x_{i}]_{\tilde{\mathfrak{p}}},\,y_{ \sigma(i)})\]
By the case \(k=1\) one obtains that, for all \(1\leq i\leq k\), and all \(\sigma\in\mathfrak{S}_{k}\),
\[K([x,\,x_{i}]_{\tilde{\mathfrak{p}}},\,y_{\sigma(i)})=-K(x_{i},\,ad^{*}x(y_{\sigma(i)})).\]
This completes the lemma by the above.
### Kostant morphism
Recall subsection 4.3 and let \(k\in\mathbb{N}\).
We define
\[\psi_{k}:\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\longrightarrow S _{k}(\mathfrak{p}^{-})^{*}\]
by the following. For all \(f\in\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\), we set :
\[\forall j\in\mathbb{N},\;0\leq j\leq k,\,\forall s\in S_{j}(\mathfrak{m}^{-}),\,\forall s^{\prime}\in S_{k-j}(\mathfrak{r}),\;\psi_{k}(f)(ss^{\prime})=f(s \,\theta(s^{\prime})) \tag{84}\]
that we extend by linearity, so that \(\psi_{k}\) is a linear map. As in [15, 6.2] we call \(\psi_{k}\) the Kostant map.
**Proposition**.: _Let \(k\in\mathbb{N}\). The kernel of the linear map \(\psi_{k}\) is equal to \(\mathscr{F}_{K}^{k+1}(U(\tilde{\mathfrak{p}}^{-})^{*})\). Moreover \(\psi_{k}\) is onto._
Proof.: It follows from the fact that \(\bigoplus_{j=0}^{k}S_{j}(\mathfrak{m}^{-})\otimes\theta(S_{k-j}(\mathfrak{r}))\) is a complement of \(U_{k-1}(\tilde{\mathfrak{p}}^{-})\) in \(U_{k}(\tilde{\mathfrak{p}}^{-})\).
Endow \(U(\tilde{\mathfrak{p}}^{-})^{*}\) with the dual representation of \(A\) given by equation (39). Let \(k\in\mathbb{N}\). Then \(\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\) is a left \(A\)-module by lemma 7.1 and \(S_{k}(\mathfrak{p}^{-})^{*}\) is a left \(U(\tilde{\mathfrak{p}})\)-module (see subsection 7.3).
**Lemma**.: _Let \(k\in\mathbb{N}\). Then the Kostant map \(\psi_{k}\) is a morphism from the left \(A\)-module \(\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\) to the left \(U(\tilde{\mathfrak{p}})\)-module \(S_{k}(\mathfrak{p}^{-})^{*}\)._
Proof.: Let \(f\in\mathscr{F}_{K}^{k}(U(\tilde{\mathfrak{p}}^{-})^{*})\) and \(0\leq j\leq k\), \(j\in\mathbb{N}\), \(s\in S_{j}(\mathfrak{m}^{-})\) and \(s^{\prime}\in S_{k-j}(\mathfrak{r})\).
Assume firstly that \(x\in\mathfrak{m}\).
Then by equation (84)
\[\psi_{k}(x.f)(ss^{\prime}) =(x.f)(s\,\theta(s^{\prime}))\] \[=-f(ad^{**}x(s\,\theta(s^{\prime})))\] \[=-f(\tilde{\theta}(ad^{*}x(s))\theta(s^{\prime}))\]
by equations (39) and (25).
Write \(ad^{*}x(s)=\sum_{i\in I}s_{i}z_{i}\) with \(s_{i}\in S_{j-1}(\mathfrak{m}^{-})\) and \(z_{i}\in\mathfrak{r}\) for all \(i\in I\), by equation (22).
Then
\[\psi_{k}(x.f)(ss^{\prime})=-\sum_{i\in I}f(s_{i}\theta(z_{i})\theta(s^{\prime })) \tag{85}\]
by definition of \(\tilde{\theta}\) (see subsection 4.3).
On the other hand, one has by equation (83) :
\[(x.\psi_{k}(f))(ss^{\prime}) =-\psi_{k}(f)(ad^{*}x(ss^{\prime}))\] \[=-\psi_{k}(f)(ad^{*}x(s)s^{\prime})\]
since \(ad^{*}x(s^{\prime})=0\) by equation (18).
Then by equation (84)
\[(x.\psi_{k}(f))(ss^{\prime})=-\sum_{i\in I}f(s_{i}\theta(z_{i}s^{\prime})) \tag{86}\]
But, for all \(i\in I\), \(s_{i}\theta(z_{i}s^{\prime})=s_{i}\theta(z_{i})\theta(s^{\prime})\mod U_{k-1}( \tilde{\mathfrak{p}}^{-})\). Equations (85) and (86) imply that
\[(\psi_{k}(x.f)-(x.\psi_{k}(f)))(ss^{\prime})=0\]
since \(f(U_{k-1}(\tilde{\mathfrak{p}}^{-}))=0\).
Now assume that \(x\in\mathfrak{r}\).
\[\psi_{k}(x.f)(ss^{\prime}) =(x.f)(s\,\theta(s^{\prime}))\] \[=-f(ad\,x(s\,\theta(s^{\prime})))\] \[=-f(ad\,x(s)\theta(s^{\prime})+s\,ad\,x(\theta(s^{\prime})))\]
by equation (16).
Then
\[\psi_{k}(x.f)(ss^{\prime})=-f(ad\,x(s)\theta(s^{\prime})+s\,\theta(ad\,x(s^{ \prime})))\]
since \(\theta\) is a morphism of \(U(\mathfrak{r})\)-modules for the adjoint action.
On the other hand
\[x.\psi_{k}(f)(ss^{\prime}) =-\psi_{k}(f)(ad\,x(ss^{\prime}))\] \[=-\psi_{k}(f)(ad\,x(s)s^{\prime}+s\,ad\,x(s^{\prime}))\] \[=-f(ad\,x(s)\theta(s^{\prime})+s\,\theta(ad\,x(s^{\prime})))\]
This completes the lemma.
### An isomorphism of \(U(\tilde{\mathfrak{p}})\)-modules
Recall subsections 7.1 and 7.4.
By the proposition and the lemma of subsection 7.4 we have that:
**Lemma**.: _For all \(k\in\mathbb{N}\), the induced morphism (still denoted by \(\psi_{k}\)) is an isomorphism of left \(U(\tilde{\mathfrak{p}})\)-modules from \(gr^{k}_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\) to \(S_{k}(\mathfrak{p}^{-})^{*}\)._
Proof.: We already know that the left \(A\)-module structure on \(U(\tilde{\mathfrak{p}}^{-})^{*}\) given by equation (39) induces a left \(A\)-module structure on \(gr^{k}_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\) by the invariance of the Kostant filtration under the left action of \(A\) (see equation (65)). Then the induced morphism \(\psi_{k}\) is an isomorphism from the left \(A\)-module \(gr^{k}_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\) to the left \(U(\tilde{\mathfrak{p}})\)-module \(S_{k}(\mathfrak{p}^{-})^{*}\). Moreover since \(S_{k}(\mathfrak{p}^{-})^{*}\) is a left \(U(\tilde{\mathfrak{p}})\)-module, it follows that it is the same for \(gr^{k}_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\). Let us verify directly that \(gr^{k}_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\) is indeed a left \(U(\tilde{\mathfrak{p}})\)-module. Let \(x\), \(x^{\prime}\in\mathfrak{m}\) and \(u\in U_{k}(\tilde{\mathfrak{p}}^{-})\). One checks that
\[(ad^{**}x\circ ad^{**}x^{\prime}-ad^{**}x^{\prime}\circ ad^{**}x)(u)\in U_{k-1 }(\tilde{\mathfrak{p}}^{-}). \tag{87}\]
Indeed write \(u=su^{\prime}\) with \(s\in S_{j}(\mathfrak{m}^{-})\) and \(u^{\prime}\in U_{k-j}(\mathfrak{r})\) for \(0\leq j\leq k\). If \(j=0\) then \(ad^{**}x\circ ad^{**}x^{\prime}(su^{\prime})=0=ad^{**}x^{\prime}\circ ad^{**}x( su^{\prime})\) by equation (27). Now assume that \(1\leq j\leq k\) and take \(s=y_{1}\cdots y_{j}\in S_{j}(\mathfrak{m}^{-})\), with
\(y_{i}\in\mathfrak{m}^{-}\) for all \(1\leq i\leq j\). By definition of \(ad^{**}\) (see equation (25)), we obtain that
\[ad^{**}x\circ ad^{**}x^{\prime}(su^{\prime}) =\sum_{1\leq i\neq k\leq j}\prod_{t\not\in\{i,\,k\}}y_{t}\theta(ad^ {*}x(y_{k}))\theta(ad^{*}x^{\prime}(y_{i}))u^{\prime}\] \[=\sum_{1\leq i\neq k\leq j}\prod_{t\not\in\{i,\,k\}}y_{t}\theta(ad^ {*}x(y_{k})ad^{*}x^{\prime}(y_{i}))u^{\prime}\mod U_{k-1}(\tilde{\mathfrak{p}}^ {-})\] \[=\sum_{1\leq i\neq k\leq j}\prod_{t\not\in\{i,\,k\}}y_{t}\theta(ad^ {*}x(y_{i})ad^{*}x^{\prime}(y_{k}))u^{\prime}\mod U_{k-1}(\tilde{\mathfrak{p}}^ {-})\] \[=ad^{**}x^{\prime}\circ ad^{**}x(su^{\prime})\mod U_{k-1}( \tilde{\mathfrak{p}}^{-})\]
since for all \(a\), \(b\in\mathfrak{r}\), one has \(\theta(a)\theta(b)=\theta(ab)\mod U_{1}(\mathfrak{r})\).
One deduces that, for all \(f\in\mathscr{F}^{k}_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\), for all \(x\), \(x^{\prime}\in\mathfrak{m}\),
\[x.(x^{\prime}.f)-x^{\prime}.(x.f)\in\mathscr{F}^{k+1}_{K}(U(\tilde{\mathfrak{ p}}^{-})^{*})\]
and then
\[x.(x^{\prime}.gr^{k}_{K}(f))=x^{\prime}.(x.gr^{k}_{K}(f)).\]
Recall the notation in the proof of lemma 7.3. Let \(k\in\mathbb{N}\) and set \(j_{k}=f_{k}^{-1}\), which is an isomorphism of \(U(\tilde{\mathfrak{p}})\)-modules from \(S_{k}(\mathfrak{p}^{-})^{*}\) to \(S_{k}(\tilde{\mathfrak{p}})\). Moreover set \(\psi_{k}^{0}=j_{k}\circ\psi_{k}\) and \(\psi^{0}=\bigoplus_{k\in\mathbb{N}}\psi_{k}^{0}\) : this is by the above an isomorphism of \(U(\tilde{\mathfrak{p}})\)-modules from \(gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\) to \(S(\tilde{\mathfrak{p}})\). Now as in [15, 6.6] set \(\tilde{\psi}_{k}=\frac{1}{k!}\psi_{k}^{0}\) and \(\tilde{\psi}=\bigoplus_{k\in\mathbb{N}}\tilde{\psi}_{k}\). One deduces the following, as in [15, 6.6].
**Proposition**.: \(\tilde{\psi}\) _is an isomorphism of \(U(\tilde{\mathfrak{p}})\)-modules and of algebras from \(gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\) to \(S(\tilde{\mathfrak{p}})\)._
Denote by \(gr_{K}(\widetilde{C}_{\mathfrak{r}})\) the graded algebra associated to the induced generalized Kostant filtration on \(\widetilde{C}_{\mathfrak{r}}\), and by \(gr_{K}(\widetilde{C}^{U(\mathfrak{r}^{\prime})}_{\mathfrak{r}})\) the graded algebra associated to the induced generalized Kostant filtration on \(\widetilde{C}^{U(\mathfrak{r}^{\prime})}_{\mathfrak{r}}\). Denote by \((gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*}))^{U(\tilde{\mathfrak{p}}^{\prime})}\) the algebra of invariants in \(gr_{K}(U(\tilde{\mathfrak{p}}^{-})^{*})\) by the action of \(U(\tilde{\mathfrak{p}}^{\prime})\) given by equation (65). We have that
\[gr_{K}(\widetilde{C}_{\mathfrak{r}})\subset gr_{K}(U(\tilde{\mathfrak{p}}^{-}) ^{*})\]
and by Proposition 7.2 that
\[gr_{K}(\widetilde{C}^{U(\mathfrak{r}^{\prime})}_{\mathfrak{r}})\subset(gr_{K }(U(\tilde{\mathfrak{p}}^{-})^{*}))^{U(\tilde{\mathfrak{p}}^{\prime})}.\]
Denote also by \(S(\tilde{\mathfrak{p}})^{U(\tilde{\mathfrak{p}}^{\prime})}\) the algebra of invariants in \(S(\tilde{\mathfrak{p}})\) by the adjoint action of \(U(\tilde{\mathfrak{p}}^{\prime})\) : this is also the algebra of semi-invariants in \(S(\tilde{\mathfrak{p}})\), which we denote by \(Sy(\tilde{\mathfrak{p}})\). From Proposition 7.5, one deduces the following.
**Theorem**.: _One has that_
\[\tilde{\psi}(gr_{K}(\widetilde{C}^{U(\mathfrak{r}^{\prime})}_{\mathfrak{r}})) \subset Sy(\tilde{\mathfrak{p}}).\]
Let \(M\) denote a left \(\mathfrak{h}\)-module such that each of its weight spaces \(M_{\nu}=\{m\in M\mid\forall h\in\mathfrak{h}\), \(h.m=\nu(h)m\}\), for all \(\nu\in\mathfrak{h}^{*}\), is finite dimensional. Then one may define ([24, 3.4.7]) the formal character \(\operatorname{ch}M\) of \(M\) as follows.
\[\operatorname{ch}M=\sum_{\nu\in\mathfrak{h}^{*}}\dim M_{\nu}\,e^{\nu}.\]
Recall the set \(E(\pi^{\prime})\) in subsection 6.1, and for all \(\Gamma\in E(\pi^{\prime})\), set \(\delta_{\Gamma}=w_{0}^{\prime}d_{\Gamma}-w_{0}d_{\Gamma}\), with \(d_{\Gamma}=\sum_{\gamma\in\Gamma}\varpi_{\gamma}\). By Proposition 6.2 and Theorem 6.1, the algebra of invariants \(\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\) is a polynomial algebra in \(|E(\pi^{\prime})|\) variables, each of them having \(\delta_{\Gamma}\), \(\Gamma\in E(\pi^{\prime})\), as an \(\mathfrak{h}\)-weight. Moreover for \(\mathfrak{g}\) simple and \(\pi^{\prime}\subsetneq\pi\), that is, for \(\mathfrak{p}\subsetneq\mathfrak{g}\), one has that \(\delta_{\Gamma}\in P^{+}(\pi)\setminus\{0\}\) for all \(\Gamma\in E(\pi^{\prime})\) by [15, 5.4.3]. Then in this case \(\operatorname{ch}\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\) is well defined.
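Note that, for a polynomial algebra on algebraically independent homogeneous generators \(a_{\Gamma}\) (\(\Gamma\in E(\pi^{\prime})\)) of respective \(\mathfrak{h}\)-weights \(\delta_{\Gamma}\) (the \(a_{\Gamma}\) here simply denote a choice of generators of \(\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\) as above), every monomial \(\prod_{\Gamma}a_{\Gamma}^{m_{\Gamma}}\) has weight \(\sum_{\Gamma}m_{\Gamma}\delta_{\Gamma}\), so that
\[\operatorname{ch}\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}=\sum_{(m_{\Gamma})\in\mathbb{N}^{E(\pi^{\prime})}}e^{\sum_{\Gamma}m_{\Gamma}\delta_{\Gamma}}=\prod_{\Gamma\in E(\pi^{\prime})}(1-e^{\delta_{\Gamma}})^{-1},\]
which is the left-hand side of the inequality in the proposition below.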
**Proposition**.: _Assume, for all \(\nu\in\mathfrak{h}^{*}\), that the weight space \(Sy(\tilde{\mathfrak{p}})_{\nu}\) is finite dimensional, so that the formal character \(\operatorname{ch}Sy(\tilde{\mathfrak{p}})\) is well defined. Then_
\[\operatorname{ch}\widetilde{C}_{\mathfrak{r}}^{U(\mathfrak{r}^{\prime})}\leq \operatorname{ch}Sy(\tilde{\mathfrak{p}})\]
_namely_
\[\prod_{\Gamma\in E(\pi^{\prime})}(1-e^{\delta_{\Gamma}})^{-1}\leq \operatorname{ch}Sy(\tilde{\mathfrak{p}}).\]
Proof.: Recall that the generalized Kostant filtration \(\mathscr{F}_{K}\) is decreasing, separated and that \(\mathscr{F}_{K}^{0}(U(\tilde{\mathfrak{p}}^{-})^{*})=U(\tilde{\mathfrak{p}}^{-})^{*}\). Then, for every finite dimensional subspace \(V\) of \(U(\tilde{\mathfrak{p}}^{-})^{*}\), there exists \(N\in\mathbb{N}\) such that \(V\cap\mathscr{F}_{K}^{N}(U(\tilde{\mathfrak{p}}^{-})^{*})=\{0\}\). One deduces easily that the graded vector space \(gr_{K}(V)\) associated to the induced generalized Kostant filtration on \(V\) is isomorphic to \(V\). Using Theorem 7.6 and the same argument as in [15, 7.1] completes the proof.
|
2303.06659 | Scavenger: A Cloud Service for Optimizing Cost and Performance of ML
Training | While the pay-as-you-go nature of cloud virtual machines (VMs) makes it easy
to spin-up large clusters for training ML models, it can also lead to
ballooning costs. The 100s of virtual machine sizes provided by cloud platforms
also make it extremely challenging to select the ``right'' cloud cluster
configuration for training. Furthermore, the training time and cost of
distributed model training is highly sensitive to the cluster configurations,
and presents a large and complex tradeoff-space.
In this paper, we develop principled and practical techniques for optimizing
the training time and cost of distributed ML model training on the cloud. Our
key insight is that both parallel and statistical efficiency must be considered
when selecting the optimum job configuration parameters such as the number of
workers and the batch size. By combining conventional parallel scaling concepts
and new insights into SGD noise, our models accurately estimate the time and
cost on different cluster configurations with < 5% error. Using the repetitive
nature of training and our models, we can search for optimum cloud
configurations in a black-box, online manner. Our approach reduces training
times by 2 times and costs more more than 50%. Compared to an oracle-based
approach, our performance models are accurate to within 2% such that the search
imposes an overhead of just 10%. | Sahil Tyagi, Prateek Sharma | 2023-03-12T13:42:39Z | http://arxiv.org/abs/2303.06659v1 | # Scavenger: A Cloud Service For Optimizing Cost and Performance of ML Training
###### Abstract
Cloud computing platforms can provide the computational resources required for training large machine learning models such as deep neural networks. While the pay-as-you-go nature of cloud virtual machines (VMs) makes it easy to spin-up large clusters for training models, it can also lead to ballooning costs. The 100s of virtual machine sizes provided by cloud platforms also make it extremely challenging to select the "right" cloud cluster configuration for training. Furthermore, the training time and cost of distributed model training are highly sensitive to the cluster configurations, and present a large and complex tradeoff-space.
In this paper, we develop principled and practical techniques for optimizing the training time and cost of distributed ML model training on the cloud. Our key insight is that both the parallel and statistical efficiency must be considered when selecting the optimum job configuration parameters such as the number of workers and the batch size. By combining conventional parallel scaling concepts and new insights into SGD noise, we develop models for estimating the time and cost on different cluster configurations. Using the repetitive nature of training and our performance models, our Scavenger cloud service can search for optimum cloud configurations in a black-box, online manner. Our approach reduces training times by \(2\times\) and costs by more than 50%. Compared to an oracle-based approach, our performance models are accurate to within 2% such that the search imposes an overhead of just 10%.
## I Introduction
The discovery of improved machine learning (ML) models has resulted in great advances in computer vision, language and speech processing, scientific computing, and many other areas. These advances are primarily driven by increasingly computationally intensive models, such as deep neural networks (DNNs), being "trained" on large data sets. The ready availability of computing resources is a key enabler of machine learning, and cloud platforms can easily provide these resources. However, current ML techniques and systems are ill-suited for making effective and efficient use of cloud resources, i.e., are not cloud-native.
ML models are often trained on large clusters of cloud virtual machines, but this often leads to prohibitive costs, because ML training techniques and frameworks like TensorFlow and PyTorch are oblivious to cost. Moreover, cloud platforms offer 100s of different virtual machine sizes and configurations with different cost/performance tradeoffs, making it extremely challenging to select the "right" type and quantity of cloud resources. Training large ML models on the cloud is thus often performed on sub-optimally configured cloud resources, leading to cost overruns, slow performance, and underutilized resources.
These challenges also exist when optimizing the resource allocation for conventional distributed applications (such as map-reduce data processing) on the cloud [1]. However, model training also has other unique execution and synchronization characteristics and a large array of configuration knobs (such as number of workers and the batch size) which have significant impact on performance and resource efficiency.
In this paper, we present Scavenger, a service for optimizing the cloud training cost and time for ML models. Scavenger is a model-agnostic, black-box, fully online service built using TensorFlow, and searches for good configurations for distributed model training jobs. We use a performance-model guided search across a multi-dimensional configuration space to find the Pareto-optimal configurations based on user preferences and constraints. In its search for the best configuration, Scavenger horizontally scales a training job by adding/removing workers, and vertically scales it by changing the batch size.
As a key first step towards understanding and optimizing training time and costs, we develop a new phenomenological performance model for data-parallel distributed model training. It is phenomenological in the sense that our prediction accuracy improves with the degree of exploration, which is based on the type of search performed in the search space (Section III). Our model uses both conventional parallel scaling concepts, such as synchronization overheads, and fundamental performance artifacts of Stochastic Gradient Descent (SGD) based optimization. Unlike in classical parallel applications, we find that computation performed by parallel workers doesn't always compose because of the stochastic nature of the gradient computation. This _statistical inefficiency_ reduces the rate of the job's forward-progress, and imposes its own tradeoff on time and cost which also depends on the cluster configuration. We measure and consider this statistical inefficiency by using SGD noise (variance of gradients), and show how it can be used as a general _scaling indicator_.
Scavenger is a fully managed model training service requiring minimal user intervention, prior knowledge, or offline profiling. We use the repetitive and iterative nature of model training to briefly profile the job on different configurations and learn its performance profile by using the scaling indicators. We minimize the overhead of this exploration and search phase by using lightweight model checkpointing, and obtain the cost
and time tradeoff curves for different combinations of workers and batch sizes. The performance model is then used to run the remainder of the job on the "best" configuration based on user preferences and constraints.
Our profiling-based strategy of building the performance model is optimized to reduce the search cost. We can build a full performance profile of an ML model by profiling on only a small subset of configurations. We accomplish this by leveraging our phenomenological first-principles performance models that can be interpolated using linear regression--thus requiring only a _partial search_. Since Scavenger is a cloud service, it also leverages repeated training of similar models (e.g., part of hyperparameter or neural architecture search), and reuses its learned performance model, to completely eliminate the exploration phase and search costs. Surprisingly, we find that the SGD noise can serve as a model-agnostic scaling indicator, and even a "universal" average model can estimate performance with reasonable accuracy without any exploration or pilot jobs.
To the best of our knowledge, Scavenger is the first work which can optimize both cost and time in a fully online manner. We build on recent work for SGD noise based scaling such as [2, 3, 4, 5], and use it for simple intuitive phenomenological models. By considering both the parallel and statistical efficiency, we are able to accurately predict the training time of a wide range of DNN models with minimal search overhead.
Scavenger is an open-source library built on top of TensorFlow, and provides a practical, online, black-box, model-agnostic service for addressing the crucial problem of cost and performance optimization of distributed machine learning in the cloud. In addition to the practical significance, we make the following research contributions:
1. We provide a thorough empirical investigation of the cost and time tradeoffs in distributed ML model training, and show how parallel and statistical efficiency influence the performance.
2. We show how the variance in gradients results in SGD noise, and how it can serve as a reliable scaling indicator for elastic horizontal and vertical scaling.
3. We develop new models for predicting the performance for deep neural networks, which consider both parallel and statistical efficiency, and the aforementioned SGD noise. Our models predict training time and cost for different job configurations (number of workers and batch size), and construct full tradeoff curves and Pareto frontiers, with very high accuracy of more than 98%.
4. Our models enable us to search for the optimum job and cluster configuration in a model-agnostic and online manner, and minimize various combinations of cost and time. Our techniques can find the "right" cloud configuration and reduce training time by more than \(2\times\) compared to naive configurations.
## II Background and Challenges
In this section, we describe the performance tradeoffs faced by distributed ML training. These observations and insights guide our performance model presented in the next section.
### _Distributed ML Training_
Distributed training entails learning the model parameters (or weights) of a model over an input training dataset. A model trains in an iterative-convergent process to minimize a loss function over the dataset by using optimization techniques such as Stochastic Gradient Descent (SGD) [6] and Mini-Batch Gradient Descent [7] or Full Gradient Descent.
Since ML training is highly compute intensive, parallelizing it using computational accelerators such as GPUs and TPUs, and through distributed training, is vital [8, 9]. A common parallelization approach is _data-parallelism_, where training is launched on multiple workers, and each worker learns and updates the model parameters by processing a small batch of the training data [10] at each iteration.
After each iteration, the gradient updates from all workers are aggregated via all-reduce operations to compute the averaged gradients, update model parameters and synchronize the new parameters among the workers [11]. A popular and widely successful data-parallel training approach is the parameter server strategy, where the workers compute the gradients, and parameter servers aggregate and average the gradients from all workers after every iteration and update the model weights. Training a popular image recognition model like ResNet [12, 13] or an attention-based language model like Transformer [14] typically requires thousands of iterations until the model's error converges to the desired low training loss.
Concretely, the training process iteratively computes the model parameters over \(K\) workers, each processing a mini-batch of \(b\) at iteration \(t\) and computing the gradient \(\nabla f(\mathbf{x}_{k,t})\). The update rule for the model parameters \(\mathbf{x}\) is given by:
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}-\eta\frac{1}{K}\frac{1}{b}\sum_{k=1}^{k=K} \nabla f(\mathbf{x}_{k,t}), \tag{1}\]
where \(\eta\) is the learning rate, one of the hyperparameters of the model that is found through empirical search techniques. The global batch size is \(B=bK\), and is a crucial job parameter.
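To make the notation concrete, a minimal NumPy sketch of one synchronous data-parallel step is given below; the gradient function `grad_f`, the parameter vector `x`, and the per-worker mini-batches are placeholders for the user's model and input pipeline rather than part of any framework API.

```python
import numpy as np

def sync_sgd_step(x, worker_batches, grad_f, lr):
    """One synchronous data-parallel SGD step in the spirit of Equation 1.

    x              -- current model parameters (np.ndarray)
    worker_batches -- list of K mini-batches, one per worker
    grad_f         -- grad_f(x, batch): per-sample-averaged mini-batch gradient
    lr             -- learning rate eta
    """
    # Each worker computes its local gradient on its own mini-batch of size b;
    # the 1/b averaging in Equation 1 is assumed to happen inside grad_f.
    worker_grads = [grad_f(x, batch) for batch in worker_batches]
    # The parameter server averages the K worker gradients (the 1/K factor) ...
    avg_grad = np.mean(worker_grads, axis=0)
    # ... updates the parameters, and broadcasts them back to all workers.
    return x - lr * avg_grad
```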
**Elasticity.** Distributed training is resource _elastic_, which means that the models can be trained on different cluster sizes and configurations, which can also be changed during runtime (i.e., during model training). Training can be horizontally
Fig. 1: Running cost and time for different batch sizes and workers for ResNet18 training. Each point along a tradeoff-curve represents 20, 16, 12, 8 workers respectively. Dashed line shows our model prediction.
scaled by adjusting the number of workers (i.e., changing \(K\) in Equation 1), and vertically scaled by increasing the mini-batch size \(b\) on each worker. ML frameworks such as TensorFlow and PyTorch also support model checkpointing, and thus we can adjust the horizontal and vertical scaling dynamically by checkpointing the model state and resuming the training on a different cluster configuration.
This elasticity makes distributed training a good fit for clouds, since we can easily scale the cluster by adding/removing VMs, and changing the underlying VM size to increase the batch size and intra-worker parallelism. Scavenger makes use of this elasticity in its search for the ideal cloud configuration. However, distributed training has complex and incompletely-understood performance tradeoffs [15] that are affected by the various SGD parameters (such as \(K,b\)). Simply running more workers and increasing batch size has diminishing returns, as we can see from Figure 1, which shows the running time and cost for training the ResNet18 model. Each point corresponds to a different number of workers for each batch size. We can see that there are diminishing returns, and thus it is not obvious which cluster configuration is the "best".
### _Horizontal Scaling: Adding Workers_
The simplest way of scaling a parallel training job is to add more workers (\(K\)). Figure 1(a) shows the decrease in the total training time to reach a fixed accuracy level for three ML models. As the number of workers increases, the training time reduces, but there are diminishing returns. Increasing the workers from 4 to 16 (\(4\times\)) only reduces the training time from 15 to 5 hours (\(3\times\)).
Thus, ML training shows parallel inefficiency due to the communication and synchronization overheads. A single model-training iteration consists of a local gradient computation step, and a synchronization step where the gradients are aggregated/averaged. Figure 3 shows the breakdown of this computation and synchronization. It also shows the overhead of horizontal scaling in terms of higher synchronization overhead with increasing workers. Here, the cumulative cluster capacity is the same across various \(K\), i.e., the total CPU cores and memory allocated over all the workers in a cluster is held constant. We can see that increasing the number of workers increases the synchronization time. With parameter servers, more workers means more stragglers, and because bulk synchronous processing is used, this increases the communication costs for everyone. The larger number of workers also increases the work for the parameter servers, which increases the synchronization time further. Figure 3 also shows the breakdown for different batch sizes. We can see that the gradient computation time also increases with increasing batch sizes.
**Memory.** The VM's memory size is also an important resource for model training. Insufficient memory forces smaller batch sizes, which reduce the training accuracy and require more iterations and synchronization during model training. Figure 1(b) shows the final model accuracy reached when training the Transformer model under a strict time deadline. The smaller VMs provide insufficient accuracy, and below a 4GB threshold, the system (TensorFlow) crashes and makes no forward progress at all.
### _Vertical Scaling: Increasing Batch Size_
One way to reduce the synchronization overheads is to increase the batch size, which reduces the total number of iterations required and increases the parallel efficiency. This is illustrated in Figure 1(c), which shows the training-loss for different batch sizes for the ResNet18 model. Larger batch sizes (1024) achieve lower (i.e., better) loss compared to smaller ones. This is also seen for other models in Figure 4, which shows the training time to desired accuracy for \(K=16\).
The gains in compute efficiency with larger \(B\) are evident from Fig. 5, where the throughput (i.e., the number of samples processed per second in the training phase) increases as the global batch size increases, then saturates after a knee-point/inflection point. The throughput plateaus after a certain batch size since the CPU utilization of the workers maxes out after the inflection point. In the results shown, the workers used in the cluster are GCP E2-standard VMs with 4 vCPUs and 16 GB memory.
### _Statistical inefficiency_
In both horizontal and vertical scaling, parallel training does not scale linearly. The fundamental reason for this non-linear scaling is that not all computing work is effective because of stochastic gradient descent. In conventional parallel applications, all work performed by all workers is equally useful. However, with stochastic gradient descent, the work
Fig. 2: (a) Improvement in training time as cluster size and capacity scales. ResNet18, ResNet50 and Transformer Tiny are run upto 80% accuracy on CIFAR-10,90% on CIFAR-100 and 15.0 BLEU on WMT14 dataset. (b) Accuracy reached by Transformer Base before job fails due to memory constraints. (c) Training time taken by ResNet18 on CIFAR-10 to converge. _Larger \(B\) are more time-efficient to achieve the same model quality._
done by the workers (i.e., the gradients computed) does not fully compose. That is, the total forward progress made (i.e., the decrease in training loss) is not equal to the sum of progress made by the individual workers. We call this the _statistical inefficiency_ of parallel model training, and it reflects how "aligned" the computed gradients of the different workers are. This statistical inefficiency is a fundamental attribute of SGD (and of all optimization algorithms in the SGD family, such as Adam [16]). The statistical inefficiency can be captured by computing the SGD noise, which is the variance in the gradients among the workers [2, 3]. We use a similar variance formulation in our model described in the next section.
Thus adding more workers (increasing K) can increase the divergence between gradients and require more training iterations and increase the overall training time. Similarly, a small batch size means that the gradients are computed on a small subset of data, and are more likely to differ from each other. Thus, larger batch sizes are preferable from a statistical efficiency perspective, but have other tradeoffs: they impose additional memory requirements and communication overheads. Furthermore, increasing batch sizes may have diminishing returns (Figure 5). _Because of gradient noise and statistical inefficiency, throughput (number of training data samples processed per second) is not sufficient to capture the performance,_ and we need to consider the wall-clock time to reach the desired accuracy level.
Our performance model is able to capture both the statistical and parallel efficiency associated with different horizontal and vertical scaling configurations, and provide accurate estimates of training time for different configurations which can be used to select the "best" configuration.
## III Design
At a high level, our goal is to find the "best" cloud cluster configuration for a model training job, with minimal information about the ML model, and in an online manner with minimal a priori profiling. We want to minimize the time and cost of training a model to a specified accuracy level.
For optimizing the job configuration for a given ML training job, we first develop an analytical performance model for estimating the total training time (and cost). This performance model is used to compute the time vs. cost tradeoff curves for a job, which can be used to select the "right" cloud cluster based on user preferences and constraints. Our focus is on building simple, practical, and generalizable performance models that do not require offline training, and which can be refined and used with online profiling. Predicting the total training time of ML training is especially challenging due to the statistical inefficiency of distributed SGD. To address this challenge, we investigate and use general _SGD noise_ indicators, that serve as a proxy for statistical inefficiency (Section III-A). Using these scaling indicators, we develop an analytical _statistical_ performance model, which we combine with a more conventional parallel performance model. Finally in Section III-E, we describe how the combined parallel and statistical performance model can be obtained and used in practice.
### _SGD Noise as a Scaling Indicator_
We have seen that simply adding more resources to a distributed training job doesn't decrease the training time uniformly. This inefficiency is crucial in cloud environments, since it increases costs without proportional decrease in training time. We seek a general "scaling indicator" which serves as a proxy for the overall parallel efficiency. For example, such a scaling indicator should indicate the scenarios in which adding more resources would not decrease training time, and we should stop scaling. Because we want online cluster optimization, this scaling indicator should also be easily computable at run-time, and be independent of the ML model and cluster size.
For classic parallel applications, the communication and synchronization overheads typically serve as scaling indicators.
Fig. 4: Training time to convergence for various global batches on \(K=\mathit{16}\). ResNet18, ResNet50 and Transformer are trained to 80%, 90% accuracy and 18.0 BLEU score respectively.
Fig. 5: Throughput of various models on increasing the global batch size B for \(K=\mathit{16}\). The throughput increases as we increase \(B\) upto a certain point, then plateaus.
Fig. 3: The gradient computation and synchronization time breakdown for various ML workloads across multiple \(K\) and trained on various \(B\). Weak-scaling scenario: the cumulative cluster capacity is same across all \(K\), and the worker VM size is varied.
For example, we can compute the scaling efficiency as the fraction of time spent in communication, and stop scaling if this fraction increases above a threshold. Amdahl's law and other parallel scaling laws can then use these communication overheads and inform us about the performance and scaling properties of the application. Communication and synchronization overheads are also applicable for ML training and can be used to model their parallel efficiency. However, they are not sufficient, because of the statistical inefficiency of parallel ML training.
_Just as communication overheads can indicate parallel scaling in conventional parallel applications, are there similar scaling indicators for statistical inefficiency?_ We seek a _general_ indicator for statistical efficiency that is independent of the model and the execution environment (number and type of workers, etc.). For example, such a scaling signal could indicate the batch size threshold for a given cluster size, beyond which scaling the application does not significantly reduce the training time.
Fundamentally, the statistical inefficiency arises because of the noise in the gradients computed by the workers. Our main observation is that the SGD noise can be captured by the _variance_ in the gradients computed by the workers, and this serves as a useful general statistical inefficiency indicator. This variance/noise can be computed by:
\[\gamma(t)=\frac{\mathbb{E}[\frac{1}{K}\sum_{k=1}^{K}||g_{t}^{(k)}||^{2}]}{ \mathbb{E}[||\hat{g}_{t}||^{2}]}, \tag{2}\]
where \(g_{t}^{(k)}\) is the gradient computed on worker \(k\) at iteration \(t\) and \(\hat{g}_{t}\) is the aggregated gradient obtained by reducing gradients on the parameter servers in a cluster with \(K\) workers. This SGD gradient variance has been investigated previously [2, 3] to understand either batch size scaling or worker scaling, and we generalize it to both types of scaling.
The noise, which is essentially the deviation in the calculated gradients from the "true" gradient, is also a _practical_ scaling indicator. It can be easily computed in the data-parallel parameter server strategy during the model training, i.e., in an online manner. The per-worker and aggregated gradients are collected from all workers and parameter servers respectively. Thus, from equation 2, we can compute the gradient noise by computing the ratio of the mean of the workers' local gradient norms and the aggregated gradient norm.
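A minimal sketch of this per-iteration computation is shown below, assuming each worker's gradient has been flattened into a single vector; the names are illustrative and not Scavenger's actual interface, and the expectations in Equation 2 are approximated in practice by smoothing this estimate over iterations.

```python
import numpy as np

def sgd_noise(worker_grads):
    """Single-iteration estimate of the noise in Equation 2, normalized by K.

    worker_grads -- list of K flattened gradient vectors, one per worker
    """
    K = len(worker_grads)
    # Numerator: mean of the workers' squared local gradient norms.
    mean_local_sq_norm = np.mean([np.sum(g ** 2) for g in worker_grads])
    # Denominator: squared norm of the aggregated (averaged) gradient.
    agg_grad = np.mean(worker_grads, axis=0)
    gamma = mean_local_sq_norm / np.sum(agg_grad ** 2)
    # Normalizing by K makes configurations with different worker counts
    # comparable: the value starts near 1/K when the workers' gradients are
    # aligned and approaches 1 as they decorrelate.
    return gamma / K
```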
In the early training stages, the variance in the gradients is on the same scale as the gradients themselves and thus the initial noise is low (Figure 6). As the ML model converges, the gradients approach the true gradient, increasing the noise before finally saturating to the number of workers \(K\). Since we want to compare the noise for different \(K\), we normalize it by \(K\), so that it is a true statistical efficiency indicator.
We have observed that the noise is not constant over the course of training, even with a static job configuration. Instead, the noise increases and then stabilizes, as we can see from Figure 6. This is a fundamental artifact of SGD-based optimization, and applicable for all models and configurations. The noise is also affected by the SGD learning rate, and we need to account for the learning-rate schedule. For our cluster optimization, we want to search and select for the right cluster configuration as quickly as possible after the training commences. However since the noise from early training epochs is unreliable, we let the noise stabilize before using it as a scaling indicator. When a job starts, we run it on the starting configuration until the noise stabilizes, and then begin the exploration/search process. This increases the overall profiling and search time, since the early iterations are the "cold start", but provides reliable noise estimates.
_How effective is SGD noise in predicting performance?_ Figure 7 shows the total training time to the desired accuracy vs. the SGD noise for different global batch sizes \(B\). For all the three ML models, the increase in noise leads to an increase in training time. We also observe that smaller batches have higher noise. Thus, the SGD noise can serve as a good indicator of the training time and efficiency. We investigate a deeper relation of noise with statistical efficiency in the our performance model developed in the rest of this section.
### _Performance and Cost Model_
We develop an analytical model for the total training time and cost of distributed ML training, which creates the tradeoff curves (like in Figure 1), and guides the cloud resource allocation policies. Our performance models use statistical and parallel scaling indicators which can be obtained by profiling in an online manner during job execution, and do not need a priori offline profiling. The job's performance depends on its configuration, which consists of the number of parallel workers, \(K\), and the total batch size \(B\), and our model predicts the performance for each combination of these configuration parameters. ML training is an iterative process, and the total training time \(T\) is:
\[T=n_{i}\tau, \tag{3}\]
where \(n_{i}\) is the number of iterations required to reach the specified model accuracy, and \(\tau\) is the per-iteration time. The
Fig. 6: The SGD noise requires some iterations to stabilize, after which it is dominated by the number of workers.
Fig. 7: The normalized SGD noise directly impacts the total training time for different batch sizes and models. ResNet8, ResNet50 and Transformer are trained to 80%, 90% accuracy and 18.0 BLEU score respectively.
number of iterations depends on the total number of training epochs \(e\):
\[n_{i}=\frac{eD}{B}, \tag{4}\]
where \(D\) is the fixed dataset size, and \(B\) is the global batch size, an important job configuration parameter. The number of epochs \(e\) required to reach the desired model accuracy is the key unknown; it depends on many factors, such as the model size and complexity, the desired accuracy, and the statistical inefficiency.
The other key parameter in Equation 3 is \(\tau\), which is the per-iteration time. For a given job configuration, i.e., fixed \((K,B)\), the time to process a mini-batch is roughly constant over the course of training, because the same gradient computation and communication steps are being performed on the same mini-batch of identically distributed data.
Finally, the total cost is simply the product of training time, the number of workers \(K\), and the per-VM price \(p\):
\[\mathcal{C}=TKp \tag{5}\]
We estimate the number of epochs \(e\) using our statistical performance model described in the next subsection. The time per iteration \(\tau\) will be estimated using our parallel performance model in Section III-D.
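Putting Equations 3-5 together, the estimator itself is a simple composition once \(e\) and \(\tau\) are available; the function below is only an illustrative sketch, with its arguments supplied by the statistical and parallel models.

```python
def estimate_time_and_cost(e, D, B, tau, K, price_per_vm_hour):
    """Estimate training time (hours) and cost for one (K, B) configuration.

    e   -- predicted number of epochs to reach the target accuracy
    D   -- dataset size (number of training samples)
    B   -- global batch size
    tau -- predicted per-iteration time in seconds for this (K, B)
    K   -- number of worker VMs
    """
    n_iters = e * D / B                      # Equation 4
    T_hours = n_iters * tau / 3600.0         # Equation 3
    cost = T_hours * K * price_per_vm_hour   # Equation 5
    return T_hours, cost
```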
**Online Profiling and Searching.** Using the model, we first obtain the tradeoff curves in our search or exploration phase. In the search phase, we briefly run the job on some configuration, observe its parallel and statistical scaling indicators, and estimate the time (and thus cost) on that configuration. Only a small number of iterations (around 20) are usually required for estimating the performance of a given configuration, after which we checkpoint the model, and run the job on a different configuration. This exploration of the various configurations allows us to obtain the full time and cost curves.
Note that due to checkpointing, there is no lost work. The search cost is running the job briefly on suboptimal configurations, and the small overhead of restoring the model from checkpointed weights. Selecting the next configuration in the exploration phase is done using grid search guided by the optimization criteria and constraints on K and B. We refer to this as a **full or offline search**, since we first explore the configuration space, and then run the remainder of the job on the best configuration.
To reduce the search cost, our phenomenological statistical and parallel performance model also allows us to estimate the running time on configurations without even profiling on them. That is, we can obtain estimates of T by profiling on only a small sample of K, B configurations, and use our phenomenological models to build the rest of the tradeoff curve by fitting the learned performance models. This **partial or online search** reduces the search cost significantly. However, the drawback is that the estimates of running time due to the interpolation/regression can be error-prone, and thus the tradeoff curves obtained using the online search can differ slightly from the offline search.
Finally, we observe that many jobs train nearly identical models as part of hyperparameter tuning, neural architecture search, etc. For example, the hyperparameter tuning may involve dozens of jobs that train the same model, but with different activation functions, weight decay, regularization, etc. In such cases, the parallel and statistical efficiency of the job doesn't significantly change. Thus, once a job's performance model is learnt, it can be stored and _reused_ when the same or similar model is trained in the future. We can thus avoid the exploratory search phase entirely, and this is the **no-search** scenario. We develop full, partial, and no-search techniques for both the statistical and parallel performance models.
### _Statistical Performance Model_
The SGD noise scaling indicator allows us to model the statistical performance and the number of epochs required for achieving the desired accuracy level. The SGD noise increases the total amount of work required, and hence the number of iterations and epochs. For a given job configuration \((K,B)\), the number of epochs \(e\) is proportional to the SGD noise \(\gamma\) :
\[e\propto\gamma, \tag{6}\]
Empirical support for this can be seen in Figure 8, which shows the normalized noise plotted against epochs taken to reach a specific performance target across various \(K\) and for different target accuracy levels.
This linear model can be understood in relation to full gradient descent, which has no noise, and the minimal number of epochs \(e^{*}\). Thus for SGD, for a given \(B\), we have \(e=e^{*}+\theta\gamma\), where \(\theta\) is the unknown linear-model parameter which relates the noise to the statistical efficiency.
**Full offline search.** We profile the model on each \((K,B)\) configuration, and measure the noise \(\gamma_{K,B}\). Note that we are primarily interested in the _relative_ performance of various configurations. Both \(e^{*},\theta\) are properties of the model and not affected by \(K,B\). We can obtain them from prior profiling runs like those shown in Figure 8. From the figure, we can see that the epochs required to reach different accuracy levels are not sensitive to the number of workers. In fact, this is a static property of the ML model itself, and not influenced by any job configuration parameter. Thus, if we have access to any single prior execution of the model (under any configuration, and even without profiling), then we can estimate the epochs required to reach any desired accuracy level. In many cases, however, we only need to compare the _relative_ performance between different configurations, for which we do not need any prior execution log, and can compare the epochs of configurations based on their observed noise.
The above full search technique already provides significant new capabilities for statistical efficiency modeling. We refine it with two more powerful insights that reduce the search cost associated with the statistical efficiency model further.
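Given a handful of (noise, epochs-to-target) pairs, obtained either from the full-search exploration or from a prior run of the same model, the linear statistical model can be fit by ordinary least squares; the sketch below uses illustrative names and is not the service's actual code.

```python
import numpy as np

def fit_statistical_model(noise_samples, epoch_samples):
    """Fit e = e_star + theta * gamma from profiled (noise, epochs) pairs."""
    gammas = np.asarray(noise_samples, dtype=float)
    epochs = np.asarray(epoch_samples, dtype=float)
    A = np.vstack([np.ones_like(gammas), gammas]).T
    e_star, theta = np.linalg.lstsq(A, epochs, rcond=None)[0]
    return e_star, theta

def predict_epochs(e_star, theta, gamma):
    """Predict epochs-to-target for a configuration with measured noise gamma."""
    return e_star + theta * gamma
```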
**Partial Search.** While the linear relation between noise and epochs is extremely powerful, we can enhance it even further to model the statistical efficiency using partial search without exploring the entire configuration space. First, we develop a
finer-grained model for noise, which allows us to relate it to the batch size.
\[\gamma_{K,B}\propto\frac{1}{\sqrt{B}} \tag{7}\]
This allows us to estimate the noise for different batch sizes without profiling each of them: we can profile and find the noise on a small number of different \(B\) values in the search phase, build a linear model relating \(\gamma\) and \(B\), and use it to interpolate the noise for the rest of the unseen values of \(B\). This relation between \(\gamma\) and \(1/\sqrt{B}\) is derived from the theoretical properties of SGD described in [3]. We empirically show it in Figure 9, which shows the linear relation between noise and \(\frac{1}{\sqrt{B}}\) for different ML models.
Our partial search is performed by running the job briefly on the _extreme_ points of B, i.e., on the smallest and largest batch size provided by the user, and then fitting a model to Equation 7.
For enhanced accuracy, we can repeat the process for different values of \(K\). We have found that it is possible to avoid this per-K profiling and instead use a general/average model for noise and B. Surprisingly, the relation between noise and batch size is not very sensitive to the number of workers K. Thus, we can simplify the statistical efficiency model even further, by using an _average_ model for noise vs. B. This average model is also shown in Figure 9 (by the solid line). For a given B, we average the (estimated) noise for various K values. With this averaging, using only the batch size, we can predict the noise, and thus the number of epochs.
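A sketch of this partial-search interpolation is shown below, under the assumption that the noise is measured only at the two extreme batch sizes supplied by the user; the affine fit in \(1/\sqrt{B}\) follows Equation 7 and all names and numbers are illustrative.

```python
import numpy as np

def fit_noise_vs_batch(batch_sizes, noises):
    """Fit gamma = a / sqrt(B) + c from a few profiled batch sizes (Equation 7)."""
    x = 1.0 / np.sqrt(np.asarray(batch_sizes, dtype=float))
    A = np.vstack([x, np.ones_like(x)]).T
    a, c = np.linalg.lstsq(A, np.asarray(noises, dtype=float), rcond=None)[0]
    return a, c

def predict_noise(a, c, B):
    """Interpolate the noise for an unseen global batch size B."""
    return a / np.sqrt(B) + c

# Example: profile only the extremes of the user-provided batch-size range,
# then interpolate the rest.
# a, c = fit_noise_vs_batch([128, 1024], [gamma_at_128, gamma_at_1024])
# gamma_at_512 = predict_noise(a, c, 512)
```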
**No-search.** If the same or similar ML model is being trained repeatedly, then we can use its noise vs. B relation, and do not need any further profiling for modeling its statistical performance.
### _Parallel Performance Model_
The above statistical performance model provides us the estimate of the number of epochs/iterations required. We now tackle the relatively simpler task of modeling the per-iteration time, using more conventional parallel performance techniques. Our key insight is that ML training is highly repetitive and the performance characteristics of each iteration within a job are nearly identical. This allows us to continue using the profiling based search strategy. Thus the **full-search** for the parallel performance model simply runs the job for a small number of iterations on all the job configurations of interest.
**Partial-search.** For the partial search, we again use a phenomenological model for iteration time and use an interpolation approach. Each iteration entails computing the gradients, collecting and averaging them, and then synchronizing them between workers via the parameter server. Both these major components can be modeled as follows:
\[\tau=\mathit{compute\_time}+\mathit{sync\_time}. \tag{8}\]
The gradient computation time on a worker depends on the mini-batch size, b:
\[\mathit{compute\_time}\propto b. \tag{9}\]
The synchronization time is influenced by number of parallel workers:
\[\mathit{sync\_time}\propto K \tag{10}\]
Using these relations, we can build a model for the per-iteration time \(\tau\) by profiling on the extreme points in the (K, B) configuration space, and then fitting linear models for the computation and synchronization.
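One possible realization of this fit is sketched below, assuming per-iteration times have been profiled at a few corner configurations; the affine form with an intercept is a simplification of Equations 8-10 used here only for illustration.

```python
import numpy as np

def fit_iteration_time(samples):
    """Fit tau(K, b) ~ alpha*b + beta*K + c from profiled (K, b, tau) triples.

    samples -- at least three (K, b, tau_seconds) measurements, e.g. the
               extreme corners of the (K, B) configuration space.
    """
    K = np.array([s[0] for s in samples], dtype=float)
    b = np.array([s[1] for s in samples], dtype=float)
    tau = np.array([s[2] for s in samples], dtype=float)
    # compute_time ~ alpha*b (Equation 9), sync_time ~ beta*K (Equation 10),
    # and c absorbs any fixed per-iteration overhead.
    A = np.vstack([b, K, np.ones_like(K)]).T
    alpha, beta, c = np.linalg.lstsq(A, tau, rcond=None)[0]
    return lambda K_, b_: alpha * b_ + beta * K_ + c
```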
**No-search.** In case of repeated model training, since the computation and synchronization costs do not change, we can reuse the performance model from identical/similar models, and avoid the search phase altogether.
### _Resource allocation policies_
We combine the statistical and parallel performance model for our job configuration and cloud resource allocation policies. We first build the time/cost tradeoff curves using the profiling and modeling. Depending on the prior information available, the search strategy and costs may differ. We have built our system as a service, so future jobs training similar models can be significantly sped-up using their stored performance models and using the partial or no-search policies.
The job configuration search is ultimately determined by the user's objective and constraints. We support optimizing for time, cost, and also a knee-point based optimization that selects the knee-point of the cost/time curves. We determine the knee of the curve using the kneedle [17] algorithm.
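For intuition, a simplified knee detector in the spirit of kneedle is sketched below (normalize both axes, then take the point farthest from the chord joining the curve's endpoints); it is a stand-in for illustration, not the reference implementation of [17].

```python
import numpy as np

def knee_index(costs, times):
    """Index of the knee of one (cost, time) tradeoff curve.

    costs, times -- points of a single tradeoff curve, ordered along the curve
    """
    x = np.asarray(costs, dtype=float)
    y = np.asarray(times, dtype=float)
    # Normalize both axes to [0, 1] so that cost and time are comparable.
    xn = (x - x.min()) / (x.max() - x.min())
    yn = (y - y.min()) / (y.max() - y.min())
    # Perpendicular distance of every point from the chord joining the two
    # endpoints; the knee is the farthest point from that chord.
    dx, dy = xn[-1] - xn[0], yn[-1] - yn[0]
    dist = np.abs(dy * (xn - xn[0]) - dx * (yn - yn[0])) / np.hypot(dx, dy)
    return int(np.argmax(dist))
```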
Constraints on the maximum cost and time are provided by the user. This bounds the search space and is also practical. These constraints thus also impose a constraint on the number of worker VMs (K), and yield \(K_{\text{min}}\) and \(K_{\text{max}}\). The bounds on the batch size are determined by the memory-size of the VM, yielding \(B_{\text{max}}\). Small batch sizes result in extremely high noise, and thus realistic lower-bounds on \(B\) are necessary.
## IV Implementation
Scavenger is implemented as a modular extension to TensorFlow, and written in Python in about 2000 lines of code. The training scalability indicators are implemented by extending
Fig. 8: The number of epochs required to reach various accuracy levels is linear in the normalized SGD noise.
TensorFlow's estimator framework [18]. Users simply need to download our TensorFlow distribution (or apply a patch), and no modifications are required to the models or any workflow component. The parameter server computes the SGD noise by computing the gradient norms for all the workers' updates, and the final norm for the averaged gradient. This approximates the gradient variance, as shown in [2]. The gradient variance can be noisy, and we use an exponentially weighted moving average to smooth the output.
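The smoothing itself is a standard exponentially weighted moving average; the decay factor in this sketch is an arbitrary illustrative value rather than the constant used in our implementation.

```python
def ewma(values, decay=0.9):
    """Exponentially weighted moving average of a noisy per-iteration signal,
    such as the gradient-noise estimates reported to the model service."""
    smoothed, current = [], None
    for v in values:
        current = v if current is None else decay * current + (1.0 - decay) * v
        smoothed.append(current)
    return smoothed
```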
All the scaling indicators: the gradient noise, gradient computation time, and synchronization time, are sent to an external model service on every iteration. The model service uses these scaling indicators to update the performance model if operating in the initial exploratory search mode. The user can select the full or partial search mode based on the search-cost and performance-model prediction accuracy requirements. By default, we use the partial-search, since its results are comparable to full-search with lower search costs. Scavenger saves all performance models on persistent storage, and the no-search strategy is used if a model has been trained before. Once the tradeoff curves are constructed, we select the best configuration and stop all profiling.
We interface with standard cloud APIs for managing VMs. Our partial-search process starts with the smallest \(K,B\) configuration, and then adds more VMs to the cluster to reach the largest configuration. We use lightweight checkpointing: since the parameter server stores the latest model weights, the new workers in a new configuration pull the latest weights from the parameter server and resume training. We switch to different configurations only on iteration boundaries, and thus no work is lost. The existing VMs are always reused, to avoid excessive VM churn and startup/shutdown overheads. Although Scavenger is currently implemented in TensorFlow v1.5, its main components are modular, and need only minimal profiling information from the ML framework. Supporting PyTorch is part of our ongoing work.
## V Experimental Evaluation
We use popular deep learning models: two residual networks and one attention-based transformer, and evaluate across different VM size and price configurations from the Google Cloud Platform (GCP). Our experimental evaluation is focused on answering the following questions:
1. How effective is gradient noise as an indicator of statistical efficiency?
2. How accurate is our performance and cost model across different job configurations?
3. What are the performance and cost tradeoffs for different cloud computing cost models?
4. What are the time and cost savings achievable with our job configuration and resource allocation policies?
While most work on model training uses GPUs, we perform all evaluation on CPU VMs. GPUs simply reduce the per-iteration time, and all aspects of Scavenger such as the model and service are unaffected by the underlying hardware parallelism. Standard CPU VMs can also be sized in a fine-grained manner and we can configure the VM with arbitrary amounts of CPUs and memory. This allows us to also evaluate _weak scaling_: the total computing resources across all our cluster configurations are the same, but they are distributed among VMs differently. In contrast, GPUs have fixed and limited memory, and severely limit weak-scaling and batch-size scaling. Furthermore, we only consider the worker cost, and assume that sufficient parameter servers are launched and available. Parameter server allocation is tackled by other systems such as Optimus [19], and is orthogonal and complementary to our work.
### _Cost and time tradeoffs_
With the performance model described in Section III, we can predict the running cost and time of distributed training for various cluster configurations. Figure 10 shows the cost vs. time trade-offs for ResNet18, ResNet50 and Transformer Base to reach 80%, 90% train accuracy and 18.0 BLEU score for various \(B\) on Google E2-standard-4 VMs. Each scatter point shows results from full runs for each \((K,B)\) configuration, and the dashed line shows the predicted cost and time from the offline performance model. The rental cost of each worker is $0.13402/hr. Each point on the curve represents a decreasing cluster size, with \([20,16,12,8]\) workers.
We can see that there are clear cost vs. time tradeoffs for each batch size. Here, the per-worker compute hardware is the same, and the per-hour total cluster-price is also proportional to \(K\). The largest clusters have highest cost but also lowest running time. Decreasing workers reduces cost slightly but significantly increases running time.
Both the ResNet models (Figures 9(a), 9(b)) have a single inflection/knee-point for all batch sizes, after which we see diminishing returns on cost. _For ResNet-18 with \(B=384\), \(K=16\) represents the ideal configuration since it corresponds to the knee-point. For \(B=512\), the inflection point corresponds to \(K=12\), so that is the ideal configuration at this batch-size._ With the Transformer model in Figure 9(c), we observe two inflection points, corresponding to clusters of \(12\) and \(16\) workers, for any \(B\). We observed a notable decrease in iteration time from \(K=12\) to \(K=16\): for the same per-worker compute hardware and \(B\), a larger \(K\) implies a smaller worker mini-batch size. For example, \(B=768\) changes the mini-batch size from 64 to 48 when \(K\) goes from 12 to 16. Thus, we see a significant training time difference between \(K=12\) and \(K=16\), resulting in two distinct inflection points.
Fig. 9: The noise for each \((K,B)\) config at 80%, 90% train accuracy and 18.0 BLEU for ResNet18, ResNet50 and Transformer Base. The normalized noise is not very sensitive to \(K\), and our average noise model can estimate noise for any \((K,B)\) configuration with low error.
**Result:**_The tradeoff curves can be a crucial tool for judicious resource allocation on the cloud for distributed training._
The dashed lines in Figure 10 show the cost predicted by our performance model using the _full-search_ strategy, which relies on profiling the gradient noise and iteration-time performance models by running the model on different configurations for a small number of iterations. Compared to the actual job running time, our offline performance model has an error of only 1-5% across the entire range of models, workers, and batch sizes.
### _Partial Search_
We now evaluate the effectiveness of our partial search statistical and parallel performance model. In the partial search strategy, we only profile the job on a small number of configurations (and only for a few iterations). We then use the phenomenological models and linear regression for estimating the job running time for the other configurations.
For our evaluation, we set \(8\leq K\leq 20\) and \(384\leq B\leq 1024\). In case of Transformers, we set \(B_{min}\), \(B_{max}\) to \(512\) and \(1280\). We increment \(K\) by \(4\) to compare results with offline runs from Figure 10, so we use \(K\in[8,12,16,20]\).
The starting configurations are \((K_{min},B_{min})\) and \((K_{min},B_{max})\), until the gradient noise has stabilized. With exponential moving average smoothing, the noise for ResNet18, ResNet50 and Transformer Base stabilized at \(2K,3K\) and \(10K\) iterations respectively. The total search cost for ResNet18, arising from this profiling on the extreme configurations, was minimal. _The overhead of exploring a new configuration (due to checkpoint-restore) is minimal, on average \(37\) seconds for ResNet18, \(40\) for ResNet50, and \(127\) seconds for Transformers. Each configuration is run for around 20 iterations, which takes around 17-35 seconds for our three models._ Compared to an "oracle" scenario of running on the optimal configuration all along (bypassing the search phase), our approach increases running time by \(0.83\) hours and adds $0.89 to the final cost. This represents a 13% increase in running time and a 9% increase in cost, compared to an oracle approach which runs the job on the optimal configuration from the start. Compared to arbitrary job configuration without our techniques, our running times can be more than \(2\times\) lower and costs can be more than 40% lower.
**Result:**_The partial search increases job running time by 13% and cost by 9%, even compared to an oracle approach, and is a low-overhead strategy for discovering optimal configurations._
### _Model Accuracy_
Both the full and partial search are able to accurately predict the total training time, as seen from Figure 11. We evaluate three configurations: partial search (red), full search (green), and a worst-case no-search strategy. The figure shows the distribution of the error of running time prediction vs. the empirical job running time, across different K and B. We see that the average error for partial search is 4% for ResNet and less than 2% for Transformer. The full-search is even better: with an average error of 0.5-3.5% across all models and configurations.
In Figure 11, we also evaluate our no-search strategy in a worst-case scenario. The no-search strategy's performance is exactly the same as the full-search scenario if a near-identical ML model has been trained before. However, we construct a scenario where a "global average" performance model is used, which averages the statistical and performance models _over all three ML models_. Thus we are using a "universal" performance model. Even this universal global-average model shows acceptable training-time prediction: the error is in the range of 4-20%. Note that this global-average model does not require any search, has no search costs, nor does it require any prior profiling or pilot runs. It is thus fully online and zero-overhead.
Finally, we note that the running-time prediction error is not highly significant to our overall objective of discovering optimal configurations. We primarily care about the _relative_ running times, because we only compare configurations and run the job on the best-predicted configuration. It is likely that the best-predicted configuration remains the same even with the higher error, or that the sub-optimal configuration chosen due to the errors is very close to the optimal configuration in the trade-off curve.
Fig. 10: Cost-Time trade-offs for ResNet18, ResNet50 and Transformer Base to reach 80%, 90% train accuracy and 18.0 BLEU score for various \(B\) on Google E2-standard-4 VMs. The rental cost of each worker is fixed (\(=\$0.13402/hr\)). We show the trade-offs between running cost and time for a given \(B\) across decreasing cluster-sizes \([20,16,12,8]\). The dashed line shows the cost predicted by our full-search performance model.
**Result:**_Our performance model can predict training times with a low error of 0.5-3.5%, and only 4% even with partial searching. In the fully-online setting, the error range is 4-20%._
### _Memory-based pricing_
The cost of training is ultimately determined by the VM cost model. So far, we have looked at conventional on-demand VM pricing, where the VM cost scales linearly according to the number of vCPUs. Scavenger can work with different cost models. We consider VMs that are priced both per CPU and also per GB of memory. Google cloud's custom-sized VMs approximate this model.
With such a finer-grained cost model, the cost-time tradeoff curves are shown in Figure 12. In this case, the VM memory is allocated according to the batch size such that there is negligible free memory. The cost is proportional to the total memory required (the global batch size \(B\)) and running time.
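Below is a minimal sketch of such a finer-grained cost model, assuming VMs priced per vCPU-hour and per GB-hour of memory, with memory sized to the global batch size; the function name, the 4-vCPUs-per-worker assumption, and the `gb_per_sample` mapping are illustrative.

```python
def training_cost(time_hours, K, B, cpu_price_per_vcpu_hr, mem_price_per_gb_hr, gb_per_sample):
    """Cost under memory-based pricing: pay for vCPUs and for the memory implied by B."""
    vcpus = 4 * K                      # assume 4 vCPUs per worker VM
    memory_gb = gb_per_sample * B      # memory allocated so there is negligible free memory
    return time_hours * (cpu_price_per_vcpu_hr * vcpus + mem_price_per_gb_hr * memory_gb)
```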
Comparing the results in Fig. 10 and 12, we see a shift in the inflection points for all ML workloads. This is expected since the running costs change: a cluster of \(K\) workers training on \(B=1024\) will be pricier than one running \(B=384\), as more memory is allocated to the former.
## VI Related Work
Our work falls in the category of adapting model training on distributed infrastructure such as shared clusters and cloud platforms [20]. Scavenger uses the noise scale proposed in AdaScale SGD [2], which is similar to the gradient noise model of McCandlish et al. [3]. KungFu [4] and Pollux [5] also use this gradient noise metric for monitoring training performance and dynamically adjusting the resource allocation to minimize it. In addition to elasticity and adaptation mechanisms proposed in these papers, we use a performance profiling based approach that also takes cost into account. KungFu is complementary to our work: we can implement Scavenger's policies as part of their adaptation-policy framework and mechanisms. Pollux also considers statistical efficiency and similar worker and batch-size tradeoffs, but is not cloud cost aware, and instead provides scheduling policies for shared clusters. BFTrainer [21] attempts to utilize idle nodes for distributed training dynamically using a mixed integer linear programming (MILP) resource allocation algorithm.
Commercial offerings of "model training as a service", such as Amazon AWS SageMaker [22], use only rudimentary performance models, and do not use statistical efficiency or pareto-optimal allocation. Searching for hyperparameters is an important cloud workload, and reducing this search cost using parallel search techniques and early stopping provides significant cost and time savings [23, 24, 25]. Unlike hyperparameter optimization, which focuses on reducing the cost of a "bag" of jobs, Scavenger focuses on optimizing the cost and time of a _single_ job. Efficient elasticity mechanisms and policies for ML training [26, 27, 28] can also be incorporated into Scavenger.
Scheduling and resource allocation in shared clusters is challenging for distributed training because of the complex performance tradeoffs we have identified, and the large computing requirements. In shared private clusters, optimizing the use of limited GPU resources is a key challenge [29, 30, 31, 32]. In cloud platforms, resource contention is not an issue, but instead cost optimization is important.
Modeling distributed ML training poses many challenges because of the heterogeneity of ML models and their performance tradeoffs [33]. Optimus [19] models the throughput and communication costs to allocate workers and parameter servers to jobs on a shared kubernetes cluster. Cynthia [34] minimizes cloud cost and time by scaling workers and parameter servers using a finer-grained analytical model, but does not consider batch sizes and statistical efficiency. We do not adjust the number of parameter servers and assume that they are suitably provisioned. Optimizing parameter server allocation is part of our future work. Batch-size adaptation can be important for model generalizability and performance, and can benefit from second-order gradient information [35].
## VII Conclusion
The training time and cost for large machine learning models are significant, and sensitive to many job and cloud configuration parameters. Scavenger is a cloud service which uses online profiling and new performance models for estimating the training performance on different cloud configurations, with high accuracy of over 95%, and reduces training time by \(2\times\)
Fig. 11: Error in predicted training time from actual training time across all job configurations. The error for partial and full search is low. Even the universal model, which doesn’t consider any model-specific details, provides acceptable results. |
2306.14483 | Medical Federated Model with Mixture of Personalized and Sharing
Components | Although data-driven methods usually have noticeable performance on disease
diagnosis and treatment, they are suspected of leakage of privacy due to
collecting data for model training. Recently, federated learning provides a
secure and trustable alternative to collaboratively train model without any
exchange of medical data among multiple institutes. Therefore, it has draw much
attention due to its natural merit on privacy protection. However, when
heterogenous medical data exists between different hospitals, federated
learning usually has to face with degradation of performance. In the paper, we
propose a new personalized framework of federated learning to handle the
problem. It successfully yields personalized models based on awareness of
similarity between local data, and achieves better tradeoff between
generalization and personalization than existing methods. After that, we
further design a differentially sparse regularizer to improve communication
efficiency during procedure of model training. Additionally, we propose an
effective method to reduce the computational cost, which improves computation
efficiency significantly. Furthermore, we collect 5 real medical datasets,
including 2 public medical image datasets and 3 private multi-center clinical
diagnosis datasets, and evaluate its performance by conducting nodule
classification, tumor segmentation, and clinical risk prediction tasks.
Comparing with 13 existing related methods, the proposed method successfully
achieves the best model performance, and meanwhile up to 60% improvement of
communication efficiency. Source code is public, and can be accessed at:
https://github.com/ApplicationTechnologyOfMedicalBigData/pFedNet-code. | Yawei Zhao, Qinghe Liu, Xinwang Liu, Kunlun He | 2023-06-26T07:50:32Z | http://arxiv.org/abs/2306.14483v1 | # Medical Federated Model with Mixture of Personalized and Sharing Components
###### Abstract
Although data-driven methods usually have noticeable performance on disease diagnosis and treatment, they are suspected of leakage of privacy due to collecting data for model training. Recently, federated learning provides a secure and trustable alternative to collaboratively train models without any exchange of medical data among multiple institutes. Therefore, it has drawn much attention due to its natural merit on privacy protection. However, when heterogeneous medical data exists between different hospitals, federated learning usually suffers from degradation of performance. In the paper, we propose a new personalized framework of federated learning to handle the problem. It successfully yields personalized models based on awareness of similarity between local data, and achieves a better tradeoff between generalization and personalization than existing methods. After that, we further design a differentially sparse regularizer to improve communication efficiency during the procedure of model training. Additionally, we propose an effective method to reduce the computational cost, which improves computation efficiency significantly. Furthermore, we collect \(5\) real medical datasets, including \(2\) public medical image datasets and \(3\) private multi-center clinical diagnosis datasets, and evaluate its performance by conducting nodule classification, tumor segmentation, and clinical risk prediction tasks. Compared with \(13\) existing related methods, the proposed method successfully achieves the best model performance, and meanwhile up to \(60\%\) improvement in communication efficiency. Source code is public, and can be accessed at: [https://github.com/ApplicationTechnologyOfMedicalBigData/pFedNet-code](https://github.com/ApplicationTechnologyOfMedicalBigData/pFedNet-code).
Medical data, federated learning, personalized model, similarity network.
## I Introduction
With the proliferation of data, decision models generated by the data-driven paradigm have shown remarkable performance on clinical diagnosis and treatment [1, 2, 3, 4, 5]. Those medical models are usually trained by using multiple institutes' data, which may lead to leakage of privacy due to centralization of medical data. Recently, federated learning has shown significant advantages in alleviating such concerns, since it does not require exchange of medical data between hospitals1[6, 7, 8, 9, 10, 11]. More and more federated models have been developed for clinical diagnosis and treatment [12, 13, 14, 15, 16].
Footnote 1: Federated learning usually consists of one _server_ and multiple _clients_. Either server or client may represent a hospital.
Although federated learning has drawn much attention due to its superiority in privacy protection, its performance may degrade due to heterogeneous data across different medical institutes [17]. For example, as one form of data heterogeneity, label imbalance widely exists between comprehensive hospitals and specialized hospitals, e.g. tumor hospitals, which may severely impair model performance [18]. To mitigate such drawbacks of federated learning, personalized models have been extensively investigated [19], and numerous personalized methods such as _FedAMP_[20], _FedRoD_[21], _APFL_[22], _FPFC_[23], _IFCA_[24], _pFedMe_[25], _SuPerFed_[26], _FedRep_[27] have been proposed. Although personalized models yielded by those methods have shown adaptation to heterogeneous data, they usually have three major limitations in medical scenarios, including _sub-optimal performance_, _requirement of prior assumptions_, and _limited flexibility_. Specifically, in terms of _sub-optimal performance_, those methods usually work well on general datasets such as MNIST2 and CIFAR3, but have unsatisfactory performance in real medical scenarios due to the high complexity of medicine [22, 24, 25]. Additionally, in terms of _requirement of prior assumptions_, existing methods may assume either a clustering structure among clients [24] or clients' computing resources [27], which may be either hard to know or not satisfied in real medical scenarios. Moreover, in terms of _limited flexibility_, some existing methods develop personalized models based on a similarity network over clients' local data, but are limited to a few special network topologies such as the complete graph [23, 20] and the star graph [28, 26, 21], and may not achieve the optimum for general medical scenarios directly.
Footnote 2: [http://yann.lecun.com/exdb/mnist/](http://yann.lecun.com/exdb/mnist/)
Footnote 3: [https://www.cs.toronto.edu/~kriz/cifar.html](https://www.cs.toronto.edu/~kriz/cifar.html)
Footnote 4: [https://www.nccchi.nih.gov/health/provider/clinicalpractice](https://www.nccchi.nih.gov/health/provider/clinicalpractice)
Moreover, another drawback of those existing personalized models is their limited usability for medical applications. One of the major reasons is that they are not able to handle the _sharing_ and _personalized_ components of medical data separately. Those components widely exist in clinical diagnosis and treatment [29].
* **Sharing component.** Generally, clinical diagnosis and treatment are conducted by referring to international clinical guidelines such as the Clinical Practice Guidelines (CPG)4, which usually provide general clinical treatment for the whole population. Those clinical guidelines provide some necessary operations for a given symptom, no matter who the patient is. For example, when inflammation appears, patients have to conduct a blood routine examination before clinical diagnosis. It is widely applied for all patients who present the corresponding symptom.
* **Personalized component.** Every patient has his/her family genetic history, allergen, medication records etc. Such patient's information is unique, and should be considered carefully before offering clinical diagnosis and treatment. For example, when inflammation appears, the patient who has history of penicillin allergy cannot be treated by using penicillin.
Therefore, it is necessary for personalized models to capture the above common and personalized characteristics of medical data. Unfortunately, few existing methods are designed to handle this case. Although Collins et al. develop a local model for every client with a shared representation among clients [27], their method ignores the underlying similarity between clients who own similar data, and does not allow adjusting personalization as needed in medical scenarios.
To mitigate the limitations of those existing methods, we propose a new formulation of personalized federated learning, namely _pFedNet_, and develop a flexible framework to obtain good adaptation to heterogeneous medical data. The personalized model5 consists of a sharing component and a personalized component, and is designed to capture both common and personalized characteristics of medical data. Note that _pFedNet_ builds personalized models based on the _similarity network_ of clients' data, which is able to find the underlying relations between personalized models. Additionally, it does not rely on any extra assumption on clients' clustering structure, or on any special topology of the similarity network, and thus is more suitable for real medical scenarios.
Footnote 5: In the paper, we do not distinguish difference between the proposed _model_ and _formulation_, and denote them by using _pFedNet_ indiscriminately.
Furthermore, we propose a new communication efficient regularizer to reduce the workload of communication between clients and the server; it encourages the elements of each local update to form clustering structures, and thus improves communication efficiency. After that, we propose a new framework to optimize and obtain personalized models, which reduces the computational cost significantly. Finally, we collect \(5\) real medical datasets, which include \(2\) public medical image datasets and \(3\) private datasets of medical records. \(3\) classic medical tasks, including nodule classification, tumor segmentation, and clinical risk prediction, are conducted to evaluate the proposed method. Numerical results show that the proposed method outperforms existing methods on model performance, and meanwhile achieves up to \(60\%\) improvement in communication efficiency. In summary, the contributions of the paper are as follows.
* We propose a new personalized federated model for the medical scenario, which is built based on awareness of the similarity between medical institutes' data, and successfully captures both sharing and personalized characteristics of patients' data.
* We develop a new communication efficient regularizer to reduce the workload of communication during learning of personalized models, and a new optimization framework to reduce the computational cost.
* Extensive empirical studies have been conducted to evaluate the effectiveness of the proposed model and the optimization framework.
The paper is organized as follows. Section II reviews the related literature. Section III presents the proposed formulation, and explains its application. Section IV presents a communication efficient regularizer, which is able to decrease the communication workload during learning of models. Section V presents an efficient method to reduce the computation cost during federated learning. Section VI presents extensive empirical studies, and Section VII concludes the paper.
## II Related Work
In this section, we review the related literature on the methodology of personalized federated learning and on medical applications of federated learning.
### _Personalized Federated Learning_
Personalized federated learning combines the benefits of personalized models and federated learning, while taking into account the unique characteristics and preferences of each client [19]. Its methodology usually has five branches including _parameter decoupling_, _knowledge distillation_, _multi-task learning_, _model interpolation_, and _clustering_. Specifically, the branch of _parameter decoupling_ classifies parameters of the model into two categories: base parameters and personalized parameters, where base parameters are shared between client and server, and personalized parameters are stored at the client privately [30, 31, 32]. The branch of _knowledge distillation_ transfers the knowledge from a teacher's model to a student's model, which can significantly enhance the performance of local models [33, 34, 35, 36, 37]. The branch of _multi-task learning_ views each client's model as a task, and abstracts the learning procedure of personalized federated models as a multi-task learning problem [38, 39, 20, 40]. The branch of _model interpolation_ simultaneously learns a global model for all clients and a local model for every client. It usually makes a tradeoff between the global model and the local models to achieve the optimum of personalization [28, 22, 41]. The branch of _clustering_ aims to generate similar personalized models for clients who own similar data distributions [42, 43, 44, 24]. The proposed method of personalized federated learning, namely _pFedNet_, belongs to the branch of _clustering_, but meanwhile allows decoupling parameters flexibly. Those existing methods in the branch of _clustering_ either conduct model learning and client clustering separately [42, 43], or conduct model learning based on prior assumptions on clustering, e.g. selection of the number of clusters and a specific clustering method [45, 46]. Comparing with them, _pFedNet_ focuses on learning of personalized models, and meanwhile finds the clustering structure among clients implicitly. It does not rely on prior assumptions on clustering, and thus usually obtains better performance benefiting from good adaptation to local data.
Among those existing methods, the most related ones fall into two groups: _personalized models based on similarity_ and _personalized models with mixture of components_. Specifically, the first group consists of methods such as _FPFC_[23], _FedAMP_[20], _L2GD_[28], _FedRoD_[21], _SuPerFed_[26] etc. These existing methods yield personalized models based on some special topologies of the similarity network of clients' local data, e.g. the complete graph and the star graph, limiting their applications in medical scenarios. For example, both _FPFC_ and _FedAMP_ generate personalized models based on the complete graph. _L2GD_, _FedRoD_, and _SuPerFed_ produce personalized models based on the star graph. Comparing with those methods, the proposed method, namely _pFedNet_, does not have this limitation, and can work on any topology. Moreover, the second group consists of methods such as _FedRep_[27], which produce personalized models with mixture of components. However, _FedRep_ does not consider the similarity between clients' data, and assumes every client has sufficient computing resources to update the personalized component. Comparing with _FedRep_, the proposed method supports flexible combination of personalized and sharing components, and meanwhile achieves better performance based on awareness of pairwise similarity between clients' local data.
### _Federated Learning in Medical Applications_
In recent years, several studies have explored the use of federated learning in medicine, and presented promising results [47, 48]. One of the main medical applications is the development of predictive models for disease diagnosis and treatment [16, 15]. For example, Bai et al. propose an open source framework for medical artificial intelligence, and offer diagnosis of COVID-19 by using a federated learning method [16]. Dayan et al. develop a federated learning method to predict clinical outcomes in patients with COVID-19. Additionally, another area where federated learning has shown promise is the analysis of medical images [49, 50]. For instance, Kaissis et al. review recent emerging methods on privacy preservation in medical image analysis, and discuss drawbacks and limitations of those existing methods [49]. Moreover, a general federated learning framework, namely PriMIA [48], is developed, and its advantages on privacy protection, secure aggregation, and encrypted inference have been evaluated by conducting classification of paediatric chest X-ray images. Similarly, a federated learning method for predicting histological response to neoadjuvant chemotherapy in triple-negative breast cancer is recently proposed [12]. An automatic tumor boundary detector for the rare disease of glioblastoma has been proposed by using federated learning [13], which presents impressive performance. Similar to those studies, the paper focuses on the medical scenario, but provides a general and flexible learning framework for personalized models. The proposed formulation is inspired by the real procedure of clinical treatment, has wide applications for disease diagnosis and medical data analysis, and is not limited to a specific disease like those existing methods.
## III Formulation
In this section, we first present the similarity network for representing heterogeneous clients' data, and then develop a new formulation of personalized federated learning. Finally, we present the framework of alternating optimization to solve the proposed formulation.
### _Personalized Representation based on Similarity Network_
In this work, the similarity of the distributions of local data is measured in the data space. The local dataset of every node is represented by using a _sketch_ matrix, and the similarity of local data distributions is measured based on the distance between sketch matrices. The similarity network \(\mathcal{G}:=\{\mathcal{N},\mathcal{E}\}\) is usually built by using the _K-Nearest Neighbors_ (KNN) method. As illustrated in Figure 1, the network \(\mathcal{G}\) is generated for a given \(K\), where \(\mathcal{N}:=\{1,2,...,N\}\) represents the node set, consisting of \(N\) nodes, and \(\mathcal{E}:=\{e_{i,j}:i\in\mathcal{N},j\text{ is node }i\text{'s neighbour}\}\) represents the edge set, consisting of \(M\) edges.
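The following is a minimal sketch of how such a KNN similarity network could be built from per-client sketch matrices; using the Frobenius norm of the difference between sketches is an illustrative choice, since the text only requires some distance between sketch matrices.

```python
import numpy as np

def build_similarity_network(sketches, k):
    """Build a KNN similarity network from clients' sketch matrices.

    `sketches` is a list of equally-shaped numpy arrays, one per client.
    Returns the edge set E as a set of (i, j) pairs with i < j.
    """
    n = len(sketches)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = np.linalg.norm(sketches[i] - sketches[j])
    edges = set()
    for i in range(n):
        neighbours = np.argsort(dist[i])[1:k + 1]   # skip the node itself at position 0
        for j in neighbours:
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges
```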
Besides, major notations used in the paper are summarized as follows for easy understanding of mathematical details.
* Bold and lower letters such as \(\mathbf{a}\) represent a vector. Bold and upper letters such as \(\mathbf{X}\) represents a matrix.
* Lower letters such as \(f(\cdot)\) and \(h(\cdot)\) represent a function. Other letters such as \(n\), \(N\), \(M\) represent scalar values.
* \(\mathcal{N}\) and \(\mathcal{E}\) represent a set, and \(\mathcal{D}_{n}\) represent a data distribution for the \(n\)-th client.
* \(\odot\) represents the Hadamard product of two matrices. \(\left\|\cdot\right\|_{p}\) represents the \(\ell_{p}\) norm of a vector.
* \(\nabla\) represents the gradient operator, and \(\nabla f(\cdot)\) represents the gradient of \(f\).
* \([\mathbf{a}]_{+}\) means that the negative elements of \(\mathbf{a}\) are replaced by \(0\), and the non-negative elements of \(\mathbf{a}\) are unchanged.
### _pFedNet: Formulation of Personalized Federated Learning_
Given the similarity network \(\mathcal{G}\), and constant matrices \(\mathbf{M}\in\mathbb{R}^{d\times d_{1}}\) and \(\mathbf{N}\in\mathbb{R}^{d\times d_{2}}\), the proposed personalized federated learning, namely _pFedNet_, is finally formulated by
\[\min_{\mathbf{x},\{\mathbf{x}^{(n)},\mathbf{z}^{(n)}\}_{n=1}^{N}}\frac{1}{N}\sum_{n\in\mathcal{N}}f_{n}\left(\mathbf{x}^{(n)};\mathcal{D}_{n}\right)+\lambda\sum_{\begin{subarray}{c}e_{i,j}\in\mathcal{E},\\ \forall i,j\in\mathcal{N}\end{subarray}}\left\|\mathbf{z}^{(i)}-\mathbf{z}^{(j)}\right\|_{p},\]
subject to:
\[\mathbf{x}^{(n)}=\mathbf{M}\mathbf{x}+\mathbf{N}\mathbf{z}^{(n)},\ \ \ \ \ \forall n\in\mathcal{N},\ \mathbf{x}\in\mathbb{R}^{d_{1}},\ \mathbf{z}^{(n)}\in\mathbb{R}^{d_{2}}.\]
Fig. 1: Personalized representation based on similarity network
Here, \(f_{n}\left(\mathbf{x}^{(n)};\mathcal{D}_{n}\right)\) represents the local loss at the \(n\)-th client, where \(\mathbf{x}^{(n)}\) represents the personalized model, and \(\mathcal{D}_{n}\) represents the local data. For example, it can be instantiated by \(f_{n}\left(\mathbf{x}^{(n)};\mathcal{D}_{n}\right)=\sum_{(\mathbf{a},y)\sim \mathcal{D}_{n}}\log\left(\frac{1}{1+e^{-y\mathbf{a}^{\top}\mathbf{x}^{(n)}}}\right)\) for the logistic regression task.
_Note that \(\mathbf{x}\) and \(\mathbf{z}^{(n)}\) represent the sharing and personalized component of the personalized model \(\mathbf{x}^{(n)}\), respectively._ In terms of statistical machine learning models like SVM and logistic regression [51], their sharing component may be weights of features like _inflammation_, _diarrhoea_, and _vomiting_ etc, and their personalized component may be weights of features like _family genetic history_ and _allergen_ etc. In terms of deep learning models like dense net [52] and u-net [53], their sharing component may be weights of layers of feature extraction, and their personalized component may be weights of layers of the classifier.
The proposed formulation has wide applications in medical analysis and clinical diagnosis and treatment. Generally, doctors offer diagnosis and treatment services according to patients' medical records and the international clinical guidelines. It is a natural scenario for a personalized model with mixture of components to conduct clinical decisions.
* **Personalized component**. Since every patient has his/her unique medical record including family genetic history, allergen, medication records etc, the personalized component of the model, e.g. \(\mathbf{z}^{(n)}\), is necessary to capture the characteristics of such data.
* **Sharing component**. The international clinical guidelines usually provide a general solution to conduct diagnosis and treatment. For example, a blood routine examination is required when inflammation appears. The sharing component of the model, e.g. \(\mathbf{x}\), is necessary to capture such common characteristics.
Additionally, some special diseases such as regional diseases and occupational diseases also need a personalized model with a sharing component to conduct clinical decisions [29]. Specifically, the treatment of regional and occupational diseases needs to consider the location and occupation of patients, respectively, which corresponds to the personalized component of the clinical decision model. Besides, all patients should also be offered some basic treatment such as alleviation of inflammation, which corresponds to the sharing component of the model. In a nutshell, the formulation provides a general and flexible framework to conduct personalized federated learning.
* **Generality**. Whether for statistical machine learning models such as _ridge regression_, _logistic regression_, _support vector machine_ etc [51], or deep learning models such as _dense net_[52] and _u-net_[53] etc, the formulation can be instantiated by specifying the local loss function \(f_{n}\).
* **Flexibility**. First, it is flexible to select the sharing component of the federated model, namely \(\mathbf{x}\), and the personalized component \(\mathbf{z}^{(n)}\) according to the specific task. Second, it is flexible to choose a \(\lambda\) for making a tradeoff between personalized needs and global requirements.
Additionally, \(\lambda\) with \(\lambda>0\) is a given hyper-parameter, which controls the personalization of the federated model. When \(\lambda\to 0\), more personalization is allowed, that is, the \(\mathbf{z}^{(n)}\) differ more from each other. The personalization decays with the increase of \(\lambda\); almost all \(\mathbf{z}^{(n)}\) with \(n\in\{1,2,\cdots,N\}\) tend to be the same for a large \(\lambda\). As shown in Figure 2, this has been verified by conducting logistic regression on the datasets _CHD_ and _Covid19_6. In this case, there are \(20\) clients and \(1\) server in the federated network, and we observe a similar phenomenon on both datasets. That is, every personalized model is different when \(\lambda=0\) (represented by red circles), and those models begin to gather together with the increase of \(\lambda\) (represented by red lines). When \(\lambda\) is sufficiently large, all personalized models converge to a point, which means all personalized models indeed become the same.
Footnote 6: Details of those datasets are shown in Section VI-A.
Note that the formulation can be equivalently transformed as follows.
\[\min_{\left\{\mathbf{z}^{(n)}\right\}_{n=1}^{N},\mathbf{x}}\frac{1}{N}\sum_{n= 1}^{N}f_{n}\left(\mathbf{x}^{(n)};\mathcal{D}_{n}\right)+\lambda\left\| \mathbf{Z}\mathbf{Q}\right\|_{1,p}, \tag{1}\]
subject to:
\[\mathbf{x}^{(n)}=\mathbf{M}\mathbf{x}+\mathbf{N}\mathbf{z}^{(n)}.\]
Here, \(\mathbf{Z}\in\mathbb{R}^{d_{2}\times N}\), \(\mathbf{Q}\in\mathbb{R}^{N\times M}\), and \(N\) and \(M\) represent the total number of nodes and edges in the network \(\mathcal{G}\), respectively. \(\mathbf{Z}\) is a variable matrix, consisting of \(N\) variables as columns, that is, \(\mathbf{Z}=\left[\mathbf{z}^{(1)},\mathbf{z}^{(2)},...,\mathbf{z}^{(N)}\right]\). As shown in Figure 3, both \(\mathbf{M}\) and \(\mathbf{N}\) have a special structure, where every row of them has at most one non-zero value, and the non-zero value is \(1\). \(p\in\{1,2,\infty\}\). \(\mathbf{Q}\) is a given auxiliary matrix, which has \(M\) columns, and every column has two non-zero values: \(1\) and \(-1\). Note that \(\left\|\cdot\right\|_{1,p}\) denotes the \(\ell_{1,p}\) norm. Given a matrix \(\mathbf{U}\in\mathbb{R}^{d_{2}\times M}\), it is defined by
\[\left\|\mathbf{U}\right\|_{1,p}:=\sum_{m=1}^{M}\left\|\mathbf{U}_{:,m}\right\|_ {p}.\]
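A small numpy sketch of this norm, and of the regularizer term \(\left\|\mathbf{Z}\mathbf{Q}\right\|_{1,p}\), is given below; the toy \(\mathbf{Z}\) and \(\mathbf{Q}\) are illustrative only.

```python
import numpy as np

def l1p_norm(U, p=2):
    """||U||_{1,p}: the sum over columns of the l_p norm of each column."""
    ord_ = np.inf if p == float("inf") else p
    return float(sum(np.linalg.norm(U[:, m], ord=ord_) for m in range(U.shape[1])))

# Each column of Q picks the difference z^(i) - z^(j) for one edge of the network.
Z = np.random.randn(4, 3)                       # d2 = 4 personalized dims, N = 3 clients
Q = np.array([[1, 0], [-1, 1], [0, -1]])        # two edges: (1,2) and (2,3)
print(l1p_norm(Z @ Q, p=2))
```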
### _Optimization_
The formulation 1 is difficult to solve due to \(3\) reasons. First, the optimization variables may be highly non-separable due to \(\mathbf{Q}\). As we have shown, every column of \(\mathbf{Q}\) corresponds to an edge of the similarity network \(\mathcal{G}\), which implies that the corresponding personalized components, e.g. \(\mathbf{z}^{(i)}\) and \(\mathbf{z}^{(j)}\), corresponding to the nodes of such an edge have a dependent relation. Second, the loss function may be highly non-smooth, because the regularizer is a sum of norms. Third, the number of optimization variables is large when the network \(\mathcal{G}\) has a large number of nodes and edges. Generally, the formulation 1 is solved by alternative optimization. The variable \(\{\mathbf{x}^{(1)},\mathbf{x}^{(2)},...,\mathbf{x}^{(N)}\}\) is obtained by alternatively optimizing \(\mathbf{x}\) and \(\{\mathbf{z}^{(1)},\mathbf{z}^{(2)},...,\mathbf{z}^{(N)}\}\).
Fig. 2: Personalized models yielded by _pFedNet_ appear more similar with the increase of \(\lambda\) for the datasets _CHD_ and _Covid19_. Each red circle corresponds to the personalized model of a client, and there are \(20\) clients. The x-axis and y-axis of the figures represent the first and second principal components of the yielded personalized models, respectively.
**Optimizing \(\mathbf{x}\) given \(\mathbf{Z}\).** \(\mathbf{x}\) is optimized by solving the following problem:
\[\min_{\mathbf{x}\in\mathbb{R}^{d_{1}}}\frac{1}{N}\sum_{n=1}^{N}f_{n}\left(\mathbf{M}\mathbf{x}+\mathbf{N}\mathbf{z}^{(n)};\mathcal{D}_{n}\right).\]
By using the data-driven stochastic optimization method such as SGD [54], we need to perform the following problem to obtain \(\mathbf{x}\) iteratively.
\[\min_{\mathbf{x}\in\mathbb{R}^{d_{1}}}\frac{1}{N}\sum_{n=1}^{N}\left\langle\mathbf{M}^{\top}\mathbf{g}_{t}^{(n)},\mathbf{x}\right\rangle+\frac{1}{2\eta_{t}}\left\|\mathbf{x}-\mathbf{x}_{t}\right\|^{2}, \tag{2}\]
where \(\mathbf{g}_{t}^{(n)}\) is a stochastic gradient of \(f_{n}\) evaluated at \(\mathbf{M}\mathbf{x}_{t}+\mathbf{N}\mathbf{z}_{t}^{(n)}\), computed by using data drawn from the local dataset \(\mathcal{D}_{n}\).
**Optimizing \(\mathbf{Z}\) given \(\mathbf{x}\).** \(\mathbf{Z}\) is optimized by solving the following problem:
\[\min_{\mathbf{Z}\in\mathbb{R}^{d_{2}\times N}}\frac{1}{N}\sum_{n=1}^{N}f_{n}\left(\mathbf{M}\mathbf{x}+\mathbf{N}\mathbf{z}^{(n)};\mathcal{D}_{n}\right)+\lambda\left\|\mathbf{Z}\mathbf{Q}\right\|_{1,p}.\]
By using the data-driven stochastic optimization method such as SGD [54], we need to perform the following problem:
\[\min_{\mathbf{Z}\in\mathbb{R}^{d_{2}\times N}}\frac{1}{N}\sum_{n=1}^{N}\left\langle\mathbf{N}^{\top}\mathbf{g}_{t}^{(n)},\mathbf{z}^{(n)}\right\rangle+\lambda\left\|\mathbf{Z}\mathbf{Q}\right\|_{1,p}+\frac{\left\|\mathbf{Z}-\mathbf{Z}_{t}\right\|_{F}^{2}}{2\eta_{t}}.\]
\(\mathbf{g}_{t}^{(n)}\) is a stochastic gradient of \(f_{n}\) with \(\mathbf{M}\mathbf{x}_{t}+\mathbf{N}\mathbf{z}^{(n)}\) by using stochastic data drawn from the local dataset \(\mathcal{D}_{n}\). Suppose \(\mathbf{G}_{t}=\left[\mathbf{g}_{t}^{(1)},\mathbf{g}_{t}^{(2)},...,\mathbf{g} _{t}^{(N)}\right]\), and \(\mathbf{Z}\) is optimized by performing the following problem:
\[\min_{\mathbf{Z}\in\mathbb{R}^{d_{2}\times N}}\frac{\mathbf{1}_{d_{2}}^{\top}\left(\left(\mathbf{N}^{\top}\mathbf{G}_{t}\right)\odot\mathbf{Z}\right)\mathbf{1}_{N}}{N}+\lambda\left\|\mathbf{Z}\mathbf{Q}\right\|_{1,p}+\frac{\left\|\mathbf{Z}-\mathbf{Z}_{t}\right\|_{F}^{2}}{2\eta_{t}}. \tag{3}\]
Here, \(\odot\) means Hadamard product of two matrices.
```
1:Receive the personalized model \(\mathbf{y}_{t}^{(n)}:=\mathbf{M}\mathbf{x}_{t}+\mathbf{N}\mathbf{z}_{t}^{(n)}\) from the server.
2:Randomly sample an instance \(\mathbf{a}\sim\mathcal{D}_{n}\), and compute the stochastic gradient \(\mathbf{g}_{t}^{(n)}=\nabla f(\mathbf{y}_{t}^{(n)};\mathbf{a})\) with \(\mathbf{a}\sim\mathcal{D}_{n}\).
3:Send \(\mathbf{g}_{t}^{(n)}\) to the server.
```
**Algorithm 1** Compute local stochastic gradient at the \(n\)-th client for the \(t+1\)-th iteration.
```
1:The number of total iterations \(T\), and the initial model \(\mathbf{x}_{1}\), \(\mathbf{z}_{1}^{(n)}\) with \(n\in\{1,2,\cdots,N\}\).
2:Deliver the model \(\mathbf{y}_{1}^{(n)}=\mathbf{M}\mathbf{x}_{1}+\mathbf{N}\mathbf{z}_{1}^{(n)}\) to all client \(n\) with \(n\in\{1,2,...,N\}\).
3: for \(t=1,2,...,T\) do
4: Collect stochastic gradient \(\mathbf{G}_{t}=\left[\mathbf{g}_{t}^{(1)},\mathbf{g}_{t}^{(2)},...,\mathbf{g} _{t}^{(N)}\right]\) from all client \(n\) with \(n\in\{1,2,...,N\}\).
5: Update the global model \(\mathbf{x}\) by solving 2.
6: Update the personalized model \(\mathbf{Z}\) by solving 3.
7: Deliver the parameter \(\mathbf{y}_{t+1}^{(n)}=\mathbf{M}\mathbf{x}_{t+1}+\mathbf{N}\mathbf{z}_{t+1}^{(n)}\) to every client.
return \(\mathbf{x}_{T+1}^{(n)}=\mathbf{M}\mathbf{x}_{T+1}+\mathbf{N}\mathbf{z}_{T+1}^{(n)}\) with \(n\in\{1,2,...,N\}\).
```
**Algorithm 2** Train personalized models at the server.
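Below is a minimal Python sketch of the client/server interaction in Algorithms 1 and 2 under the alternating scheme above. The names `clients`, `N_mat`, and `solve_problem_3` are illustrative assumptions: `clients[n](y)` stands for the client-side gradient computation of Algorithm 1, and `solve_problem_3` stands in for the solver of problem (3) developed later.

```python
import numpy as np

def server_training_loop(clients, M, N_mat, x, Z, T, eta, solve_problem_3):
    """Sketch of Algorithm 2: alternate the x-step (problem (2)) and the Z-step (problem (3))."""
    num_clients = Z.shape[1]
    for t in range(T):
        # Collect stochastic gradients G_t = [g^(1), ..., g^(N)] from all clients (Algorithm 1).
        G = np.column_stack([clients[n](M @ x + N_mat @ Z[:, n])
                             for n in range(num_clients)])
        # Problem (2) has a closed-form step for the sharing component x.
        x = x - eta * (M.T @ G).mean(axis=1)
        # Problem (3) updates the personalized components Z (e.g. via ADMM, Section V).
        Z = solve_problem_3(N_mat.T @ G, Z, eta)
    return x, Z
```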
**Federated optimization.** According to the above optimization steps, the stochastic gradient \(\mathbf{G}_{t}\) is obtained at client in the scenario of federated learning. Details are illustrated in Algorithm 1. Moreover, the personalized model \(\mathbf{x}^{(n)}\) with \(n\in\{1,2,\cdots,N\}\) is optimized at the server, and details are shown in Algorithm 2. Unfortunately, the federated optimization has two major drawbacks.
* **Heavy workload of communication**. Since every client has to transmit the stochastic gradient, e.g. \(\mathbf{g}_{t}^{(n)}\), to the server, the communication workload will be unbearable for a large \(d\). Especially, since deep neural network models usually own more than millions of parameters, the transmission of such gradients leads to a high cost of communication.
* **High cost of computation**. Since the _sum-of-norms_ regularizer leads to high non-separability and non-smoothness of the objective loss, the computation cost is high. The optimization of personalized models is time-consuming and even unbearable.
Fig. 3: Illustration of matrices \(\mathbf{Q}\), \(\mathbf{M}\), and \(\mathbf{N}\).
To mitigate those drawbacks, we first develop a communication efficient method for every client to transmit the stochastic gradient. Additionally, we then propose a computation efficient method for the server to update the personalized model. In summary, the learning framework of the personalized federated learning is illustrated in Figure 4.
## IV Communication Efficient Update of Model
In this section, we first propose a communication efficient regularizer, which encourages the elements of the local model update to form clustering structures, and improves the communication efficiency effectively. Then, we develop an ADMM method [55] to conduct the update of the local model.
### _CER: Communication Efficient Regularizer_
In this work, we propose a communication efficient method, which lets \(\mathbf{g}_{t}^{(n)}\) be encoded by using fewer bits. Since the code length of \(\mathbf{g}_{t}^{(n)}\) is much reduced, the communication efficiency is significantly increased. The basic idea is to induce a clustering structure of the elements of \(\mathbf{g}_{t}^{(n)}\) by using a differential sparsity regularizer. The regularizer encourages the update of the local model, \(\nabla_{t+1}^{(n)}\), to own clustering structures. Figure 5 presents an illustrative example. According to Figures 5(a) and 5(c), when the elements of \(\nabla_{t+1}^{(n)}\) own clustering structures, they can be encoded by using fewer bits; the code length can be reduced considerably. The update of the parameter can then be transmitted between clients and the server efficiently. According to Figures 5(b) and 5(d), our basic idea is to let the difference between the elements of \(\nabla_{t+1}^{(n)}\) be sparse, which encourages the elements of \(\nabla_{t+1}^{(n)}\) to have clustering structures. Comparing with the gradient quantization methods in previous studies, the proposed method is able to find a good tradeoff between the convergence performance and the communication efficiency.
To improve the communication efficiency, we propose a new method to conduct the update of the parameter, which is formulated as
\[\nabla_{t+1}^{(n)}=\frac{\mathbf{y}_{t}^{(n)}-\mathbf{v}}{\eta_{t}},\]
where \(\mathbf{v}\) is obtained by performing the following problem:
\[\mathbf{v}=\operatorname*{argmin}_{\mathbf{y}\in\mathbb{R}^{d}}\frac{\left\|\mathbf{y}-\left(\mathbf{y}_{t}^{(n)}-\eta_{t}\mathbf{g}_{t}^{(n)}\right)\right\|^{2}}{2\eta_{t}\gamma}+\underbrace{\left\|\mathbf{\Lambda}\left(\mathbf{y}-\mathbf{y}_{t}^{(n)}\right)\right\|_{1}}_{\text{communication efficient regularizer}}.\]
Fig. 4: Learning framework of the personalized federated model with sharing component.
Fig. 5: The illustrative example shows that \(\nabla_{t+1}^{(n)}\) with clustering structures can be compressed by using fewer bits, and thus the code length is reduced effectively. **Our basic idea is to make the difference between elements of \(\nabla_{t+1}^{(n)}\) sparse.**
Here, \(\mathbf{g}_{t}^{(n)}\) is a stochastic gradient, which is obtained by using the local data at the \(n\)-th client. The given full rank square matrix \(\mathbf{\Lambda}\in\mathbb{R}^{d\times d}\) is defined by
\[\mathbf{\Lambda}:=\begin{bmatrix}1&-1&&&\\ &1&-1&&\\ &&\cdots&&\\ &&&1&-1\\ &&&&1\end{bmatrix}.\]
Notice that \(\mathbf{\Lambda}\) is a full rank square matrix, whose smallest singular value, denoted by \(\sigma\), is positive, that is, \(\sigma>0\). The proposed communication efficient regularizer is an \(\ell_{1}\)-norm penalty on the differences between adjacent elements of \(\nabla_{t+1}^{(n)}\), and encourages these differences to be small or even zero. Thus, the corresponding elements of \(\nabla_{t+1}^{(n)}\) are very similar or even identical; that is, the elements of \(\nabla_{t+1}^{(n)}\) own clustering structures. Exploiting the clustering structures, \(\nabla_{t+1}^{(n)}\) can be compressed by using few bits, which improves the communication efficiency in the distributed setting. Adjacent elements of \(\nabla_{t+1}^{(n)}\) have tiny differences, and appear in clustering structures.
We present more explanations by taking an example. As illustrated in Figure 6, we generate a local update of the personalized model with \(100\) features (orange lines in Figures 6(e)-6(h)) and the difference of its elements (orange lines in Figures 6(a)-6(d)). As we can see, the differential sparsity, e.g. \(\mathbf{\Lambda}\nabla_{t+1}^{(n)}\) (blue lines in Figures 6(a)-6(d)), becomes significantly sparser with the increase of \(\gamma\) (Figures 6(a)-6(d)). It verifies that the proposed method, namely _CER_, successfully encourages the difference between elements of the local update to be sparse. Meanwhile, we find that \(\nabla_{t+1}^{(n)}\) is similar to \(\mathbf{g}_{t}^{(n)}\) for a small \(\gamma\) (Figure 6(e)), and a large \(\gamma\) leads to a more evident clustering trend (Figures 6(e)-6(h)). As illustrated in Figures 6(d) and 6(h), we observe that elements of the local update become similar when their difference is sparse, and thus appear in clustering structures (peak and bottom of the blue curve). This leads to much easier compression than the original local update.
Note that there is a trade-off between accuracy and communication efficiency. When the elements of a gradient are partitioned into more clusters, the gradient is represented more accurately. Meanwhile, the gradient has to be encoded by using more bytes, thus decreasing the communication efficiency.
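The sketch below illustrates the two ingredients of this idea: constructing the difference operator \(\mathbf{\Lambda}\), and showing why an update whose adjacent entries form clusters compresses well. The run-length encoding here is only an illustrative stand-in for whatever compact encoding is used in practice.

```python
import numpy as np

def difference_operator(d):
    """Upper-bidiagonal Lambda with 1 on the diagonal and -1 on the superdiagonal."""
    L = np.eye(d)
    L[np.arange(d - 1), np.arange(1, d)] = -1.0
    return L

def run_length_encode(v, tol=1e-8):
    """Store a clustered vector as (value, run length) pairs: fewer clusters, fewer bytes."""
    runs, start = [], 0
    for i in range(1, len(v) + 1):
        if i == len(v) or abs(v[i] - v[start]) > tol:
            runs.append((float(v[start]), i - start))
            start = i
    return runs

update = np.array([0.5, 0.5, 0.5, -0.2, -0.2, 0.9])
print(difference_operator(6) @ update)      # sparse differences between adjacent entries
print(run_length_encode(update))            # [(0.5, 3), (-0.2, 2), (0.9, 1)]
```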
### _Optimizing \(\nabla_{t+1}^{(n)}\)_
As we have shown, \(\nabla_{t+1}^{(n)}\) is obtained by
\[\nabla_{t+1}^{(n)}=\frac{\mathbf{y}_{t}^{(n)}-\mathbf{v}}{\eta_{t}}.\]
\(\mathbf{v}\) is obtained by performing the following problem.
\[\mathbf{v}\] \[= \operatorname*{argmin}_{\mathbf{y}\in\mathbb{R}^{d}}\left\| \mathbf{\Lambda}\left(\mathbf{y}-\mathbf{y}_{t}^{(n)}\right)\right\|_{1}+ \frac{\left\|\mathbf{y}-\left(\mathbf{y}_{t}^{(n)}-\eta_{t}\mathbf{g}_{t}^{(n )}\right)\right\|^{2}}{2\eta_{t}\gamma}.\]
\(\mathbf{v}\) can be obtained by performing the following problem.
\[\min_{\mathbf{y}\in\mathbb{R}^{d},\mathbf{r}\in\mathbb{R}^{d}}\underbrace{ \left\|\mathbf{r}\right\|_{1}+\frac{\left\|\mathbf{y}-\left(\mathbf{y}_{t}^{(n )}-\eta_{t}\mathbf{g}_{t}^{(n)}\right)\right\|^{2}}{2\eta_{t}\gamma}}_{=:g( \mathbf{r},\mathbf{y})},\]
subject to:
\[\mathbf{r}=\mathbf{\Lambda}\mathbf{y}-\mathbf{\Lambda}\mathbf{y}_{t}^{(n)}.\]
The augmented Lagrangian of \(g(\mathbf{r},\mathbf{y})\) is
\[L(\mathbf{r},\mathbf{y},\mathbf{\omega})\] \[= g(\mathbf{r},\mathbf{y})+\left\langle\mathbf{\omega},\mathbf{r}- \mathbf{\Lambda}\mathbf{y}+\mathbf{\Lambda}\mathbf{y}_{t}^{(n)}\right\rangle+ \frac{\rho}{2}\left\|\mathbf{r}-\mathbf{\Lambda}\mathbf{y}+\mathbf{\Lambda} \mathbf{y}_{t}^{(n)}\right\|^{2}.\]
ADMM [55] is used to solve the above optimization problem, which consists of updates of \(\mathbf{r}\), \(\mathbf{y}\), and \(\mathbf{\omega}\), iteratively.
**Update of \(\mathbf{r}\).** Given \(\mathbf{y}_{j}\), \(\mathbf{\omega}_{j}\), \(\mathbf{r}_{j+1}\) is updated by performing the following problem.
\[\mathbf{r}_{j+1}=\operatorname*{argmin}_{\mathbf{r}\in\mathbb{R}^ {d}}L(\mathbf{r},\mathbf{y}_{j},\mathbf{\omega}_{j})\] \[= \operatorname*{argmin}_{\mathbf{r}\in\mathbb{R}^{d}}\left\|\mathbf{ r}\right\|_{1}+\left\langle\mathbf{\omega}_{j},\mathbf{r}\right\rangle+\frac{\rho}{2} \left\|\mathbf{r}-\mathbf{\Lambda}\mathbf{y}_{j}+\mathbf{\Lambda}\mathbf{y}_{t }^{(n)}\right\|^{2}\] \[= \operatorname*{argmin}_{\mathbf{r}\in\mathbb{R}^{d}}\left\|\mathbf{ r}\right\|_{1}+\frac{\rho}{2}\left\|\mathbf{r}-\left(\mathbf{\Lambda}\mathbf{y}_{j}- \mathbf{\Lambda}\mathbf{y}_{t}^{(n)}-\frac{1}{\rho}\mathbf{\omega}_{j}\right) \right\|^{2}\] \[= \operatorname*{\mathbf{Prox}}_{\rho,\|\cdot\|_{1}}\left(\mathbf{ \Lambda}\mathbf{y}_{j}-\mathbf{\Lambda}\mathbf{y}_{t}^{(n)}-\frac{1}{\rho}\mathbf{ \omega}_{j}\right)\] \[= \left[\mathbf{\Lambda}\left(\mathbf{y}_{j}-\mathbf{y}_{t}^{(n)} \right)-\!\frac{\mathbf{\omega}_{j}}{\rho}\!-\!\rho\right]_{+}-\left[\mathbf{ \Lambda}\left(\mathbf{y}_{t}^{(n)}-\mathbf{y}_{j}\right)+\!\frac{\mathbf{\omega}_{j }}{\rho}\!-\!\rho\right]_{+}. \tag{4}\]
Here, \(\mathbf{Prox}\) represents the _proximal operator_, which is defined by
\[\mathbf{Prox}_{\nu,\phi}(\mathbf{a}):=\operatorname*{argmin}_{\mathbf{b}}\phi( \mathbf{b})+\frac{\nu}{2}\left\|\mathbf{b}-\mathbf{a}\right\|^{2}.\]
The last equality holds due to
\[\mathbf{Prox}_{\nu,\|\cdot\|_{1}}(\mathbf{a})=(\mathbf{a}-\nu)_{+}-(-\mathbf{ a}-\nu)_{+},\]
where \(\mathbf{b}_{+}\) means that negative elements of \(\mathbf{b}\) are set by \(0\), and other non-negative elements do not change.
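A small sketch of the soft-thresholding behind the \(\mathbf{r}\)-update in Eq. 4 is given below. It completes the square in the augmented Lagrangian and applies the standard \(\ell_{1}\) proximal map with threshold \(1/\rho\); note that the paper's own \(\mathbf{Prox}\) convention may place this constant differently, so the scaling here is an assumption.

```python
import numpy as np

def soft_threshold(a, thr):
    """Elementwise soft-thresholding: argmin_b thr*||b||_1 + 0.5*||b - a||^2."""
    return np.sign(a) * np.maximum(np.abs(a) - thr, 0.0)

def update_r(Lam, y_j, y_t, omega_j, rho):
    """Sketch of the r-update: shift by Lambda(y_j - y_t) - omega_j/rho, then threshold."""
    center = Lam @ (y_j - y_t) - omega_j / rho
    return soft_threshold(center, 1.0 / rho)
```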
```
0: A positive \(\gamma\) to improve communication efficiency. Given \(\mathbf{x}_{t}\), \(\mathbf{P}\), and \(\mathbf{\Sigma}\) such that \(\mathbf{\Lambda}^{\top}\mathbf{\Lambda}=\mathbf{P}\mathbf{\Sigma}\mathbf{P}^{-1}\).
1: Receive the personalized model \(\mathbf{y}_{t}^{(n)}:=\mathbf{M}\mathbf{x}_{t}+\mathbf{N}\mathbf{z}_{t}^{(n)}\) from the server.
2: Randomly sample an instance \(\mathbf{a}\sim\mathcal{D}_{n}\), and compute the stochastic gradient \(\mathbf{g}_{t}^{(n)}=\nabla f(\mathbf{y}_{t}^{(n)};\mathbf{a})\) with \(\mathbf{a}\sim\mathcal{D}_{n}\).
3: for \(j=0,1,2,...,J-1\) do
4: Update \(\mathbf{r}_{j+1}\) by performing Eq. 4.
5: Update \(\mathbf{y}_{j+1}\) by performing Eq. 5.
6: Update \(\mathbf{\omega}_{j+1}\) by performing Eq. 6.
7: Compute \(\nabla_{t+1}^{(n)}\) with \(\nabla_{t+1}^{(n)}=\frac{\mathbf{y}_{t}^{(n)}-\mathbf{y}_{J}}{\eta_{t}}\).
8: Send \(\nabla_{t+1}^{(n)}\) to the server.
```
**Algorithm 3** Communication efficient update of local models on the \(n\)-th client for the \(t+1\)-th iteration.
**Update of \(\mathbf{y}\).** Given \(\mathbf{r}_{j+1}\), \(\mathbf{\omega}_{j}\), \(\mathbf{y}_{j+1}\) is updated by performing the following problem:
\[\mathbf{y}_{j+1}= \operatorname*{argmin}_{\mathbf{y}\in\mathbb{R}^{d}}L(\mathbf{r}_{j+1},\mathbf{y},\mathbf{\omega}_{j})\] \[= \operatorname*{argmin}_{\mathbf{y}\in\mathbb{R}^{d}}\frac{\left\|\mathbf{y}-\left(\mathbf{y}_{t}^{(n)}-\eta_{t}\mathbf{g}_{t}^{(n)}\right)\right\|^{2}}{2\eta_{t}\gamma}-\left\langle\mathbf{\omega}_{j},\mathbf{\Lambda}\mathbf{y}\right\rangle+\frac{\rho}{2}\left\|\mathbf{r}_{j+1}-\mathbf{\Lambda}\mathbf{y}+\mathbf{\Lambda}\mathbf{y}_{t}^{(n)}\right\|^{2}\] \[= \operatorname*{argmin}_{\mathbf{y}\in\mathbb{R}^{d}}\left\|\mathbf{\Lambda}\mathbf{y}-\left[\mathbf{r}_{j+1}+\mathbf{\Lambda}\mathbf{y}_{t}^{(n)}+\frac{\mathbf{\omega}_{j}}{\rho}\right]\right\|^{2}+\frac{\left\|\mathbf{y}-\mathbf{y}_{t}^{(n)}+\eta_{t}\mathbf{g}_{t}^{(n)}\right\|^{2}}{\rho\eta_{t}\gamma}\] \[= \left(\rho\eta_{t}\gamma\mathbf{\Lambda}^{\top}\mathbf{\Lambda}+\mathbf{I}\right)^{-1}\left[\rho\eta_{t}\gamma\mathbf{\Lambda}^{\top}\left[\mathbf{r}_{j+1}+\mathbf{\Lambda}\mathbf{y}_{t}^{(n)}+\frac{\mathbf{\omega}_{j}}{\rho}\right]+\mathbf{y}_{t}^{(n)}-\eta_{t}\mathbf{g}_{t}^{(n)}\right].\]
According to the eigen-value decomposition, \(\mathbf{\Lambda}^{\top}\mathbf{\Lambda}\) can be represented by \(\mathbf{\Lambda}^{\top}\mathbf{\Lambda}=\mathbf{P}\mathbf{\Sigma}\mathbf{P}^{-1}\), where \(\mathbf{\Sigma}:=\operatorname{diag}\left(\lambda_{1},\lambda_{2},...,\lambda_{d}\right)\), and \(\lambda_{i}\) with \(i\in\{1,2,...,d\}\) are eigen-values of \(\mathbf{\Lambda}^{\top}\mathbf{\Lambda}\). We have
\[\left(\rho\eta_{t}\gamma\mathbf{\Lambda}^{\top}\mathbf{\Lambda}+\mathbf{ I}\right)^{-1}=\mathbf{P}\left(\rho\eta_{t}\gamma\mathbf{\Sigma}+\mathbf{I} \right)^{-1}\mathbf{P}^{-1}.\]
Therefore, \(\mathbf{y}_{j+1}\) is updated by the following rule:
\[\mathbf{y}_{j+1} \tag{5}\] \[= \mathbf{P}\left(\rho\eta_{t}\gamma\mathbf{\Sigma}+\mathbf{I}\right)^ {-1}\mathbf{P}^{-1}\left[\rho\eta_{t}\gamma\mathbf{\Lambda}^{\top}\left[\mathbf{r }_{j+1}+\mathbf{\Lambda}\mathbf{y}_{t}^{(n)}+\frac{\mathbf{\omega}_{j}}{\rho}\right]+ \mathbf{y}_{t}^{(n)}-\eta_{t}\mathbf{g}_{t}^{(n)}\right].\]
**Update of \(\mathbf{\omega}\).** Given \(\mathbf{r}_{j+1}\) and \(\mathbf{y}_{j+1}\), \(\mathbf{\omega}_{j+1}\) is updated by the following rule:
\[\mathbf{\omega}_{j+1}=\mathbf{\omega}_{j}+\rho\left(\mathbf{r}_{j+1}- \mathbf{\Lambda}\mathbf{y}_{j+1}+\mathbf{\Lambda}\mathbf{y}_{t}^{(n)}\right). \tag{6}\]
In summary, algorithmic details are illustrated in Algorithm 3.
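To make the per-client procedure of Algorithm 3 concrete, the sketch below iterates the updates of Eqs. 4-6 in NumPy. It is an illustrative re-implementation under simplifying assumptions (dense \(\mathbf{\Lambda}\), synthetic inputs); the hyper-parameters \(\rho\), \(\gamma\), \(\eta_{t}\), and the number of inner steps `J` are placeholders rather than values used in the paper.

```python
import numpy as np

def prox_l1(a, threshold):
    return np.maximum(a - threshold, 0.0) - np.maximum(-a - threshold, 0.0)

def local_update(y_t, g_t, Lam, eta, gamma, rho, J=10):
    """Inner ADMM loop of Algorithm 3; returns the local update (y_t - y_J) / eta."""
    # Eigendecomposition of Lam^T Lam is computed once and reused in every y-update.
    Sigma, P = np.linalg.eigh(Lam.T @ Lam)            # Lam^T Lam = P diag(Sigma) P^T
    inv_diag = 1.0 / (rho * eta * gamma * Sigma + 1.0)
    m = Lam.shape[0]
    y, r, omega = y_t.copy(), np.zeros(m), np.zeros(m)
    for _ in range(J):
        # Eq. 4: r-update by soft-thresholding.
        r = prox_l1(Lam @ (y - y_t) - omega / rho, 1.0 / rho)
        # Eq. 5: y-update, solved with the cached eigendecomposition.
        rhs = rho * eta * gamma * (Lam.T @ (r + Lam @ y_t + omega / rho)) + y_t - eta * g_t
        y = P @ (inv_diag * (P.T @ rhs))
        # Eq. 6: dual update of omega.
        omega = omega + rho * (r - Lam @ y + Lam @ y_t)
    return (y_t - y) / eta
```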
## V Computation-Efficient Update of the Model
In this section, we find the optimum of \(\mathbf{x}\) and \(\mathbf{Z}\) by performing alternating optimization iteratively. \(\mathbf{Z}\) is optimized by using ADMM, which significantly reduces the computational cost.
### _Efficient update of \(\mathbf{x}\)_
When the server collects \(\nabla_{t+1}^{(n)}\) from the clients, the sharing component \(\mathbf{x}\) is updated as follows:
\[\mathbf{x}_{t+1}= \operatorname*{argmin}_{\mathbf{x}\in\mathbb{R}^{d_{1}}}\left\langle \frac{1}{N}\sum_{n=1}^{N}\mathbf{M}^{\top}\nabla_{t+1}^{(n)},\mathbf{x} \right\rangle+\frac{1}{2\eta_{t}}\left\|\mathbf{x}-\mathbf{x}_{t}\right\|^{2}.\]
Since it is an unconstrained optimization problem, it is equivalent to the following update rule:
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}-\eta_{t}\left(\frac{1}{N}\sum_{n=1}^{N} \mathbf{M}^{\top}\nabla_{t+1}^{(n)}\right).\]
That is, the sharing component \(\mathbf{x}\) can be updated by matrix multiplication, which leads to a low computational cost.
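A minimal server-side sketch of this step (NumPy, illustrative only; `M`, `grads`, and `eta` stand for the selection matrix, the collected client updates, and the learning rate):

```python
import numpy as np

def update_shared_component(x_t, grads, M, eta):
    """Average M^T * grad over all clients and take one gradient step on x."""
    avg = np.mean([M.T @ g for g in grads], axis=0)
    return x_t - eta * avg
```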
### _Efficient update of \(\mathbf{Z}\)_
Denote
\[h(\mathbf{Z}):=\frac{\mathbf{1}_{d}^{\top}\left(\left(\mathbf{N}^{\top}\nabla_ {t+1}\right)\odot\mathbf{Z}\right)\mathbf{1}_{N}}{N}+\frac{1}{2\eta_{t}}\left\| \mathbf{Z}-\mathbf{Z}_{t}\right\|_{F}^{2}.\]
The update of \(\mathbf{Z}\) can be formulated by the following problem:
\[\min_{\mathbf{Z}\in\mathbb{R}^{d_{2}\times N},\mathbf{W}\in\mathbb{R}^{d_{2} \times M}}H(\mathbf{Z},\mathbf{W}):=h(\mathbf{Z})+\lambda\left\|\mathbf{W} \right\|_{1,p},\]
subject to:
\[\mathbf{Z}\mathbf{Q}-\mathbf{W}=\mathbf{0}.\]
Denote the augmented Lagrangian of \(H(\mathbf{Z},\mathbf{W})\) by \(L(\mathbf{Z},\mathbf{W},\mathbf{\Omega})\); we have
\[L(\mathbf{Z},\mathbf{W},\mathbf{\Omega})\] \[:= h(\mathbf{Z})+\lambda\left\|\mathbf{W}\right\|_{1,p}+\mathbf{1}_{d_ {2}}^{\top}(\mathbf{\Omega}\odot(\mathbf{Z}\mathbf{Q}-\mathbf{W}))\mathbf{1}_{M}+ \frac{\rho}{2}\left\|\mathbf{W}-\mathbf{Z}\mathbf{Q}\right\|_{F}^{2}.\]
Then, \(\mathbf{Z}\) is optimized by using ADMM, iteratively performing updates of \(\mathbf{Z}\), \(\mathbf{W}\), and \(\mathbf{\Omega}\).
**Update of Z.** Given \(\mathbf{W}_{k}\) and \(\mathbf{\Omega}_{k}\), \(\mathbf{Z}_{k+1}\) is obtained by solving the following problem:
\[\mathbf{Z}_{k+1}=\operatorname*{argmin}_{\mathbf{Z}\in\mathbb{R}^{d_{2}\times N }}L(\mathbf{Z},\mathbf{W}_{k},\mathbf{\Omega}_{k})\] \[=\operatorname*{argmin}_{\mathbf{Z}\in\mathbb{R}^{d_{2}\times N }}h(\mathbf{Z})+\mathbf{1}_{d_{2}}^{\top}(\mathbf{\Omega}_{k}\odot(\mathbf{Z }\mathbf{Q}))\mathbf{1}_{M}+\frac{\rho}{2}\left\|\mathbf{W}_{k}-\mathbf{Z} \mathbf{Q}\right\|_{F}^{2}.\]
Since it is an unconstrained optimization problem, we can obtain \(\mathbf{Z}_{k+1}\) by setting the gradient to zero:
\[\frac{\mathbf{N}^{\top}\nabla_{t+1}}{N}+\frac{\mathbf{Z}_{k+1}- \mathbf{Z}_{t}}{\eta_{t}}+\mathbf{\Omega}_{k}\mathbf{Q}^{\top}+\rho(\mathbf{ Z}_{k+1}\mathbf{Q}-\mathbf{W}_{k})\mathbf{Q}^{\top}=\mathbf{0}.\]
That is, we have
\[\mathbf{Z}_{k+1} \tag{7}\] \[= \left[\eta_{t}\left[\rho\mathbf{W}_{k}\mathbf{Q}^{\top}- \mathbf{\Omega}_{k}\mathbf{Q}^{\top}-\frac{\mathbf{N}^{\top}\nabla_{t+1}}{N} \right]+\mathbf{Z}_{t}\right]\left(\mathbf{I}_{N}+\eta_{t}\rho\mathbf{Q} \mathbf{Q}^{\top}\right)^{-1}.\]
**Update of W.** Given \(\mathbf{Z}_{k+1}\) and \(\mathbf{\Omega}_{k}\), \(\mathbf{W}_{k+1}\) is obtained by solving the following problem:
\[\mathbf{W}_{k+1}=\operatorname*{argmin}_{\mathbf{W}\in\mathbb{R}^{d_{2}\times M}}L(\mathbf{Z}_{k+1},\mathbf{W},\mathbf{\Omega}_{k})\] \[=\operatorname*{argmin}_{\mathbf{W}\in\mathbb{R}^{d_{2}\times M}}\lambda\left\|\mathbf{W}\right\|_{1,p}-\mathbf{1}_{d_{2}}^{\top}(\mathbf{\Omega}_{k}\odot\mathbf{W})\mathbf{1}_{M}+\frac{\rho}{2}\left\|\mathbf{W}-\mathbf{Z}_{k+1}\mathbf{Q}\right\|_{F}^{2}\] \[=\operatorname*{argmin}_{\mathbf{W}\in\mathbb{R}^{d_{2}\times M}}\lambda\left\|\mathbf{W}\right\|_{1,p}+\frac{\rho}{2}\left\|\mathbf{W}-\left(\mathbf{Z}_{k+1}\mathbf{Q}+\frac{1}{\rho}\mathbf{\Omega}_{k}\right)\right\|_{F}^{2}\] \[=\mathbf{Prox}_{\frac{\rho}{\lambda},\left\|\cdot\right\|_{1,p}}\left(\mathbf{Z}_{k+1}\mathbf{Q}+\frac{1}{\rho}\mathbf{\Omega}_{k}\right).\]
Recall that \(\left\|\cdot\right\|_{1,p}\) is a sum of norms, so its proximal operator has a closed form [56]. Specifically, the \(m\)-th column, with \(m\in\{1,2,...,M\}\), of \(\mathbf{W}_{k+1}\) is obtained by performing:
\[\left[\mathbf{W}_{k+1}\right]_{:,m}=\left[\mathbf{Prox}_{\frac{\rho}{\lambda},\left\|\cdot\right\|_{1,p}}\left(\mathbf{Z}_{k+1}\mathbf{Q}+\frac{1}{\rho}\mathbf{\Omega}_{k}\right)\right]_{:,m}\] \[=\left[1-\frac{\lambda}{\left\|\rho\mathbf{Z}_{k+1}\mathbf{Q}_{:,m}+[\mathbf{\Omega}_{k}]_{:,m}\right\|_{q}}\right]_{+}\left(\mathbf{Z}_{k+1}\mathbf{Q}_{:,m}+\frac{\left[\mathbf{\Omega}_{k}\right]_{:,m}}{\rho}\right), \tag{8}\]
where \([\mathbf{A}]_{:,m}\) represents the \(m\)-th column of \(\mathbf{A}\), and \(\left\|\cdot\right\|_{q}\) is the dual norm of \(\left\|\cdot\right\|_{p}\) such that \(\frac{1}{p}+\frac{1}{q}=1\).
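For the common choice \(p=2\) (so that \(q=2\)), Eq. 8 is the block (group) soft-thresholding operator applied column by column. A NumPy sketch under this assumption, not taken from the authors' code:

```python
import numpy as np

def prox_group_l2(C, lam, rho):
    """Column-wise prox of lam * ||.||_{1,2} with penalty rho (Eq. 8 with p = q = 2).

    C stands for Z_{k+1} Q + Omega_k / rho.  Each column is scaled by
    [1 - lam / ||rho * column||_2]_+ , so weak columns are set exactly to zero.
    """
    W = np.zeros_like(C)
    for m in range(C.shape[1]):
        col = C[:, m]
        norm = np.linalg.norm(rho * col)   # equals ||rho Z_{k+1} Q_{:,m} + [Omega_k]_{:,m}||_2
        if norm > 0.0:
            W[:, m] = max(0.0, 1.0 - lam / norm) * col
    return W
```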
**Update of \(\mathbf{\Omega}\).** Given \(\mathbf{Z}_{k+1}\) and \(\mathbf{W}_{k+1}\), \(\mathbf{\Omega}_{k+1}\) is obtained by performing the following rule:
\[\mathbf{\Omega}_{k+1}=\mathbf{\Omega}_{k}+\rho\left(\mathbf{Z}_{k+1}\mathbf{ Q}-\mathbf{W}_{k+1}\right). \tag{9}\]
Algorithmic details are shown in Algorithm 4.
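Putting Eqs. 7-9 together, the following NumPy sketch illustrates one possible implementation of the ADMM loop for \(\mathbf{Z}\) (Algorithm 4). Matrix shapes, hyper-parameters, and the number of iterations `K` are illustrative assumptions; `grad_term` stands for \(\mathbf{N}^{\top}\nabla_{t+1}/N\).

```python
import numpy as np

def prox_group_l2(C, lam, rho):
    # Vectorized form of the column-wise group soft-thresholding of Eq. 8 (p = q = 2).
    norms = np.maximum(np.linalg.norm(rho * C, axis=0, keepdims=True), 1e-12)
    return np.maximum(0.0, 1.0 - lam / norms) * C

def update_Z_admm(Z_t, grad_term, Q, eta, rho, lam, K=10):
    """ADMM updates of Z (Eq. 7), W (Eq. 8), and Omega (Eq. 9)."""
    d2, n_clients = Z_t.shape
    M = Q.shape[1]
    W, Omega = np.zeros((d2, M)), np.zeros((d2, M))
    # The matrix inverse in Eq. 7 depends only on Q, so it is computed once.
    inv = np.linalg.inv(np.eye(n_clients) + eta * rho * Q @ Q.T)
    Z = Z_t.copy()
    for _ in range(K):
        Z = (eta * (rho * W @ Q.T - Omega @ Q.T - grad_term) + Z_t) @ inv   # Eq. 7
        W = prox_group_l2(Z @ Q + Omega / rho, lam, rho)                     # Eq. 8
        Omega = Omega + rho * (Z @ Q - W)                                    # Eq. 9
    return Z
```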
In summary, the federated model with personalized and sharing components is optimized by iteratively updating \(\mathbf{x}\) and \(\mathbf{Z}\). Algorithmic details are illustrated in Algorithm 5.
```
0: The total number of iterations \(T\), and the initial models \(\mathbf{x}_{1}\) and \(\mathbf{z}_{1}^{(n)}\) with \(n\in\{1,2,\cdots,N\}\).
1: Deliver the model \(\mathbf{y}_{1}^{(n)}=\mathbf{M}\mathbf{x}_{1}+\mathbf{N}\mathbf{z}_{1}^{(n)}\) to all client \(n\) with \(n\in\{1,2,...,N\}\).
2:for\(t=1,2,...,T\)do
3:for\(i=0,1,2,\cdots,I-1\)do
4: Collect the updates of local models \(\nabla_{i}=\left[\nabla_{i}^{(1)},\nabla_{i}^{(2)},...,\nabla_{i}^{(N)}\right]\) from all clients \(n\) with \(n\in\{1,2,...,N\}\).
5: Update the global model \(\mathbf{x}_{i+1}\) by performing: \[\mathbf{x}_{i+1}=\mathbf{x}_{i}-\eta_{i}\left(\frac{1}{N}\sum_{n=1}^{N} \mathbf{M}^{\top}\nabla_{i}^{(n)}\right).\]
6: Deliver the model \(\mathbf{y}_{i+1}^{(n)}=\mathbf{M}\mathbf{x}_{i+1}+\mathbf{N}\mathbf{z}_{t}^{(n)}\) to every client.
7:for\(j=0,1,2,\cdots,J-1\)do
8: Collect update of local model \(\nabla_{j}=\left[\nabla_{j}^{(1)},\nabla_{j}^{(2)},...,\nabla_{j}^{(N)}\right]\) from all client \(n\) with \(n\in\{1,2,...,N\}\).
9: Update the personalized model \(\mathbf{Z}_{j+1}\) according to Algorithm 4.
10: Deliver the parameter \(\mathbf{y}_{j+1}^{(n)}=\mathbf{M}\mathbf{x}_{I}+\mathbf{N}\mathbf{z}_{j+1}^{(n)}\) to every client.
11:return\(\mathbf{y}_{T+1}^{(n)}=\mathbf{M}\mathbf{x}_{I}+\mathbf{N}\mathbf{z}_{J}^{(n)}\) with \(n\in\{1,2,...,N\}\).
```
**Algorithm 5** Computation-efficient training of personalized models at the server.
## VI Empirical Studies
This section presents the performance of the proposed method in terms of model effectiveness, communication efficiency, and other aspects through extensive empirical studies.
### _Experimental Settings_
**Datasets and tasks.** We conduct classification and segmentation tasks on \(2\) public medical datasets, _Luna16_ and _BraTS2017_, and \(3\) private medical datasets collected from multiple medical centers of a hospital: _CHD_, _Diabetes_, and _Covid19_. These datasets have different modalities. Specifically, _Luna16_7 and
_BraTS2017_8 are lung CT and brain tumor MRI images, respectively. _CHD_, _Diabetes_, and _Covid19_ are structured medical data. Details of the datasets are presented as follows.
Footnote 8: [https://www.med.upenn.edu/sbja/brats2017/data.html](https://www.med.upenn.edu/sbja/brats2017/data.html)
* **Luna16**. It is a public dataset to evaluate the algorithmic performance of lung nodule detection. The dataset consists of \(888\) patients' CT scans, and every scan is sliced into \(64\) pieces. More than \(551,065\) candidates of lung nodules are recognized by tools automatically, while only \(1186\) true nodules are identified by real doctors. In the experiment, we extract every candidate of lung nodules by using a \(32\times 32\) patch.
* **BraTS2017**. It is a public dataset, and is usually used for the segmentation of glioma sub-regions of the brain. The dataset consists of \(484\) patients' MRI scans, and every scan has \(4\) channels. In the experiment, we extract every candidate of brain tumor by using a \(64\times 64\) patch.
* **CHD**. The dataset is built from the first medical center of the PLA general hospital of China. It is used to predict the bleeding risk in elderly patients with coronary heart disease combined with intestinal malignant tumors. The dataset consists of \(716\) patients' medical records, and every record has \(58\) features. A logistic regression model is used to predict whether a bleeding event appears.
* **Diabetes**. The dataset is built from the first medical center of the PLA general hospital of China, and is used to conduct risk prediction of type \(2\) diabetes retinopathy. The dataset consists of \(31,476\) patients' medical records, and every record has \(63\) features. A logistic regression model is used to predict whether the event of diabetes retinopathy appears.
* **Covid19**. The dataset is built from three medical centers (the first/fifth/sixth medical center) of the PLA general hospital of China, and is used to predict the event of Covid-19 infection. The dataset consists of \(2402\) patients' medical records, and every record has \(77\) features. A logistic regression model is used to predict whether the infection event appears.
Additionally, we conduct \(3\) medical analysis tasks, including lung nodule classification, brain tumor segmentation, and clinical risk prediction.
* **Lung nodule classification**. The _Dense net_ [52] (D-Net) model is chosen to detect real lung nodules from all candidates. We choose the parameters of the fully connected layer as the personalized component, and the others as the sharing component.
* **Brain tumor segmentation**. The _U-net_ [53] (U-Net) model is used to conduct segmentation of brain tumors. Parameters of the down-sampling layers are chosen as the sharing component, and the up-sampling layers' parameters are chosen as the personalized component.
* **Clinical risk prediction**. We use the _Logistic Regression_ (LR) model [51] to predict whether clinical risks (bleeding, infection, etc.) appear. All features are chosen as the personalized component.
In the experiment, we first fill all missing values with zeros, and normalize values to between \(-1\) and \(1\). Experimental settings are briefly summarized in Table I.
**Methods and metrics.** The proposed method, that is _pFedNet_, is evaluated by comparing it with \(13\) existing methods. Those methods include _Ditto_ [57], _FedAMP_ [20], _FedAvg_ [58], _L2GD_ [28], _FedPer_ [30], _FedProx_ [59], _FedRoD_ [21], _APFL_ [22], _FPFC_ [23], _IFCA_ [24], _pFedMe_ [25], _SuPerFed_ [26], and _FedRep_ [27]. _FedAvg_ and _FedProx_ are general optimization methods for federated learning, while the others are recently proposed personalized federated learning methods. Additionally, the performance of all classification models is measured by _test accuracy_, and the performance of the segmentation model is measured by _Intersection over Union (IoU)_. These metrics are widely used in previous work [18, 60, 59, 26]. The communication efficiency is measured by the model size.
**Federated setting.** In the experiment, there are \(5\) clients and \(1\) server. That is, the similarity network consists of \(5\) nodes, and its edges are generated by using KNN with \(k=3\). Every dataset is partitioned and allocated to all clients; \(80\%\) of the data is used to train the model, and \(20\%\) of it is used to test the model. Specifically, the data federation for classification is built based on the setting of label unbalance, which is measured by \(\delta:=n_{\text{negative}}/n_{\text{positive}}\). Here, \(n_{\text{negative}}\) and \(n_{\text{positive}}\) represent the numbers of negative and positive labels, respectively. The data federation for segmentation is built based on the setting of channel unbalance, which is measured by the id of the missing channel (e.g., _lack #0_, _lack #1_, etc.). All methods are implemented by using PyTorch, and run on a machine with \(24\) GB memory, \(2\) TB SSD, and an Nvidia Tesla 3090.
Fig. 8: _pFedNet_ significantly reduces model size with _CER_.
Fig. 7: Illustrative results of test accuracy w.r.t. \(\lambda\).
### _Numerical Results on Public Datasets_
#### VI-B1 Classification of Lung Nodules
First, we evaluate the model performance of all methods, and find that _pFedNet_ successfully beats the other existing methods. We test the personalized model at each client, collect all local test accuracies, and then compute their average as the final test accuracy. As illustrated in Table II, _pFedNet_ achieves the best performance, and enjoys more than \(3\%\) gains of test accuracy over the other methods in most cases. Additionally, we vary \(\lambda\) to generate different personalized models. As shown in Figure 7, a small \(\lambda\) tends to yield a more personalized model, which can adapt to unbalanced data and obtains higher test accuracy. However, a tiny \(\lambda\) with \(\lambda<10^{-3}\) may falsely view some noise in the data as the personalized component, which leads to an over-personalized model and decreases the model performance. It seems that \(\lambda=0.01\) is a good choice since most of the unbalanced settings achieve their best performance with it. Moreover, we find that the test accuracy is sensitive to \(\lambda\): the more unbalanced the data, the more sensitive it is. Specifically, comparing the unbalanced data under different settings of \(\delta\), we find that the test accuracy decreases much more significantly with the increase of \(\lambda\) for a larger \(\delta\).
Second, the proposed method, namely _CER_, successfully improves communication efficiency by reducing the model size effectively. Figure 8 shows the superiority in communication efficiency. It is a good complement to existing methods, and can promote their performance effectively. We choose one of the widely used model compression methods, that is _STC_ [61], to show the benefits of the proposed method. As illustrated in Figure 8(a), _STC_ without _CER_, that is \(\gamma=0\), achieves up to a \(146\times\) compression ratio at the client and a \(34\times\) compression ratio at the server, respectively. After being equipped with _CER_ (\(\gamma=0.1\)), that is _CER+STC_, it successfully achieves up to a \(172\times\) compression ratio at the client and a \(61\times\) compression ratio at the server. The advantage becomes more significant with the increase of \(\gamma\). As we have claimed, the communication efficiency may be achieved at the sacrifice of model performance. Figure 8(b) demonstrates that the test accuracy climbs up fast, and the gap caused by _CER_ is insignificant. That is, the superiority in communication efficiency can be achieved without significant harm to the test accuracy. Specifically, as shown in Figure 8(c), _CER_ reduces the size of the client's model effectively, which becomes more and more significant with the increase of \(\gamma\). Figure 8(d) shows that _CER_ can improve the communication efficiency at the client by reducing the model size by \(7\%\sim 32\%\) more than _STC_. The benefit becomes more significant when delivering the personalized model to every client. Figure 8(e) shows that _CER_ can promote the performance of _STC_ prominently at the server, and obtains much more noticeable advantages in communication efficiency than at the client. Similarly, Figure 8(f) shows that the communication efficiency at the server can be improved by reducing the model size by \(45\%\sim 60\%\) more than _STC_. Therefore, those numerical results validate that _CER_ makes a good tradeoff between accuracy and communication efficiency.
#### VI-B2 Segmentation of Brain Tumors

_STC_ without _CER_, that is the case of \(\gamma=0\), enjoys more than a \(7\times\) compression ratio of model size at the server, and a \(28\times\) compression ratio at the client. Although it is effective at compressing the model, the proposed method _CER_ successfully achieves more than \(9\times\) and \(52\times\) compression ratios for the server and client when choosing \(\gamma=50\), respectively. Its advantage becomes significant with the increase of \(\gamma\), achieving more than \(17\times\) and \(92\times\) compression ratios for the server and client. The reason is that _CER_ encourages the local update of the model to own a clustering structure, which is more suitable for compression by existing methods. According to Figure 10(b), we observe that _CER_ with a large \(\gamma\) is indeed harmful to the model training, which means the model may need more time to achieve the optimum when using a large \(\gamma\). It validates that _CER_ makes a tradeoff between communication efficiency and model performance. We suggest adopting a dynamic strategy for choosing \(\gamma\) during model learning to obtain more gains in communication efficiency without much sacrifice of model performance. Figure 10(c) shows that _CER_ can be used together with _STC_, and achieves much more significant compression. The superiority becomes significant with the increase of \(\gamma\). According to Figure 10(d), we observe that _CER_ gains a \(28\%\sim 60\%\) improvement of communication efficiency at the client. Similarly, we find a \(23\%\sim 57\%\) improvement of communication efficiency at the server according to Figures 10(e) and 10(f). It validates once more that _CER_ can successfully find a good tradeoff between model performance and communication efficiency.
Finally, Figure 11 illustrates the true region of the target object (the red line) and some examples of the segmentation region (the blue line) yielded by _pFedNet_ and other methods for the setting of 'lack #0'. As we can see, _pFedNet_ captures details of the region of interest more accurately than the others.
### _Numerical Results on Private Datasets_
#### VI-C1 Prediction of Clinical Risk
We evaluate _pFedNet_ by using the LR model on three structured medical datasets. As illustrated in Tables V-VII, _pFedNet_ outperforms other methods on test accuracy in most settings of \(\delta\). It shows that the proposed model _pFedNet_ works well for tabular data, and validates its superiority in model performance again. Additionally, we evaluate the test accuracy by varying \(\lambda\). According to Figure 12, we observe that _pFedNet_ achieves significantly high accuracy in the balanced setting of the data federation, that is \(\delta=1\). Specifically, the accuracy seems to increase slightly with a large \(\lambda\), but may decrease when \(\lambda\) is too large. The reason is that \(\lambda\) controls the tradeoff between the personalized model and the general model, and it should be neither too large nor too small. Specifically, \(\lambda=0.1\) is recommended for this task since it achieves the best performance in most settings of \(\delta\). Since every LR model in this task only has \(58\sim 77\) features, which leads to a tiny data-transmission workload, the communication efficiency is not evaluated here.
## VII Conclusion
We propose a new formulation of personalized federated learning, which adapts well to heterogeneous medical data and achieves better performance than existing methods. To improve the communication efficiency, we further develop a communication-efficient regularizer, which can decrease the communication workload effectively. Additionally, we propose a new optimization framework to update personalized models, which reduces the computation cost significantly. Extensive empirical studies have been conducted to verify the effectiveness of the proposed method. In the future, we will explore and analyze the dynamics of medical data, and try to develop an adaptive version of the proposed model to capture such dynamics.
## Acknowledgment
This work was supported by the funding Grants No. 22-TQ23-14-ZD-01-001 and No. 145BQ090003000X03. Zongren Li and Qin Zhong provided help with collecting medical datasets and related papers. Hebin Che provided help with the processing of medical data. Mingming Jiang gave advice on English writing. We sincerely thank them for their kind help.
|
2303.11325 | GeoMIM: Towards Better 3D Knowledge Transfer via Masked Image Modeling
for Multi-view 3D Understanding | Multi-view camera-based 3D detection is a challenging problem in computer
vision. Recent works leverage a pretrained LiDAR detection model to transfer
knowledge to a camera-based student network. However, we argue that there is a
major domain gap between the LiDAR BEV features and the camera-based BEV
features, as they have different characteristics and are derived from different
sources. In this paper, we propose Geometry Enhanced Masked Image Modeling
(GeoMIM) to transfer the knowledge of the LiDAR model in a pretrain-finetune
paradigm for improving the multi-view camera-based 3D detection. GeoMIM is a
multi-camera vision transformer with Cross-View Attention (CVA) blocks that
uses LiDAR BEV features encoded by the pretrained BEV model as learning
targets. During pretraining, GeoMIM's decoder has a semantic branch completing
dense perspective-view features and the other geometry branch reconstructing
dense perspective-view depth maps. The depth branch is designed to be
camera-aware by inputting the camera's parameters for better transfer
capability. Extensive results demonstrate that GeoMIM outperforms existing
methods on nuScenes benchmark, achieving state-of-the-art performance for
camera-based 3D object detection and 3D segmentation. Code and pretrained
models are available at https://github.com/Sense-X/GeoMIM. | Jihao Liu, Tai Wang, Boxiao Liu, Qihang Zhang, Yu Liu, Hongsheng Li | 2023-03-20T17:59:03Z | http://arxiv.org/abs/2303.11325v2 | # Towards Better 3D Knowledge Transfer via Masked Image Modeling for Multi-view 3D Understanding
###### Abstract
Multi-view camera-based 3D detection is a challenging problem in computer vision. Recent works leverage a pre-trained LiDAR detection model to transfer knowledge to a camera-based student network. However, we argue that there is a major domain gap between the LiDAR BEV features and the camera-based BEV features, as they have different characteristics and are derived from different sources. In this paper, we propose Geometry Enhanced Masked Image Modeling (GeoMIM) to transfer the knowledge of the LiDAR model in a pretrain-finetune paradigm for improving the multi-view camera-based 3D detection. GeoMIM is a multi-camera vision transformer with Cross-View Attention (CVA) blocks that uses LiDAR BEV features encoded by the pre-trained BEV model as learning targets. During pretraining, GeoMIM's decoder has a semantic branch completing dense perspective-view features and the other geometry branch reconstructing dense perspective-view depth maps. The depth branch is designed to be camera-aware by inputting the camera's parameters for better transfer capability. Extensive results demonstrate that GeoMIM outperforms existing methods on nuScenes benchmark, achieving state-of-the-art performance for camera-based 3D object detection and 3D segmentation.
## 1 Introduction
Multi-view camera-based 3D detection is an emerging critical problem in computer vision [20, 44, 45, 19, 28, 29, 36, 26, 31, 32]. To improve the detection performance, recent works [9, 27, 21] often choose to use a pretrained LiDAR model as the teacher and transfer its knowledge to a camera-based student network. Various techniques, such as LIGA-Stereo [13], CMKD [17], and BEVDistill [8], have been proposed to leverage the rich geometry information of the LiDAR model's BEV (bird's eye view) features.
Utilizing a pretrained LiDAR model to provide auxiliary supervision has become a widely adopted design that can enhance the performance of camera-based models. However, we contend that this design is not optimal due to a significant domain gap between the BEV features of the LiDAR model and those of the camera-based model. This domain gap arises from the 3D and sparse characteristics of LiDAR point clouds compared to the dense 2D images captured by the camera. Additionally, the LiDAR model's BEV features are grounded in ground truth depth, while those of the camera-based model are typically inferred from 2D images, a problem that is often ill-posed. We empirically demonstrate their domain gap with a pilot study as shown in Tab. 1. We find that utilizing a LiDAR teacher to provide auxiliary supervision can indeed improve an ImageNet-pretrained [33] camera-based model, but is unable to improve a stronger camera-based model initialized by recent powerful self-supervised pretraining. In other words, directly utilizing the pretrained LiDAR model to distill the final camera-based model might not be an optimal design and does not necessarily lead to performance gain.
To better take advantage of the LiDAR model, in this paper, we propose _Geometry Enhanced Masked Image Modeling (GeoMIM)_ to transfer the knowledge of the LiDAR model in a pretrain-finetune paradigm for improving the multi-view camera-based 3D detection. It is built upon a multi-camera vision transformer with _Cross-View Attention (CVA)_ blocks and enables perspective-view (PV) representation pretraining via BEV feature reconstruction from masked images. Specifically, during pretraining, we partition the training images into patches and feed a portion of them into the encoder following Masked Autoencoder [14]. Our GeoMIM decoder then uses these encoded visible to
\begin{table}
\begin{tabular}{l l|c c} \hline \hline
**Pretrain** & **Supervision** & **Finetune** &
\begin{tabular}{c} **Finetune +** \\ **LiDAR BEV** \\ \end{tabular} \\ \hline SL [33] & Classes & 40.6 & 41.7 \\ SSL [30] & RGB Pixels & 44.3 & 43.9 \\ GeoMIM & BEV Feature & **47.2** & 45.4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The effects of LiDAR BEV feature distillation on ImageNet-pretrained (SL), self-supervised (SSL), and our GeoMIM pretraining-finetuning settings for BEVDet in nuScenes 3D detection. Naively distilling LiDAR BEV features in finetuning introduces domain gaps and harms the performance when the pretrained model is powerful enough.
Our GeoMIM decoder then uses these encoded visible tokens to reconstruct the pretrained LiDAR model's BEV feature in the BEV space instead of the commonly used RGB pixels [47, 14, 30] or depth points [3] as in existing MAE frameworks. To achieve this PV to BEV reconstruction, we first devise two branches to _decouple_ the semantic and geometric parts, with one branch completing dense PV features and the other reconstructing the depth map. The dense PV features can then be projected into the BEV space with the depth distribution following Lift-Splat-Shoot (LSS) [37]. We further equip the two branches with the proposed CVA blocks in their intermediate layers to allow each patch to attend to tokens in other views. It enhances the decoder's capability of joint multi-view inference, which is especially critical for BEV feature reconstruction. Finally, the depth branch is designed to be _camera-aware_ with the additional encoding of cameras' parameters as input, making the pretrained GeoMIM better adapt to downstream tasks with different cameras.
To demonstrate the effectiveness of GeoMIM, we finetune the pretrained backbone to conduct multi-view camera-based 3D detection and 3D segmentation on the nuScenes [7] dataset. We achieve state-of-the-art results of 64.4 NDS (NuScenes Detection Score) and 70.5 mIoU (mean intersection over union) for 3D detection and segmentation on the NuScenes test set, which are 2.5% and 1.1% better than previously reported best results [36, 22]. Additionally, we verify that the backbone pretrained on nuScenes dataset can be successfully transferred to Waymo Open dataset [40], improving the mAP (mean average precision) of the ImageNet-initialized 3D detector by 6.9%.
## 2 Related Works
**Masked Image Modeling** Inspired by BERT [11] for Masked Language Modeling, Masked Image Modeling (MIM) has become a popular pretext task for visual representation learning [6, 14, 2, 46, 1, 4, 51, 3, 49]. MIM aims to reconstruct the masked tokens from a corrupted input. SimMIM [47] points out that raw pixel values of the randomly masked patches are a good reconstruction target and a lightweight prediction head is sufficient for pretraining. Different from SimMIM, MAE [14] only takes the visible patches as the input of the encoder. Mask tokens are added in the middle of the encoder and the decoder. BEiT [6] utilizes a pretrained discrete VAE (dVAE) [39, 38] as the tokenizer. PeCo [12] proposed that applying a perceptual similarity loss in the training of the dVAE can drive the tokenizer to generate better semantic visual tokens, which helps pretraining. In contrast to those works, our GeoMIM utilizes a geometry-rich LiDAR model and transfers its knowledge via MIM pretraining, aiming to improve the multi-view camera-based 3D models.
**Multi-view camera-based 3D detection** The field of camera-based 3D object detection has seen significant progress in recent years [44, 45, 29, 20, 28, 26, 36, 50]. FCOS3D [44] proposed a fully convolutional single-stage detector for monocular 3D object detection. DETR3D [45] extends the DETR framework to the 3D domain, and proposes a framework for end-to-end 3D object detection. BEVFormer [29] combines BEV (bird's eye view) representation and transformer networks for 3D object detection. BEVDepth [28] focuses on accurately estimating the depth of objects in the BEV representation. Additionally, considering the promising performance of the LiDAR-based detectors, there are several papers that use a pretrained LiDAR detector for knowledge distillation [16]. LIGA-Stereo [13] proposes to mimic the LiDAR BEV features for training a camera-based detector. UVTR [27] represents different modalities in a unified manner and supports knowledge transfer with the voxel representations. More recent BEVDistill [8] and CMKD [17] not only use the LiDAR BEV features for knowledge distillation but also transfer the teacher's knowledge through sparse instance distillation and response-based distillation respectively. In comparison, we utilize the pretrained LiDAR model in a pretraining-finetuning paradigm to avoid the LiDAR-camera BEV domain gap.
## 3 Method
Employing a pretrained LiDAR-based detection model to provide auxiliary learning guidance to train camera-based 3D understanding models has shown promising results in recent years [13, 8, 9, 27, 17]. However, because of the domain gap between the LiDAR and camera modalities, we observe that when a camera-based model is already strong, directly supervising it with the LiDAR teacher fails to improve the camera-based model as shown in Tab. 1.
To address this problem, we propose GeoMIM to better transfer the LiDAR model's knowledge to the camera-based model in a pretrain-finetune paradigm. GeoMIM pretrains a multi-view camera-based model via Masked Image Modeling (MIM) [47]. Unlike existing 2D MAE works [14, 30], we project the semantic features to the BEV (bird's eye view) space and use the LiDAR BEV features in the 3D space as the reconstruction targets for pretraining. The pretrained LiDAR model is only used in the pretraining stage, and is discarded in the finetuning stage to avoid introducing the LiDAR-camera BEV domain gap. We illustrate the proposed GeoMIM in Fig. 1.
**Masking and Encoder** Given the multi-view input images \(X=\{x_{i}\in\mathbb{R}^{3\times H\times W},i=1,2,\ldots,N\}\) where \(N\), \(H\), \(W\) are the number of views, image height, and width, we randomly mask a proportion of input image patches (tokens) and use a Swin Transformer [33] as the encoder to encode the visible tokens. The encoded representations, \(F^{v}\in\mathbb{R}^{N\times C\times L}\) where \(C\) and \(L\) denote the number of dimensions and the number of visible tokens, are then filled with a shared mask token \([\mathrm{M}]\in\mathbb{R}^{C}\) at the masked locations and further processed by the decoder for reconstruction.
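A minimal sketch of the per-view random masking step (NumPy; the patch size, grid resolution, and mask ratio below are assumptions for illustration):

```python
import numpy as np

def random_mask_tokens(num_views, num_tokens, mask_ratio=0.5, rng=None):
    """Return a boolean mask of shape (num_views, num_tokens); True marks a masked patch."""
    rng = np.random.default_rng() if rng is None else rng
    num_masked = int(num_tokens * mask_ratio)
    mask = np.zeros((num_views, num_tokens), dtype=bool)
    for v in range(num_views):
        mask[v, rng.permutation(num_tokens)[:num_masked]] = True
    return mask

# Example: 6 camera views and a 16 x 44 patch grid (a 256 x 704 image with 16 x 16 patches).
mask = random_mask_tokens(num_views=6, num_tokens=16 * 44, mask_ratio=0.5)
visible_counts = (~mask).sum(axis=1)   # number of tokens fed to the encoder per view
```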
**GeoMIM Decoder** To transfer the rich geometry knowledge of a pretrained LiDAR detector to our camera-based model, we jointly project the multi-view semantic features according to their estimated depth maps to the BEV space and use the same scene's LiDAR BEV features as the reconstruction targets. Specifically, our GeoMIM uses two _decoupled_ decoders, each of which consists of 8 Transformer [41] blocks. The semantic decoder \(\mathrm{D_{sem}}\) reconstructs the dense camera-view semantic features \(F^{s}\in\mathbb{R}^{N\times C\times\frac{H}{16}\times\frac{W}{16}}\) of the \(N\) camera views and the other geometry decoder \(\mathrm{D_{geo}}\) predicts dense camera-view depth maps \(D\in\mathbb{R}^{N\times B\times\frac{H}{16}\times\frac{W}{16}}\) of the \(N\) camera views, where \(B\) denotes the number of depth bins. The depth map and semantic feature can be expressed as
\[D=\mathrm{D_{geo}}(F^{v},[\mathrm{M}]),\quad F^{s}=\mathrm{D_{sem}}(F^{v},[ \mathrm{M}]). \tag{1}\]
We can then obtain the camera BEV features \(F^{I}_{BEV}\) by jointly projecting the multi-view semantic features to the BEV space with the Lift-Splat-Shoot (LSS) [37] operation according to the predicted dense depth maps,
\[F^{I}_{BEV}\in\mathbb{R}^{C\times N_{x}\times N_{y}}=\mathrm{LSS}(F^{s},D), \tag{2}\]
where \(N_{x}\), \(N_{y}\) are the numbers of bins in the \(x\) and \(y\) axis of the BEV feature maps respectively. Empirically, the two decoders share the first half of the Transformer blocks for efficiency.
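The projection of Eq. 2 can be summarized as: weight each pixel's semantic feature by its predicted depth distribution ("lift"), then sum all frustum points that fall into the same BEV cell ("splat"). The NumPy sketch below illustrates only this pooling step and assumes the mapping from each (view, depth bin, pixel) to a flat BEV cell index has already been precomputed from the camera geometry; it is a simplified stand-in, not the actual LSS implementation.

```python
import numpy as np

def splat_to_bev(feat, depth, bev_index, Nx, Ny):
    """Simplified Lift-Splat pooling.

    feat:      (N, C, H, W) dense per-view semantic features
    depth:     (N, B, H, W) per-pixel depth distributions over B bins
    bev_index: (N, B, H, W) precomputed flat BEV cell index per frustum point (-1 = outside)
    """
    N, C, H, W = feat.shape
    B = depth.shape[1]
    bev = np.zeros((Nx * Ny, C))
    for n in range(N):
        # "Lift": outer product of depth scores and features -> (B, C, H, W).
        frustum = depth[n, :, None, :, :] * feat[n, None, :, :, :]
        for b in range(B):
            idx = bev_index[n, b].ravel()
            valid = idx >= 0
            vals = frustum[b].reshape(C, -1).T        # (H*W, C)
            np.add.at(bev, idx[valid], vals[valid])   # "Splat": sum-pool into BEV cells
    return bev.T.reshape(C, Nx, Ny)
```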
Unlike existing works that separately process the multi-view input images, we propose a novel _Cross-View Attention (CVA)_ block to model the interaction across different views to better reconstruct the LiDAR BEV features from input images. Our intuition is that as the multi-view images naturally overlap, proper interaction across views is beneficial to align those images and better infer the LiDAR BEV features. Instead of explicitly using the epipolar lines to associate pixels across the multi-view images, we partition the camera-view tokens of the multiple views into groups according to their row indices and only allow the tokens belonging to the same row of the \(\frac{1}{16}\) input resolution to interact with each other. The interaction is modeled by the self-attention operation [41]. Notably, our proposed CVA has linear computational complexity with respect to the input image size and is therefore much more efficient than global self-attention. We illustrate the proposed CVA in Fig. 2. We use the CVA block as the 2nd and 6th attention blocks of the decoder. Note that we do not add it to the backbone and no extra computation is introduced when finetuning the encoder.
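The row-grouped interaction of CVA can be sketched as follows (NumPy, single head, learned projections omitted for brevity; this only illustrates the grouping pattern and is not the actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(tokens):
    """tokens: (N_views, H, W, C) decoder tokens on the 1/16-resolution grid.

    Tokens that share a row index attend to each other across all views, so each
    group has N_views * W tokens and the cost is linear in the image size.
    """
    N, H, W, C = tokens.shape
    out = np.empty_like(tokens)
    for h in range(H):
        group = tokens[:, h].reshape(N * W, C)         # one row, all views
        attn = softmax(group @ group.T / np.sqrt(C))   # (N*W, N*W) attention weights
        out[:, h] = (attn @ group).reshape(N, W, C)
    return out
```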
Accurately reconstructing depth with the geometry decoder implicitly requires the decoder to infer the camera's intrinsic parameters, which is difficult to generalize to an unseen dataset as the data may be collected with different
Figure 1: Overview of GeoMIM. For pretraining, the multi-view images are randomly masked for a proportion of image tokens, and only the visible tokens are processed by the encoder. Right before decoding, the token embeddings are filled with mask tokens for separately decoding dense camera-view semantic features and depth maps, which are then projected to BEV space for reconstructing the LiDAR BEV features. After pretraining, only the encoder is finetuned on downstream tasks.
Figure 2: Cross-view attention block. We partition the multi-view inputs into multiple groups according to their row indices, and perform self-attention within each group.
cameras. To achieve better transferability across different downstream tasks, we encode the camera's intrinsic and extrinsic parameters using a linear projection layer and use the resulting features to scale the geometry decoder's feature using the Squeeze-and-Excitation module [18]. Importantly, we do not require the camera's information when finetuning on downstream tasks since only the decoder uses the camera's information during pretraining. We demonstrate that the camera-aware depth reconstruction branch leads to better performance when finetuning on tasks that differ from the pretraining dataset.
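A sketch of the camera-aware modulation (NumPy). The flattening of the intrinsics/extrinsics into a 25-dimensional vector and the specific gating weights are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def camera_aware_scale(depth_tokens, intrinsics, extrinsics, W_embed, W_se):
    """Scale the geometry decoder's tokens with an SE-style gate built from camera parameters.

    depth_tokens: (L, C) tokens of one view in the geometry decoder
    intrinsics:   (3, 3) camera intrinsic matrix;  extrinsics: (4, 4) camera pose
    W_embed:      (C, 25) linear projection of the flattened camera parameters
    W_se:         (C, C)  gating weights of the squeeze-and-excitation style module
    """
    cam = np.concatenate([intrinsics.ravel(), extrinsics.ravel()])  # 9 + 16 = 25 values
    embed = W_embed @ cam                                           # (C,) camera embedding
    gate = sigmoid(W_se @ embed)                                    # per-channel gate in (0, 1)
    return depth_tokens * gate[None, :]
```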
**Loss** We use the mean squared error (MSE) loss between the projected camera BEV features and the pretrained LiDAR BEV features for pretraining,
\[\mathcal{L}_{rec}=\|(F^{I}_{BEV}-F^{L}_{BEV})\|_{2}^{2}, \tag{3}\]
where \(F^{L}_{BEV}\in\mathbb{R}^{C\times N_{x}\times N_{y}}\) denotes the pretrained LiDAR model's BEV features. In addition, we incorporate a depth prediction task. Following prior arts [28], we use the ground truth discrete depth \(D_{GT}\) derived from the LiDAR point cloud and calculate the binary cross entropy (BCE) loss as the depth loss,
\[\mathcal{L}_{depth}=\mathrm{BCE}(D,D_{GT}). \tag{4}\]
The overall loss can be expressed as
\[\mathcal{L}=\mathcal{L}_{rec}+\alpha\mathcal{L}_{depth}, \tag{5}\]
where \(\alpha\) balances the two loss terms, which is set as 0.01 experimentally. Empirically, we observe that the depth loss can enhance the convergence speed, which is crucial for pretraining large models.
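A compact sketch of the full pretraining objective of Eqs. 3-5 (NumPy; \(\alpha=0.01\) as stated above, everything else illustrative):

```python
import numpy as np

def pretraining_loss(bev_pred, bev_lidar, depth_logits, depth_gt, alpha=0.01):
    """L = MSE(F^I_BEV, F^L_BEV) + alpha * BCE(D, D_GT)."""
    rec = np.mean((bev_pred - bev_lidar) ** 2)            # Eq. 3: BEV reconstruction loss
    p = 1.0 / (1.0 + np.exp(-depth_logits))               # per-bin depth probabilities
    eps = 1e-7
    bce = -np.mean(depth_gt * np.log(p + eps) + (1.0 - depth_gt) * np.log(1.0 - p + eps))
    return rec + alpha * bce                               # Eq. 5
```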
After pretraining, we discard the decoders and add a task-specific head on top of the encoder for downstream task finetuning. During finetuning, we only utilize ground-truth supervision and abstain from utilizing the LiDAR model to avoid introducing the aforementioned domain gap.
**Comparison with 2D MAE** Compared to existing 2D MAE models [14, 47, 30], our proposed GeoMIM's pretraining has two distinct characteristics: (1) We employ a geometry-rich LiDAR model and transfer its high-level knowledge in the BEV space via MIM pretraining, which can effectively enhance the geometry perception capability of the camera-based model. In contrast, the original MAE [14] reconstructs image pixels and could work well for 2D downstream perception tasks, but is found to be less effective for 3D perception. The reason is that the autonomous driving dataset, e.g., nuScenes [7], is much less diverse than MAE's pretraining dataset ImageNet-1K [10]. As a result, employing image pixel reconstruction as the pretext task is hard to learn high-quality representations. (2) Contrary to MAE which only calculates the reconstruction loss in the masked tokens, we take all tokens into consideration in our loss. This is because the learning targets we use are from a different modality and in a different geometric space. We can take full advantage of the LiDAR model by using all tokens to calculate the loss. For the masked locations, the objective is a prediction task while for the unmasked locations, it is similar to a distillation task.
## 4 Experiment Setups
To demonstrate the effectiveness of GeoMIM, we conduct experiments by pretraining Swin Transformer [33] backbones with GeoMIM and then finetuning it on various downstream tasks. These tasks include multi-view camera-based 3D detection on nuScenes [7] and Open Waymo [40] datasets, camera-based 3D semantic segmentation on nuScenes dataset, and 2D detection on nuImages dataset.
**Dataset and Evaluation Metrics** We use the large-scale nuScenes dataset for pretraining and finetuning, which contains 750, 150, and 150 scenes for training, validation, and testing, respectively. Each scene has 6 camera images and LiDAR point cloud covering 360\({}^{\circ}\). Following the official evaluation metrics, we primarily report NuScenes Detection Score (NDS) and mean Average Precision (mAP) for comparison. We also report other five metrics, including ATE, ASE, AOE, AVE, and AAE, to measure translation, scale, orientation, velocity, and attribute errors, respectively, for a more detailed diagnosis.
We also evaluate the transferability of GeoMIM by finetuning the pretrained backbone on the Open Waymo and nuImages datasets. We report LET-3D-APL [23] and LET-3D-AP following the latest official guidelines for comparison. We report Mean Average Precision (mAP) of box and mask on nuImages dataset for 2D object detection and instance segmentation.
**Pretraining** We pretrain the Swin Transformer backbones on the training split of the nuScenes dataset with multi-view images as input. By default, we pretrain for 50 epochs with an input size of \(256\times 704\). For ablation studies, we pretrain for 6 epochs unless otherwise specified. We use a pretrained TransFusion-L [5] LiDAR model to provide the reconstruction targets. We randomly mask the multi-view input images with a mask ratio of 50%. We use AdamW [34] optimizer with a learning rate of \(2\times 10^{-4}\) and weight decay of 0.01. The learning rate is linearly warmed-up for 500 iterations and cosine decayed to 0. We apply the data augmentation strategy in BEVDet [20] to augment the input images and do not use augmentations for the LiDAR inputs. We utilize the Swin-Base and -Large backbones for pretraining, initializing the backbone with self-supervised ImageNet-pretraining [30].
**Finetuning** We keep the pretrained encoder, abandon the decoders, and adopt state-of-the-art frameworks for finetuning. We mainly evaluate the performance of the finetuned models on the 3D detection task on the nuScenes dataset. We also assess the transferability of GeoMIM on other downstream tasks.
For the 3D detection on the nuScenes dataset, we utilize the BEVDepth [28] framework with an input size of \(512\times 1408\) for comparison with other state-of-the-art approaches. For ablation studies, we use the BEVDet [20] framework with an input size of \(256\times 704\). For 3D detection on the Open Waymo dataset, the DfM [43, 42] framework is utilized. For 3D segmentation on the nuScenes dataset, we utilize the recent TPVFormer [22] for finetuning. We use MaskRCNN [15] for object detection and instance segmentation on nuImages. We follow those frameworks' default settings for finetuning, and include the detailed hyperparameters settings in the supplementary.
## 5 Main Results
In this section, we compare our GeoMIM to prior arts on various benchmarks. We first conduct comparisons between GeoMIM and previous pretraining approaches in Sec. 5.1. We then compare our best results with state-of-the-art results on the nuScenes 3D detection benchmark in Sec. 5.2. To show the transferability of GeoMIM, we present the transfer learning results on other 3 benchmarks in Sec. 5.3. We finally show the quantitative results in Sec. 7.
### Comparison with previous camera-based pretraining methods
We compare our pretraining method, GeoMIM, with previous pretraining approaches to demonstrate its effectiveness in multi-view camera-based 3D detection. Four pretraining approaches for camera-based models are utilized, including the supervised pretraining on ImageNet-1K [33], the contrastive approach EsViT [25], the multi-modal approach UniCL [48], and masked-image-modeling approach MixMAE [30]. Using the BEVDet framework with input size of \(256\times 704\), we finetune the pretrained Swin-B [33] models on nuScenes [7] and compare their performances in Tab. 4. Our approach outperforms other compared approaches in terms of all reported metrics, demonstrating the effectiveness of our pretraining method.
Particularly, our approach achieves 0.472 NDS (NuScenes Detection Score), 2.9% better than the self-supervised pretraining. Notably, our approach is much better at predicting translation, demonstrating a 3.3% improvement in mATE, which shows that our geometry-enhanced pretraining can help more with localization. Surprisingly, while the contrastive or multi-modal approaches perform much better than the supervised ImageNet pretraining on various 2D visual tasks, they fail to improve the ImageNet-supervised pretraining on the 3D detection task.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Pretrain** & **NDS\(\uparrow\)** & **mAP\(\uparrow\)** & **mATE\(\downarrow\)** & **mADE\(\downarrow\)** \\ \hline Supervised [33] & 0.406 & 0.326 & 0.665 & 0.546 \\ EsViT [25] & 0.389 & 0.305 & 0.699 & 0.516 \\ UniCL [48] & 0.396 & 0.314 & 0.694 & 0.596 \\ MixMAE [30] & 0.443 & 0.374 & 0.647 & 0.418 \\ GeoMIM & **0.472** & **0.397** & **0.614** & **0.395** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison with previous pretraining methods on nuScenes with BEVDet.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c c c c} \hline \hline
**Framework** & **Pretrain** & **Backbone** & **Image Size** & **CBGS** & **mAP\(\uparrow\)** & **NDS\(\uparrow\)** & **mATE\(\downarrow\)** & **mASE\(\downarrow\)** & **mADE\(\downarrow\)** & **mAVE\(\downarrow\)** & **mAAE\(\downarrow\)** \\ \hline DETR3D [45] & & & \(900\times 1600\) & ✓ & 0.349 & 0.434 & 0.716 & 0.268 & 0.379 & 0.842 & 0.200 \\ BEVFormer [29] & \multirow{2}{*}{FCOS3D} & \multirow{2}{*}{R101-DCN} & \(900\times 1600\) & ✗ & 0.416 & 0.517 & 0.673 & 0.274 & 0.372 & 0.394 & 0.198 \\ UVTR [27] & & & \(900\times 1600\) & ✗ & 0.379 & 0.483 & 0.731 & 0.267 & 0.350 & 0.510 & 0.200 \\ PolarFormer [24] & & & \(900\times 1600\) & ✗ & 0.432 & 0.528 & 0.648 & 0.270 & 0.348 & 0.409 & 0.201 \\ \hline PETR [31] & \multirow{2}{*}{ImageNet} & \multirow{2}{*}{R101} & \(512\times 1408\) & ✓ & 0.357 & 0.421 & 0.710 & 0.270 & 0.490 & 0.885 & 0.224 \\ PETRv2 [32] & & & \(640\times 1600\) & ✓ & 0.421 & 0.524 & 0.681 & 0.267 & 0.357 & 0.377 & 0.186 \\ SOLFOusion [36] & & \(512\times 1408\) & ✓ & 0.483 & 0.582 & 0.503 & 0.264 & 0.381 & **0.246** & 0.207 \\ \hline \hline \multirow{2}{*}{BEVDepth [28]} & \multirow{2}{*}{ImageNet} & \multirow{2}{*}{ConvNeXt-B} & \(512\times 1408\) & ✓ & 0.462 & 0.558 & - & - & - & - & - \\ BEVSteero [26] & & & \(512\times 1408\) & ✓ & 0.478 & 0.575 & - & - & - & - & - \\ \hline BEVDet4D [19] & \multirow{2}{*}{ImageNet} & \multirow{2}{*}{Swin-B} & \(640\times 1600\) & ✓ & 0.421 & 0.545 & 0.579 & **0.258** & **0.329** & 0.301 & **0.191** \\ BEVDepth\({}^{\dagger}\) & & & \(512\times 1408\) & ✓ & 0.466 & 0.555 & 0.531 & 0.264 & 0.489 & 0.293 & 0.200 \\ \hline BEVDepth & GeoMIM & Swin-B & \(512\times 1408\) & ✓ & **0.523** & **0.605** & **0.470** & 0.260 & 0.377 & 0.254 & 0.195 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison on nuScenes val set. \(\dagger\) denotes our implementation with the official code.
\begin{table}
\begin{tabular}{l|c c|c c|c} \hline \hline
**Pretrain** & \begin{tabular}{c} **3D-Segmentation** \\ \(\text{mIoU}^{\text{val}}\) & \(\text{mIoU}^{\text{test}}\) \\ \end{tabular} & \begin{tabular}{c} **Waymo** \\ LET-3D APL \\ \end{tabular} & \begin{tabular}{c} **mulmulticolumn{2}{c}{**nullmages**} \\ \(\text{LET-3D}\) AP \\ \end{tabular} &
\begin{tabular}{c} **\(\text{AP}^{\text{box}}\) \\ \(\text{AP}^{\text{mask}}\) \\ \end{tabular} \\ \hline & TPVFormer [22] & \multicolumn{2}{c}{DfM [43]} & \multicolumn{2}{c}{Mask-RCNN [15]} \\ \hline Supervised [33] & 66.4 & 68.3 & 31.5 & 44.6 & 49.0 & 41.3 \\ Self-supervised [30] & 65.0 & 66.3 & 34.8 & 49.5 & 51.5 & 41.8 \\ GeoMIM & **68.9** & **70.5** & **37.8** & **52.5** & **52.9** & **44.4** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Transfer learning results on 3D-segmentation with TPVFormer (left), Open Waymo 3D detection with DfM (middle), and nuImages object detection and segmentation with Mask-RCNN (right).
### Comparison with state-of-the-art results
Tab. 2 shows the comparison of our approach with state-of-the-art methods on nuScenes val set. Our approach achieves state-of-the-art results of 0.605 NDS and 0.523 mAP, demonstrating substantial 2.3% NDS and 4.0% mAP improvements over SOLOFusion [36]. Particularly, the most improvement of NDS comes from the mATE, which improves SOLOFusion by 2.7%. Compared to BEVDepth [28] using the same Swin-B backbone, we improve the NDS and mAP by 5.0% and 5.7% respectively.
On the test set, our single model achieves 64.4% NDS and 56.1% mAP without using extra data and test-time augmentation, which are 2.5% and 2.1% better than the previous state-of-the-art results. Notably, this model performs best among all reported metrics. Compared to BEVStereo [26], the most significant improvement of NDS comes from the mAVE (10.2%), which shows that our geometry-enhanced pretraining is not only better for localization but also improves the velocity estimation. Compared to SOLOFusion, we largely improve the mATE metric (5.3%), showing that our pretraining is beneficial for localization.
We also show that our pretraining is scalable in terms of model size. In particular, on the test set, we obtain 1.8% NDS and 1.5% mAP gains by using the larger Swin-L [33] backbone.
### Transfer to various 3D understanding tasks
In this section, we evaluate the transferability of our approach to other datasets and tasks with different frameworks. We use three benchmarks, 3D segmentation on nuScenes dataset [7], 3D detection on Open Waymo dataset [40], and object detection and instance segmentation on nuImages dataset.
As shown in Tab. 3, our approach achieves superior results on all three benchmarks, demonstrating the transferability of our pretraining method. Particularly, on the 3D segmentation task, our approach achieves 68.9% mIoU on the nuScenes val set, surpassing the ImageNet-supervised pretraining [33] results by a large margin. Note that, unlike the 3D detection task, the self-supervised pretraining [30] fails to improve the supervised pretraining because the segmentation task relies more on semantic understanding. In comparison, GeoMIM improves the ImageNet-supervised pretraining by 2.5% mIoU. On the nuScenes test set, we achieve state-of-the-art results, 1.1% mIoU better than the previous best camera-based results in TPVFormer [22]. Moreover, our pretrained backbone can also transfer to datasets that differ from that used in pretraining. On the Open Waymo detection benchmark, our pretraining improves the MixMAE [30] self-supervised pretrained model by 3.0%/3.0% on LET-3D APL/AP.
Apart from the 3D perception task, we show that our pretraining can also transfer to 2D object detection and instance segmentation tasks. As shown in Tab. 3 (right), GeoMIM improves the self-supervised pretraining by 1.4% AP\({}^{\text{box}}\) and 2.6% AP\({}^{\text{mask}}\).
## 6 Ablation Studies
In this section, we conduct ablation studies to evaluate the impact of different design choices on the performance of our proposed GeoMIM on the multi-view camera-based 3D detection task. Unless otherwise specified, we use the Swin-B [33] backbone and pretrain it for 6 epochs. We utilize the BEVDet [20] framework for finetuning the pretrained backbone and report the performance on the nuScenes val set [7]. The gray column indicates the final choice of GeoMIM.
**Pretraining epochs and pretraining data.** We explore the effect of pretraining epochs and pretraining data on GeoMIM. As shown in Tab. 6, we find that we can improve the mATE performance but degenerate the mAOE performance through 6 epochs of pretraining. Interestingly, if we pretrain for more epochs, mATE performance saturates but mAOE can be largely improved. Additionally, as shown in Tab. 7, the performance of all metrics gradually increases as we use more data for pretraining.
**Mask ratio.** We examine the effect of the mask ratio used in the masked image modeling training process on the performance. As shown in Tab. 8, we find that using a mask ratio of 0.5 performs best as a very high mask ratio causes
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c c c c c} \hline \hline
**Methods** & **Backbone** & **Image Size** & **Extra Data** & **TTA** & **mAP\(\uparrow\)** & **NDS\(\uparrow\)** & **mATE\(\downarrow\)** & **mASE\(\downarrow\)** & **mAOE\(\downarrow\)** & **mAVE\(\downarrow\)** & **mAAE** \\ \hline FCOS3D [44] & R101-DCN & \(900\times 1600\) & ✗ & ✓ & 0.358 & 0.428 & 0.690 & 0.249 & 0.452 & 1.434 & 0.124 \\ DETR3D [45] & V2-99 & \(900\times 1600\) & ✓ & ✓ & 0.412 & 0.479 & 0.641 & 0.255 & 0.394 & 0.845 & 0.133 \\ UVTR [27] & V2-99 & \(900\times 1600\) & ✓ & ✗ & 0.472 & 0.551 & 0.577 & 0.253 & 0.391 & 0.508 & 0.123 \\ BEVFormer [29] & V2-99 & \(900\times 1600\) & ✓ & ✗ & 0.481 & 0.569 & 0.582 & 0.256 & 0.375 & 0.378 & 0.126 \\ BEVDet4D [19] & Swin-B & \(900\times 1600\) & ✗ & ✓ & 0.451 & 0.569 & 0.511 & 0.241 & 0.386 & 0.301 & 0.121 \\ PolarFormer [24] & V2-99 & \(900\times 1600\) & ✓ & ✗ & 0.493 & 0.572 & 0.556 & 0.256 & 0.364 & 0.439 & 0.127 \\ PETRv2 [32] & GLOM-like & \(640\times 1600\) & ✗ & ✗ & 0.512 & 0.592 & 0.547 & 0.242 & 0.360 & 0.367 & 0.126 \\ BEVDepth [28] & ConvNeXt-B & \(640\times 1600\) & ✗ & ✗ & 0.520 & 0.609 & 0.445 & 0.243 & 0.352 & 0.347 & 0.127 \\ BEVStereo [26] & V2-99 & \(640\times 1600\) & ✓ & ✗ & 0.525 & 0.610 & 0.431 & 0.246 & 0.358 & 0.357 & 0.138 \\ SOLOFusion [36] & ConvNeXt-B & \(640\times 1600\) & ✗ & ✗ & 0.540 & 0.619 & 0.453 & 0.257 & 0.376 & 0.276 & 0.148 \\ BEVDistill [8] & ConvNeXt-B & \(640\times 1600\) & ✗ & ✗ & 0.498 & 0.594 & 0.472 & 0.247 & 0.378 & 0.326 & 0.125 \\ GeoMIM & Swin-B & \(512\times 1408\) & ✗ & ✗ & 0.547 & 0.626 & 0.413 & 0.241 & 0.421 & 0.272 & 0.127 \\ GeoMIM & Swin-L & \(512\times 1408\) & ✗ & ✗ & **0.561** & **0.644** & **0.400** & **0.238** & **0.348** & **0.255** & **0.120** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison on nuScenes test set. “Extra data” denotes depth pretraining. “TTA” denotes test-time augmentation.
the pretext task to be too hard, while a low mask ratio makes it too easy.
**Pretraining with distillation or other reconstruction targets.** We compare the performance of GeoMIM with different learning targets, including RGB pixels [14] and the voxelized LiDAR points. Moreover, we use the depth ground truth derived from the LiDAR point cloud for depth pretraining [35]. Following previous works, we also use the LiDAR BEV features for conducting distillation pretraining [13, 8, 17]. We initialize the backbone with MixMAE [30] self-supervised pretraining and use its results for comparison. All the pretraining experiments are conducted on the nuScenes dataset.
As shown in Tab. 9, we find the depth or distillation pretraining fails to improve the MixMAE results. Those two pretraining methods are beneficial for object localization to improve the mATE metric, but degenerate mAOE a lot. Using the RGB pixels as the reconstruction targets like MAE is also unable to improve the NDS. The main reason is that the nuScenes dataset is much less diverse than the widely used ImageNet-1K [10] dataset, and as a result, the model is easy to overfit the training data. Moreover, though the LiDAR points contain rich geometry information, we find that directly using the voxelized LiDAR points as the reconstruction targets also fails to improve the MixMAE results. As stated in Sec. 1, the LiDAR voxels are sparse and noisy. Using them as the reconstruction targets results in unstable pretraining. In comparison, we use a pretrained LiDAR detection model to extract more meaningful BEV features as the reconstruction targets, which can not only transfer the rich geometry information to the camera-based model but also avoid the noise problem of directly reconstructing LiDAR voxels.
**Ablation on the decoder designs.** We investigate the impact of the proposed decoder designs on the final performance. Apart from reporting results on the nuScenes dataset, we also report the performance on the Open Waymo dataset to show how the design choices affect the transferability of the pretraining.
As shown in Tab. 10, we find that using a decoupled decoder to separately decode the depth and semantic features can improve the NDS by 0.8%, compared to using one decoder to jointly decode them. The decoupled branches force the geometry branch to focus on inferring depth from geometry cues, which maximizes the utilization of the model capacity. Moreover, removing the Cross-View Attention (CVA) blocks results in a 0.5% performance drop due to the lack of cross-view interaction for better BEV feature inference. Additionally, we find that further removing the camera-aware design leads to a 0.8% LET-3D APL drop on the Open Waymo dataset. As the pretraining and finetuning might be conducted on different datasets, using camera-aware depth reconstruction is beneficial for the transferability of the pretraining.
**Ablation on backbone size.** We investigate the scalability of our pretraining method and show the results in Tab. 11. Our GeoMIM can be scaled up to the Swin-L model with 200M parameters. The pretrained Swin-L [33] improves over Swin-B on all reported metrics, especially mAOE (by 7.8%).
trees, etc. Furthermore, we visualize the reconstructed BEV features in Fig. 4. The reconstructed BEV features closely restore the LiDAR model's BEV features, including the road structures and the semantic features.
**Cross-view attention maps.** We visualize the attention maps of the Cross-View Attention (CVA) blocks in Fig. 5. Through the cross-view interaction, one view is able to attend to the semantic parts of other views.
**Convergence curve.** We show the convergence curves of different pretraining methods in Fig. 6. Our pretraining largely improves over the self-supervised [30] and supervised [33] pretraining in terms of convergence speed, matching the final results of self-supervised pretraining with only half of the iterations.
## 8 Conclusion
In this paper, we proposed a pretraining method, GeoMIM, for multi-view camera-based 3D detection. By leveraging the knowledge of a pretrained LiDAR model in a pretrain-finetune paradigm, GeoMIM aims to transfer its rich geometry knowledge to the camera-based model. Specifically, GeoMIM reconstructs BEV features from masked images via a novel decoder and a cross-view attention mechanism. We demonstrate that GeoMIM significantly outperforms existing state-of-the-art methods on the nuScenes dataset, achieving state-of-the-art results in both camera-based 3D detection and segmentation tasks. Moreover, we verify that the pretrained model can be transferred to the Waymo Open dataset, further showing its effectiveness and generality.
**Limitations** Despite the promising results, GeoMIM also has some limitations. First, GeoMIM requires a large amount of labeled data for pretraining, which may not be available in some applications. Second, GeoMIM relies on the quality of the LiDAR model's BEV features, which may not always be accurate or complete. Overall, while GeoMIM shows great potential, further research is needed to address these limitations and improve its applicability in a wider range of applications.
Figure 4: Reconstruction results on nuScenes val images. For each triplet, we show the reconstructed BEV features (left), LiDAR BEV features, and LiDAR point cloud (right) in BEV.
Figure 5: CVA blocks’ attention maps across different cameras. Our CVA enables the cameras at back to attend to the semantic parts in front views.
Figure 3: Example results on nuScenes val images. From top to bottom, the rows are the camera-view image, masked camera-view image, decoded semantic features, and decoded geometry features.
Figure 6: Performance curves of different pretraining methods. “SL” and “SSL” denote the ImageNet-supervised and MixMAE self-supervised pretraining respectively. |
2303.01136 | Effective Visualization and Analysis of Recommender Systems | Recommender system exists everywhere in the business world. From Goodreads to
TikTok, customers of internet products become more addicted to the products
thanks to the technology. Industrial practitioners focus on increasing the
technical accuracy of recommender systems while at same time balancing other
factors such as diversity and serendipity. In spite of the length of the
research and development history of recommender systems, there has been little
discussion on how to take advantage of visualization techniques to facilitate
the algorithmic design of the technology. In this paper, we use a series of
data analysis and visualization techniques such as Takens Embedding,
Determinantal Point Process and Social Network Analysis to help people develop
effective recommender systems by predicting intermediate computational cost and
output performance. Our work is pioneering in the field, as to our limited
knowledge, there have been few publications (if any) on visualization of
recommender systems. | Hao Wang | 2023-03-02T10:33:11Z | http://arxiv.org/abs/2303.01136v1 | # Effective Visualization and Analysis of Recommender Systems
###### Abstract
Recommender system exists everywhere in the business world. From Goodreads to TikTok, customers of internet products become more addicted to the products thanks to the technology. Industrial practitioners focus on increasing the technical accuracy of recommender systems while at same time balancing other factors such as diversity and serendipity. In spite of the length of the research and development history of recommender systems, there has been little discussion on how to take advantage of visualization techniques to facilitate the algorithmic design of the technology. In this paper, we use a series of data analysis and visualization techniques such as Takens Embedding, Determinantal Point Process and Social Network Analysis to help people develop effective recommender systems by predicting intermediate computational cost and output performance. Our work is pioneering in the field, as to our limited knowledge, there have been few publications (if any) on visualization of recommender systems.
recommender system, Takens Embedding, Determinantal Point Process, Social Network Analysis, visualization
## 1 Introduction
Recommender systems are ubiquitous in the internet industry. From the late 1980s to 2022, there has been a tremendous amount of investment in the field. Lately, the focus of recommender system research has shifted from increasing accuracy metrics to a more comprehensive set of goals. Since 2017, AI fairness [1][2][3] has become the new buzzword in the field of recommender system research. This does not mean technical accuracy metrics are no longer the main concern; it is simply a consequence of years of effort having already led to satisfactory results on the accuracy metrics.
Recommender systems have evolved through different stages of development. The earliest recommender system algorithms are techniques such as collaborative filtering [4], matrix factorization [5][6], factorization machines [7], learning to rank [8][9], and hybrid shallow models [10]. 2016 [11][12][13] witnessed a surge in deep learning approaches in the top recommender system research venue - ACM RecSys. As a result, companies big and small all started to take on the new technological trend and produced effective recommender systems such as DeepFM [14], Deep Interest Network [15] and so on.
As more and more people joined the contest to achieve better technical accuracy, research topics such as fairness and diversity have attracted more and more attention. Another very important application field is context-aware recommendation [16][17]. As sensors become more efficient and effective, there are more ways to collect contextual data for recommendation, which solves one of the biggest challenges of the topic - how to gather input data.
Improving the performance of recommender system algorithms is the daily routine of many companies' AI departments. Companies like Baidu use a procedure called bad case analysis to analyze the reasons behind poor performance and improve the product bit by bit. The need for effective tools for algorithmic analysis is urgent, but there have been very few publications on the topic.
Visualization-powered algorithmic analysis has been a hot topic in recent years. Researchers agree that effective visualization can greatly facilitate the development of big data products. However, data related to algorithmic analysis are heterogeneous. There does not exist a single elixir approach that can be used to explore the algorithmic structures of recommender systems.
For example, if we use Mean Absolute Error (MAE) as a metric, we need a method that can effectively analyze the 1-D data curve when the Stochastic Gradient Descent approach is used and the learning step is used as the x-axis variable. However, if we want to analyze the popularity bias effect, we need to visualize 2-D data. In the first case, we propose to use Takens Embedding [18] to elevate the data structures to higher dimensions. In the case of popularity bias effect analysis, we use heat maps to visualize the 2-D data. There are more examples of using different techniques to solve different problems in this paper.
We find in our data analysis and visualization paradigm, we are not only capable of analyzing and visualizing technical accuracy metrics such as MAE, but also exploring other metrics such as fairness and diversity. Our approach is not only effective, but also comprehensive. To the best of our limited knowledge, we are among the first to propose a comprehensive set of data analysis and visualization tools to facilitate the design of recommender systems.
## 2 Related Work
Recommender system research has different subfields. One of the techniques that is versatile across many subfields is matrix factorization. The classic matrix factorization is designed to increase the technical accuracy metrics. Later, a more generic framework named SVDFeature [19] was invented to incorporate feature engineering in user-item feature decomposition, which greatly enhances the number of application contexts for the technique.
There have been different ways to optimize matrix factorization. One notable case is Alternating Least Squares [20] - a technique integrated in many different software packages such as Spark MLLib. Alternating Least Squares uses an alternating optimization approach to produce fast and reliable results.
In 2020, MatRec [21] was proposed as a special case of SVDFeature to solve the popularity bias problem based on matrix factorization framework. One year later, Zipf Matrix Factorization [22] and KL-Mat [23] were proposed to reduce the popularity bias effect using regularization terms specially designed for the problem.
Matrix Factorization variants are also suitable for the cold-start problem. ZeroMat [24] was invented in 2021 as the first algorithm in history that solves the cold-start problem without extra input data. In 2022, another cold-start resolver named DotMat [25] was invented with superior performance and no extra input data. After experiments, the researchers discovered that not only could these two algorithms solve cold-start problems, but they could also alleviate the sparsity problem when used as a preprocessing step [26]. The hybrid models are named ZeroMat Hybrid and DotMat Hybrid, respectively.
Visualization of AI algorithms [27][28][29] has received a lot of attention in the field of InfoVis. Visualization of deep learning [30][31] models has been an extremely popular field over the years. However, recommender systems have not received enough emphasis. In this paper, we use the following techniques to analyze and visualize recommender systems' algorithmic data.
Takens embedding [18] was invented in 1981 to visualize low-dimensional data in higher dimensions. We use the technique to visualize our 1-D data series. Determinantal Point Process [32] inspired our analysis and visualization as well. Determinantal Point Process has been used to enhance diversity of recommender systems by Google Research and other institutions [33][34]. Heatmap [35] is also used by us, and it has been applied elsewhere in InfoVis research to produce effective and beautiful visualization.
We also resort to social network analysis and visualization [36][37] in our publication. Social network analysis decomposes complex data structures into simpler structures that can better summarize the data. We apply techniques such as community detection [38] to segment the data set and visualize similarity matrix using different kinds of graph visualization algorithms [39].
## 3 Recommender System Benchmark
Matrix Factorization is one of the most successful recommender system paradigms of the past decade. The main idea behind the framework is to approximate the user item rating matrix with dot products of user feature vectors and item feature vectors. Precisely speaking, the loss function of the paradigm is the squared error between the observed ratings and these dot products:

\[L=\sum_{i,j}\left(R_{ij}-U_{i}^{T}\cdot V_{j}\right)^{2}\]
Notice that the framework reduces the space complexity of the user item rating values from O(mn) to O(k(m+n)) where m is the number of users, n is the number of items, and k is the dimension of user / item feature vectors.
Common optimization techniques used to solve the matrix factorization loss function for the optimal user/item feature vectors include Stochastic Gradient Descent (SGD), Adagrad, etc. Among the multitudes of different techniques, SGD is the simplest technique which takes a random sample of data points in computation of an incomplete form of gradients.
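As a minimal illustration (not taken from any of the cited implementations), the following sketch trains the factorization with plain SGD on (user, item, rating) triples and records the per-epoch MAE curve analyzed later in this paper; the latent dimension, learning rate, and initialization scale are illustrative assumptions.

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, k=16, lr=0.01, epochs=20, seed=0):
    """Classic matrix factorization trained with SGD on the squared error.

    ratings: list of (user_index, item_index, rating) triples.
    Returns U (n_users x k), V (n_items x k) and the per-epoch MAE curve.
    """
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    mae_curve = []
    for _ in range(epochs):
        abs_errors = []
        for idx in rng.permutation(len(ratings)):
            u, i, r = ratings[idx]
            err = r - U[u] @ V[i]
            u_old = U[u].copy()      # keep the pre-update user vector for V's gradient
            U[u] += lr * err * V[i]
            V[i] += lr * err * u_old
            abs_errors.append(abs(err))
        mae_curve.append(float(np.mean(abs_errors)))  # one MAE value per epoch
    return U, V, mae_curve
```

The `mae_curve` returned here is exactly the kind of 1-D series that the visualization techniques below operate on.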
There have been a lot of variants of the matrix factorization approach. One notable contribution is SVDFeature, which models the user/item feature vectors as feature-based vectors. The framework is versatile and can be modified into different kinds of specializations that are widely applied in the industry.
One special example of matrix factorization is ZeroMat. ZeroMat is a milestone in the history of recommender systems. For the first time in the decades, ZeroMat solves the cold-start problem without extra input data. ZeroMat takes advantage of Zipf distribution and probabilistic matrix factorization, producing MAE results far superior to random placement, and only slightly inferior to the classic matrix factorization algorithm with historic user item rating values.
Another important invention is DotMat, which produces recommendation results comparable with classic matrix factorization. DotMat is also a cold-start resolution that needs no extra input data. ZeroMat performs well when small random samples are selected during SGD. DotMat is more versatile, as it works well with large as well as small random sample sizes.
ZeroMat and DotMat alleviate the sparsity problem as well as solving the cold-start issue. By using ZeroMat and DotMat as a preprocessing step to other algorithms, e.g., classic matrix factorization (ZeroMat Hybrid and DotMat Hybrid), researchers are able to achieve better MAE performance than single recommendation models.
One of the common heuristics used to tackle the cold start problem is random placement. Namely, we select random items for users when the user is new.
In our paper, we choose the following 7 recommender system algorithms for our data analysis and visualization: Classic Matrix Factorization, Random Placement, ZeroMat, DotMat, DotMat Hybrid, user-based collaborative filtering, and item-based collaborative filtering. Since ZeroMat and DotMat are comparatively new and less well known in the community, we elaborate on their algorithmic details in the following two sections.
## 4 ZeroMat
In this section, we focus on introducing a 2021 invention named ZeroMat. ZeroMat is the first cold-start algorithm in recommender system's history that solves the cold-start problem without using side information or extra data, in contrast with popular approaches such as meta learning.
ZeroMat assumes the user item rating follows the following distribution:
\[\frac{R_{ij}}{R_{max}}\sim\frac{Rank_{max}}{Rank_{ij}}\ \ (1)\]
Then the algorithm plugs the distribution into the framework of probabilistic matrix factorization:
\[P(U,V\mid R,\sigma_{U},\sigma_{V})=\prod_{i=1}^{N}\prod_{j=1}^{M}\left(U_{i}^{T}\cdot V_{j}\right)\times\prod_{i=1}^{N}e^{-\frac{U_{i}^{T}\cdot U_{i}}{2\sigma_{U}^{2}}}\times\prod_{j=1}^{M}e^{-\frac{V_{j}^{T}\cdot V_{j}}{2\sigma_{V}^{2}}}\ \ (2)\]
The SGD (stochastic gradient descent) update formulas for the algorithm are as follows (with standard deviations simplified to a constant):
\[U_{i}=U_{i}+\gamma\times\left(\frac{V_{j}}{U_{i}^{T}\cdot V_{j}}-2\times U_{i}\right)\ \ (3)\]

\[V_{j}=V_{j}+\gamma\times\left(\frac{U_{i}}{U_{i}^{T}\cdot V_{j}}-2\times V_{j}\right)\ \ (4)\]
As can be observed from the update formulas, there is no extra information involved in the computation other than the parameters U and V.
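A minimal sketch of one ZeroMat SGD step, directly transcribing update formulas (3)-(4); the learning rate and the small constant guarding against division by zero are illustrative assumptions, not part of the original algorithm.

```python
import numpy as np

def zeromat_step(U, V, i, j, lr=0.01, eps=1e-8):
    """One SGD step of ZeroMat for the (user i, item j) pair.

    Implements updates (3)-(4): no rating value is used, so the step
    can be run in a cold-start setting without historical data.
    """
    dot = max(U[i] @ V[j], eps)      # guard against division by ~0 (assumption)
    u_old = U[i].copy()
    U[i] = U[i] + lr * (V[j] / dot - 2.0 * U[i])
    V[j] = V[j] + lr * (u_old / dot - 2.0 * V[j])
    return U, V
```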
## 5 DotMat
DotMat is the second invention in recommender system's history that solves the cold-start problem without input data. The algorithm was invented in 2022 - one year later than ZeroMat. It achieves even better results than ZeroMat.
The algorithm of DotMat was inspired by ZeroMat and RankMat. It modifies the loss function of the classic matrix factorization in the following way:
\[L=\left|\left(U_{i}^{T}\cdot V_{j}\right)^{U_{i}^{T}\cdot V_{j}}-\frac{R_{ij}}{R_{max}}\right|\ (5)\]
Just like ZeroMat, DotMat's SGD update formulas do not rely on any extra information, and hence the algorithm is history independent.
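A minimal sketch of the DotMat prediction and the loss of equation (5); it assumes the dot product is positive so that the power is well defined, and any clipping strategy is left as an implementation choice.

```python
def dotmat_prediction(U_i, V_j):
    """DotMat predicted (normalized) rating: (U_i . V_j) ** (U_i . V_j)."""
    dot = U_i @ V_j
    return dot ** dot  # assumes dot > 0; clipping is an implementation choice

def dotmat_loss(U_i, V_j, r_ij, r_max):
    """Absolute-error loss of equation (5)."""
    return abs(dotmat_prediction(U_i, V_j) - r_ij / r_max)
```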
## 6 Takens Embedding
In this paper, we propose to use Takens Embedding to analyze and visualize the MAE curve. The MAE curve is the mean absolute error curve for recommender systems, namely the absolute value of the error between the prediction and the ground truth. Takens Embedding was proposed to elevate low-dimensional datasets to higher dimensions while preserving the chaotic properties of the original data. Takens Embedding is one of the tools in the toolkit of topological data analysis.
The formal definition of Takens Embedding can be found in the original 1981 paper:
Let \(M\subset\mathbb{R}^{n}\) be a compact manifold of dimension \(n\). For pairs \((\varphi,y)\), where \(\varphi:M\to M\) is a smooth diffeomorphism and \(y:M\to\mathbb{R}\) is a smooth function, it is a generic property that the delay map \(\Phi_{(\varphi,y)}:M\to\mathbb{R}^{2n+1}\), defined by

\[\Phi_{(\varphi,y)}(x)=\big(y(x),\,y(\varphi(x)),\,\dots,\,y(\varphi^{2n}(x))\big),\]

is an embedding.
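The following sketch shows how a 1-D MAE curve can be lifted into a 2-D or 3-D point cloud with a plain delay embedding, which is in principle how the visualizations of Figs. 2-6 can be reproduced; the embedding dimension and delay are illustrative parameters.

```python
import numpy as np

def delay_embed(series, dim=3, delay=1):
    """Takens-style delay embedding of a 1-D series into `dim` dimensions.

    Returns an array of shape (len(series) - (dim - 1) * delay, dim),
    where each row is (x_t, x_{t+delay}, ..., x_{t+(dim-1)*delay}).
    """
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * delay
    if n <= 0:
        raise ValueError("series too short for the requested embedding")
    return np.stack([series[i * delay : i * delay + n] for i in range(dim)], axis=1)

# Example: embed an MAE curve into 2-D and 3-D point clouds
mae_curve = np.random.rand(200)          # placeholder for a real MAE curve
points_2d = delay_embed(mae_curve, dim=2)
points_3d = delay_embed(mae_curve, dim=3)
```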
Fig.2 2-D visualization of dataset in Fig.1
Fig.2 illustrates the result of visualizing MAE curves in 2-D space. It is apparent that ZeroMat has the largest data point span, while random placement and DotMat are most compact. DotMat, DotMat Hybrid, and classic matrix factorization are clustered together, which means they probably share similar properties. We can safely draw the conclusion that these 3 algorithms produce the best performance when it comes to robustness.
After elevating the dimension of MAE curves into 3-D, by careful observation we draw the same conclusion as in 2-D. The visualization in 2-D and 3-D is clearer and more effective than the cluttered lines in 1-D space. The span and skewness of the MAE values are more visible.
Fig.3 3-D visualization of dataset in Fig.1
We now visualize the MAE curves of the LDOS-CoMoDa dataset. The LDOS-CoMoDa dataset [41] is a movie dataset that includes contextual information. LDOS-CoMoDa includes 121 users and 1232 movies.
Fig.4 Visualization of MAE curves on LDOS-CoMoDa dataset
Fig. 4 demonstrates the MAE curves of 5 different recommender systems. It is very difficult to analyze 4 of them since they are cluttered together.
We elevate the dataset into 2-D; Fig. 5 illustrates the resulting 2-D visualization of the MAE curves.
Although four of the algorithms are still cluttered, they are more visible and analyzable in point-cloud form. DotMat Hybrid has the smallest diameter, with DotMat coming second in diameter length. Classic matrix factorization is also very compact. ZeroMat is much less compact, and random placement is the most spread out.
We now elevate the 1-D MAE curves into 3-D space; Fig. 6 illustrates the 3-D visualization of the MAE curves.
In 3-D space we can explore the point clouds interactively, so we can analyze the data even more easily. Unlike a 1-D time series, the data can be examined from different aspects. In addition, unlike the 1-D MAE curve, in 3-D we can examine individual data points more clearly and easily, as we can rotate and zoom the data set. This makes it a lot easier to detect abnormal data points or special structures.
## 7 Recurrence Plot
The recurrence plot is a technique invented to visualize the recurrence patterns of dynamical systems. In this paper, we use recurrence plots to show the recurring structures of the MAE curve.

A recurrence plot is a 2-D image defined as follows:
\[I(x,y)=\begin{cases}1,&\text{if }|T(x)-T(y)|<\varepsilon\\ 0,&\text{if }|T(x)-T(y)|\geq\varepsilon\end{cases}\]
, where T denotes the time series (MAE curve, in our case), and \(\varepsilon\) is a small real number.
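A minimal sketch of the recurrence matrix computation; the threshold \(\varepsilon\) is an illustrative parameter, and plotting the resulting binary image is left to any standard image display routine.

```python
import numpy as np

def recurrence_plot(series, eps):
    """Binary recurrence matrix: I(x, y) = 1 if |T(x) - T(y)| < eps, else 0."""
    t = np.asarray(series, dtype=float)
    dist = np.abs(t[:, None] - t[None, :])  # pairwise |T(x) - T(y)|
    return (dist < eps).astype(np.uint8)

# Example usage with an MAE curve and a small threshold:
# rp = recurrence_plot(mae_curve, eps=0.01)
```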
We use recurrence plot to visualize MAE values of 5 different recommender systems with grid search on different gradient learning steps:
From Fig. 7 to Fig. 10, we observe that DotMat Hybrid is the most robust algorithm, since the black dots representing 1 are densely populated in its graph. Random Placement, both by theory and by observation, produces the most random result. DotMat as a single model has correlated structures, but there also appears to be some redundancy in its graph, since some areas are densely populated by black dots while others are nearly entirely blank.
## 8 Determinantal Point Process
In modern day recommender systems, diversity is one of the critical concerns of product owners. One way to enhance diversity is to penalize the loss function of recommender systems using a regularization term computed by Determinantal Point Process (DPP).
DPP constructs the similarity matrix of items, and then uses the determinant of the matrix as a regularization term in the loss function of recommender system algorithms. Maximizing the determinant is equivalent to maximizing the volume spanned by the vectors of the similarity matrix. The idealized maximum value of the volume is achieved when the spanning vectors are orthogonal to each other.
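As an illustration, the diversity of a recommended item set can be scored by the (log-)determinant of the corresponding similarity submatrix; the small ridge added for numerical stability is an assumption made here, not part of the DPP formulation.

```python
import numpy as np

def dpp_diversity_score(similarity, item_indices, ridge=1e-6):
    """Log-determinant of the similarity submatrix for a recommended item set.

    similarity: (n_items, n_items) symmetric similarity matrix.
    item_indices: indices of the recommended items.
    Larger values correspond to more "orthogonal" (diverse) item sets.
    """
    sub = similarity[np.ix_(item_indices, item_indices)]
    sub = sub + ridge * np.eye(len(item_indices))  # keep the matrix well-conditioned
    sign, logdet = np.linalg.slogdet(sub)
    return logdet if sign > 0 else -np.inf
```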
To analyze and evaluate the diversity of recommender systems, we use heatmaps to plot the similarity scores between item pairs before and after the execution of our algorithms. We define similarity as in collaborative filtering algorithms. Fig. 11 and Fig. 12 demonstrate the similarity heatmaps computed on the LDOS-CoMoDa Dataset. Fig. 11 illustrates item-item similarity, showing that items are less affected by popularity bias than the user-user similarity shown in Fig. 12. This suggests that we should apply item-based collaborative filtering rather than user-based collaborative filtering. We should also rely more on item-based similarity computation than on user-based similarity computation.
Fig. 11: HeatMap of user-user similarity
Fig. 12: HeatMap of item-item similarity
Fig. 8 and Fig. 9: Recurrence Plots of Random Placement and DotMat
Fig. 10: Recurrence Plot of DotMat Hybrid
Fig. 13: Visualization of Popularity Bias Effect on MovieLens Small Dataset
## 9 Popularity Bias and Long-tail Effect
The bias problem has been one of the hottest research topics in recommender systems in recent years. Both industry and governmental agencies have carried out a series of efforts to enhance the transparency and fairness of recommender systems, including efforts to alleviate the popularity bias problem.
Popularity bias refers to the phenomenon that the most popular items of a recommender system affect its performance far more than the others. For example, in collaborative filtering algorithms, the most popular items are involved in far more similarity computations than the rest.
In 2018, Wang et al. [42] designed analytical formulas to capture the effect of popularity bias in the input structures on algorithmic performance. In their paper, the researchers prove that Zipf's law in the input data structures leads to a power-law effect in the intermediate computational procedures. The authors plot graphs of item ranks vs. the number of items to illustrate the popularity bias.
Borrowing ideas from their work, we plot the user item rating values vs. the number of ratings in the input data structures of the MovieLens Small Dataset in Fig. 13 (log-log plot). From the figure, we observe that the user item ratings follow a stepwise power-law distribution. Therefore it is safe to apply algorithms such as ZeroMat, which make the same assumption, to this dataset. Visualization of the LDOS-CoMoDa dataset in Fig. 14 leads to an analogous analysis. Our visualization helps us choose algorithms to tackle the problem.
Fig.16 Visualization of item similarity radius on LDOS-CoMoDa Dataset
## 10 Visualization of Similarity Radius
We propose a concept named Similarity Radius in this paper. A similarity radius of a user is defined as follows:
Definition 9.1: Similarity Radius of a user is the number of users whose similarity score with her is greater than 0.
Analogously, similarity radius of an item is defined as follows:
Definition 9.2: Similarity Radius of an item is the number of items whose similarity score with it is greater than 0.
Similarity Radius computes the size of the neighborhoods of users and items. The larger the similarity radius is, the more impact the user/item will exert on the intermediate computational procedure. We plot the user/item popularity ranks v.s similarity radius to explore the relations between popularity bias and intermediate similarity computational costs.
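A minimal sketch of the similarity radius computation from a pairwise similarity matrix; excluding self-similarity from the count is an assumption made here for clarity.

```python
import numpy as np

def similarity_radius(similarity):
    """Similarity radius of each row entity (user or item).

    similarity: square matrix of pairwise similarity scores.
    Returns, for each entity, the number of *other* entities whose
    similarity with it is strictly greater than 0.
    """
    positive = np.asarray(similarity) > 0
    np.fill_diagonal(positive, False)  # exclude self-similarity
    return positive.sum(axis=1)
```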
Fig. 15 shows the user-user similarity radius of LDOSCoMoDa Dataset, and Fig. 16 illustrates the item-item similarity radius of LDOS-CoMoDa Dataset.
Similarity radius measures the popularity bias effect in the intermediate computational procedures of recommender systems such as collaborative filtering. Collaborative filtering, among many algorithms, needs to compute the similarity between users or items. The similarity radius determines the number of computational steps in the intermediate procedure. If the similarity radius is unevenly distributed, MapReduce will suffer from a skewness problem. In addition, this will affect the performance of the algorithms as well.
From Fig. 15, we observe that the user similarity radius is highly skewed and even exhibits a power-law effect. This probably means we should prefer item-based similarity computations (Fig. 16), which are much more evenly distributed. The visualization helps us detect potential problems in the intermediate computational procedures of recommender systems.
## 11 Visualization by Social Network Analysis
The similarity matrix discussed in previous sections can be visualized in ways other than heatmaps. If we take users / items as nodes in social network graphs, and user-user / item-item similarity pairs as edges, we obtain a social network built from the similarity matrix, and we can apply all sorts of social network analysis (SNA) techniques to investigate the data.
We apply social network visualization to similarity pairs generated on LDOS-CoMoDa datasets and obtain Fig. 17 and Fig. 18. Fig.17 illustrates the user-user similarity graph, the size of whose nodes represents the similarity radius. Fig.18 illustrates a local view of the item-item similarity graph. As can be observed from Fig. 18, there exist many small clusters in the graph.
Comparison between Fig. 17 and Fig. 18 leads us to believe that item-based collaborative filtering might be more computationally efficient than user-based collaborative filtering, since we can compute similarities within small clusters to expedite the overall computational process.
We could also apply other social network analysis and visualization algorithms to the similarity matrix. Fig. 19 shows the community detection result of LDOS-CoMoDa dataset using Louvain's Method.
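A minimal sketch of how the similarity matrix can be turned into a graph for such analyses, assuming NetworkX is available; the edge threshold is illustrative, and the Louvain call shown in the comment requires a recent NetworkX version (the python-louvain package is an alternative).

```python
import networkx as nx

def similarity_graph(similarity, threshold=0.0):
    """Build an undirected graph from a symmetric similarity matrix.

    An edge (i, j) is added whenever similarity[i, j] > threshold,
    weighted by the similarity score.
    """
    n = similarity.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if similarity[i, j] > threshold:
                g.add_edge(i, j, weight=float(similarity[i, j]))
    return g

# Louvain community detection (available in recent NetworkX versions):
# communities = nx.community.louvain_communities(similarity_graph(S), weight="weight")
```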
## 12 Conclusion
In this paper we provide a systematic visualization and analysis of 7 different recommender algorithms and 2 open-source datasets. We demonstrate that by carefully selecting visualization techniques, we are able to predict intermediate computational cost and output performance, and therefore choose appropriate recommender systems beforehand.
In future work, we would like to explore visualization of other AI algorithms to help algorithm engineers and experts design new algorithms and improve old ones. We believe AI + visualization can transform the current IT industry and research community into a more advanced technological landscape.
|
2310.19641 | DistNet2D: Leveraging long-range temporal information for efficient
segmentation and tracking | Extracting long tracks and lineages from videomicroscopy requires an
extremely low error rate, which is challenging on complex datasets of dense or
deforming cells. Leveraging temporal context is key to overcoming this
challenge. We propose DistNet2D, a new deep neural network (DNN) architecture
for 2D cell segmentation and tracking that leverages both mid- and long-term
temporal information. DistNet2D considers seven frames at the input and uses a
post-processing procedure that exploits information from the entire video to
correct segmentation errors. DistNet2D outperforms two recent methods on two
experimental datasets, one containing densely packed bacterial cells and the
other containing eukaryotic cells. It is integrated into an ImageJ-based
graphical user interface for 2D data visualization, curation, and training.
Finally, we demonstrate the performance of DistNet2D on correlating the size
and shape of cells with their transport properties over large statistics, for
both bacterial and eukaryotic cells. | Jean Ollion, Martin Maliet, Caroline Giuglaris, Elise Vacher, Maxime Deforet | 2023-10-30T15:29:48Z | http://arxiv.org/abs/2310.19641v2 | # DistNet2D: Leveraging long-range temporal information for efficient segmentation and tracking
###### Abstract
Extracting long tracks and lineages from videomicroscopy requires an extremely low error rate, which is challenging on complex datasets of dense or deforming cells. Leveraging temporal context is key to overcoming this challenge. We propose DistNet2D, a new deep neural network (DNN) architecture for 2D cell segmentation and tracking that leverages both mid- and long-term temporal information. DistNet2D considers seven frames at the input and uses a post-processing procedure that exploits information from the entire video to correct segmentation errors. DistNet2D outperforms two recent methods on two experimental datasets, one containing densely packed bacterial cells and the other containing eukaryotic cells. It is integrated into an ImageJ-based graphical user interface for 2D data visualization, curation, and training. Finally, we demonstrate the performance of DistNet2D on correlating the size and shape of cells with their transport properties over large statistics, for both bacterial and eukaryotic cells.
proxy for segmentation that are subsequently fed to a clustering algorithm: [9] proposed multi-dimensional embedding with a loss function that pushed dissimilarity between neighbors, [10] proposed to predict the Euclidean distance map (EDM), fed to a watershed algorithm. Compared to a binary probability map, EDM has the advantage of emphasizing the boundary between touching cells while being independent of morphology. A popular method that is similar (but not equivalent) predicts radial distance between center and boundaries at predefined angles, which limits the application to convex objects [8]. [11] proposed an efficient method that jointly predicts, for each cell pixel, an offset vector pointing to the cell center and a clustering bandwidth. Similarly, [12] predicts a normalized offset vector pointing to the cell center. In case of filamentous bacteria, this method tends to produce over-segmentation. This problem was reduced in [13] by predicting the EDM along with an offset vector pointing to the cell medial axis (skeleton), defined by the local maxima of the EDM.
### Cell Tracking
The most straightforward approach to cell segmentation and tracking runs in two independent successive steps: object detection followed by temporal association of detected objects. The recent method DeLTA 2.0 [14] uses, for the tracking step, a classification neural network to predict the next cell for each cell. However, because predictions are made independently for each cell, this method is likely to produce inconsistent results. A two-step approach can allow long-term temporal information to feed the tracking algorithm, as segmentation enables data compaction. Notably, [15] uses graph neural network to model the entire time-lapse sequence, resulting in very effective tracking. The main drawback of the two-step approach is that it is directly limited by the accuracy of the segmentation step. In difficult cases such as high density of similar cells, even a trained expert requires temporal context to perform accurate segmentation.
### Combined Segmentation and Tracking
Temporal information can be leveraged by combining segmentation and tracking into a single operation. Several recent methods simultaneously train a bounding box detector with a tracker that associates bounding boxes candidates between successive frames [16; 17]. In the context of cell tracking, one limitation of this kind of method is that they have restricted access to the spatial context around the bounding boxes (such as the position of neighboring objects), which is crucial when cells have similar aspects. An emerging trend is the prediction of the displacement vector of each cell between two successive frames as a proxy for tracking [18; 19; 20; 21], along with a proxy for segmentation or detection. The actual association of cells is performed in a post-processing step. One advantage of this strategy is that it enables simultaneous segmentation and tracking of all cells present in a time window using a single DNN,
which likely yields more consistent results for both tasks. It is noteworthy that [18; 21] do not segment cells but only detect their centers. [19] introduced an attention layer [22] in the neural network, and showed that it captures long-range spatial information, in the one-dimensional case of bacterial cells growing in a microfluidic device. [20] use a DNN architecture that performs segmentation independently for each frame and thus cannot leverage temporal context for segmentation. However, several works have shown that performing joint segmentation and tracking improves segmentation by leveraging sequential information [9; 19]. Due to memory limitations, these methods can only use a small temporal neighborhood, e.g. [18; 19; 20] use pairs of successive frames (t,t+1). More recently, [21] have shown that tracking and detection performance can be improved by using a larger temporal neighborhood of six frames as well as a carefully designed loss function that penalizes inconsistencies between detection and tracking.
In this work, we describe DistNet2D, a novel 2D cell segmentation and tracking method, with a carefully designed DNN architecture that leverages mid- and long-term temporal context for both tasks. Mid-term temporal context is incorporated at the input of the DNN: our method typically considers a 15-frame time window, but this size is adaptable to the features of the dataset and can be much wider if needed. Long-term temporal context is incorporated through a post-processing procedure that uses information from the whole video to correct segmentation errors. We compare DistNet2D to two recent methods (DeLTA 2.0 [14], EmbedTrack [20]) on two experimental datasets that we publish along with this work: a dataset containing phase contrast images of dense communities of motile bacterial cells, and a dataset of fluorescence images of adherent migrating eukaryotic cells. We also adapted the graphical user interface of BACMMAN software [23] for 2D data visualization, curation, and training. Finally, we demonstrate how DiSTNet2D's performance enables us to correlate the size and shape of cells with their motion properties over large statistics, for both bacterial and eukaryotic cells.
## 2 Results
Following the work of [19; 20; 21], we developed a method that performs segmentation and tracking simultaneously with a single DNN. This strategy has several advantages over methods that perform the tasks independently. First, it leverages temporal information for segmentation, improving the accuracy of the results. Second, it is easier to train and use a single DNN than two separate networks. Our method is based on a novel DNN architecture that incorporates a sequence of operations designed to blend the information gathered from the different input frames, enabling the use of this information for both segmentation and tracking (see details in Online Methods). Specifically, several frames are fed to the DNN, which predicts proxies for both segmentation and tracking (Figure 1A). Using a larger time window is expected to increase temporal
consistency, but the number of considered frames is limited by GPU memory. We chose to consider seven frames: three frames before and three frames after the current frame. To enable the DNN to use information at a longer time range without exceeding its memory capacity, we distributed the seven frames across a larger range, by spacing them apart (see Figure S1 for a diagram). The gap between considered frames depends on the dataset, in this work we used values of one and three frames (depending on the dataset). This strategy is referred to as temporal subsampling.
### Segmentation
The network predicts two complementary proxies for segmentation, the Euclidean distance map (EDM), which aims at identifying the cell shape, and the geodesic distance to the center map (GDCM), which aims at identifying the cell center. These proxies are predicted within each cell and then combined to produce objects' contours. More formally: let \(B\) be the background, \(F\) the
Figure 1: Method overview. A: Output of the model for a given frame pair. For each frame, _EDM_ is the map of the Euclidean distance to the edge of each cell, _GDCM_ is the map of the geodesic distance to the center. For each pair of frames, in each direction, _dX_ and _dY_ are the cell center displacements from previous frame for each axis, \(P(\textit{Link multiplicity}=0)\) and \(P(\textit{Link multiplicity}>1)\) are the probabilities that the link multiplicity is zero (no linked cell in the other frame) and strictly greater than one (several linked cells in the other frame), respectively. In this dataset, \(P(\textit{Link multiplicity}>1)\) is null because it contains no mitosis or merging cells. Note that only one frame pair (\(t\), \(t+1\)) is represented, but the model inputs a larger temporal neighborhood and predicts these maps for more frame pairs. The output includes both forward (\(t\to t+1\)) and backward (\(t+1\to t\)) predictions. B: Segmentation procedure: A watershed transform is applied to the EDM using regional maxima as seeds, which likely produces over-segmentation. Gaussian function is applied to the predicted GDCM and a watershed algorithm on the Laplacian transform is used to detect centers, which are used to reduce over-segmentation (see main text for details). C: Tracking procedure: The centers of each cell at \(t\) are shifted by the predicted displacement between \(t\) and \(t+1\) (dX and dY). Each cell at \(t\) is associated with the cell at \(t+1\) in which the shifted center falls. Images are from dataset PhC-C2DH-PA14. Method overview for dataset Fluo-C2DH-HBEC is available in Figure S2.
foreground and \(C_{j}\) the \(j^{th}\) cell (\(F=\bigcup_{j}C_{j}\)), \(c_{j}\) its center, \(d\) the Euclidean distance function and \(d_{g}\) the geodesic distance function (see section 3.6); for each pixel \(i\):
\[EDM_{i}=\left\{\begin{array}{l}min(d(i,B),d(i,F\setminus C_{j}))\text{ if }i \in C_{j}\\ -1\text{ if }i\in B\end{array}\right.\]
\[GDCM_{i}=\left\{\begin{array}{l}d_{g}(i,c_{j})\text{ if }i\in C_{j}\\ 0\text{ if }i\in B\end{array}\right.\]
The medoid center of the cells is used in order to ensure it is contained in the cell, even for non-convex shapes. For simplicity, it will be referred to as center in this work.
EDM-based segmentation is efficient even on non-convex cell morphologies, such as bent bacterial cells. It is performed by applying a watershed algorithm on EDM. Watershed is naturally limited to positive values (as the background is set to -1) and seeded from regional maxima of the EDM, which can easily produce over-segmentation, especially in long cells or cells with complex shapes that may contain several local maxima.
To reduce over-segmentation, we combined contours with predicted centers. Centers are segmented by performing a watershed algorithm on the Laplacian transform of the image that results from the Gaussian function applied to GDCM (see section 3.6 for details). Two segmented regions in contact are merged if either one of them or both do not contain a segmented center, or if the ratio of intensity amplitude of the centers is below a user-defined threshold (see Figure 1B).
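A minimal sketch of this first segmentation step using scikit-image, assuming the predicted EDM is available as a 2-D array; the `min_distance` parameter is illustrative, and the center-based merging described below is omitted.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_from_edm(edm, min_distance=3):
    """Watershed segmentation seeded from regional maxima of the predicted EDM.

    edm: 2-D array, -1 on background and > 0 inside cells (as defined above).
    This sketch covers only the first step of Figure 1B; merging of
    over-segmented fragments based on predicted centers is not included.
    """
    foreground = edm > 0
    # Seeds: local maxima of the EDM restricted to the foreground.
    peaks = peak_local_max(edm, min_distance=min_distance, labels=foreground)
    markers = np.zeros(edm.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Watershed on the inverted EDM, restricted to the foreground mask.
    return watershed(-edm, markers=markers, mask=foreground)
```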
Moreover, we also observed that predicting a unique center per cell improves EDM prediction, especially in distinguishing neighboring cell instances.
### Tracking
Tracking is performed using the prediction of the displacement of the cell centers along the X and Y axis that occurs between two frames. The center of each cell at \(t\) is shifted by its predicted displacement between \(t\) and \(t+1\); if the shifted center falls into a segmented cell at \(t+1\), then the two cell instances are associated (see Figure 1C). This is similar to the procedure used in [20].
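A minimal sketch of this center-shift association, assuming labeled masks, per-cell centers, and predicted displacements are available as plain arrays and dictionaries (all names and the exact data layout are illustrative).

```python
def link_by_displacement(labels_t1, centers_t, dx, dy):
    """Associate cells between frames t and t+1 by shifting their centers.

    labels_t1: integer label image at t+1 (0 = background).
    centers_t: dict {label_at_t: (y, x)} of cell centers at t.
    dx, dy: dicts {label_at_t: predicted displacement} between t and t+1.
    Returns dict {label_at_t: label_at_t1 or None}.
    """
    h, w = labels_t1.shape
    links = {}
    for lab, (y, x) in centers_t.items():
        yy = int(round(y + dy[lab]))
        xx = int(round(x + dx[lab]))
        if 0 <= yy < h and 0 <= xx < w and labels_t1[yy, xx] > 0:
            links[lab] = int(labels_t1[yy, xx])
        else:
            links[lab] = None  # shifted center falls outside any cell at t+1
    return links
```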
To assist the tracking procedure and manage more complex cases, we also predict the _link multiplicity_ category in both the forward (\(t\to t+1\)) and backward (\(t+1\to t\)) directions, accounting for the expected number of links for each cell (Figure S3). The possible values for forward link multiplicity are: no next cell (the cell will leave the field of view, or will die), one next cell (regular case, or the cell will fuse with another cell), multiple next cells (the cell will divide). The possible values for backward link multiplicity are: no previous cell (the cell has entered the field of view, or just appeared), one previous cell (regular case, or the cell has just divided), multiple previous cells (the cell has just fused). Formally, we predict each time three probability maps: \(P(\textit{Link Multiplicity}=0)\), \(P(\textit{Link Multiplicity}=1)\), \(P(\textit{Link Multiplicity}>1)\)
summing to 1. We assign the link multiplicity category to each cell as the multiplicity with the highest median probability within the cell.
Cells that are predicted to have a single next cell are linked by forward tracking using the predicted forward displacement. Cells with multiple next cells (either because of over-segmentation at \(t+1\) and not at \(t\), under-segmentation at \(t\) and not at \(t+1\), or a predicted division event) remain unlinked after forward tracking. Backward tracking is then applied on unlinked cells that are predicted to have a single previous cell, using predicted backward displacement (Figure S4).
Forward and backward tracking allow to assign both merge links --in which several cells at \(t\) are associated to a single cell at \(t+1\)-- and split links --in which several cells at \(t+1\) are associated to a single cell at \(t\). When merge links or split links are not confirmed by the link multiplicity category, it means they arise from over/under segmentation errors and they will be corrected in the post-processing stage (see section 2.3). Identifying merge and split links also enables a finer definition of metrics and a more accurate diagnosis of origin of errors (incorrect segmentation _vs_ incorrect linking), see section 2.6 for details.
### Post-processing: Segmentation correction
A set of rules was designed to correct over/under segmentation errors using the tracking information. We especially observed such errors in the PhC-C2DH-PA14 dataset, which consists of high-speed acquisition videos (typically 100 frames-per-second for a few seconds) of motile rod-shaped bacteria that divide typically every hour. The invagination of the cell membrane at the center of the mother cell is the last stage of the cell cycle before the separation of the two daughter cells. In phase contrast imaging, this invagination appears as a bright region within the cell body, which is similar to the bright area that connects two separate cells when they are in contact (Figure S5). It is virtually impossible to determine from a single frame whether an object represents a late-dividing long single cell or two adjacent short cells. The DNN architecture already helps avoid such errors by considering a mid-term temporal context (typically fifteen frames).
To consider long-term temporal information (which can cover an extended period of time, up to the duration of the entire video), we analyze the trajectories obtained during the tracking step. We focus on the merge and split links. Merge and split links consistent with predicted link multiplicity are considered mitosis or fusion, and are left untouched. In contrast, merge and split links that contradict with predicted link multiplicity are suspected errors. We treat them using the following general principle: if an object has always been seen as one cell, it should remain as one; however, if it was detected as two distant cells at any point in the past or future, this indicates that it should be considered two cells (Figure 2A). This approach is based on the assumption that errors are rare and can be corrected by looking at errorless past and future. The high efficiency of our DNN-based combined segmentation and tracking algorithm supports this assumption.
In practice, for each merge link, we check if all cells before the link are in contact (two objects are considered to be in contact if the distance between their contours is lower than a threshold ; for rod-shaped bacteria, an alignment criterion is also used). If they are, then we merge all of the cells. Otherwise, we split the objects following the link by applying a watershed transform on the EDM (Figure 2A-i). Cell fragments are linked to the previous objects using the same procedure as in section 2.2. If the watershed algorithm generates more fragments than there are previous objects, fragments linked to the same previous object are merged. Similarly, for each split link, if all cells after the link are in contact, then we merge all of the cells. Otherwise, we split the objects before the link (Figure 2A-ii). Common examples are depicted in Figure 2B-D.
Figure 2: Post-processing uses temporal information on large timescale to correct segmentation errors. A: Diagrams presenting the procedure to correct wrong links. B: Illustrative example of two distinct cells that are transiently detected as one object (under-segmented in frame \(t=65\)). Because objects involved in the merge link are sometimes seen separated, we can assume the object in frame \(t=65\) should be split. C: Illustrative example of one cell that is transiently detected as two objects (over-segmented in frame \(t=79\)). Because the cell is detected as a single object throughout the entire video (200 frames), except for one frame in which it is seen as two objects, we can assume that the two objects should be merged into one. D: Illustrative example of a more complex error that implies more lineages. In frame \(t=145\), one cell is under-segmented and one cell is over-segmented. Here again, temporal information at the scale of the entire video enables us to correct the segmentation errors.
### Model architecture
The DNN has an encoder-decoder architecture, with a single encoder and one decoder per output type (EDM, GDCM, displacement and link multiplicity). The encoder and decoders are shared between frames for better training efficiency. For segmentation outputs (EDM and GDCM), one prediction per frame is made, whereas for tracking outputs (displacement and link multiplicity) one prediction per frame pair is made. For a DNN time window of size \((N-3)\delta+3\) centered on frame \(t\), the \(N\) considered frames are the three central frames (\(t-1\), \(t\), \(t+1\)), and \(m=(N-3)/2\) frames on each side of the central frames (\(t-1-m\delta\),..., \(t-1-2\delta\), \(t-1-\delta\) and \(t+1+\delta\), \(t+1+2\delta\),..., \(t+1+m\delta\)) (Figure S1). \(2N-4\) frame pairs are defined as follow (Figure S7):
* \(N-1\) frame pairs between consecutive considered frames, for short-range displacements,
* \(N-3\) frame pairs between the central frame (frame \(t\)) and each other frame except frames \(t-1\) and \(t+1\), for mid-range displacements.
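A minimal sketch of how the considered frames and frame pairs can be enumerated for a given central frame, following this scheme; the function name and signature are illustrative.

```python
def window_frames(t, n=7, delta=1):
    """Frame indices and frame pairs for a window of size (n-3)*delta + 3.

    Three central frames (t-1, t, t+1) plus m = (n-3)/2 frames on each
    side, spaced by `delta` (temporal subsampling).
    """
    m = (n - 3) // 2
    left = [t - 1 - k * delta for k in range(m, 0, -1)]
    right = [t + 1 + k * delta for k in range(1, m + 1)]
    frames = left + [t - 1, t, t + 1] + right
    # N-1 pairs of consecutive considered frames (short-range displacements)
    consecutive = list(zip(frames[:-1], frames[1:]))
    # N-3 pairs between the central frame and every frame except t-1 and t+1
    central = [(t, f) for f in frames if f not in (t - 1, t, t + 1)]
    return frames, consecutive + central

# Example for t=10, n=7, delta=3:
# frames = [3, 6, 9, 10, 11, 14, 17], covering a window of (7-3)*3 + 3 = 15 frames,
# and 2*7 - 4 = 10 frame pairs in total.
```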
Figure 3 displays the global architecture. The detailed architecture of each box is described in Online Methods 3.1. Encoder and decoders are mainly composed of residual blocks of two successive convolutions. Between the encoder and the decoders, we introduced a blending module --a sequence of operations that blends the encoded features of all frames together-- and then extract one
Figure 3: Model architecture. The encoder is fed by successive frames (green, blue and red rectangles) and produces encoded features (green, blue and red cubes). Features are processed in pairs (corresponding to successive frames) by the pair blender module, which produces feature pairs. Encoded features and feature pairs are blended together by the blender module (see Figure S6 for details). The segmentation extractor generates three segmentation features corresponding to each frame for both EDM and GDCM, that are decoded by two distinct decoders to produce images of the same size as the input image. Likewise, the tracking extractor generates two tracking features corresponding to each frame pair for both displacement and link multiplicity, that are decoded by two distinct decoders. For simplicity only three frames have been represented but we considered seven frames in this work, and only one segmentation decoder and one tracking decoder are represented instead of two.
feature per frame (resp. frame pair) that are fed to segmentation (resp. tracking) decoders. Extraction sequences simply contain a distinct convolution per frame (resp. frame pair). Resulting tensors are combined with encoded features (resp. feature pairs). We define the combine operation as a 1x1 convolution applied to the concatenation of two tensors.
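For illustration, the snippet below sketches this combine operation in PyTorch; the framework and channel counts are assumptions, and the original implementation may differ.

```python
import torch
import torch.nn as nn

class Combine(nn.Module):
    """Concatenate two feature tensors along the channel axis and mix them
    with a 1x1 convolution, as the combine operation is described in the text."""
    def __init__(self, channels_a, channels_b, channels_out):
        super().__init__()
        self.conv = nn.Conv2d(channels_a + channels_b, channels_out, kernel_size=1)

    def forward(self, a, b):
        return self.conv(torch.cat([a, b], dim=1))

# e.g. merging an extracted per-frame feature with the corresponding encoded feature
combine = Combine(128, 128, 128)
merged = combine(torch.randn(1, 128, 32, 32), torch.randn(1, 128, 32, 32))
```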
The blending-extraction sequence makes the information from the whole time window available for the prediction of each output at each frame. This contrasts with [20], in which segmentation is performed independently on each frame.
This architecture has the advantage of allowing proxy predictions to be made only for the central frame and the frame pair (t, t+1) during the prediction phase, which improves speed and reduces memory consumption.
### Software
The software associated with our method is BACMMAN [23], an ImageJ [24] plugin that was initially developed for analysis of bacterial cells growing in microchannels, with displacement along the microchannel axis. Such data was naturally displayed on kymographs, in which the horizontal axis represents time. In order to display 2D data, we added the hyperstack visualization mode (see Figure S8), in which lineage information is displayed as colored contours. All the features of the graphical user interface of BACMMAN such as interactive navigation through images, manual curation, two-way interplay with R/Python for statistical analysis, are thus available for 2D data. BACMMAN was also augmented with new features: generation of training sets as well as DistNet2D training and prediction can now be performed directly from the software. BACMMAN also provides a command-line interface, enabling its use on a computational cluster.
### Evaluation Metrics
Objective metrics for segmentation and tracking were previously introduced in [25]. In that work, cell tracking results were represented using an acyclic oriented graph, in which nodes corresponded to the detected cells and edges represented links (i.e. temporal relations) between them. Metrics were based on the number of operations required to transform the result graph into the reference graph. Those operations were {split/delete/add} a node and {delete/add/change the semantics of} an edge (e.g.: a change between a split link and a normal link). Correspondence between a reference (R) and a result (S) segmented object was established using the following criterion: \(|R\cap S|>0.5\cdot|R|\), which implied that each reference cell could correspond to one result cell at most. We consider this a limitation because it does not allow over-segmentation to be taken into account; over-segmented cells were thus systematically considered as false positives. Instead, we used the following criterion: \(|R\cap S|>0.5\cdot min(|R|,|S|)\) _OR_ \(|R\cap S|>C\), with \(C\) a user-defined constant that is typically \(50\%\) of the average reference cell size. The second term accounts
for cases of under-segmentation and partial overlap with ground truth, where the relative overlap is too low but the absolute overlap is significant.
This approach identifies four types of segmentation errors: false positives (result cells with no reference counterpart), false negatives (reference cells with no result counterpart), over-segmentation (when \(N\) result cells match a given reference cell, \(N-1\) over-segmentations are counted) and under-segmentation (when \(N\) reference cells match a given result cell, \(N-1\) under-segmentations are counted).
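A minimal version of this matching criterion and of the resulting error bookkeeping is sketched below; the boolean-mask inputs and helper names are assumptions, not the evaluation code shipped with BACMMAN.

```python
import numpy as np

def is_match(ref_mask, res_mask, C):
    """Overlap criterion between a reference cell R and a result cell S:
    |R intersect S| > 0.5 * min(|R|, |S|)  OR  |R intersect S| > C."""
    inter = np.logical_and(ref_mask, res_mask).sum()
    return inter > 0.5 * min(ref_mask.sum(), res_mask.sum()) or inter > C

def count_segmentation_errors(ref_to_res, res_to_ref):
    """Count the four error types from match tables: dicts mapping each cell id
    to the list of matched cell ids on the other side."""
    false_neg = sum(1 for m in ref_to_res.values() if not m)
    false_pos = sum(1 for m in res_to_ref.values() if not m)
    over_seg = sum(len(m) - 1 for m in ref_to_res.values() if len(m) > 1)
    under_seg = sum(len(m) - 1 for m in res_to_ref.values() if len(m) > 1)
    return false_pos, false_neg, over_seg, under_seg
```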
We also observed that the procedure proposed in [25] does not fully distinguish between tracking and segmentation errors: for instance, over-segmentation of one object into two parts is counted as a false positive segmentation error as well as a false negative link. However, if the over-segmented cell parts were all linked to the correct cell(s), no tracking error should be counted (Figure S9). We thus developed a procedure inspired by [25] to identify tracking errors that are independent of segmentation errors. In other words, our procedure evaluates the tracking efficiency _per se_, given the segmentation errors.
To do so, for each frame pair (\(t\), \(t+1\)), we transform the nodes of the result graph so that they match with the nodes of the reference graph by applying four successive operations (Figure S9): splitting under-segmented cells at \(t+1\), splitting under-segmented cells at \(t\), merging over-segmented cells at \(t\) and merging over-segmented cells at \(t+1\). At each split/merge operation, links are propagated to the resulting nodes. In the case of splitting, if this implies linking \(M\) nodes at \(t\) to \(N\) nodes at \(t+1\), with \(M>1\) and \(N>1\), links are determined by a simple linear assignment algorithm that minimizes the distance between cell centers. In the case of merging, all links are simply added to the resulting node. After these transformations, all nodes of the transformed result graph correspond to a single node in the reference graph, except for false positives, which have no counterparts in the reference graph. This enables counting false positive and false negative links: the former are links found in the transformed result graph and not in the reference graph, except links automatically added by splitting an under-segmented object. The latter are links from the reference graph that are not found in the transformed result graph and that do not involve false negative objects. Our choice of link propagation in case of merging during graph transformation would miss some false negative links, which are thus added to the count: in case a result cell was merged at frame \(t+1\) but has no link with cells at \(t\) and the corresponding reference cell is linked, a false negative link is counted. The same applies for a result cell merged at frame \(t\) that has no link with cells at \(t+1\).
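The linear assignment step mentioned above can be sketched as follows; this is an illustrative SciPy version, and the in-software implementation may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_by_center_distance(centers_t, centers_t1):
    """Link M nodes at frame t to N nodes at frame t+1 by minimizing the total
    distance between cell centers (illustrative linear assignment)."""
    a = np.asarray(centers_t, dtype=float)    # shape (M, 2)
    b = np.asarray(centers_t1, dtype=float)   # shape (N, 2)
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))  # (index at t, index at t+1)
```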
This procedure is applied for each pair of frames, but this does not tell us how these errors are distributed among the different cell lineages. This information is crucial for evaluating an algorithm, as recovering more error-free cell lineages can be more useful even if it makes more frame-pair-wise errors [26]. We therefore counted the number of error-free cell lineages, allowing for a user-defined tolerance to the frame at which mitosis is detected.
Lastly, we noticed that many errors arise from cells that are partially out of bounds. We believe that these errors can be easily removed automatically and should not be counted as errors. Therefore, our procedure simply ignores errors that are related to cells that touch edges or lineages that contain at least one cell that touches an edge.
### Evaluations
DeLTA 2.0 independently performs segmentation and tracking using two independent U-Net models, with _ad hoc_ procedures specifically designed for rod-shaped bacteria, such as a skeletonization of the cell body shape to set a maximal weight at the center of the cell during training. EmbedTrack simultaneously performs segmentation and tracking using a single DNN, but it uses a total time window of only 2 frames (t,t+1) and only forward predictions.
DistNet2D outperforms DeLTA 2.0 and EmbedTrack on all our metrics: segmentation errors, tracking errors, and incomplete lineages (Table 1). Notably, DistNet2D achieved perfect tracking accuracy on the bacterial dataset. Additionally, DistNet2D ran faster than the other two methods on both datasets.
We also evaluated the contribution of several components of our method by performing ablation experiments (Table 2). Switching the DNN time window from two frames (a single frame pair \((t,t+1)\), as in EmbedTrack and DeLTA 2.0) to fifteen frames, without post-processing, increased segmentation performance and notably decreased the number of incomplete lineages. This shows that mid-term context is leveraged for segmentation and brings temporal consistency. While the two-frame version without post-processing
| Dataset | Method | Segmentation errors (%) | Tracking errors (%) | Incomplete lineages (%) | Prediction time (s/frame) |
| --- | --- | --- | --- | --- | --- |
| PhC-C2DH-PA14 | DeLTA 2.0 | 1.1 | 0.22 | 21 | 22 |
| PhC-C2DH-PA14 | EmbedTrack | 1.2 | 0.14 | 7.1 | 2.7 |
| PhC-C2DH-PA14 | DistNet2D | 0.24 | 0.00 | 0.53 | 0.51 |
| Fluo-C2DH-HBEC | DeLTA 2.0 | 1.9 | 0.92 | 37 | 0.84 |
| Fluo-C2DH-HBEC | EmbedTrack | 0.89 | 0.51 | 25 | 1.4 |
| Fluo-C2DH-HBEC | DistNet2D | 0.49 | 0.10 | 5.0 | 0.27 |

Table 1: Comparison of DistNet2D, DeLTA 2.0 and EmbedTrack on datasets PhC-C2DH-PA14 and Fluo-C2DH-HBEC. Segmentation errors is the sum of false positive, false negative, under- and over-segmentation, divided by the number of cells in the ground truth. Tracking errors is the sum of false positive and false negative links divided by the number of links in the ground truth. Incomplete lineages is the number of lineages with at least one segmentation or tracking error divided by the number of lineages in the ground truth. For datasets PhC-C2DH-PA14 and Fluo-C2DH-HBEC respectively, the total number of cells is 123,057 and 4,273, the total number of links is 145,306 and 4,890, and the total number of lineages is 561 and 60. As explained in the main text, only cells that do not touch edges (and lineages with no cell touching edges) are taken into account.
exhibits fewer segmentation errors compared to EmbedTrack (0.65% versus 1.2%), it also results in more incomplete lineages (10% versus 7.1%). This difference arises from the contrasting under-segmentation to over-segmentation ratios observed in the two methods, with under-segmentations having a more pronounced impact on incomplete lineages.
By leveraging temporal information across the entire video, post-processing effectively corrected suspected errors (defined by merge links and split links that are not confirmed by predicted link multiplicity), leading to a substantial reduction in both object-wise segmentation errors and, more notably, lineage-wise errors, enhancing the overall temporal coherence of predictions. We acknowledge that post-processing might introduce new errors that will be propagated over entire tracks, but we consider this an acceptable drawback given its strong performance in reducing incomplete lineages. We also concede that post-processing works well because the DNN already makes few errors. Post-processing and the DNN work synergistically, as shown by the poor performance when both components are turned off.
We then tested how DistNet2D's performance in leveraging long-term information is affected by frame sampling. First, we varied how the seven considered frames were spread apart, by adjusting the parameter \(\delta\), which changed the range of the DNN time window (Figure 4A). We found that a wider DNN time window improved the segmentation performance (the tracking efficiency was already very high, even for small \(\delta\)). This suggests that DistNet2D benefits from having access to a longer temporal context. The accuracy over entire lineages was also improved with greater \(\delta\). However, this effect was eliminated after post-processing, possibly because the number of remaining incomplete lineages is too small (lower than 5).
To further assess the influence of time sampling, we compared the performance of DistNet2D, EmbedTrack, and Delta 2.0 on subsampled evaluation datasets (Figure 4B). While the accuracy of segmentation stayed roughly the same for all methods across the tested range, the accuracy of tracking was sensitive to subsampling, both at the level of individual links or entire tracks. However, DistNet2D remained fairly accurate for tracking, unlike the other two
| Post-processing | DNN time window | Segmentation errors (%) | Tracking errors (%) | Incomplete lineages (%) |
| --- | --- | --- | --- | --- |
| No | 2 | 0.65 | 0.00 | 10 |
| No | 15 | 0.47 | 0.00 | 5.3 |
| Yes | 2 | 0.50 | 0.00 | 0.89 |
| Yes | 15 | 0.24 | 0.00 | 0.53 |

Table 2: Ablation experiments on dataset PhC-C2DH-PA14 (the total number of cells is 123,057, links is 145,306, and lineages is 561). The first two columns are the ablations; the last three are the results. The DNN time window of 2 corresponds to a simplified version of the DNN that considers only frames \((t,t+1)\). The last case, which corresponds to DistNet2D, has a DNN time window of 15 frames (\(N=7\) considered frames and \(\delta=3\) using the subsampling definition presented in Figure S1), included post-processing, and achieved the best performance.
methods which quickly lost their effectiveness. For example, DistNet2D performed just as well on the 6-fold subsampled dataset as the other two methods did on the full dataset. Its robustness to subsampling can be explained by the awareness of mid-range temporal information and by the random subsampling performed during data augmentation (see Online Methods 3.3).
Overall, the ablation experiments and subsampling experiments confirm that leveraging temporal information improves the segmentation and tracking performance.
### Showcase of DistNet2D's performance on biological systems
We demonstrate the potential of DistNet2D by applying it to bacterial and eukaryotic datasets.
#### 2.8.1 Monolayer of bacterial cells
We measured the mean-squared-displacement (MSD) of _P. aeruginosa_ cells at the surface of an agar gel, at a surface fraction of \(\phi=0.719\), where the cell monolayer appears "jammed". As shown in section 2.7, DistNet2D was able to extract long tracks: 3404 1000-frame tracks out of an average of 3620
Figure 4: Temporal insights of the DistNet2D’s performance, calculated with dataset PhC-C2DH-PA14. A: Influence of DNN time window in predictions. DNN time window was varied by changing the gap between considered frames (\(\delta\)) from 1 to 10, while keeping the number of considered frames constant (\(N=7\)). B: Robustness to acquisition subsampling. We compared the three methods (trained with the complete training dataset) on computationally-generated time subsampled versions of the evaluation dataset. Note that for DistNet2D, \(\delta\) was set to 1 for all points, which explains the difference in segmentation errors compared to Table 1, where \(\delta=3\).
cells that were visible in one field of view. Track duration statistics indicates very few errors (Table S2). The MSD scales approximately with \(t\), confirming diffusive behavior over 3 decades of time (Figure 5A). At lower surface fraction (\(\phi=0.466\)), cells were more motile and a larger fraction of them left the field of view within the duration of the video (1413 1000-frame tracks out of an average of 2911 visible cells). Accordingly, the behavior at short time scales was over-diffusive but remained diffusive at longer time scales.
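For reference, a time-averaged MSD of the kind plotted in Figure 5A can be computed from the exported tracks with a few lines of Python; this is a generic sketch, and the variable names and averaging scheme are assumptions rather than the analysis scripts used for the figure.

```python
import numpy as np

def mean_squared_displacement(tracks, max_lag):
    """Time- and ensemble-averaged MSD. `tracks` is a list of (T_i, 2) arrays of
    cell positions (one array per trajectory); lags are expressed in frames."""
    msd = np.zeros(max_lag)
    counts = np.zeros(max_lag)
    for xy in tracks:
        xy = np.asarray(xy, dtype=float)
        for lag in range(1, min(max_lag, len(xy))):
            disp = xy[lag:] - xy[:-lag]
            msd[lag] += np.sum(disp ** 2)
            counts[lag] += len(disp)
    return msd[1:] / np.maximum(counts[1:], 1)   # MSD(lag) for lag = 1 .. max_lag-1
```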
We also correlated the length of each cell with its speed. At low density, short cells moved faster than longer cells (Figure 5B). We hypothesize that this is a signature of single-cell motility, as viscous drag varies monotonically with the length of a rod [27]. This trend disappeared at high density as cells collectively blocked each other, regardless of their length.
#### Eukaryotic cells
The ability of DistNet2D to extract precise cell contours and long trajectories enables the correlation of migration speed and direction with cellular shape. At low density, HBEC cells typically migrate perpendicular to their major axis (Figure 5C). At higher density, collisions reorient cells, leading to a less peaked angle distribution. Interestingly, while the largest displacements are perpendicular to the cell body major axis at low density, the trend reverses at high density as some cells move along other cells (Figure 5D).
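The shape/direction correlation described here amounts to measuring, for each displacement, its angle to the cell body major axis; a minimal version of that measurement is sketched below. This is an illustrative helper, not the analysis code used for the figure, and the ellipse fit is replaced by a pixel-covariance eigenvector.

```python
import numpy as np

def velocity_major_axis_angle(cell_pixels, center_t, center_t1):
    """Angle (deg) between a cell's displacement and its body major axis.
    `cell_pixels` is an (n, 2) array of the cell's pixel coordinates at time t;
    the major axis is the leading eigenvector of the pixel covariance, i.e. an
    ellipse fit of the cell body."""
    pts = np.asarray(cell_pixels, dtype=float)
    evals, evecs = np.linalg.eigh(np.cov(pts.T))
    major = evecs[:, np.argmax(evals)]                     # unit vector
    v = np.asarray(center_t1, float) - np.asarray(center_t, float)
    cos_angle = abs(np.dot(v, major)) / (np.linalg.norm(v) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))  # 0 = along axis, 90 = perpendicular
```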
## 3 Discussion
DistNet2D introduces several novel components to address the challenges of segmentation and tracking in bioimages. These components include a novel segmentation method that combines EDM and GDCM, improving the separation of adjacent cells, while being able to segment a wide range of cell morphologies. DistNet2D also employs a novel approach to predicting backward and forward tracking proxies to handle cell division and fusion events and
Figure 5: A: Mean-squared-displacement measured on bacterial cells at low surface fraction (\(\phi=0.466\), blue dots) and high surface fraction (\(\phi=0.719\), orange dots). Guide lines have exponent 1 (dashed lines) and 2 (solid line). B: Cell speed as a function of cell length, using the same color code. Error bars are standard errors of the mean (error bars are hidden behind dots for the high surface fraction dataset). Each dot is the mean speed of all cells binned by length, with a bin size of 0.088 \(\mu m\) (one pixel). Averaging is weighted by the duration of trajectories. The solid lines represent the number of cells in each bin, weighted by the duration of trajectories (a value of 1 is attributed to a cell tracked for 1000 frames). C: Histogram of angles between the velocity vector and the major axis of the HBEC cell body (obtained by ellipse fitting), for five time intervals (0h-15h, 15h-30h, 30h-45h, 45h-60h, 60h-75h). Each interval includes N=40,939/53,284/76,305/115,860/138,749 data points (respectively). No chirality is measured in the data. Inset: average cell density for each time interval. D: Norm of the velocity vector with respect to the angle between the velocity vector and the major axis of the HBEC cell body, for the same five time intervals.
improve robustness of tracking to under/over-segmentation errors. Long-range temporal context is leveraged in a novel post-processing stage that corrects incorrect merge and split links, relying on the predicted link multiplicity. This stage strongly reduces the lineage-wise error rate by analyzing entire lineages and correcting them when necessary. This strategy is efficient even if errors are also generated at the lineage scale as long as the cell-wise error rate before post-processing is very low.
We developed a series of carefully chosen innovations for the implementation of the DistNet2D algorithm. Tracking and segmentation proxies are predicted by a DNN with a novel architecture designed for leveraging mid-range temporal information for segmentation, while being optimized for size and training efficiency. This range is further increased thanks to a gapped input strategy at no additional GPU memory cost. The loss function was chosen in order to effectively guide the training process by generating gradients of similar magnitude between segmentation and tracking proxies, regardless of cell size and displacement amplitude. Carefully designed data augmentation allows generalization to diverse imaging conditions without requiring retraining. In particular, we introduced on-the-fly random frame sub-sampling, which improves robustness to changes in acquisition rate, but also increases tracking performances by diversifying displacements during training. Each of these components plays a crucial role in enhancing the performance of DistNet2D, making it a powerful tool for analyzing biological processes involving cell movement.
Following [19], we tried to introduce an attention layer at the _Pair Blender_ module, but this did not improve performance. This is likely because the two datasets used in the study only contained short-range displacements, which could be captured by dilated convolutions. However, an attention layer may be useful for processing datasets with longer-range displacements or displacements that depend on location, such as in microfluidic devices [28; 29].
We demonstrated the performance of DistNet2D on two different datasets: a bacterial dataset, where cells are densely packed, have similar shape and only differ in size, and a eukaryotic dataset where cells are sparse, change shape, and divide. Some methods are designed for a specific type of dataset, with _ad hoc_ procedures. This is the case for DeLTA 2.0, which was designed for bacterial datasets. Despite its specificity, DeLTA 2.0 under-performs compared to DistNet2D. A major strength of DistNet2D is its ability to leverage both mid- and long-range temporal context, unlike the other considered methods: DeLTA 2.0 performs segmentation and tracking separately and does not try to leverage temporal information for segmentation. EmbedTrack performs segmentation and tracking jointly, but its DNN does not blend temporal information across frames, so segmentation is performed on each frame independently. In both DeLTA 2.0 and EmbedTrack, segmentation does not have access to temporal context.
We believe DistNet2D is general and can be used to segment and track any type of moving object in a 2D setting: living or inert, with changing
or with fixed shape, with division, with fusion, at any surface density. Any type of imaging can be used: fluorescence, phase contrast, brightfield, etc. The graphical user interface, BACMMAN, is an ImageJ plugin that enables training, curation, manual correction, re-training, distribution of the trained DNN weights to other labs, and data export as tabular data or as label images. Moreover, BACMMAN is able to handle multiple classes of object simultaneously, for instance cell membrane/cell nucleus, cells/foci, head/tail. This could be of particular interest in microbiology, cell biology, soft matter, active matter, but also ethology. The expansion of DistNet2D to 3D datasets is a promising avenue for future exploration. However, the increased memory requirements associated with processing 3D images necessitate further research to ensure efficient training.
Like any other supervised DNN-based method, DistNet2D training requires a training dataset that can be cumbersome to generate. To facilitate _de novo_ dataset creation, BACMMAN includes a DNN-based segmentation method with very low annotation requirements [30]. Automated tracking can be done in BACMMAN, which includes several tracking methods, such as those in Trackmate [31], an ImageJ plugin that BACMMAN is connected to. The entire pipeline for creating the training dataset (annotations, DNN-based segmentation, manual correction of segmentation, tracking, manual correction of tracking), as well as training of DistNet2D, can be performed within BACMMAN.
DistNet2D, a novel method for segmentation and tracking of cells in bioimages, effectively addresses these challenges by leveraging temporal context and employing carefully designed deep learning architectures. The integrated graphical user interface, BACMMAN, offers a comprehensive pipeline that streamlines the process of generating training datasets, training the DNN, and applying DNN-based segmentation and tracking, making it more accessible and practical for a broad range of researchers. Together, DistNet2D and BACMMAN form a versatile framework for analyzing biological processes involving cell movement.
Acknowledgments.This project has received financial support from the ANR X-BACAMAT project (ANR-21-CE30-0025). We are grateful to Pascal Silberzan and Charles Ollion for helpful discussions.
## Declarations
JO and MD were involved in conceptualization, planning and supervised the work. JO developed and implemented DistNet2D. MM and MD generated the bacterial datasets. EV and CG generated the eukaryotic datasets. MM analyzed the bacterial data and EV analyzed the eukaryotic data. MM, MD, and JO participated in the manual correction of training datasets. JO and MM performed training, evaluations, and ablation experiments. All authors wrote the manuscript. JO is the founder and director of the company SABILab. Other authors declare no competing interest. Code will be available upon publication.
This work is licensed under a Creative Commons Attribution 4.0 International License (CC-BY-4.0).
|
2306.04690 | DELVE 6: An Ancient, Ultra-Faint Star Cluster on the Outskirts of the
Magellanic Clouds | We present the discovery of DELVE 6, an ultra-faint stellar system identified
in the second data release of the DECam Local Volume Exploration (DELVE)
survey. Based on a maximum-likelihood fit to its structure and stellar
population, we find that DELVE 6 is an old ($\tau > 9.8$ Gyr, at 95%
confidence) and metal-poor ($\rm [Fe/H] < -1.17$ dex, at 95% confidence)
stellar system with an absolute magnitude of $M_V = -1.5^{+0.4}_{-0.6}$ mag and
an azimuthally-averaged half-light radius of $r_{1/2} =10^{+4}_{-3}$ pc. These
properties are consistent with the population of ultra-faint star clusters
uncovered by recent surveys. Interestingly, DELVE 6 is located at an angular
separation of $\sim 10\deg$ from the center of the Small Magellanic Cloud
(SMC), corresponding to a three-dimensional physical separation of $\sim 20$
kpc given the system's observed distance ($D_{\odot} = 80$ kpc). This also
places the system $\sim 35$ kpc from the center of the Large Magellanic Cloud
(LMC), lying within recent constraints on the size of the LMC's dark matter
halo. We tentatively measure the proper motion of DELVE 6 using data from
$\textit{Gaia}$, which we find supports a potential association between the
system and the LMC/SMC. Although future kinematic measurements will be
necessary to determine its origins, we highlight that DELVE 6 may represent
only the second or third ancient ($\tau > 9$ Gyr) star cluster associated with
the SMC, or one of fewer than two dozen ancient clusters associated with the
LMC. Nonetheless, we cannot currently rule out the possibility that the system
is a distant Milky Way halo star cluster. | W. Cerny, A. Drlica-Wagner, T. S. Li, A. B. Pace, K. A. G. Olsen, N. E. D. Noël, R. P. van der Marel, J. L. Carlin, Y. Choi, D. Erkal, M. Geha, D. J. James, C. E. Martínez-Vázquez, P. Massana, G. E. Medina, A. E. Miller, B. Mutlu-Pakdil, D. L. Nidever, J. D. Sakowska, G. S. Stringfellow, J. A. Carballo-Bello, P. S. Ferguson, N. Kuropatkin, S. Mau, E. J. Tollerud, A. K. Vivas | 2023-06-07T18:00:07Z | http://arxiv.org/abs/2306.04690v1 | # DELVE 6: An Ancient, Ultra-Faint Star Cluster on the Outskirts of the Magellanic Clouds
###### Abstract
We present the discovery of DELVE 6, an ultra-faint stellar system identified in the second data release of the DECam Local Volume Exploration (DELVE) survey. Based on a maximum-likelihood fit to its structure and stellar population, we find that DELVE 6 is an old (\(\tau>9.8\) Gyr, at 95% confidence) and metal-poor (\(\rm[Fe/H]<-1.17\) dex, at 95% confidence) stellar system with an absolute magnitude of \(M_{V}=-1.5^{+0.4}_{-0.6}\) mag and an azimuthally-averaged half-light radius of \(r_{1/2}=10^{+4}_{-3}\) pc. These properties are consistent with the population of ultra-faint star clusters uncovered by recent surveys. Interestingly, DELVE 6 is located at an angular separation of \(\sim 10^{\circ}\) from the center of the
Small Magellanic Cloud (SMC), corresponding to a three-dimensional physical separation of \(\sim 20\) kpc given the system's observed distance (\(D_{\odot}=80\) kpc). This also places the system \(\sim 35\) kpc from the center of the Large Magellanic Cloud (LMC), lying within recent constraints on the size of the LMC's dark matter halo. We tentatively measure the proper motion of DELVE 6 using data from _Gaia_, which we find supports a potential association between the system and the LMC/SMC. Although future kinematic measurements will be necessary to determine its origins, we highlight that DELVE 6 may represent only the second or third ancient (\(\tau>9\) Gyr) star cluster associated with the SMC, or one of fewer than two dozen ancient clusters associated with the LMC. Nonetheless, we cannot currently rule out the possibility that the system is a distant Milky Way halo star cluster.
star clusters, Magellanic Clouds
## 1 Introduction
Recent large-scale digital sky surveys have revolutionized our understanding of the Magellanic Clouds (MCs) and their environments. In particular, sensitive surveys with the VISual and Infrared Telescope for Astronomy, (e.g., VMC; Cioni et al., 2011), the VLT Survey Telescope (e.g., STEP and YMCA; Ripepi et al., 2014; Gatto et al., 2021), and the Dark Energy Camera on the 4m Blanco Telescope (e.g., DES, SMASH, and MagLiteS; DES Collaboration, 2005; Nidever et al., 2017; Bechtol, 2017) have provided an unprecedentedly deep view of the diverse stellar populations of the MCs, enabling detailed characterization of their star formation histories (e.g., Rubele et al., 2015, 2018; Mazzi et al., 2021; Massana et al., 2022), 3D geometries (e.g., Ripepi et al., 2017; Choi et al., 2018; Ripepi et al., 2022), and substructures (e.g., Pieres et al., 2017; Mackey et al., 2018; Choi et al., 2018; Massana et al., 2020; Gatto et al., 2022; El Youssoufi et al., 2021; Cullinane et al., 2022). Furthermore, these surveys have significantly expanded the census of star clusters and satellite galaxies in the main bodies and outskirts of the Clouds (e.g., Bechtol et al., 2015; Koposov et al., 2015; Martin et al., 2016; Drlica-Wagner et al., 2016; Koposov et al., 2018; Torrealba et al., 2018; Cerny et al., 2021; Gatto et al., 2021), allowing for constraints on the MCs' masses, dark matter halos, orbits, and interaction histories (e.g., Jethwa et al., 2016; Kallivayalil et al., 2018; Bitsakis et al., 2018; Erkal & Belokurov, 2020; Patel et al., 2020; Dias et al., 2021) especially when paired with the precise phase-space information provided by the _Gaia_ satellite (Gaia Collaboration et al., 2016; Battaglia et al., 2022; Pace et al., 2022). Lastly, these efforts to survey and characterize the satellite populations of the MCs have enabled novel observational tests of hierarchical galaxy formation within the \(\Lambda\)CDM paradigm at a lower host mass scale than offered by the Milky Way (e.g., Sales et al., 2016; Dooley et al., 2017; Jahn et al., 2019).
In this _Letter_, we present the newest discovery in this ongoing census of Magellanic satellites: DELVE 6, an ancient, ultra-faint star cluster in the distant outskirts of the MCs. This low-mass system was identified through matched-filter searches over imaging from the Dark Energy Camera (DECam; Flaugher et al., 2015) processed as part of the second data release of the DECam Local Volume Exploration survey (DELVE DR2; Drlica-Wagner et al., 2022). We find that it has an old and metal-poor stellar population, joining the less-than-two-dozen ancient globular clusters known in the Magellanic environment, and that it falls at an unusually large separation from its likely hosts. Thus, DELVE 6 potentially represents an exciting and novel window into the stellar populations inhabiting the periphery of the LMC/SMC system. Here, we present an initial characterization of this system's basic properties, and briefly highlight possibilities for its origins that can be tested with deeper imaging and spectroscopic followup.
## 2 Discovery and Characterization
### Identification in DELVE DR2 and the Legacy Surveys DR10
In Cerny et al. (2022), we presented the results of an extensive search for ultra-faint stellar systems in the Milky Way halo using DECam data processed as part of DELVE DR2 (Drlica-Wagner et al., 2022). Briefly, this search involved applying the open-source simple search code1, which implements an isochrone matched-filter in color-magnitude space to identify overdensities of resolved stars in the Milky Way halo consistent with an old, metal-poor stellar population. Over the entire DELVE DR2 footprint (\(\sim\) 21,000 deg\({}^{2}\)), this search resulted in \(\mathcal{O}(10^{4})\) overdensities, six of which we confirmed as _bona fide_ ultra-faint stellar systems in Cerny et al. (2022) on the basis of deeper follow-up imaging.
Footnote 1: [https://github.com/DarkEnergySurvey/simple](https://github.com/DarkEnergySurvey/simple)
During the late stages of preparation of the aforementioned work, new multi-band co-added images built from the DECam data became available as part of an
early version of the Legacy Surveys Data Release 10.2 Motivated by the new availability of these images, we performed a visual inspection of a subset of the initial high-significance (\(>5.5\sigma\)) satellite candidates that resided in regions where comparable color images were not previously available. The primary goal of this effort was to identify systems that were clearly identifiable as overdensities of blue stars in these images, but may have initially been missed due to their marginal signals seen in their smoothed spatial distribution and observed color-magnitude diagrams (CMDs) generated as part of simple's diagnostic plots. One such candidate, DELVE J0212-6603 (DELVE 6), was identified at high significance (\(\sim 5-7\sigma\); comparable to the candidates presented in Cerny et al., 2022) in our multi-pronged search, but was initially passed over during prior inspection due to its sparse CMD. However, as seen in Figure 1, this system is visible as a tight clustering of faint blue sources in the color images provided by the Legacy Surveys DR10 and was easily identified during our visual inspection.
Footnote 2: [https://www.legacysurvey.org/dr10/description/](https://www.legacysurvey.org/dr10/description/)
After confirming that the system has not been reported in literature catalogs of star clusters and dwarf galaxies in the environment of the MCs (e.g., Bica et al., 2008, 2020; Gatto et al., 2020), we proceeded to characterize the newly-discovered system's structure and stellar population, as described in the following subsection. In the absence of timely deeper follow-up imaging, we continued to use the photometric catalogs provided by DELVE DR2 for our analysis. The DELVE DR2 data coincident with DELVE 6 are relatively deep, reaching (extinction-corrected) S/N = 10 magnitude limits of \(g_{0}\sim 24.0\) mag and \(r_{0}\sim 23.8\) mag. These limits are roughly 0.5 mag and 0.8 mag deeper than the median DELVE DR2 depth in the \(g\) and \(r\) bands, respectively (see Drlica-Wagner et al., 2022 for specific information about this public dataset). This depth was therefore found to be sufficient to characterize the newly-discovered system despite its low luminosity.
Throughout the analyses described below, we separated stars from galaxies based on the selection \(0\leq\texttt{EXTENDED\_CLASS\_G}\leq 2\), matching our DECam analyses described in Cerny et al. (2022). This broadly allowed for a higher degree of stellar completeness at the cost of increased galaxy contamination at fainter magnitudes (Drlica-Wagner et al., 2022).
### Structural and Stellar Population Fit
We fit DELVE 6's morphological and stellar population properties using the Ultra-faint Galaxy LIkelihood software toolkit (ugali), which implements an unbinned Poisson maximum-likelihood approach based on the
Figure 1: (Left) \(2^{\prime}\times 2^{\prime}\) false-color image cutout centered on DELVE 6 based on \(griz\) DECam imaging, taken from the Legacy Survey Sky Viewer. A clustering of faint blue sources is visible amidst a number of foreground stars and background galaxies. We have increased the brightness of this cutout image to enhance the visibility of faint cluster member stars. (Right) Map of star clusters and dwarf galaxies in the Magellanic environment, plotted in Magellanic Stream coordinate system from Nidever et al. (2008). Each black point corresponds to a star cluster included in the main cluster catalogs from Bica et al. (2008, 2020). Recently-discovered ultra-faint star clusters in this region are shown as red dots, whereas candidate and confirmed ultra-faint dwarf galaxies potentially associated with the MCs are shown as blue triangles. We caution that some of these objects have uncertain classifications and/or tentative associations with the MCs. In addition, we plot the centroid position of the SMC Northern Overdensity (SMCNOD; Pieres et al., 2017) in orange. Lastly, DELVE 6 is shown as a yellow star, positioned near \(B_{\rm MS}\sim 0\); this latitude falls along the projected track of the Magellanic Stream.
statistical formalism presented in Appendix C of Drlica-Wagner et al. (2020). We modelled DELVE 6's structure with an elliptical Plummer (1911) radial stellar density profile, and we fit a PARSEC isochrone (Bressan et al., 2012) to its observed \(g,r\)-band CMD. The eight free parameters for these models were the centroid coordinates (\(\alpha_{2000}\) and \(\delta_{2000}\)), extension (\(a_{h}\)), ellipticity (\(\epsilon\)), position angle (P.A.) of the Plummer profile, and the age (\(\tau\)), metallicity ([Fe/H]), and distance modulus (\((m-M)_{0}\)) of the isochrone model. All eight of these parameters were constrained simultaneously by sampling their posterior probability distribution functions using the affine-invariant Markov Chain Monte Carlo ensemble sampler (Goodman and Weare, 2010) implemented in the Python package emcee (Foreman-Mackey et al., 2013). This sampling was performed with 80 walkers each taking 35,000 steps, with the first 12,500 steps discarded as burn-in; these parameters were set to ensure dense sampling of the age-metallicity bimodality described in the next subsection.
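For orientation, the sampling setup described above looks roughly like the sketch below. The actual likelihood (Plummer profile times isochrone membership; Appendix C of Drlica-Wagner et al. 2020) lives inside ugali and is replaced here by a dummy Gaussian; the parameter names and starting values are illustrative only.

```python
import numpy as np
import emcee

PARAMS = ["ra", "dec", "extension", "ellipticity", "position_angle",
          "age", "feh", "distance_modulus"]
GUESS = np.array([33.07, -66.06, 0.007, 0.1, 14.0, 12.0, -1.8, 19.5])

def log_posterior(theta):
    if not (0.001 < theta[2] < 0.1):          # weak prior on the extension (deg)
        return -np.inf
    return -0.5 * np.sum(((theta - GUESS) / 0.1) ** 2)   # stand-in for the ugali likelihood

ndim, nwalkers, nsteps, nburn = 8, 80, 35_000, 12_500
p0 = GUESS + 1e-4 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, nsteps)
chain = sampler.get_chain(discard=nburn, flat=True)   # marginalized posterior samples
```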
We report the best-fit values of each parameter and their associated uncertainties in Table 1. These results were derived from a fit assuming a nominal magnitude limit of \(g_{0},r_{0}=23.8\) mag, and with the size of the concentric annulus used by ugali to construct the foreground/background model used for its joint fit of stellar color, magnitude, and spatial distributions set to \(0.5^{\circ}<r<1.5^{\circ}\). To assess whether these fit hyperparameters might affect our results given the complex, spatially-variable foreground/background stellar density associated with the MCs, we explored the sensitivity of our derived estimates of DELVE 6's properties to variations in the assumed magnitude limit and outer radius of the background annulus. Specifically, we re-ran the full ugali MCMC procedure for magnitude limits in the interval [23.6 mag, 24.0 mag] and background annulus radii in the interval [1.0\({}^{\circ}\),2.0\({}^{\circ}\)]. In these tests, we found that all fits that converged resulted in parameter estimates consistent within uncertainties with those reported in Table 1; this was true for all parameters except the position angle, which is mostly unconstrained due to the negligible ellipticity of DELVE 6. However, we did need to apply a weak prior on the extension (sizes \(0.001^{\circ}<a_{h}<0.1^{\circ}\)) in order to avoid convergence to non-physical results. Our fiducial results presented in Table 1 were derived with this prior applied.
### Properties of DELVE 6
As depicted in Figure 2-3, we find that DELVE 6 is a compact (\(r_{1/2}=10^{+4}_{-3}\) pc), ultra-faint (\(M_{V}=-1.5^{+0.4}_{-0.6}\) mag) stellar system with a round morphology (\(\epsilon<0.56\) at 95% confidence). These properties place DELVE 6 in a region of the \(M_{V}\)-\(r_{1/2}\) plane that is dominated by the population of ultra-faint Milky Way halo star clusters discovered by recent surveys, which are generally fainter and more compact than their ultra-faint dwarf galaxy counterparts (see Figure 4). We do observe that DELVE 6 is among the more extended (candidate) ultra-faint star clusters discovered to date, though. We tentatively classify DELVE 6 as an ultra-faint star cluster on the basis of its small physical size, although a future spectroscopic measurement of its velocity and/or metallicity dispersion will provide a more definitive classification for the system.
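As a quick consistency check, the physical quantities in Table 1 follow directly from the fitted distance modulus and angular half-light radius; the back-of-the-envelope sketch below uses the best-fit values quoted in the text.

```python
import numpy as np

m_M = 19.51                                  # best-fit distance modulus (mag)
d_pc = 10 ** (m_M / 5.0 + 1.0)               # heliocentric distance: ~8.0e4 pc
r_h_arcmin = 0.43                            # angular half-light radius
r_half_pc = d_pc * np.radians(r_h_arcmin / 60.0)   # small-angle approximation: ~10 pc
print(f"D_sun ~ {d_pc / 1e3:.0f} kpc, r_1/2 ~ {r_half_pc:.0f} pc")
```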
Figure 2: (Left) Spatial distribution of stars in a \(0.12^{\circ}\times 0.12^{\circ}\) (\(7.2^{\prime}\times 7.2^{\prime}\)) field centered on DELVE 6. Stars are colored by their probability of being a member of the candidate system, as determined through our ugali fit described in Section 2.2; stars with probabilities \(p<0.1\) are shown in grey. (Center) CMD for the stars shown in the left panel. The best-fit isochrone with \(Z=0.0001\) and \(\tau=13.5\) Gyr is shown in black, although see the discussion in Section 2.2 and see Figure 3 for caveats about this model. (Right) Radial stellar density profile for DELVE 6. The best-fit Plummer model is shown as a solid blue curve, assuming the half-light radius reported in Table 1 and an ellipticity \(\epsilon=0\). This ellipticity is approximately matched to the mode of the marginalized posterior distribution for \(\epsilon\) derived from our MCMC analysis; however, our constraint on the system’s ellipticity (\(\epsilon<0.56\) at 95% confidence) is relatively weak due to the small number of observed member stars.
In addition, we find that DELVE 6 is most consistent with an ancient stellar population, as evident from the clear main-sequence turnoff, extended subgiant branch, and (sparse) red giant branch seen in its observed CMD (Figure 2-3). The best-fit isochrone favored by our ugali fit was consistent with the maximum age and the minimum metallicity of our isochrone grid (\(\tau=13.5\) Gyr and \(\rm[Fe/H]=-2.19\) dex, respectively) although the marginalized posterior distribution for these two parameters was found to be bimodal, with a secondary peak at \(\tau\sim 10\) Gyr and \(\rm[Fe/H]\sim-1.2\) dex (see left panel of Figure 3). These modes appear to depend on whether the single blue horizontal branch (BHB) star candidate shown in the right panel of Figure 3 is a true member, or alternately whether the star positioned near the red horizontal branch (RHB) of the second model is a true member. In our nominal best-fit parameter constraints, the BHB star is included as a high-confidence member, driving the fit toward the lower-metallicity mode (blue isochrone in Figure 3). Removing the BHB star from our photometric catalog and re-running the fit resulted in the secondary mode becoming the favored best-fit solution (red isochrone in the same figure). If we instead removed the candidate RHB star, the results are qualitatively similar to those shown in Figure 3.
Using the BHB-blue straggler separation technique introduced in Li et al. (2019), we found that the BHB star's \((g-r)_{0}\) and \((i-z)_{0}\) color is consistent with a classification as a _bona fide_ BHB star. Furthermore, the star's _Gaia_ DR3 proper motion (Gaia Collaboration et al., 2022) is sufficiently small that we cannot clearly identify it as a foreground star, and its parallax is consistent with zero within errors; we therefore cannot rule out its membership in the distant DELVE 6 system. By contrast, the RHB candidate's proper motion of \((\mu_{\alpha}\cos\delta,\mu_{\delta})=(-3.0\pm 0.4,-12.2\pm 0.3)\) implies a tangential velocity that is inconsistent with that of a typical star in a bound orbit at DELVE 6's distance; this suggests the star is not a member of DELVE 6. We have opted to present our ugali fit results from a fit with both stars included in our catalog. This decision fully specifies the age/metallicity solution that we report for DELVE 6 due to the strong impact of the BHB candidate, but the upper/lower limits presented in the Table 1 encompass both plausible solutions for these two parameters. Despite the uncertainty in the age and metallicity, we found that the structural properties presented in Table 1 are consistent within uncertainties with the results derived from fits with either one of the two stars in question removed.
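The tangential-velocity argument against the RHB candidate's membership is a one-line estimate, \(v_{\rm tan}\simeq 4.74\,\mu\,[{\rm mas\,yr^{-1}}]\,D\,[{\rm kpc}]\) km s\({}^{-1}\); a sketch of this check, neglecting the solar reflex motion, is given below.

```python
import numpy as np

mu_ra, mu_dec = -3.0, -12.2   # RHB candidate proper motion, mas/yr (values quoted in the text)
d_kpc = 80.0                  # DELVE 6 heliocentric distance
v_tan = 4.74 * np.hypot(mu_ra, mu_dec) * d_kpc
print(f"v_tan ~ {v_tan:.0f} km/s")  # ~4800 km/s, far larger than any plausible bound velocity
```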
Although we cannot conclusively determine which age/metallicity is most appropriate for DELVE 6 until deeper imaging and/or a spectroscopic metallicity measurement becomes available, both of these isochrone fits strongly suggest that the system is ancient (\(\tau>9.8\) Gyr). DELVE 6's old age, as well as its position in the outskirts of the LMC and SMC, raises interesting questions about its formation and evolution. We study the systemic proper motion of DELVE 6 below and then explore several possibilities for its origin in Section 3.
### Systemic Proper Motion of DELVE 6
In addition to the aforementioned BHB and RHB candidates, we identified one additional nearby
| Parameter | Value | Units |
| --- | --- | --- |
| IAU Name | DELVE J0212\(-\)6603 | ... |
| Constellation | Hydrus | ... |
| \(\alpha_{2000}\) | \(33.070^{+0.003}_{-0.004}\) | deg |
| \(\delta_{2000}\) | \(-66.056^{+0.002}_{-0.002}\) | deg |
| \(r_{\rm h}\) | \(0.43^{+0.18}_{-0.12}\) | arcmin |
| \(r_{1/2}\) | \(10^{+4}_{-3}\) | pc |
| \(\epsilon\) | \(<0.56\) | ... |
| P.A. | \(14^{+40}_{-63}\) | deg |
| \(M_{V}\) | \(-1.5^{+0.4}_{-0.6}\) | mag |
| \(\tau\) | \(>9.8\) | Gyr |
| \([\rm Fe/H]\) | \(<-1.17\) | dex |
| \((m-M)_{0}\) | \(19.51^{+0.04}_{-0.12}\) (stat.) \(\pm\) 0.1 (sys.) | mag |
| \(D_{\odot}\) | \(80^{+2}_{-4}\) (stat.) \({}^{+4}_{-4}\) (sys.) | kpc |
| \(D_{\rm GC}\) | \(78^{+2}_{-4}\) (stat.) \({}^{+4}_{-4}\) (sys.) | kpc |
| \(D_{\rm LMC}\) | \(35^{+2}_{-3}\) (stat.) \({}^{+3}_{-3}\) (sys.) | kpc |
| \(D_{\rm SMC}\) | \(20^{+2}_{-3}\) (stat.) \({}^{+3}_{-3}\) (sys.) | kpc |
| \(E(B-V)\) | 0.036 | mag |
| \(\mu_{\alpha}\cos\delta\) | \(0.93^{+0.39}_{-0.39}\) | mas yr\({}^{-1}\) |
| \(\mu_{\delta}\) | \(-1.28^{+0.38}_{-0.38}\) | mas yr\({}^{-1}\) |

Note. – Uncertainties for each parameter were derived from the highest-density interval containing 68% of the marginalized posterior distribution. For the ellipticity, metallicity, and age, the posterior distribution peaked at the boundary of the allowed parameter space. Therefore, we quote the upper, upper, and lower bound for these three parameters (respectively) at 95% confidence.

Table 1: Properties of DELVE 6
member candidate with a _Gaia_ proper motion measurement. This star (_Gaia_ DR3 source_id: 4698076296289956352) is the brightest RGB star consistent with our best-fit isochrone in Figure 2 and was identified as a high-probability member in our ugali fit (\(p_{\tt ugali}=0.99\); see Appendix A, Table 2). Notably, this RGB star's proper motion is consistent with the BHB star at the \(1.3\sigma\) level.3 This agreement in proper motion supports the interpretation that the BHB and RGB stars may both be members of DELVE 6, as the joint probability of having these two stars by chance lying on a single isochrone while also sharing a statistically consistent non-zero proper motion is small. It is thus reasonable to derive the proper motion of DELVE 6 from these two stars, which we find to be \((\mu_{\alpha}\cos\delta,\mu_{\delta})=(0.93\pm 0.39,-1.28\pm 0.38)\) mas yr\({}^{-1}\) through a simple two-parameter fit.
Footnote 3: Here, the confidence level calculated appropriately for the case of two-dimensional Gaussians was converted to the equivalent \(\sigma\)-distance for a one-dimensional Gaussian; however, this calculation neglects the covariance between proper motion components.
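One simple realization of such a two-parameter fit is an inverse-variance weighted mean of the member stars' proper motions, applied independently to each component. The sketch below is illustrative only: the per-star inputs are placeholders, since the individual _Gaia_ measurements are not reproduced here.

```python
import numpy as np

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its uncertainty."""
    values, errors = np.asarray(values, float), np.asarray(errors, float)
    w = 1.0 / errors ** 2
    return np.sum(w * values) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# Placeholder per-star proper motions (mas/yr) for the BHB and RGB candidates:
pmra, pmra_err = weighted_mean([0.9, 1.0], [0.6, 0.5])
pmdec, pmdec_err = weighted_mean([-1.2, -1.4], [0.6, 0.5])
```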
Although this measurement is tentative and should be interpreted cautiously, DELVE 6's kinematics are singularly important for unravelling its origins. Therefore, we briefly explored what this measurement might tell us about the connection of DELVE 6 to possible host systems. To do so, we calculated its azimuthal and polar velocity components in Galactocentric spherical coordinates (hereafter \(v_{\phi}\) and \(v_{\theta}\)) by sampling from the posterior distributions for the available 5D phase-space measurements and sampling the unknown radial velocity from a uniform distribution on the interval [\(-500\) km s\({}^{-1}\), \(500\) km s\({}^{-1}\)]. We then compared the values of these velocity components to those expected from stars belonging to the distant MW halo and the MCs.
Using galpy (Bovy, 2015) to carry out the velocity transformation, we find \(v_{\phi}=-50\pm 150\) km s\({}^{-1}\), \(v_{\theta}=-430\pm 150\) km s\({}^{-1}\), where the best-fit values and the upper/lower uncertainties correspond to the median and 16th/84th percentiles across 100,000 random 6D samples. This value of \(v_{\phi}\) is uninformative, as it is consistent with the expected velocity distributions of all three hosts. However, we do observe that DELVE 6's estimated polar velocity \(v_{\theta}\) is approximately consistent with that of the LMC (\(v_{\theta}\sim-305\) km s\({}^{-1}\)) and SMC (\(v_{\theta}\sim-260\) km s\({}^{-1}\)). The polar velocity distribution of MW halo tracers at 80 kpc is expected to be a Gaussian centered near \(v_{\theta}=0\) km s\({}^{-1}\) with dispersion \(\sigma_{v_{\theta}}\lesssim 100\) km s\({}^{-1}\) (see Fig. 4 of Bird et al., 2019), and
Figure 3: (Left) 2D posterior probability distribution for the metallicity and age of DELVE 6. The blue shaded contours denote the \(1\sigma,\ 2\sigma\), and \(3\sigma\) 2D confidence regions in this plane. (Right) CMD for all stars within \(2r_{1/2}\) of DELVE 6’s centroid. Two isochrone models are shown. The blue model depicts an isochrone with [Fe/H] \(=-2.19\) and \(\tau=13.5\) Gyr, consistent with the posterior mode shown in the bottom-righthand corner of the left panel, whereas the red model depicts a younger, more metal-rich isochrone corresponding to a fit lying within the medium-blue \(2\sigma\) contour near the top-center of the left panel. Here, both models are fixed to the distance modulus reported in Table 1, despite small differences in distance between the corresponding samples. We highlight that one BHB candidate lies along the older, more metal-poor (blue) model, and one (likely non-member) star lies near the RHB of the younger, more metal-rich (red) model.
thus DELVE 6 appears to be kinematically distinct from typical stars in the outer halo in this velocity component. We therefore conclude that DELVE 6's proper motion, taken at face value, would argue in favor of a Magellanic association. This conclusion only weakly depends on the unknown line-of-sight velocity but is limited by the large error on DELVE 6's proper motion. Reduced proper motion errors (e.g., from future _Gaia_ data releases) will provide more definitive kinematic evidence for an association with one host or another.
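The Monte Carlo estimate described above can be reproduced schematically as follows. The paper performs the transformation with galpy; the sketch below uses astropy instead, adopts illustrative error models for the distance and proper motion, and its sign conventions for \(v_{\phi}\) and \(v_{\theta}\) may differ from those quoted in the text.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric

n = 100_000
rng = np.random.default_rng(42)
pmra = rng.normal(0.93, 0.39, n) * u.mas / u.yr       # mu_alpha * cos(delta)
pmdec = rng.normal(-1.28, 0.38, n) * u.mas / u.yr
dist = rng.normal(80.0, 4.0, n) * u.kpc               # illustrative distance error
rv = rng.uniform(-500.0, 500.0, n) * u.km / u.s       # unknown line-of-sight velocity

c = SkyCoord(ra=np.full(n, 33.070) * u.deg, dec=np.full(n, -66.056) * u.deg,
             distance=dist, pm_ra_cosdec=pmra, pm_dec=pmdec, radial_velocity=rv)
gc = c.transform_to(Galactocentric())
x, y, z = gc.cartesian.xyz.to(u.kpc).value
vx, vy, vz = gc.velocity.d_xyz.to(u.km / u.s).value

r, R = np.sqrt(x**2 + y**2 + z**2), np.sqrt(x**2 + y**2)
v_r = (x * vx + y * vy + z * vz) / r
v_phi = (x * vy - y * vx) / R          # azimuthal component
v_theta = (z * v_r - r * vz) / R       # polar component
print(np.percentile(v_theta, [16, 50, 84]))   # compare with the values quoted in the text
```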
## 3 Discussion
At its projected sky position (Figure 1) and Galactocentric distance (\(D_{\rm GC}=78\) kpc), DELVE 6 is located 20 kpc from the center of the SMC and 35 kpc from the center of the LMC.4 Despite this relative proximity, DELVE 6 is located beyond the tidal radius (\(r_{t}\)) of the LMC due to the MW (\(r_{t}\sim 22\) kpc; van der Marel & Kallivayalil, 2014) and the tidal radius of the SMC due to the LMC (\(r_{t}\sim 5\) kpc; Massana et al., 2020), suggesting that these two possible hosts have a weak gravitational influence on DELVE 6. It is therefore not immediately evident based on its position alone whether this system is associated with either of the Clouds. On the basis of its separation from the LMC and SMC, as well as the possible ages and metallicities discussed above, we speculate that DELVE 6 is most likely described by one of three scenarios: (1) a distant ultra-faint Milky Way halo star cluster coincidentally located near the MCs; (2) a LMC cluster residing in its host's outer halo in a weakly-bound or unbound orbit, or (3) a cluster formed within the SMC that has been stripped from its host and now orbits in the MW+LMC potential.
Footnote 4: We assume Galactocentric distances for the LMC, SMC, and Galactic center from Pietrzyński et al. (2013); Graczyk et al. (2020) and GRAVITY Collaboration et al. (2019), respectively. We neglect the uncertainties on these distances and on each object’s centroid coordinates when calculating \(D_{\rm LMC}\), \(D_{\rm SMC},D_{\rm GC}\) because they are subdominant relative to the uncertainty on DELVE 6’s heliocentric distance.
The first of these scenarios is supported by the similarity between DELVE 6's age and metallicity to the properties of the MW's "classical" globular clusters and the growing population of ultra-faint MW halo star clusters. This scenario also mitigates the need for an explanation of DELVE 6's apparently large separation from the LMC and SMC. Roughly a dozen MW star clusters are known at \(D_{\odot}>70\) kpc, suggesting that an LMC/SMC origin is not necessary to explain DELVE 6's large Galactocentric distance.5 This all being said, our exploration of DELVE 6's tentative proper motion (Section 2.4) suggests that its overall kinematics may be more consistent with the LMC/SMC system and inconsistent with MW halo tracers at its distance.
Footnote 5: This number includes ultra-faint systems, but excludes clusters in the MW’s dwarf satellites (e.g. Fornax and Eridanus II).
On that note, the second of our proposed scenarios - namely that the system was formed in the LMC and remains in a weakly-bound or unbound orbit around its host - may also explain the ancient age of DELVE 6 given the \(\sim 15\) known LMC globular clusters with ages \(>9\) Gyr (e.g., Mackey & Gilmore, 2004), but does not directly explain the system's present-day Galactocentric distance and 3D separation. Indeed, if confirmed to be an LMC satellite, DELVE 6 would reside at a larger separation from the LMC than all but two star clusters believed to be associated with the MCs.6 Nevertheless, there is evidence that the LMC dark matter halo extends \(>50\) kpc in radius (e.g., Koposov et al., 2022), making it plausible that DELVE 6 lies in the outskirts of the LMC halo. Furthermore, we note that at least two ultra-faint dwarf galaxies that are likely to be associated with the LMC lie at larger separations from their (original) host compared to DELVE 6: Horologium I and Phoenix II
Figure 4: Absolute magnitude (\(M_{V}\)) vs. azimuthally-averaged half-light radius (\(r_{1/2}\)) for a large sample of classical globular clusters, candidate and confirmed Milky Way satellite galaxies, and recently-discovered ultra-faint halo star clusters. The location of DELVE 6 in this plane is indicated by a yellow star. A complete reference list for this figure is available in Appendix B.
(see Pace et al., 2022). Detailed modelling of the MW and LMC dark matter halo structures (including the LMC's dynamical friction wake) and these satellites' orbits within the associated potential suggests that the former of these two ultra-faint dwarfs is unbound from the LMC, while the latter is likely bound (Garavito-Camargo et al., 2021). By analogy, we conclude that DELVE 6 could plausibly be either weakly bound to the LMC or unbound given its current position.
The last possibility is that DELVE 6 was initially formed within the SMC, but was stripped from its host due to interactions between the LMC and the SMC. The strongest (and arguably only) observational evidence for this conclusion is DELVE 6's large present-day Galactocentric distance and its on-sky location. These properties place the system in a 3D position where a relatively high density of SMC satellites/debris is expected based on numerical simulations (Jethwa et al., 2016). In further support of this possibility, we highlight that a similar stripping scenario has also been proposed for the recently-discovered ultra-faint star cluster YMCA-1 on the basis of its proper motion (Piatti and Lucchini, 2022); like DELVE 6, this system lies well beyond the SMC's tidal radius.
Disfavoring this stripping scenario is the fact that DELVE 6's age and metallicity from our nominal best-fit solution are inconsistent with the known SMC globular cluster population, which includes only a single comparably old system with a robustly-measured age (NGC 121, at \(\tau\sim 11\) Gyr; Glatt et al., 2008). Indeed, our measurement suggests that DELVE 6 would be the second or third oldest star cluster associated with the SMC, depending on the status of the aforementioned YMCA-1 system, which has a somewhat uncertain age (\(\tau\sim 9.6-11.7\) Gyr; Piatti and Lucchini, 2022; Gatto et al., 2022) and only a tentative association with the SMC. Nonetheless, such a stripping scenario has been proposed as one explanation for the apparent dearth of old-aged star clusters associated with the SMC. By tracing the orbits of star clusters throughout a period of dynamical interaction between the LMC and SMC, Carpintero et al. (2013) found that for large eccentricities (\(e>0.5\)), \(\sim 15\%\) of SMC star clusters are captured by the LMC and an additional \(\sim 20-50\%\) of clusters are ejected into the intergalactic medium. We believe that either of these two capture scenarios is more likely than the possibility that DELVE 6 inhabits a weakly-bound orbit in the outer halo of the SMC (analogous to the case above for the LMC) given that the system lies at \(\sim 4r_{\rm t,SMC}\) compared to \(\sim 1.6r_{\rm t,LMC}\).
Spectroscopic follow-up will be critical for distinguishing between these scenarios. Specifically, a radial velocity measurement would enable the possibility of rewinding DELVE 6's orbit in the combined MW+LMC+SMC potential, elucidating its origins. If an LMC or SMC origin can be robustly established on the basis of its orbit, DELVE 6 may provide a significant constraint on its host's age-metallicity relation at a very large radius. Deeper imaging reaching the main-sequence turnoff feature of DELVE 6's color-magnitude diagram will be necessary to realize this possibility and robustly confirm this system's ancient age, and a spectroscopic metallicity measurement would aid in breaking the age-metallicity degeneracy inherent to isochrone fitting.
Lastly, we highlight that the discovery of DELVE 6 emphasizes that the observational census of ultra-faint systems in the Magellanic environment is incomplete. Considering the continual discovery of similar ultra-faint star cluster systems near the MCs in recent years, we speculate that a more extensive population of old, metal-poor ultra-faint star clusters may exist around the LMC, SMC, and perhaps even other low-mass galaxies in the Local Group, waiting to be uncovered by current and future surveys.
## 4 Acknowledgments
Codes and data products associated with this work are available online at [https://github.com/wcerny/DELVE6_Paper](https://github.com/wcerny/DELVE6_Paper).
This project is partially supported by the NASA Fermi Guest Investigator Program Cycle 9 No. 91201. This work is partially supported by Fermilab LDRD project L2019-011. W.C. gratefully acknowledges support from a Gruber Science Fellowship at Yale University. CEMV is supported by the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation, on behalf of the Gemini partnership of Argentina, Brazil, Canada, Chile, the Republic of Korea, and the United States of America. JAC-B acknowledges support from FONDECYT Regular N 1220083.
This project used data obtained with the Dark Energy Camera, which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the DOE and NSF (USA), MISE (Spain), STFC (UK), HEFCE (UK), NCSA (UIUC), KICP (U. Chicago), CCAPP (Ohio State), MIFPA (Texas A&M University), CNPQ, FAPERJ, FINEP (Brazil), MINECO (Spain), DFG (Germany), and the collaborating institutions in the Dark Energy Survey, which are Argonne Lab, UC Santa Cruz, University of Cambridge, CIEMAT-Madrid, University of
Chicago, University College London, DES-Brazil Consortium, University of Edinburgh, ETH Zurich, Fermilab, University of Illinois, ICE (IEEC-CSIC), IFAE Barcelona, Lawrence Berkeley Lab, LMU Munchen, and the associated Excellence Cluster Universe, University of Michigan, NSF's National Optical-Infrared Astronomy Research Laboratory, University of Nottingham, Ohio State University, OzDES Membership Consortium, University of Pennsylvania, University of Portsmouth, SLAC National Lab, Stanford University, University of Sussex, and Texas A&M University.
Based on observations at Cerro Tololo Inter-American Observatory, NSF's National Optical-Infrared Astronomy Research Laboratory (2019A-0305; PI: Drlica-Wagner), which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID 2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID 2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID 2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation.
NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy.
BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program "The Emergence of Cosmological Structures" Grant # XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant # 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant # 12120101003, # 11433005).
The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration.
The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO.
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
This work made use of Astropy:7 a community-developed core Python package and an ecosystem of tools and resources for astronomy.
Footnote 7: [http://www.astropy.org](http://www.astropy.org)
This manuscript has been authored by Fermi Research Alliance, LLC, under contract No. DE-AC02-07CH11359 with the US Department of Energy, Office of Science, Office of High Energy Physics. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Footnote 8: [http://healpix.sourceforge.net](http://healpix.sourceforge.net)

_Facilities:_ Blanco, _Gaia_.

_Software:_ numpy (van der Walt et al., 2011; Harris et al., 2020), scipy (Virtanen et al., 2020), emcee (Foreman-Mackey et al., 2013), HEALPix (Gorski et al., 2005),8 healpy (Zonca et al., 2019), ugali (Bechtol et al., 2015),9 ChainConsumer (Hinton, 2019), simple (Bechtol, ...)
## References
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipocz, B. M., et al. 2018, AJ, 156, 123
* Astropy Collaboration et al. (2022) Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167
* Balbinot et al. (2013) Balbinot, E., Santiago, B. X., da Costa, L., et al. 2013, ApJ, 767, 101
* Battaglia et al. (2022) Battaglia, G., Taibi, S., Thomas, G. F., & Fritz, T. K. 2022, A&A, 657, A54
* Bechtol (2017) Bechtol, K. 2017, in American Astronomical Society Meeting Abstracts, Vol. 229, American Astronomical Society Meeting Abstracts #229, 416.06
* Bechtol et al. (2015) Bechtol, K., Drlica-Wagner, A., Balbinot, E., et al. 2015, ApJ, 807, 50
* Bica et al. (2008) Bica, E., Bonatto, C., Dutra, C. M., & Santos, J. F. C. 2008, MNRAS, 389, 678
* Bica et al. (2020) Bica, E., Westera, P., Kerber, L. d. O., et al. 2020, AJ, 159, 82
* Bird et al. (2019) Bird, S. A., Xue, X.-X., Liu, C., et al. 2019, AJ, 157, 104
* Bitsakis et al. (2018) Bitsakis, T., Gonzalez-Lopezira, R. A., Bonfini, P., et al. 2018, ApJ, 853, 104
* Bovy (2015) Bovy, J. 2015, ApJS, 216, 29
* Bressan et al. (2012) Bressan, A., Marigo, P., Girardi, L., et al. 2012, MNRAS, 427, 127
* Cantu et al. (2021) Cantu, S. A., Pace, A. B., Marshall, J., et al. 2021, ApJ, 916, 81
* Carlin et al. (2017) Carlin, J. L., Sand, D. J., Munoz, R. R., et al. 2017, AJ, 154, 267
* Carpintero et al. (2013) Carpintero, D. D., Gomez, F. A., & Piatti, A. E. 2013, MNRAS, 435, L63
* Cerny et al. (2021a) Cerny, W., Pace, A. B., Drlica-Wagner, A., et al. 2021a, ApJ, 910, 18
* Cerny et al. (2021b) --. 2021b, ApJ, 920, L44
* Cerny et al. (2022) Cerny, W., Martinez-Vazquez, C. E., Drlica-Wagner, A., et al. 2022, arXiv e-prints, arXiv:2209.12422
* Cerny et al. (2023) Cerny, W., Simon, J. D., Li, T. S., et al. 2023, ApJ, 942, 111
* Choi et al. (2018a) Choi, Y., Nidever, D. L., Olsen, K., et al. 2018a, ApJ, 866, 90
* Choi et al. (2018b) --. 2018b, ApJ, 869, 125
* Cioni et al. (2011) Cioni, M. R. L., Clementini, G., Girardi, L., et al. 2011, A&A, 527, A116
* Conn et al. (2018) Conn, B. C., Jerjen, H., Kim, D., & Schirmer, M. 2018, ApJ, 852, 68
* Crnojevic et al. (2016) Crnojevic, D., Sand, D. J., Zaritsky, D., et al. 2016, ApJ, 824, L14
* Cullinane et al. (2022) Cullinane, L. R., Mackey, A. D., Da Costa, G. S., et al. 2022, MNRAS, 510, 445
* DES Collaboration (2005) DES Collaboration. 2005, arXiv e-prints, astro
* Dias et al. (2021) Dias, B., Angelo, M. S., Oliveira, R. A. P., et al. 2021, A&A, 647, L9
* Dooley et al. (2017) Dooley, G. A., Peter, A. H. G., Carlin, J. L., et al. 2017, MNRAS, 472, 1060
* Drlica-Wagner et al. (2015) Drlica-Wagner, A., Bechtol, K., Rykoff, E. S., et al. 2015, ApJ, 813, 109
* Drlica-Wagner et al. (2016) Drlica-Wagner, A., Bechtol, K., Allam, S., et al. 2016, ApJ, 833, L5
* Drlica-Wagner et al. (2020) Drlica-Wagner, A., Bechtol, K., Mau, S., et al. 2020, ApJ, 893, 47
* Drlica-Wagner et al. (2022) Drlica-Wagner, A., Ferguson, P. S., Adamow, M., et al. 2022, ApJS, 261, 38
* El Youssoufi et al. (2021) El Youssoufi, D., Cioni, M.-R. L., Bell, C. P. M., et al. 2021, MNRAS, 505, 2020
* Erkal & Belokurov (2020) Erkal, D., & Belokurov, V. A. 2020, MNRAS, 495, 2554
* Fadely et al. (2011) Fadely, R., Willman, B., Geha, M., et al. 2011, AJ, 142, 88
* Flaugher et al. (2015) Flaugher, B., Diehl, H. T., Honscheid, K., et al. 2015, AJ, 150, 150
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
* Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1
* Gaia Collaboration et al. (2022) Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2022, arXiv e-prints, arXiv:2208.00211
* Garavito-Camargo et al. (2021) Garavito-Camargo, N., Besla, G., Laporte, C. F. P., et al. 2021, ApJ, 919, 109
* Gatto et al. (2022a) Gatto, M., Ripepi, V., Bellazzini, M., et al. 2022a, ApJ, 931, 19
* Gatto et al. (2020) --. 2020, MNRAS, 499, 4114
* Gatto et al. (2021) --. 2021, Research Notes of the American Astronomical Society, 5, 159
* Gatto et al. (2022b) --. 2022b, ApJ, 929, L21
* Glatt et al. (2008) Glatt, K., Gallagher, John S., I., Grebel, E. K., et al. 2008, AJ, 135, 1106
* Goodman & Weare (2010) Goodman, J., & Weare, J. 2010, Communications in Applied Mathematics and Computational Science, 5, 65
* Gorski et al. (2005) Gorski, K. M., Hivon, E., Banday, A. J., et al. 2005, ApJ, 622, 759
* Graczyk et al. (2020) Graczyk, D., Pietrzynski, G., Thompson, I. B., et al. 2020, ApJ, 904, 13
* GRAVITY Collaboration et al. (2019) GRAVITY Collaboration, Abuter, R., Amorim, A., et al. 2019, A&A, 625, L10
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357
* Harris (2010) Harris, W. E. 2010, arXiv e-prints, arXiv:1012.3224
* Hinton (2019) Hinton, S. R. 2019, ChainConsumer: Corner plots, LaTeX tables and plotting walks, Astrophysics Source Code Library, record ascl:1910.017,, ascl:1910.017
* Homma et al. (2018) Homma, D., Chiba, M., Okamoto, S., et al. 2018, PASJ, 70, S18
* Homma et al. (2019) Homma, D., Chiba, M., Komiyama, Y., et al. 2019, PASJ, 71, 94
* Jahn et al. (2019) Jahn, E. D., Sales, L. V., Wetzel, A., et al. 2019, Monthly Notices of the Royal Astronomical Society, 489, 5348. [https://doi.org/10.1093/mnras/stz2457](https://doi.org/10.1093/mnras/stz2457)
* Jethwa et al. (2016) Jethwa, P., Erkal, D., & Belokurov, V. 2016, MNRAS, 461, 2212
* Ji et al. (2021) Ji, A. P., Koposov, S. E., Li, T. S., et al. 2021, ApJ, 921, 32
* Kallivayalil et al. (2018) Kallivayalil, N., Sales, L. V., Zivick, P., et al. 2018, ApJ, 867, 19
* Kim & Jerjen (2015a) Kim, D., & Jerjen, H. 2015a, ApJ, 808, L39
* Kim & Jerjen (2015b) --. 2015b, ApJ, 799, 73
* Kim et al. (2016) Kim, D., Jerjen, H., Mackey, D., Da Costa, G. S., & Milone, A. P. 2016, ApJ, 820, 119
* Kim et al. (2015) Kim, D., Jerjen, H., Milone, A. P., Mackey, D., & Da Costa, G. S. 2015, ApJ, 803, 63
* Koposov et al. (2007) Koposov, S., de Jong, J. T. A., Belokurov, V., et al. 2007, ApJ, 669, 337
* Koposov et al. (2015) Koposov, S. E., Belokurov, V., Torrealba, G., & Evans, N. W. 2015, ApJ, 805, 130
* Koposov et al. (2018) Koposov, S. E., Walker, M. G., Belokurov, V., et al. 2018, MNRAS, 479, 5343
* Koposov et al. (2022) Koposov, S. E., Erkal, D., Li, T. S., et al. 2022, arXiv e-prints, arXiv:2211.04495
* Li et al. (2019) Li, T. S., Koposov, S. E., Zucker, D. B., et al. 2019, MNRAS, 490, 3508
* Longeard et al. (2019) Longeard, N., Martin, N., Ibata, R. A., et al. 2019, MNRAS, 490, 1498
* Longeard et al. (2018) Longeard, N., Martin, N., Starkenburg, E., et al. 2018, MNRAS, 480, 2609
* Luque et al. (2016) Luque, E., Queiroz, A., Santiago, B., et al. 2016, MNRAS, 458, 603
* Luque et al. (2018) Luque, E., Santiago, B., Pieres, A., et al. 2018, MNRAS, 478, 2006
* Mackey & Gilmore (2004) Mackey, A. D., & Gilmore, G. F. 2004, MNRAS, 352, 153
* Mackey et al. (2018) Mackey, D., Koposov, S., Da Costa, G., et al. 2018, ApJ, 858, L21
* Martin et al. (2008) Martin, N. F., de Jong, J. T. A., & Rix, H.-W. 2008, ApJ, 684, 1075
* Martin et al. (2015) Martin, N. F., Nidever, D. L., Besla, G., et al. 2015, ApJ, 804, L5
* Martin et al. (2016) Martin, N. F., Jungbluth, V., Nidever, D. L., et al. 2016, ApJ, 830, L10
* Massana et al. (2020) Massana, P., Noel, N. E. D., Nidever, D. L., et al. 2020, MNRAS, 498, 1034
* Massana et al. (2022) Massana, P., Ruiz-Lara, T., Noel, N. E. D., et al. 2022, MNRAS, 513, L40
* Mau et al. (2019) Mau, S., Drlica-Wagner, A., Bechtol, K., et al. 2019, ApJ, 875, 154
* Mau et al. (2020) Mau, S., Cerny, W., Pace, A. B., et al. 2020, ApJ, 890, 136
* Mazzi et al. (2021) Mazzi, A., Girardi, L., Zaggia, S., et al. 2021, MNRAS, 508, 245
* McConnachie (2012) McConnachie, A. W. 2012, AJ, 144, 4
* Moskowitz & Walker (2020) Moskowitz, A. G., & Walker, M. G. 2020, ApJ, 892, 27
* Munoz et al. (2018) Munoz, R. R., Cote, P., Santana, F. A., et al. 2018, ApJ, 860, 66
* Munoz et al. (2012) Munoz, R. R., Geha, M., Cote, P., et al. 2012, ApJ, 753, L15
* Mutlu-Pakdil et al. (2018) Mutlu-Pakdil, B., Sand, D. J., Carlin, J. L., et al. 2018, ApJ, 863, 25
* Nidever et al. (2008) Nidever, D. L., Majewski, S. R., & Butler Burton, W. 2008, ApJ, 679, 432
* Nidever et al. (2017) Nidever, D. L., Olsen, K., Walker, A. R., et al. 2017, AJ, 154, 199
* Pace et al. (2022) Pace, A. B., Erkal, D., & Li, T. S. 2022, ApJ, 940, 136
* Patel et al. (2020) Patel, E., Kallivayalil, N., Garavito-Camargo, N., et al. 2020, ApJ, 893, 121
* Piatti & Lucchini (2022) Piatti, A. E., & Lucchini, S. 2022, MNRAS, 515, 4005
* Pieres et al. (2017) Pieres, A., Santiago, B. X., Drlica-Wagner, A., et al. 2017, MNRAS, 468, 1349
* Pietrzynski et al. (2013) Pietrzynski, G., Graczyk, D., Gieren, W., et al. 2013, Nature, 495, 76
* Plummer (1911) Plummer, H. C. 1911, MNRAS, 71, 460
* Richstein et al. (2022) Richstein, H., Patel, E., Kallivayalil, N., et al. 2022, ApJ, 933, 217
* Ripepi et al. (2014) Ripepi, V., Cignoni, M., Tosi, M., et al. 2014, MNRAS, 442, 1897
* Ripepi et al. (2017) Ripepi, V., Cioni, M.-R. L., Moretti, M. I., et al. 2017, MNRAS, 472, 808
* Ripepi et al. (2022) Ripepi, V., Chemin, L., Molinaro, R., et al. 2022, MNRAS, 512, 563
* Rubele et al. (2015) Rubele, S., Girardi, L., Kerber, L., et al. 2015, MNRAS, 449, 639
* Rubele et al. (2018) Rubele, S., Pastorelli, G., Girardi, L., et al. 2018, MNRAS, 478, 5017
* Sales et al. (2016) Sales, L. V., Navarro, J. F., Kallivayalil, N., & Frenk, C. S. 2016, Monthly Notices of the Royal Astronomical Society, 465, 1879. [https://doi.org/10.1093/mnras/stw2816](https://doi.org/10.1093/mnras/stw2816)
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103
* Schlegel et al. (1998) Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
* Simon et al. (2020) Simon, J. D., Li, T. S., Erkal, D., et al. 2020, ApJ, 892, 137
* Torrealba et al. (2019) Torrealba, G., Belokurov, V., & Koposov, S. E. 2019, MNRAS, 484, 2181
* Torrealba et al. (2016a) Torrealba, G., Koposov, S. E., Belokurov, V., & Irwin, M. 2016a, MNRAS, 459, 2370
* Torrealba et al. (2016b) Torrealba, G., Koposov, S. E., Belokurov, V., et al. 2016b, MNRAS, 463, 712
* Torrealba et al. (2018) Torrealba, G., Belokurov, V., Koposov, S. E., et al. 2018, MNRAS, 475, 5085
* van der Marel & Kallivayalil (2014) van der Marel, R. P., & Kallivayalil, N. 2014, ApJ, 781, 121
* van der Walt et al. (2011) van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science and Engineering, 13, 22
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261
* Wang et al. (2019) Wang, M. Y., de Boer, T., Pieres, A., et al. 2019, ApJ, 881, 118
* Weisz et al. (2016) Weisz, D. R., Koposov, S. E., Dolphin, A. E., et al. 2016, ApJ, 822, 32
* Zonca et al. (2019) Zonca, A., Singer, L., Lenz, D., et al. 2019, The Journal of Open Source Software, 4, 1298
## Appendix A Candidate Delve 6 Members Identified in Gaia DR3
In Table 2, we summarize the properties of the three potential DELVE 6 member stars that we identified in _Gaia_ DR3. We refer the reader to Section 2.2 for a more detailed discussion of these sources.
2310.08994 | Systematic Evaluation of Generative Machine Learning Capability to Simulate Distributions of Observables at the Large Hadron Collider | Monte Carlo simulations are a crucial component when analysing the Standard Model and New physics processes at the Large Hadron Collider. This paper aims to explore the performance of generative models for complementing the statistics of classical Monte Carlo simulations in the final stage of data analysis by generating additional synthetic data that follows the same kinematic distributions for a limited set of analysis-specific observables to a high precision. Several deep generative models are adapted for this task and their performance is systematically evaluated using a well-known benchmark sample containing the Higgs boson production beyond the Standard Model and the corresponding irreducible background. The paper evaluates the autoregressive models and normalizing flows and the applicability of these models using different model configurations is investigated. The best performing model is chosen for a further evaluation using a set of statistical procedures and a simplified physics analysis. By implementing and performing a series of statistical tests and evaluations we show that a machine-learning-based generative procedure can be used to generate synthetic data that matches the original samples closely enough and that it can therefore be incorporated in the final stage of a physics analysis with some given systematic uncertainty. | Jan Gavranovič, Borut Paul Kerševan | 2023-10-13T10:20:22Z | http://arxiv.org/abs/2310.08994v3 |

Systematic Evaluation of Generative Machine Learning Capability to Simulate Distributions of Observables at the Large Hadron Collider
###### Abstract
Monte Carlo simulations are a crucial component when analysing the Standard Model and New physics processes at the Large Hadron Collider (LHC). This paper aims to explore the use of generative models for increasing the statistics of Monte Carlo simulations in the final stage of data analysis by generating synthetic data that follows the same kinematic distributions for a limited set of analysis-specific observables to a high precision. Several state-of-the-art generative machine learning algorithms are adapted to this task, best performance is demonstrated by the normalizing flow architectures, which are capable of fast generation of an arbitrary number of new events. As an example of analysis-specific Monte Carlo simulated data, a well-known benchmark sample containing the Higgs boson production beyond the Standard Model and the corresponding irreducible background is used. The applicability of normalizing flows with different model parameters and numbers of initial events used in training is investigated. The resulting event distributions are compared with the original Monte Carlo distributions using statistical tests and a simplified statistical analysis to evaluate their similarity and quality of reproduction required in a physics analysis environment in a systematic way.
_Keywords:_ High energy physics \(\cdot\) LHC \(\cdot\) machine learning \(\cdot\) normalizing flows \(\cdot\) Monte Carlo simulations
## 1 Introduction
In data analysis of new physics searches at the LHC [1] experiments, the use of Monte Carlo (MC) simulation is essential to accurately describe the kinematics of the known _background_ processes in order to determine an eventual discrepancy with the measured data and attribute such a deviation to a certain new physics _signal_ hypothesis. Describing LHC data precisely through Monte Carlo (MC) simulations involves several key steps (see e.g. [2]). First, particles are generated based on a calculated differential cross-section, a process referred to as "event generation". This generation is carried out using MC generator tools like Pythia [3] or Sherpa [4]. Next, these particles are simulated as they pass through the detector volume and interact with the detector's materials. This stage is typically performed using the Geant4 toolkit [5] and is known as "detector simulation". During this step, the model also incorporates various factors that accurately represent the collision environment, such as the response to multiple simultaneous proton collisions in the LHC, often referred to as "pile-up" events. These interactions in the detector are converted into simulated electronic response of the detector electronics ("digitization" step) and at this stage, the MC simulated data matches the real recorded collision data as closely as possible. Subsequently, the response of the detector trigger system is modeled, and then the simulated data undergoes the same complex reconstruction procedure as the real data, involving the reconstruction of basic analysis objects, such as electrons or jets, followed by reconstruction of more involved event kinematics.
Eventually, the final data analysis optimizes the data selection ("filtering") procedure to maximize measurement accuracy and new physics discovery potential (statistical significance) and determines the potential presence of new physics using statistical tests on the final data selection. The filtering and final statistical analysis are based on comparing the MC signal and background predictions with the real data by using several \(\mathcal{O}(10)\) kinematic variables. Obviously, the statistics of the simulated events limits the prediction accuracy of the background and signal events - ideally, the number of simulated events would exceed the data predictions by several orders of magnitude to minimize the impact of finite MC statistics on the systematics uncertainty of the final measurement. While the simulated background events can typically be shared between several analyses, the simulation of a chosen signal process, and the subsequent choice of the relevant kinematic observables is very analysis-specific.
### High-Luminosity LHC and the need for more computing power
A fundamental problem with the standard MC simulation procedure described above is the need for large computational resources, which restricts the physics analysis discovery process due to the speed and high cost of CPU and disk space needed to generate and store events describing high energy particle collisions [6]. The full procedure of MC event simulation at the LHC's detectors may take several minutes per event and produce \(\mathcal{O}(1\text{ MB})\) of data of which only \(\mathcal{O}(1\text{ kB})\) of high-level features, i.e. reconstructed kinematic observables, is used in a specific final analysis of a specific physics process of interest.
With the current Run 3 and, in the future, the High-Luminosity LHC (HL-LHC) it is expected that the experiments will require even more computing power for MC simulation to match the precision requirements of physics analysis, which increase proportionally to the size of the collected datasets that will come with the increase in luminosity. Taken together, the data and MC requirements of the HL-LHC physics programme are formidable, and if the computing costs are to be kept within feasible levels, very significant improvements will need to be made both in terms of computing and storage, as stated in Ref. [7] for the ATLAS detector [8]. The majority of resources will indeed be needed to produce simulated events from physics modelling (event generation) to detector simulation, reconstruction, and production of analysis formats.
### Machine learning for fast event generation
To address the limitation posed by insufficient Monte Carlo (MC) statistics in constraining the potential of physics analysis, a promising approach is the utilization of Machine Learning (ML), specifically focusing on deep generative modelling. The general idea is to create large numbers of events at limited computing cost using a learning algorithm that was trained on a comparatively smaller set of MC-simulated events. Several different approaches exist, trying to replace different MC stages in simulation chain with faster ML solutions. Recent examples involve event generation [9; 10; 11], replacing the most computationally intensive parts of the detector simulation, such as the calorimeter response [12] as well as the full simulation chain for analysis-specific purposes [13; 14], using different ML approaches, from Generative Adversarial Networks (GAN) to variational autoencoders (VAE) and most recently several implementations of normalizing flows.
The study presented in this paper specifically focuses on the approach of developing a generative ML procedure for a finite set of analysis-specific reconstructed kinematic observables. The generative ML algorithm is thus trained on a set of MC-simulated and reconstructed events using the kinematic distributions used in the final analysis. The requirement is to be able to extend the statistics of the existing MC using this procedure by several orders of magnitude with the generation being fast enough that the events can be produced on-demand without the need for expensive data storage. In other words, the ML algorithm will learn to model the multi-dimensional distributions with a density \(p(\mathbf{x})\) of \(\mathcal{O}(10)\) observables \(\mathbf{x}\) that can be used in the final stage of a given physics analysis, thus the approach can also be interpreted as smoothing ("kriging") of the multi-dimensional kinematic distributions derived from a finite learning sample.
The ML procedures used in particle physics are built upon the most recent advances in ML tools used for commercial purposes (e.g. [15]), whereby the goals and precision requirements in commercial applications are very different from the ones used in a particle physics analysis. While ML approaches can achieve a visually impressive agreement between the training MC sample and the events generated using the derived ML procedure [13; 14], one still needs to systematically evaluate whether the agreement is good enough in terms of the requirements of the corresponding physics data analysis. This paper aims to provide a systematic evaluation of:
* what are the actual precision requirements in terms of statistical analysis of a new physics search,
* what is the required training MC dataset size to actually achieve the required level of precision,
and in view of these evaluates the performance of a few custom implementations of the recently trending ML tools.
## 2 Training MC dataset
The study in this paper uses the publicly available simulated LHC-like HIGGS dataset [16] of a new physics beyond the Standard model (BSM) Higgs boson production and a background process with identical decay products in the final state but distinct kinematic features to illustrate the performance of ML data generation in high-dimensional feature spaces. The HIGGS dataset MC simulation uses the DELPHES toolkit [17] to simulate the detector response of a LHC experiment.
The advantage of using this benchmark dataset is that it is publicly available and often used in evaluations of new ML tools by computer science researchers and developers, and referenced even in groundbreaking works such as Ref. [18].
The signal process is the fusion of two gluons \(gg\) into a heavy neutral Higgs boson \(H^{0}\) that decays into heavy charged Higgs bosons \(H^{\pm}\) and a \(W\) boson. The \(H^{\pm}\) then decays to a second \(W\) boson and to a light Higgs boson \(h^{0}\) that decays to \(b\) quarks. The whole signal process can be described as:
\[gg\to H^{0}\to W^{\mp}H^{\pm}\to W^{\mp}W^{\pm}h^{0}\to W^{\mp}W^{\pm}b\bar{b}. \tag{1}\]
The background process, which features the same intermediate state \(W^{\mp}W^{\pm}b\bar{b}\) without the Higgs boson production, is the production and decay of a pair of top quarks. Events were generated assuming 8 TeV collisions of protons at the LHC with masses set to \(m_{H^{0}}=425\) GeV and \(m_{H^{\pm}}=325\) GeV.
Observable final state decay products include electrically charged leptons \(\ell\) (electrons or muons) and particle jets \(j\). The dataset contains semi-leptonic decay modes, where the first \(W\) boson decays into \(\ell\nu\) and the second into \(jj\) giving us decay products \(\ell\nu b\) and \(jjb\). Further restrictions are also set on transverse momentum \(p_{\text{T}}\), pseudorapidity \(\eta\), type and number of leptons, and number of jets. Events that satisfy the above requirements are characterized by a set of observables (features) that describe the experimental measurements using the detector data. These basic kinematic observables were labelled as "low-level" features and are:
* jet \(p_{\text{T}}\), \(\eta\) and azimuthal angle \(\phi\),
* jet \(b\)-tag,
* lepton \(p_{\text{T}}\), \(\eta\) and \(\phi\),
* missing energy \(E_{m}\),
which gives us 21 features in total. More complex derived kinematic observables can be obtained by reconstructing the invariant masses of the different intermediate states of the two processes. These are the "high-level" features and are:
\[m_{jj},\ m_{jjj},\ m_{lv},\ m_{jlv},\ m_{b\bar{b}},\ m_{Wb\bar{b}},\ m_{WWb \bar{b}}. \tag{2}\]
Ignoring azimuthal angles \(\phi\) due to the detector symmetry (giving a uniform distribution), and focusing only on continuous features finally results in an 18-dimensional feature space.
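For concreteness, this feature selection can be sketched in Python as shown below; the column names and their ordering are assumptions based on the public documentation of the HIGGS dataset [16] and should be verified against the actual file.

```python
import pandas as pd

# Column names for the public HIGGS benchmark (class label + 21 low-level + 7 high-level
# features); the ordering below follows the dataset documentation and is an assumption.
columns = ["label",
           "lepton_pt", "lepton_eta", "lepton_phi",
           "missing_energy", "missing_energy_phi",
           "jet1_pt", "jet1_eta", "jet1_phi", "jet1_btag",
           "jet2_pt", "jet2_eta", "jet2_phi", "jet2_btag",
           "jet3_pt", "jet3_eta", "jet3_phi", "jet3_btag",
           "jet4_pt", "jet4_eta", "jet4_phi", "jet4_btag",
           "m_jj", "m_jjj", "m_lv", "m_jlv", "m_bb", "m_wbb", "m_wwbb"]

df = pd.read_csv("HIGGS.csv.gz", header=None, names=columns)

# Drop the label, the azimuthal angles and the discrete b-tag outputs,
# keeping the 18 continuous features used throughout this study.
drop = ["label"] + [c for c in columns if c.endswith("_phi") or c.endswith("_btag")]
features = df.drop(columns=drop)
assert features.shape[1] == 18
```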
### Data preprocessing
In order to achieve optimal performance of the ML algorithms, a data normalization (i.e. feature scaling) of the HIGGS dataset has been performed before training the ML algorithms in the following way:
1. min-max normalization \[\mathbf{x}_{1}=\frac{\mathbf{x}-\text{min}(\mathbf{x})}{\text{max}(\mathbf{x })-\text{min}(\mathbf{x})}\,\] (3)
2. logit transformation \[\mathbf{x}_{2}=\log\frac{\mathbf{x}_{1}}{1-\mathbf{x}_{1}}\,\] (4)
3. standardization \[\mathbf{x}_{3}=\frac{\mathbf{x}_{2}-\boldsymbol{\mu}(\mathbf{x}_{2})}{ \boldsymbol{\sigma}(\mathbf{x}_{2})}\,\] (5)
with the corresponding inverse transformations. While this procedure demonstrably gives good ML performance, the reasoning behind this particular normalization is the observation that ML procedures built using the _normalizing flows_ approach do not perform well on distributions with sharp cut-off edges (lepton and jet \(p_{\text{T}}\) distributions in our example). Min-max normalization scales the data to the \([0,1]\) range for the logit transformation that then scales it to the \((-\infty,\infty)\) range. In the last step, the standardization makes the distribution of each feature have zero mean and unit variance, which further helps with ML algorithm training.
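A minimal Python sketch of this transformation chain and its inverse is given below; the small clipping constant `eps`, which keeps the logit finite for events sitting exactly on the min-max edges, is an addition of the sketch and not part of Eqs. (3)-(5).

```python
import numpy as np

class FeatureScaler:
    """Min-max -> logit -> standardization chain of Eqs. (3)-(5), with its inverse."""

    def fit(self, x, eps=1e-6):
        # x: array of shape (n_events, n_features) with the raw observables
        self.xmin, self.xmax, self.eps = x.min(axis=0), x.max(axis=0), eps
        x2 = self._minmax_logit(x)
        self.mu, self.sigma = x2.mean(axis=0), x2.std(axis=0)
        return self

    def _minmax_logit(self, x):
        x1 = (x - self.xmin) / (self.xmax - self.xmin)
        x1 = np.clip(x1, self.eps, 1.0 - self.eps)      # keep the logit finite at the edges
        return np.log(x1 / (1.0 - x1))

    def transform(self, x):
        return (self._minmax_logit(x) - self.mu) / self.sigma

    def inverse_transform(self, x3):
        x2 = x3 * self.sigma + self.mu
        x1 = 1.0 / (1.0 + np.exp(-x2))                  # inverse logit (sigmoid)
        return x1 * (self.xmax - self.xmin) + self.xmin
```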
## 3 Implemented ML methods
A short description of _normalizing flows_ is presented in this section in order to give a relevant context to the ML approaches evaluated in this paper. The description closely follows the reviews from Refs. [19; 20].
Let \(\mathbf{u}\in\mathbb{R}^{D}\) be a random vector with a known probability density function \(p_{\mathbf{u}}(\mathbf{u}):\mathbb{R}^{D}\rightarrow\mathbb{R}\). Distribution \(p_{\mathbf{u}}(\mathbf{u})\) is called a base distribution and is usually chosen to be something simple, such as a normal distribution. Given data \(\mathbf{x}\in\mathbb{R}^{D}\), one would like to know the distribution \(p_{\mathbf{x}}(\mathbf{x})\) it was drawn from. The solution is to express \(\mathbf{x}\) as a transformation \(T\) of a random variable \(\mathbf{u}\), distributed according to a distribution \(p_{\mathbf{u}}(\mathbf{u})\) in such a way that
\[\mathbf{x}=T(\mathbf{u})\;,\quad\mathbf{u}\sim p_{\mathbf{u}}(\mathbf{u})\;, \tag{6}\]
where \(T\) is implemented using ML components, such as a neural network. The transformation \(T\) must be a diffeomorphism, meaning that it is invertible and both \(T\) and \(T^{-1}\) are differentiable. Under these conditions, the density \(p_{\mathbf{x}}(\mathbf{x})\) is well-defined and can be calculated using the usual change-of-variables formula
\[\begin{split} p_{\mathbf{x}}(\mathbf{x})&=p_{ \mathbf{u}}\left(T^{-1}\left(\mathbf{x}\right)\right)\left|\text{det}J_{T} \left(T^{-1}\left(\mathbf{x}\right)\right)\right|^{-1}\\ &=p_{\mathbf{u}}\left(T^{-1}\left(\mathbf{x}\right)\right)\left| \text{det}J_{T^{-1}}\left(\mathbf{x}\right)\right|\;,\end{split} \tag{7}\]
where \(J_{T}\) is a \(D\times D\) Jacobian matrix of partial derivatives.
Invertible and differentiable transformations are composable, which allows one to construct a _flow_ by chaining together different transformations. This means that one can construct a complicated transformation \(T\) with more expressive power by composing many simpler transformations:
\[T=T_{K}\circ\ldots\circ T_{1}\quad\text{and}\quad T^{-1}=T_{1}^{-1}\circ\ldots \circ T_{K}^{-1}\;. \tag{8}\]
A flow thus refers to the trajectory of samples from the base distribution as they are sequentially transformed by each transformation into the target distribution. This is known as the forward or generating direction. The word normalizing refers to the reverse direction, taking samples from the data and transforming them to the base distribution, which is usually normal. This direction is called the inverse or normalizing direction and is the direction in which the model is trained. The flows in the forward and inverse directions are then, respectively,
\[\begin{split}\mathbf{z}_{k}=T_{k}(\mathbf{z}_{k-1})\quad\text{ for}\quad k=1,\ldots,K\;,\\ \mathbf{z}_{k-1}=T_{k}^{-1}(\mathbf{z}_{k})\quad\text{for}\quad k =K,\ldots,1\;,\end{split} \tag{9}\]
where \(\mathbf{z}_{0}=\mathbf{u}\) and \(\mathbf{z}_{K}=\mathbf{x}\).
The log-determinant of a flow is given by
\[\log\left|\text{det}\,J_{T}\left(\mathbf{z}_{0}\right)\right|=\log\left|\prod _{k=1}^{K}\text{det}\,J_{T_{k}}\left(\mathbf{z}_{k-1}\right)\right|=\sum_{k=1} ^{K}\log\left|\text{det}\,J_{T_{k}}\left(\mathbf{z}_{k-1}\right)\right|\;. \tag{10}\]
A trained flow model provides event sampling capability by Eq. (6) and density estimation by Eq. (7).
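The composition of Eqs. (8)-(10) can be expressed compactly in code; the sketch below assumes each block exposes a forward call and an `inverse` method that both return the transformed batch together with the per-event log-determinant, which is an interface choice of the sketch rather than a detail specified above.

```python
import torch
import torch.nn as nn

class ComposedFlow(nn.Module):
    """Chain of invertible blocks; per-block log-determinants add up as in Eq. (10)."""

    def __init__(self, blocks):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)

    def forward(self, u):
        # generating direction: base sample u -> data-like sample x
        for block in self.blocks:
            u, _ = block(u)
        return u

    def inverse(self, x):
        # normalizing direction: data x -> base sample u, accumulating log|det J|
        log_det = torch.zeros(x.shape[0], device=x.device)
        for block in reversed(self.blocks):
            x, ld = block.inverse(x)
            log_det = log_det + ld
        return x, log_det
```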
The best description of the unknown probability density \(p_{\mathbf{x}}(\mathbf{x})\) is obtained by fitting a parametric flow model \(p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\theta})\) with free parameters \(\boldsymbol{\theta}\) to a target distribution by using a maximum likelihood estimator computing the average log-likelihood over \(N\) data points
\[\mathcal{L}(\boldsymbol{\theta})=-\frac{1}{N}\sum_{n=1}^{N}\log p_{\mathbf{x}} (\mathbf{x}_{n};\boldsymbol{\theta})\;. \tag{11}\]
The latter defines the loss function of the ML algorithm and is thus the quantity optimized using gradient-based methods while training the ML procedure. This can be done because the exact log-likelihood of input data is tractable in flow-based models. In order to keep the computing load at a manageable level, averaging is performed over batches of data and not on the whole dataset, as is customary in practically all ML procedures.
Rewriting Eq. (11) in terms of variables \(\mathbf{u}\) using Eq. (7) and introducing a parametric description of the distribution \(p_{\mathbf{u}}(\mathbf{u};\boldsymbol{\psi})\), one gets
\[\mathcal{L}(\boldsymbol{\theta})=-\frac{1}{N}\sum_{n=1}^{N}[\log p_{\mathbf{u}} \left(T^{-1}(\mathbf{x}_{n};\boldsymbol{\phi});\boldsymbol{\psi}\right)+\log \left|\text{det}J_{T^{-1}}(\mathbf{x}_{n};\boldsymbol{\phi})\right|]\;, \tag{12}\]
where \(\boldsymbol{\theta}=\{\boldsymbol{\phi},\boldsymbol{\psi}\}\) are the parameters of the target and base distributions, respectively. The parameters \(\boldsymbol{\psi}\) of the base distribution are usually fixed, for example, the zero mean and the unit variance of a normal distribution. From Eq.
(12), one can see that in order to fit the flow model parameters one needs to compute the inverse transformation \(T^{-1}\), the Jacobian determinant, the density \(p_{\mathrm{u}}(\mathbf{u};\mathbf{\psi})\) and be able to differentiate through all of them. For sampling the flow model, one must also compute \(T\) and be able to sample from \(p_{\mathrm{u}}(\mathbf{u};\mathbf{\psi})\).
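The corresponding training step can be sketched for a single batch in PyTorch, assuming a flow object such as the `ComposedFlow` above whose `inverse` method returns both \(T^{-1}(\mathbf{x})\) and the summed \(\log\left|\text{det}J_{T^{-1}}(\mathbf{x})\right|\):

```python
import torch

def nll_loss(flow, x, base_dist):
    """Batch estimate of the negative log-likelihood of Eq. (12)."""
    u, log_abs_det = flow.inverse(x)
    log_px = base_dist.log_prob(u).sum(dim=1) + log_abs_det
    return -log_px.mean()

# One optimization step (sketch):
# base_dist = torch.distributions.Normal(0.0, 1.0)
# loss = nll_loss(flow, batch, base_dist)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```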
For applications of flow models, the Jacobian determinant should have at most \(\mathcal{O}(D)\) complexity, which limits the flow-model-based design.
### Coupling models
The main principle behind finding a set of transformations optimally suited to flow-based ML generative models, introduced by Ref. [21], is to focus on a class of transformations that produce Jacobian matrices whose determinants reduce to the product of diagonal terms. These classes of transformations are called _coupling layers_.
A coupling layer splits the feature vector \(\mathbf{z}\) into two parts at index \(d\) and transforms the second part as a function of the first part, resulting in an output vector \(\mathbf{z}^{\prime}\). In the case of the RealNVP model [22] the implementation is as follows:
\[\begin{split}\mathbf{z}^{\prime}_{\leq d}&=\mathbf{ z}_{\leq d}\;,\\ \mathbf{z}^{\prime}_{>d}&=\mathbf{z}_{>d}\cdot\exp \left(\mathbf{\sigma}(\mathbf{z}_{\leq d})\right)+\mathbf{\mu}(\mathbf{z}_{\leq d})\;. \end{split} \tag{13}\]
The affine transformation \(\mathbf{s}\cdot\mathbf{z}+\mathbf{t}\), consisting of separate scaling (\(\mathbf{s}=\exp\left(\mathbf{\sigma}\right)\)) and translation (\(\mathbf{t}=\mathbf{\mu}\)) operations, is implemented by distinct neural networks \(\mathbf{f}\). These operations depend on the variables \(\mathbf{z}_{i}\) in the other half of the block (\(i\leq d\)), i.e. \(\mathbf{\mu}=f_{\mathbf{\mu}}(\mathbf{z}_{\leq d})\) and \(\mathbf{\sigma}=f_{\mathbf{\sigma}}(\mathbf{z}_{\leq d})\). It is worth noting that this affine transformation possesses a straightforward inverse, eliminating the need to compute the inverses of \(\mathbf{s}\) and \(\mathbf{t}\). Furthermore, it exhibits a lower triangular Jacobian with a block-like structure, enabling the determinant to be computed in linear time.
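A minimal PyTorch sketch of such an affine coupling layer is given below; the hidden-layer width and the two-layer structure of the conditioner networks are assumptions of the sketch, while the tanh/ReLU activation choice follows Fig. 1.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style affine coupling layer implementing Eq. (13)."""

    def __init__(self, dim, d, hidden=128):
        super().__init__()
        self.d = d
        self.scale_net = nn.Sequential(nn.Linear(d, hidden), nn.Tanh(),
                                       nn.Linear(hidden, dim - d), nn.Tanh())
        self.trans_net = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                       nn.Linear(hidden, dim - d))

    def forward(self, z):
        # generating direction: the first d features pass through unchanged
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.scale_net(z1), self.trans_net(z1)
        z2 = z2 * torch.exp(s) + t
        return torch.cat([z1, z2], dim=1), s.sum(dim=1)

    def inverse(self, z):
        # normalizing direction, with log|det J| = -sum(s)
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.scale_net(z1), self.trans_net(z1)
        z2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, z2], dim=1), -s.sum(dim=1)
```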
When sequentially stacking coupling layers, the elements of \(\mathbf{z}\) need to be permuted between successive layers so that every input has a chance to interact with every other input. This can be done with a trained permutation matrix
\[\mathbf{z}^{\prime}=\mathbf{W}\mathbf{z} \tag{14}\]
as in the Glow model [23]. The idea is thus to alternate these invertible linear transformations with coupling layers.
In order to further speed up and stabilize flow training, batch normalization is introduced at the start of each coupling layer as described in Ref. [22]. One block of such a flow is schematically presented in Fig. 1.
The expressive power of a flow can be increased by composing multiple blocks of coupling layers, batch normalizations and permutations. The number of blocks in the flow is the main hyper-parameter of such a model. Fig. 2 shows the dependence of the validation loss, i.e. the value of the loss function from Eq. (11) obtained on a validation sub-sample of the HIGGS dataset, on the number of blocks in a flow. Models with more blocks are harder to train and show signs of over-fitting earlier. For the subsequent studies presented in this paper, 10 blocks were used for all flow models. A list of all other hyper-parameters is given in appendix A.
Figure 1: The building block of a coupling layer in a flow. The block consists of a coupling layer with batch normalization and learned permutations. The scaling and translation networks have the same architecture but differ in activation functions. Scaling network uses tanh functions, whereas the translation network uses ReLU functions.
### Autoregressive models
Using the chain rule of probability, one can rewrite any joint distribution over \(D\) variables (as discussed in Ref. [24]) in the form of a product of conditional probabilities
\[p(\mathbf{z})=\prod_{i=1}^{D}p_{i}\left(\mathbf{z}_{i};c_{i}(\mathbf{z}_{<i}) \right)\;, \tag{15}\]
where \(c_{i}\) is some conditional or context on the preceding inputs. If each conditional \(p_{i}\left(\mathbf{z}_{i};c_{i}(\mathbf{z}_{<i})\right)\) is modelled as a mixture of Gaussians (MOG), one gets the RNADE model from Ref. [25]:
\[p_{i}(\mathbf{z}_{i};\mathbf{z}_{<i})=\sum_{c=1}^{C}\boldsymbol{\alpha}_{i,c} \mathcal{N}(\mathbf{z}_{i};\boldsymbol{\mu}_{i,c},\boldsymbol{\sigma}_{i,c}^{ 2})\;. \tag{16}\]
The mixture model parameters are calculated using a neural network that returns a vector of outputs \((\boldsymbol{\mu}_{i},\boldsymbol{\sigma}_{i},\boldsymbol{\alpha}_{i})=f_{i}( \mathbf{z}_{<i};\boldsymbol{\theta}_{i})\), as illustrated on Fig. 3.
Figure 3: Mixture of Gaussians (MOG) output of a neural network implemented with MADE.
Figure 2: Validation loss as a function of training steps (epochs) and the number of flow blocks. Early stopping was employed to account for over-fitting. All other hyper-parameters were kept constant. Models were trained on \(2.5\times 10^{5}\) events. The uncertainties presented in the right plot were estimated by repeated training and validation using random sampling of events. One can observe rapidly diminishing gains by using more than 10 blocks.
Specifically, the mixture of Gaussians parameters for the conditionals are calculated in the following sequence:
\[\mathbf{h}(\mathbf{z}) =\text{ReLU}(\mathbf{W}^{\top}\mathbf{z}+\mathbf{b})\;, \tag{17}\] \[\boldsymbol{\alpha}(\mathbf{z}) =\text{softmax}(\mathbf{W}_{\alpha}^{\top}\mathbf{h}(\mathbf{z})+ \mathbf{b}_{\alpha})\;,\] \[\boldsymbol{\mu}(\mathbf{z}) =\mathbf{W}_{\boldsymbol{\mu}}^{\top}\mathbf{h}(\mathbf{z})+ \mathbf{b}_{\mu}\;,\] \[\boldsymbol{\sigma}(\mathbf{z}) =\text{ELU}(\mathbf{W}_{\boldsymbol{\sigma}}^{\top}\mathbf{h}( \mathbf{z})+\mathbf{b}_{\sigma})+1+\varepsilon\;,\]
where \(\mathbf{W}\) are the weight matrices, and \(\mathbf{b}\) the bias vectors. ReLU, softmax and ELU are the activation functions. The event sampling generative step is performed simply as sampling from a Gaussian mixture. As a substantial simplification, a single neural network with \(D\) inputs and \(D\) outputs for each parameter _vector_ can be used instead of using \(D\) separate neural networks (\(f_{i}\)) for each parameter. This is done with a MADE network [26] that uses a masking strategy to ensure the autoregressive property from Eq. (15). Adding Gaussian components to a MADE network increases its flexibility. The model was proposed in Ref. [27] and is known as MADEMOG. The MADE network was implemented using residual connections from [28] as described in Ref. [29]. MADE networks can also be used as building blocks to construct a Masked Autoregressive Flow or MAF [27] by sequentially stacking MADE networks. In this case, the \(i\)-th conditional is given by a single Gaussian
\[p_{i}\left(\mathbf{z}_{i};c_{i}(\mathbf{z}_{<i})\right)=\mathcal{N}\left( \mathbf{z}_{i};\boldsymbol{\mu}_{i},(\exp(\boldsymbol{\sigma}_{i}))^{2}\right)\;, \tag{18}\]
where \(\boldsymbol{\mu}_{i}=f_{\boldsymbol{\mu}_{i}}(\mathbf{z}_{<i})\) and \(\boldsymbol{\sigma}_{i}=f_{\boldsymbol{\sigma}_{i}}(\mathbf{z}_{<i})\) are MADE networks. The generative sampling in this model is straightforward:
\[\mathbf{z}_{i}=\mathbf{u}_{i}\cdot\exp(\boldsymbol{\sigma}_{i})+\boldsymbol{ \mu}_{i}\;,\quad\text{where}\quad\mathbf{u}_{i}\sim\mathcal{N}(0,1) \tag{19}\]
with a simple inverse
\[\mathbf{u}_{i}=(\mathbf{z}_{i}-\boldsymbol{\mu}_{i})\exp(-\boldsymbol{\sigma} _{i})\;, \tag{20}\]
which is the flow model training direction. Due to the autoregressive structure, the Jacobian is triangular (the partial derivatives \(\partial x_{i}/\partial u_{j}\) are identically zero when \(j>i\)), hence its determinant is simply the product of its diagonal entries. One block of such a flow is presented in Fig. 4. A further option is to stack a MADEMOG on top of a MAF, which is then labelled as a MAFMADEMOG.
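The two directions of Eqs. (19)-(20) can be sketched as follows, assuming a MADE-style `conditioner` network that returns per-feature \((\boldsymbol{\mu},\log\boldsymbol{\sigma})\) depending only on the preceding features; the masking that enforces this property is provided by MADE [26] and is not reproduced here.

```python
import torch

def maf_inverse(x, conditioner):
    """Normalizing (training) direction of Eq. (20); runs in parallel over features."""
    mu, log_sigma = conditioner(x)
    u = (x - mu) * torch.exp(-log_sigma)
    log_det = -log_sigma.sum(dim=1)        # triangular Jacobian -> sum of diagonal terms
    return u, log_det

def maf_sample(u, conditioner):
    """Generative direction of Eq. (19); necessarily sequential over features."""
    x = torch.zeros_like(u)
    for i in range(u.shape[1]):
        mu, log_sigma = conditioner(x)     # feature i only depends on x[:, :i]
        x[:, i] = u[:, i] * torch.exp(log_sigma[:, i]) + mu[:, i]
    return x
```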
Training curves for all three types of models are shown in Fig. 5, where one can see that the MADEMOG and MAFMADEMOG achieve similar performance, in both cases significantly better than the simpler MAF. The dependence on the number of summed Gaussians is also shown, where one can see that the model performance saturates at a relatively low number of chosen Gaussian mixtures. The subsequent studies in this paper were done by choosing the number of mixture components to be the same as the number of features, i.e. 18.
### Spline transformations
At the end of the flow block, one needs to choose a specific transformation \(f\). So far, affine transformations of Eq. (13) and Eq. (19) were used, which were combined with sampling from Gaussian conditionals as in Eq. (16). Rather than relying on basic affine transformations, this approach can be extended to incorporate spline-based transformations.
Figure 4: Block of the MAF model used in this paper. The architecture follows the one outlined in Ref. [30].

A spline is defined as a monotonic piecewise function consisting of \(K\) segments (bins). Each segment is a simple invertible function (e.g. a linear or quadratic polynomial, as in Ref. [31]). In this paper, rational quadratic splines are implemented, as proposed in Ref. [30] and used by Ref. [32]. Rational quadratic functions are differentiable and are also analytically invertible, if only monotonic segments are considered. The spline parameters are then determined from neural networks in an autoregressive way. The resulting autoregressive model, replacing the affine transformation of Eq. (19) and its inverse Eq. (20) with equivalent expressions for splines, is labelled as RQS in the studies of this paper. Symbolically, the generative step is thus:
\[\mathbf{z}_{i}=\text{RQS}_{i}\left(\mathbf{u}_{i},K_{i},\boldsymbol{\theta}_{i} \right),\quad\text{where}\quad\mathbf{u}_{i}\sim\mathcal{N}(0,1) \tag{21}\]
with an inverse
\[\mathbf{u}_{i}=\text{RQS}_{i}^{-1}\left(\mathbf{z}_{i},K_{i},\boldsymbol{\theta}_{i}\right)\,, \tag{22}\]
where the \(K\) represents the bin parameters and \(\boldsymbol{\theta}\) denotes the remaining model (spline) parameters.
Fig. 6 shows the validation loss for different numbers of spline bins. The validation loss does not seem to decrease substantially with the increasing number of bins; however, as will be shown, the quality of the generated samples does, in fact, improve with more bins. For the studies performed in this paper, rational quadratic splines with 32 bins were eventually chosen.
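To illustrate the mechanics of spline-based transforms, the sketch below implements a simplified monotonic piecewise-linear spline rather than the rational-quadratic form of Eq. (21); the actual RQS segments and their analytic inverse follow Ref. [30]. The unnormalized bin widths and heights are assumed to be predicted by an autoregressive conditioner network, and the spline is assumed to act on a bounded interval \([-B,B]\).

```python
import torch

def piecewise_linear_spline(u, widths, heights, B=3.0):
    """Monotonic piecewise-linear spline on [-B, B], a simplified stand-in for RQS.

    u:               tensor of shape (batch,) with values in [-B, B] for one feature
    widths, heights: unnormalized bin parameters of shape (batch, K)
    returns:         transformed values z and log|dz/du|
    """
    w = torch.softmax(widths, dim=-1) * 2 * B       # bin widths, summing to 2B
    h = torch.softmax(heights, dim=-1) * 2 * B      # bin heights, summing to 2B
    x_edges = torch.cumsum(w, dim=-1) - B           # right bin edges in input space
    y_edges = torch.cumsum(h, dim=-1) - B           # right bin edges in output space
    slopes = h / w                                  # positive slopes -> monotonic map

    # locate the bin containing each u and apply the corresponding linear segment
    idx = torch.searchsorted(x_edges, u.unsqueeze(-1)).clamp(max=w.shape[-1] - 1)
    x_left = torch.gather(x_edges - w, -1, idx).squeeze(-1)
    y_left = torch.gather(y_edges - h, -1, idx).squeeze(-1)
    s = torch.gather(slopes, -1, idx).squeeze(-1)
    z = y_left + s * (u - x_left)
    return z, torch.log(s)
```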
Figure 5: Validation loss as a function of training steps (epochs) for different types of models and the dependance of validation loss on the number of Gaussian components. Models were trained on \(2.5\times 10^{5}\) events.
Figure 6: Validation loss as a function of training steps and number of spline bins. Models were trained on \(2.5\times 10^{5}\) events.
## 4 Performance evaluation of the ML techniques in an analysis
After defining the appropriate evaluation data set and generative ML procedures, one can focus on the main objective of this paper, which is to systematically evaluate the performance of such tools with a finite-sized training sample in terms of requirements based on a physics analysis.
As described in the introduction, a typical particle physics analysis with the final selection and the statistical evaluation procedure uses an order of \(\mathcal{O}(10)\) reconstructed kinematic observables (which are very often used to construct a final selection variable using ML techniques). The corresponding available statistics of MC signal and background events used to construct the predicted distributions of these kinematic observables, is however often of the order \(\mathcal{O}(10^{5})\) events or less due to the computing resource constraints and severe filtering requirements of a new physics search. Consequently, the final kinematic region, where one searches for new physics signal, can even contain orders of magnitude fewer MC events, since one is looking for unknown new processes at the limits of achievable kinematics, which translates to "tails" of kinematic distributions.
### Generated distributions of observables
Histograms of generated distributions for all three considered generative ML models, i.e. Glow, MAFMADEMOG and RQS, are shown in Fig. 7. Generated distributions are compared with their MC distributions on which the model was trained. All three models quite reliably model the original distributions, where the model with Gaussian mixtures has more success with some tails of distributions.
Histograms containing binned ratios of generated ML events and MC-simulated events are shown in Fig. 8 for the implemented models, overlaid with the shape of the normalized MC distribution (red curve) for comparison. It is evident that the deviations between the ML and MC are for all cases, except for pseudo-rapidity \(\eta\), most pronounced in the tails where the event count becomes very low due to the fact that the learning algorithm did not see enough rare events in the tails of distributions and could not reproduce a reliable distribution in that region. In the case of the pseudo-rapidity distributions, the ratio deviates both at the centre and at both ends of the distribution.
### Simple statistical tests
The aim of this paper is to systematically define and evaluate the performance of the implemented generative ML algorithms in the context of a physics analysis. As the first step in the exploration of evaluation criteria, two of the most common statistical two-sample comparison tests are performed on the original MC events and the derived ML events from the generative procedure (trained on these MC events). When looking for possible testing algorithms, it was found that there is a notable absence of multi-variate (multi-dimensional) statistical compatibility tests that would scale to the sample sizes and dimensions expected in particle-physics analyses at the LHC.
In order to compare two samples \(A\) and \(B\) (MC and ML-generated data) of sizes \(N\) and \(M\) histogrammed in \(K\) bins, a two-sample \(\chi^{2}\) test described in e.g. Ref. [33] was performed. In this case, one can write \(\chi^{2}\) statistics for \(a_{k}\) and \(b_{k}\) events from samples \(A\) and \(B\) in bin \(k\) as
\[\chi^{2}=\sum_{k=1}^{K}\frac{(C_{1}a_{k}-C_{2}b_{k})^{2}}{a_{k}+b_{k}} \tag{23}\]
where
\[C_{1}=\sqrt{\frac{M}{N}}=\sqrt{\frac{\sum_{k=1}^{K}b_{k}}{\sum_{k=1}^{K}a_{k} }}\quad\text{and}\quad C_{2}=\sqrt{\frac{N}{M}}=\sqrt{\frac{\sum_{k=1}^{K}a_{ k}}{\sum_{k=1}^{K}b_{k}}} \tag{24}\]
are weight factors. To estimate an optimal number of bins \(K\), one used
\[K=\max\left\{\frac{\max(\mathbf{x})-\min(\mathbf{x})}{2\,\text{IQR}\,N^{-1/3}},\,\log_{2}N+1\right\}\,, \tag{25}\]
where the IQR denotes the interquartile range and \(N\) the sample size, combining the Freedman-Diaconis rule [34] and the Sturges' rule [35] and which is the automatic value used in optimized automated histogramming of the NumPy package [36]. Using this rule, an arbitrary choice of binning is avoided, which could bias the observations of this paper.
Figure 7: Distributions of \(5\times 10^{6}\) events generated with models trained on \(1\times 10^{6}\) events. All three models from Section 3 (Glow, MAFMADEMOG and RQS) are considered and compared with the original MC distribution in grey.
Figure 8: Histograms containing binned ratios of ML-generated events and MC-simulated events are shown for all implemented ML models. Red curves represent the normalised distributions of MC events, with the scale given in red on the right-hand side of the plots. Discrepancy between bins for MC events and ML-generated events increases in the tails of the distributions with smaller ratio \(N_{i}/N\).
The so-derived test statistic follows, approximately, a chi-square distribution with \(K\) degrees of freedom, and the compatibility hypothesis is rejected if the test statistic is above the critical value \(\chi_{c}(1-\alpha,K)\) at significance level of \(\alpha\).
Another common test one can use is the Kolmogorov-Smirnov (KS) test (see e.g. Ref. [33; 37] for further discussion). In the case of two samples, the distance \(D\) measured between the two empirical distributions is calculated as the supremum of the set of distances between the empirical cumulative distribution functions (ECDF) \(S_{1,2}\) of the two samples.
\[D=\underset{x}{\text{sup}}|S_{1}(x)-S_{2}(x)|\;. \tag{26}\]
The compatibility test is again performed using a critical value \(\text{KS}_{c}(\alpha)\) at significance level \(\alpha\).
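Both two-sample tests can be implemented directly with numpy and scipy; in the sketch below, bins that are empty in both samples are skipped, which is a practical choice of the sketch rather than part of Eqs. (23)-(26).

```python
import numpy as np
from scipy import stats

def two_sample_chi2(a, b):
    """Two-sample chi^2 statistic of Eqs. (23)-(24) with the binning rule of Eq. (25)."""
    # numpy's 'auto' rule takes the maximum of the Freedman-Diaconis and Sturges estimates
    edges = np.histogram_bin_edges(np.concatenate([a, b]), bins="auto")
    a_k, _ = np.histogram(a, bins=edges)
    b_k, _ = np.histogram(b, bins=edges)
    mask = (a_k + b_k) > 0                               # skip bins empty in both samples
    c1 = np.sqrt(b_k.sum() / a_k.sum())
    c2 = np.sqrt(a_k.sum() / b_k.sum())
    chi2 = np.sum((c1 * a_k[mask] - c2 * b_k[mask]) ** 2 / (a_k + b_k)[mask])
    return chi2, int(mask.sum())                         # statistic and number of used bins

def two_sample_ks(a, b):
    """Two-sample Kolmogorov-Smirnov distance of Eq. (26)."""
    return stats.ks_2samp(a, b).statistic
```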
Both tests require one-dimensional distributions. One can thus either apply the tests feature-wise (for each observable independently), or use a method to reduce the dimensionality of our dataset to one dimension. In this paper, a classification neural network was used to achieve this, which closely matches what would also be done in an actual physics analysis as the final filtering (event selection) step and exploits the fact that there are both background and new physics signal contributions available in the HIGGS dataset, identifiable by a binary label. The classifier takes the vector of observables as input and is trained against the binary label (1 for signal events and 0 for background events). The output score value is a scalar that can be used to distinguish background from signal. Consequently, a transformation from a vector to a scalar was obtained, which one can use to reduce the dimensionality of our data. As a classifier, a simple neural network with an unbounded output and an MSE loss function was used. The test accuracy of such a network was estimated to be about \(75\%\).
An example of dimensionality reduction with a classifier is shown in Fig. 9 for both signal and background contributions. One can observe that the performance of the ML-generated sample is quite close to the original MC sample.
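A minimal sketch of such a classifier is shown below; the paper only specifies an unbounded scalar output and an MSE loss, so the layer sizes and the remaining training details are assumptions of the sketch.

```python
import torch
import torch.nn as nn

# 18 continuous observables in, one unbounded score out
classifier = nn.Sequential(
    nn.Linear(18, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def train_step(x, y, optimizer):
    """x: (batch, 18) observables; y: (batch,) labels, 1 for signal and 0 for background."""
    score = classifier(x).squeeze(-1)
    loss = nn.functional.mse_loss(score, y.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# The trained scalar score is then histogrammed for MC-simulated and ML-generated events
# and used as the one-dimensional input to the two-sample tests above.
```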
### Test scaling with increasing number of generated events
The critical value \(\chi_{c}(1-\alpha,K)\) of the \(\chi^{2}\) test depends on the number of bins \(K\) and thus indirectly, using the automated rule of Eq. (25), also on the event statistics, whereas the critical value \(\text{KS}_{c}(\alpha)\) of the Kolmogorov-Smirnov test depends on the number of events directly. Consequently, since the generative ML model is an approximation of the MC process, one can expect that at a certain number of ML-generated events, the statistical tests will necessarily fail as a result of the increasing deviation between the growth of the critical value and the test statistics. The value of \(N\), at which the divergence between the two values and the subsequent test failure occurs, is determined by the quality of our generative model.
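The scaling of the two critical values with the sample size can be sketched as follows; for simplicity, only the Sturges term of Eq. (25) is used as a lower bound on the bin count, and the fixed MC reference sample size of \(10^{6}\) events is an assumption of the example.

```python
import numpy as np
from scipy import stats

def chi2_critical(k_bins, alpha=0.05):
    """Critical value chi_c(1 - alpha, K) of the chi^2 test for K bins."""
    return stats.chi2.ppf(1.0 - alpha, df=k_bins)

def ks_critical(n, m, alpha=0.05):
    """Asymptotic critical value KS_c(alpha) of the two-sample KS test."""
    return np.sqrt(-0.5 * np.log(alpha / 2.0)) * np.sqrt((n + m) / (n * m))

n_mc = 10**6                                   # fixed MC reference sample
for n_ml in [10**4, 10**5, 10**6, 10**7]:      # growing number of generated events
    k = int(np.ceil(np.log2(n_ml) + 1))        # Sturges term of Eq. (25)
    print(n_ml, chi2_critical(k), ks_critical(n_mc, n_ml))
```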
This scaling of the critical values and the test scores as a function of the number of events \(N\) is shown in Fig. 10 for the classifier-reduced data dimensionality. The tests show that a statistically acceptable value for the number of events is around \(10^{5}\). Above this number of events, however, the test value and the critical value start to diverge.
This can in fact be interpreted as an important observation, namely that the advanced generative ML algorithms exhaust their matching power to the MC at \(\mathcal{O}(10^{5})\) events, which thus gives a measure of the training sample statistics that is appropriate (or required) for these ML algorithms. Since, as already stated, the available MC samples of physical
Figure 9: Dimensionality reduction with a neural network classifier, resulting in a scalar score value. Both MC-simulated and ML-generated distributions using the Glow method are shown.
processes at the LHC are typically of the order of \({\cal O}(10^{5})\) events, it is in fact feasible to construct generative ML procedures using such training samples.
Another point to stress in the interpretation of these results is that these simple tests are likely overly strict measures of the predictive precision of the ML-generated distributions. Judged by them alone, even the best ML generative algorithm would never be good enough to extend the simulated statistics by orders of magnitude, which is the goal of the generative ML approaches in existing particle physics studies, including this paper. Instead, one should consider what approximation is good enough given the achievable precision of an analysis, which involves limited data statistics, a plethora of measurement systematic uncertainties, and specific statistical analysis procedures; these aspects are explored further in the following sections of this paper.
To gain further insight, one can also perform both tests for each generated observable distribution separately, as shown in Fig. 11. It is evident that some observables are modelled better than others, depending on the sensitivity region of a specific test. Results depend on the model used, and it is clear that the performance of autoregressive methods is significantly better than that of the simpler alternatives.
The impact of the number of training events used, in the range of \({\cal O}(10^{4}-10^{6})\), is shown in Fig. 12. The figure shows the convergence of the ratios of the test statistics and their corresponding critical values for a fixed size of ML data generated by models trained on different numbers of MC events. The improvement with larger training sets is evident. As already stated, the increase in the training (MC) sample size nonetheless has diminishing returns because, at some point, the limit saturates as the ML model exhausts its matching capacity (i.e. learning potential) to the MC sample. The impact of the increase in the training dataset size is, in fact, most evident in the better generation of events in the tails of distributions, where the available statistics is poorest, as is also evident from Figs. 7 and 8.
Figure 10: Performance of the two-sample tests using \(\chi^{2}\) (left) and the Kolmogorov-Smirnov test (right) is shown for different implementations of the generative ML algorithms using the Glow (top), MAFMADEMOG (middle), and RQS (bottom) approaches as described in Section 3. The plots show the test score as a function of the ML-generated number of events \(N\) (error bars) compared to the critical value of the test (red curve) at \(\alpha=0.05\). The dimensionality of the generated and MC data was reduced using a neural network classifier, and the distributions of events w.r.t. the classifier score were used in the tests. In addition, a linear fit (green line) has been performed on the test scores to approximately show the divergence with increasing \(N\).
Figure 11: Ratios of test results and critical values for each generated feature distribution.
Figure 12: Ratios of the test statistic and its critical value for data generated by models trained on different numbers of events. For every size \(N\) of the training set, a newly initialized model was used. Test values were calculated 10 times for \(10^{5}\) generated events and then averaged for every feature. In order to obtain the uncertainty bands, the extreme values obtained in all scenarios were selected.
## 5 ML performance evaluation in a physics analysis
In order to provide a relevant quantitative evaluation of the impact of the imperfect agreement of ML-generated distributions with the MC-generated ones in an analysis environment, a simplified analysis setup was constructed that matches the statistical analysis of a new physics search in an LHC experiment. In this setup, the HIGGS sample was used for training the ML-generative algorithms for both signal and background. In the following studies, the MADEMOG model trained on the HIGGS dataset was used to generate the new samples.
In order to emulate a typical analysis procedure in an LHC experiment, a further step was introduced: the ML generative algorithm was trained and applied on the full HIGGS sample (signal and background training was performed separately), while the statistical analysis studies were done _after_ an additional cut on the classifier from Figure 9 at a score value of 0.50. This is intended to match the standard analysis approach, where a relatively simple baseline selection is first applied on the data and MC samples as the first stage of analysis optimisation, and it approximately matches the kinematic agreement of the signal and background in the HIGGS sample used in this study. After the baseline selection, a current LHC analysis then uses the MC samples to train an advanced classifier (usually an ML-based approach) to perform a final data selection that achieves an optimal signal-to-background separation. The simulated samples in new physics searches tend to have quite low statistics after the final selection, too low to be usable to effectively train an ML generative procedure.
Consequently, the same steps were also taken here, which constitutes an innovative aspect of the study presented in this paper. The procedure can be summarised as follows (a minimal sketch is given after the list):
1. train a generative model on full MC samples and generate large datasets of new events using it,
2. apply the baseline selection on the generated ML samples using a neural network classifier,
3. normalise the generated ML samples to match the data sample after the baseline selection,
4. perform the statistical analysis.
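A minimal sketch of these four steps is given below; `generative_model`, `classifier`, and `run_fit` are hypothetical stand-ins for the trained normalizing flow, the selection network of Figure 9, and the pyhf-based statistical analysis described in the remainder of this section.

```python
# Sketch of the four-step procedure above with hypothetical helper objects.
import numpy as np

def analysis_pipeline(generative_model, classifier, run_fit,
                      data_yield, n_generate=10**6, cut=0.50):
    # 1. generate a large ML sample from the model trained on the full MC
    ml_events = generative_model.sample(n_generate)

    # 2. baseline selection: keep events above the classifier score cut
    selected = ml_events[classifier.score(ml_events) > cut]

    # 3. normalise the selected ML sample to the data yield after the cut
    weights = np.full(len(selected), data_yield / len(selected))

    # 4. hand the weighted sample to the statistical analysis
    return run_fit(selected, weights)
```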
By introducing a (ML) classifier cut, one can also see how well the multi-dimensional generation of correlated variables is in fact performing. Conversely, without such final cuts, each relevant kinematic distribution could simply be modelled (smoothed) independently, without any need for a generative procedure.
As one can observe in Figure 13, good agreement is preserved between the MC and ML samples, demonstrating that the variable correlations are adequately modelled and that the ML generative approach is thus usable for such an analysis approach.
In order to provide representative event yields, the background normalisation of the binned distributions used in the statistical analysis after the final selection was set to yield \(B=10^{3}\) events at an integrated luminosity of \(L=\)100 fb\({}^{-1}\), which matches the order of magnitude of a typical background contribution in the final data selection of a Run 2 new physics search at the LHC. For further studies with varying luminosity, the background yield was evaluated in the range \(B\in[10^{3},10^{4}]\), corresponding to a luminosity increase of up to one order of magnitude (\(L\in[100\) fb\({}^{-1},1000\) fb\({}^{-1}]\)). The signal yield (\(S\)) fraction, denoted \(\alpha=S/B\), was set to values of up to 10% of the background yield in the performed tests, and the signal normalisation was varied in this range in the sensitivity studies presented below. An inclusive relative systematic uncertainty (\(\beta\)), which in a real analysis would comprise both theoretical and experimental uncertainties, was set to \(\beta=\)10%. This excludes the systematic uncertainty due to the finite simulated sample statistics, which is the focus of this study and is treated separately; it has the expected Poisson-like \(\sim\sqrt{N}\) dependence on the number of generated events for the normalised background prediction, translating into the appropriate multinomial uncertainties in the binned distributions.
The data prediction was constructed by adding the MC-simulated samples of background and signal with a chosen signal content, giving a so-called _Asimov_ dataset, which perfectly matches the MC-samples and is often used in the performance validation of a statistical analysis in an LHC experiment. For the purpose of the studies in this paper, the MC-simulated background is then replaced with ML-generated samples, whereby the ML procedure was trained on these MC-samples.
The signal prediction is retained as the MC one because in LHC analyses the signal simulation, which generally has a comparatively small number of generated events, is typically not as much of an issue as the background. The final analysis selection is centred around the signal prediction, retaining most of the signal statistics while only selecting the low-statistics tails of the background samples. This aims to provide a good measure of the quality of the ML approach in an LHC analysis environment: ideally, the results after this replacement would still perfectly match the results of the original MC setup on the Asimov dataset; however, an agreement within the joint statistical and systematic uncertainty is still deemed acceptable, and thus a successful ML implementation.
Figure 13: Cuts on the classifier from Figure 9 at an output score of 0.50.
The achieved statistical agreement certainly depends on the luminosity, i.e. the data statistics, assuming a fixed total systematic uncertainty. This in turn translates into a requirement on the quality of the ML-generated samples, given that an arbitrarily large statistics (size) of these ML-generated samples is easily achievable.
In addition to a standard upper-limit and \(p\)-value evaluation in this analysis setup, a sequence of tests proposed in Ref. [38] in terms of spurious signal evaluation was performed to give a relevant quantitative evaluation of the impact of ML-generated samples.
With the aim of closely matching a typical statistical analysis as done in LHC experiments, the HistFactory model from the pyhf [39, 40] statistical tool was used, and different standard procedures for evaluating the agreement between data and simulation predictions were implemented (profile likelihood calculation, upper limit estimation, background-only hypothesis probability, etc.). The statistical analysis was performed using two different choices of the fitted variable, resulting in a binned distribution w.r.t. the \(m_{bb}\) observable in the first case and the classifier output in the second case, the latter aiming to give an optimal separation between the shapes of the background and signal predictions. In the statistical analysis, the signal presence is evaluated by using a scaling factor \(\mu\) (signal strength) of the predicted signal normalisation. Statistical error scale factors \(\boldsymbol{\gamma}=\{\gamma_{i}\}\) are used to model the uncertainty in the bins due to the limited statistics of the (ML-)simulated samples. By using the fast ML event generation, one aims to push this uncertainty to negligible values, as is indeed done in the study presented below. As already stated, the simulated predictions are given an additional overall relative uncertainty of \(\beta=\)10% in each bin to model the systematic uncertainty contribution as \(\boldsymbol{\beta}=\{\beta_{i}\}\). In the likelihood calculation, the parameters \(\boldsymbol{\gamma}\) and \(\boldsymbol{\beta}\) are modelled as nuisance parameters in addition to \(\mu\) as the main parameter of the likelihood fit.
The binned distributions of samples used in this statistical analysis, together with the injected uncertainties, are shown in Figure 14 for the two relevant observables. One can see that the agreement between the Asimov data and the simulation prediction using the ML-generated background sample is well within the simulated sample uncertainties, which is an encouraging starting point for a detailed analysis.
In more detail, the likelihood is defined as the product over all bins of the Poisson probability to observe \(N_{i}\) events in a particular bin \(i\):
\[\begin{split}& L_{\text{phys}}(\text{data}\,|\,\mu)=L_{\text{phys}}( \mu)\\ &=\prod_{i\in\text{bins}}\text{Pois}(N_{i}\,|\,\mu S_{i}+B_{i}) \\ &=\prod_{i\in\text{bins}}\frac{(\mu S_{i}+B_{i})^{N_{i}}}{N_{i}!} e^{-(\mu S_{i}+B_{i})},\end{split} \tag{27}\]
where \(S_{i}\) and \(B_{i}\) are the expected signal and background yields, respectively. As already stated, the main parameter of interest is the signal strength, denoted as \(\mu\).
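For reference, Eq. (27) can be transcribed directly as a function of \(\mu\), evaluated as a log-likelihood for numerical stability; the full model used in the analysis is built with pyhf, as described below.

```python
# Direct numerical transcription of Eq. (27): the product of per-bin Poisson
# probabilities, evaluated as a sum of log-probabilities.
import numpy as np
from scipy import stats

def log_L_phys(mu, N, S, B):
    """N, S, B: per-bin observed counts and expected signal/background yields;
    mu: signal strength."""
    expected = mu * np.asarray(S) + np.asarray(B)
    return np.sum(stats.poisson.logpmf(np.asarray(N), expected))
```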
The systematic uncertainties are included in the likelihood via a set of nuisance parameters (NP), denoted as \(\boldsymbol{\theta}=(\boldsymbol{\beta},\boldsymbol{\gamma})\), that modify the expected background yield, i.e. \(\{B_{i}\}\rightarrow\{B_{i}(\boldsymbol{\theta})\}\). The overall relative systematic uncertainties
Figure 14: \(m_{bb}\) and classifier distributions after the classifier cut. The “MC data” (crosses) represents the Asimov data set composed from the MC signal and background prediction, and matches the combined MC signal (blue) and ML background (orange) prediction (green histogram) quite well.
can be encoded into Gaussian functions and subsequently into an auxiliary likelihood function \(L_{\text{aux}}(\boldsymbol{\beta})\), while the uncertainties on the background predictions due to the limited number of simulated events are accounted for in the likelihood function via the Poisson-like terms
\[L_{\text{stat}}(\boldsymbol{\gamma})=\prod_{i\in\text{bins}}\frac{(\gamma_{i}B _{i})^{B_{i}}e^{-\gamma_{i}B_{i}}}{\Gamma(B_{i})}, \tag{28}\]
where \(\Gamma\) is the Gamma function.
The full _profile likelihood_ function can then be defined as a product of three likelihoods
\[L(\mu,\boldsymbol{\theta})=L_{\text{phys}}(\mu)\cdot L_{\text{aux}}( \boldsymbol{\beta})\cdot L_{\text{stat}}(\boldsymbol{\gamma}). \tag{29}\]
A fit of this likelihood is then performed to determine the value of \(\mu\) and its uncertainty, as well as the nuisance parameters. The estimates of \(\mu\) and \(\boldsymbol{\theta}\) are obtained as the values of the parameters that maximise the likelihood function \(L(\mu,\boldsymbol{\theta})\) or, equivalently, minimise \(-\ln L(\mu,\boldsymbol{\theta})\).
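A minimal pyhf sketch of such a model and fit is shown below. The binning and yields are placeholders rather than those of the analysis; the single `normsys` modifier stands for \(\boldsymbol{\beta}\) and the `staterror` modifier for \(\boldsymbol{\gamma}\).

```python
# Sketch of the likelihood of Eq. (29) as a single-channel pyhf HistFactory
# model with an overall 10% background systematic and per-bin statistical
# scale factors, followed by the maximum-likelihood fit for mu.
import pyhf

spec = {
    "channels": [{
        "name": "SR",
        "samples": [
            {"name": "signal",
             "data": [5.0, 3.0, 1.0],
             "modifiers": [{"name": "mu", "type": "normfactor", "data": None}]},
            {"name": "background",
             "data": [600.0, 300.0, 100.0],
             "modifiers": [
                 {"name": "beta", "type": "normsys",
                  "data": {"hi": 1.10, "lo": 0.90}},          # 10% systematic
                 {"name": "gamma_SR", "type": "staterror",
                  "data": [8.0, 6.0, 4.0]},   # per-bin simulated-sample errors
             ]},
        ],
    }],
}
model = pyhf.Model(spec)
data = [610.0, 305.0, 99.0] + model.config.auxdata   # observed + auxiliary data

best_fit = pyhf.infer.mle.fit(data, model)
mu_hat = best_fit[model.config.poi_index]
```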
This profile likelihood function is also used to construct statistical tests w.r.t. the hypothesised value of \(\mu\). A profile likelihood ratio, \(\lambda(\mu)\), is defined as
\[\lambda(\mu)=\frac{L(\mu,\hat{\hat{\boldsymbol{\theta}}}(\mu))}{L(\hat{\mu}, \hat{\boldsymbol{\theta}})}, \tag{30}\]
where \(\hat{\mu}\) and \(\hat{\boldsymbol{\theta}}\) are the parameters that maximise the overall likelihood, and \(\hat{\hat{\boldsymbol{\theta}}}\) are the NP values that maximise the likelihood for a particular value of \(\mu\). The _test statistic_ is then defined as \(q_{\mu}=-2\ln\lambda(\mu)\), for which lower values indicate better compatibility between the data and the hypothesised value of \(\mu\). The test statistic is used to calculate a _p-value_ that quantifies the agreement
\[p_{\mu}=\int_{q_{\mu}^{\text{obs}}}^{\infty}f(q_{\mu}\,|\,\mu)\,\text{d}q_{\mu}, \tag{31}\]
where \(q_{\mu}^{\text{obs}}\) is the value of the test statistic observed in the data, and \(f(q_{\mu}\,|\,\mu)\) is the probability density function of the test statistic \(q_{\mu}\) under the signal strength assumption \(\mu\). The \(p\)-value can be expressed in units of Gaussian standard deviations as \(Z=\Phi^{-1}(1-p)\), where \(\Phi^{-1}\) is the inverse Gaussian CDF. The rejection of the background-only hypothesis (\(\mu=0\)) with a significance of at least \(Z=5\) (corresponding to \(p_{0}=2.87\times 10^{-7}\)) is considered a discovery.
The test statistic used in this analysis is the one for positive signal discovery, in which the background-only hypothesis with \(\mu=0\) is tested. If the data are compatible with the background-only hypothesis, the numerator and the denominator of the likelihood ratio in the test statistic \(q_{0}=-2\ln\lambda(\mu=0)\) are similar; \(q_{0}\) is then close to 0, and the \(p_{0}\) value is 0.5. For this scenario, upper limits on the signal strength are derived at a 95% confidence level (CL) using the CL\({}_{\text{s}}\) method [41], for which both the signal-plus-background, \(p_{S+B}\), and background-only, \(p_{B}\), \(p\)-values need to be calculated. For a given set of signal masses or branching ratios, the signal hypothesis is tested for several values of \(\mu\). The final confidence level CL\({}_{\text{s}}\) is computed as the ratio CL\({}_{\text{s}}\equiv\frac{p_{S+B}}{1-p_{B}}\), and a signal hypothesis is excluded at 95% CL when this value falls below 5%.
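Continuing the pyhf sketch above (reusing the `model` and `data` objects), the discovery \(p_{0}\) and the CL\({}_{\text{s}}\) value can be obtained as follows; the tested signal strength \(\mu=1\) is an arbitrary example value.

```python
# Sketch of the hypothesis tests built on the profile likelihood: the
# discovery p-value p0 (test statistic q0) and the CLs value used for the
# 95% CL exclusion, using pyhf's hypotest interface.
import pyhf
from scipy.stats import norm

# discovery test: background-only hypothesis, test statistic q0,
# return value interpreted as the background-only p-value p0
p0 = pyhf.infer.hypotest(0.0, data, model, test_stat="q0")
Z = norm.isf(float(p0))     # significance in Gaussian standard deviations

# CLs for an example tested signal strength mu = 1
cls_obs = pyhf.infer.hypotest(1.0, data, model, test_stat="qtilde")
excluded_at_95 = float(cls_obs) < 0.05
```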
### Results of statistical tests
Examples of distributions obtained from profile likelihood fit results to the Asimov data described above are shown in Figure 15 for the different variable selections. One can observe very good agreement between the fitted prediction and the Asimov data, well within the uncertainty band. The improvement of the agreement between the Asimov data and the prediction using the ML-generated background, after the likelihood fit to the distribution of the indicated observable, is shown in Figure 16.
In Figure 17 the reproduction of the expected value of the signal strength is shown w.r.t. the increase in integrated luminosity. The signal strength is scaled to the value of the injected signal fraction \(\alpha\), giving \(\mu=\alpha\) as the ideal outcome. The impact of the size (fraction) \(\alpha\) of the injected signal with respect to the background, and the performance of the likelihood fit w.r.t. the increase in integrated luminosity, is also shown. One can observe that for decreasing signal presence (lower \(\alpha\) values) the performance of the fit is progressively worse, leading to a bias even with increasing integrated luminosity. The (biased) values are still within the uncertainty, as shown in Figure 18 for the difference \(\mu-\alpha\), but it is of course clear that background mis-modelling, present when using the ML-generated background, leads to sensitivity loss and possible biases in an analysis with a relatively tiny signal presence. Figure 18 gives a nice quantitative measure of this.
The discrepancy between the injected and estimated signal in the final analysis fit can also be interpreted as the presence of a _spurious_ signal. Its evaluation is a common approach in LHC analyses. An example of the spurious signal presence
in this analysis is given in Figure 19 as a function of integrated luminosity. One can observe that using the ML-generated background indeed gives a non-zero spurious signal content as a bias, which is however well within the estimated total uncertainty of the measurement across the relevant luminosity range. The value of the bias depends on the variable used in the fitting scenario.
The results of using the derived profile likelihood in a test statistic to determine the signal and background hypothesis probabilities (_p-values_ in LHC jargon) and the eventual \(\mathrm{CL_{s}}\) value, as described above, are shown in Figure 20 as a function of the injected signal fraction \(\alpha\). Results for the ideal scenario, using only the MC simulated samples, which identically match the Asimov dataset, are shown for comparison with the ML-generated background results. One can see that the relevant observables, in particular \(\mathrm{CL_{s}}\), converge nicely with increasing signal presence, while there is a discrepancy when the signal presence is diminishing, which agrees with the observations from Figure 17.
Figure 15: Post-fit distributions for profile likelihood fits to the \(m_{bb}\) or the classifier output variable.
Figure 16: Ratios between pre-fit and post-fit results for both scenarios (\(m_{bb}\) and classifier output).
Figure 17: Fitted signal strength \(\mu\) as a function of integrated luminosity \(L\) for both scenarios (\(m_{bb}\) and classifier output) of likelihood fit at different values of \(\alpha=S/B\).
Figure 18: Difference of fitted signal strength \(\mu\) and Asimov \(\alpha\) value as a function of integrated luminosity \(L\) for both scenarios (\(m_{bb}\) and classifier output).
Figure 19: Estimated spurious signal as a function of integrated luminosity \(L\) for both scenarios (\(m_{bb}\) and classifier output) of likelihood fits with \(\alpha=0.1\) from Fig. 17.
Figure 20: Estimated \(p\)-values for signal and background, background and CL\({}_{\rm s}\) for both scenarios (\(m_{bb}\) and classifier output) at different values of \(\alpha=S/B\).
## 6 Upper limits
As the final step in this physics analysis study, one aims to evaluate the upper limits on the signal strength, together with the uncertainty estimates, using the profile-likelihood-based test statistics, as is done in LHC analyses. The dependence of the extracted upper limit on the integrated luminosity is shown in Figs. 21 and 22 for different values of the injected signal fraction \(\alpha\) and the two fitting scenarios. Again, the ideal scenario, using the MC simulated samples both for the Asimov data construction and the simulated predictions, is used as a reference and is shown in the figures together with the derived uncertainty bands. One can observe that the shifts in the upper limit estimation are on a scale compatible with the estimated uncertainties of the reference scenario.
From these results it is evident that the ML-generated samples can indeed be used in a physics analysis as a surrogate model for the background prediction. However, to further minimize the impact of the background ML mis-modelling, one would need to work on implementing techniques that go even beyond the current commercial state-of-the-art approaches used in this paper, and to understand how to optimally adapt them for this use case in high energy physics.
## 7 Conclusions

The goal of the generative ML approach presented in this paper is to extend the effective simulated sample statistics, thereby minimizing the analysis uncertainty due to the statistics limitations of MC events or, equivalently, to smooth and minimize the uncertainties on the predicted kinematic event distributions used in the final statistical analysis of the data.
The presented work builds on the ideas of [13] and [14], where GANs and VAEs were considered, and gives results that are better than or comparable to those of Ref. [42] for the ATLAS detector. Furthermore, aside from the achievable accuracy, the advantage of using normalizing flows is that they provide density estimation alongside sampling, which can also be incorporated into the analysis. The downside of this architecture is that it does not provide much flexibility in the design of the method, due to the invertibility and differentiability constraints on the transformations and the calculation of their determinants.
The envisaged ML modelling strategy is to learn the \(D\)-dimensional distributions of kinematic observables used in a representative physics analysis at the LHC, and produce large amounts of ML-generated events at a low computing cost (see Appendix B). The input kinematic distributions are derived from the MC-simulated samples, which are statistics-limited by design since they are produced in a very computationally expensive procedure, albeit giving very accurate predictions of the physics performance of the LHC experiments. Furthermore, the study presented in this paper replicates the realistic case of the final event selection in a physics analysis using a cut on a ML-derived discriminating parameter, which defines the final data sets for physics analysis. In this final set, the MC statistics is usually too low to effectively train a ML procedure, thus the training needs to happen at the stage before the final filtering and it is essential that the ML training reproduce the correlations between the observables so that the agreement between the ML-generated and original MC-simulated data is preserved after the final event selection. This paper demonstrates that this can, in fact, be achieved by using the implemented ML techniques, with reasonable precision.
The generative ML approach described in this paper is inherently analysis-specific, meaning that each analysis would require a dedicated training setup. As a demonstrator for this paper, a number of different state-of-the-art normalizing flow architectures with different parameters and numbers of training examples were implemented. The procedures were not fine-tuned for a specific analysis and/or MC dataset, to preserve generality, but could potentially achieve even better performance with further optimization of the hyper-parameters listed in Appendix A, as done, e.g., in Refs. [10, 13]. The implemented models were trained on the LHC-specific dataset (beyond the Standard Model Higgs boson decay) with \(\mathcal{O}(10)\) observables. Both coupling layer models and autoregressive models were considered, with different transformation functions (Gaussian and rational quadratic splines), as discussed in Section 3. All of the models were capable of learning complicated high-dimensional distributions to some degree of accuracy, with autoregressive models having an advantage at the cost of somewhat longer sampling times. This does not pose a problem since our distributions are relatively low-dimensional and the absolute speed-ups are impressive in all cases, i.e. generation times are orders of magnitude lower with respect to the standard MC generation procedures used at the LHC.
Detailed performance evaluations using \(\chi^{2}\) and Kolmogorov-Smirnov two-sample tests as well as a simplified statistical analysis, matching the procedures used for upper limit estimation of new physics searches at the LHC, show that the generally available MC samples of \(\mathcal{O}(10^{5})\) events are indeed enough to train such state-of-the-art generative ML models to a satisfactory precision to be used to reduce the systematic uncertainties due to the limited MC statistics. The generative modelling strategy described in this paper could alleviate some of the high CPU and disk size requirements (and costs) of generating and storing simulated events. When trained, these models provide not only fast sampling but also encode all of the distributions in the weights and biases of neural networks, which take up significantly less space than the full MC datasets and can generate analysis-specific events practically on-demand, which is a functionality that goes beyond the scope of this paper but should be studied in a dedicated project.
The listed advantages become even more crucial when considering future LHC computing requirements for physics simulation and analysis, as it is clear that the increase in collision rates will result in much larger data volumes and even more complex events to analyse. Using generative modelling could thus aid in faster event generation as well as future storage requirements coming with the HL-LHC upgrade and beyond.
## 8 Acknowledgements
The authors would like to acknowledge the support of Slovenian Research and Innovation agency (ARIS) by funding the research project J1-3010 and programme P1-0135. |
2308.15231 | Multi-party Goal Tracking with LLMs: Comparing Pre-training,
Fine-tuning, and Prompt Engineering | This paper evaluates the extent to which current Large Language Models (LLMs)
can capture task-oriented multi-party conversations (MPCs). We have recorded
and transcribed 29 MPCs between patients, their companions, and a social robot
in a hospital. We then annotated this corpus for multi-party goal-tracking and
intent-slot recognition. People share goals, answer each other's goals, and
provide other people's goals in MPCs - none of which occur in dyadic
interactions. To understand user goals in MPCs, we compared three methods in
zero-shot and few-shot settings: we fine-tuned T5, created pre-training tasks
to train DialogLM using LED, and employed prompt engineering techniques with
GPT-3.5-turbo, to determine which approach can complete this novel task with
limited data. GPT-3.5-turbo significantly outperformed the others in a few-shot
setting. The `reasoning' style prompt, when given 7% of the corpus as example
annotated conversations, was the best performing method. It correctly annotated
62.32% of the goal tracking MPCs, and 69.57% of the intent-slot recognition
MPCs. A `story' style prompt increased model hallucination, which could be
detrimental if deployed in safety-critical settings. We conclude that
multi-party conversations still challenge state-of-the-art LLMs. | Angus Addlesee, Weronika Sieińska, Nancie Gunson, Daniel Hernández Garcia, Christian Dondrup, Oliver Lemon | 2023-08-29T11:40:03Z | http://arxiv.org/abs/2308.15231v1 | # Multi-party Goal Tracking with LLMs: Comparing Pre-training, Fine-tuning, and Prompt Engineering
###### Abstract
This paper evaluates the extent to which current Large Language Models (LLMs) can capture task-oriented multi-party conversations (MPCs). We have recorded and transcribed 29 MPCs between patients, their companions, and a social robot in a hospital. We then annotated this corpus for multi-party goal-tracking and intent-slot recognition. People share goals, answer each other's goals, and provide other people's goals in MPCs - none of which occur in dyadic interactions. To understand user goals in MPCs, we compared three methods in zero-shot and few-shot settings: we fine-tuned T5, created pre-training tasks to train DialogLM using LED, and employed prompt engineering techniques with GPT-3.5-turbo, to determine which approach can complete this novel task with limited data. GPT-3.5-turbo significantly outperformed the others in a few-shot setting. The 'reasoning' style prompt, when given 7% of the corpus as example annotated conversations, was the best performing method. It correctly annotated 62.32% of the goal tracking MPCs, and 69.57% of the intent-slot recognition MPCs. A 'story' style prompt increased model hallucination, which could be detrimental if deployed in safety-critical settings. We conclude that multi-party conversations still challenge state-of-the-art LLMs.
## 1 Introduction
Spoken Dialogue Systems (SDSs) are increasingly being embedded in social robots that are expected to seamlessly interact with people in populated public spaces like museums, airports, shopping centres, or hospital waiting rooms Foster et al. (2019); Tian et al. (2021); Gunson et al. (2022). Unlike virtual agents or voice assistants (e.g. Alexa, Siri, or Google Assistant), which typically have dyadic interactions with a single user, social robots are often approached by pairs and groups of individuals Al Moubayed et al. (2012); Moujahid et al. (2022). Families may approach a social robot in a museum, and patients are often accompanied by a family member when visiting a hospital. In these multi-party scenarios, tasks that are considered trivial for SDSs become substantially more complex Traum (2004); Zhong et al. (2022); Addlesee et al. (2023). In multi-party conversations (MPCs), the social robot must determine which user said an utterance, who that utterance was directed to, when to respond, and what it should say depending on whom the robot is addressing Hu et al. (2019); Gu et al. (2021, 2022). These tasks are collectively referred to as "who says what to whom" in the multi-party literature Gu et al. (2022), but these tasks alone provide no incentive for a system to actually help a user reach their goals. State of the art "who says what to whom" systems can, therefore, only mimic what a good MPC _looks like_Addlesee et al. (2023), but for practical systems we also need to know what each user's goals are. We therefore propose two further tasks that become substantially more complex when considered in a multi-party setting: goal tracking and intent-slot recognition Addlesee et al. (2023).
Dialogue State Tracking (DST) is a well-established task Lee et al. (2021); Feng et al. (2022) that is considered crucial to the success of a dialogue system Williams et al. (2016). DST corpora are abundant Henderson et al. (2014, 2014), but they only contain dyadic conversations. No corpus exists containing MPCs with goal tracking or
\begin{table}
\begin{tabular}{c c c} \hline \hline
1 & U1: & What time was our appointment? \\
2 & U2: & We have an appointment at 10.30pm. \\
3 & U1: & Ok. \\ \hline \hline \end{tabular}
\end{table}
Table 1: An example extract from our new corpus. This example illustrates that people complete other user’s goals in an MPC. The system must understand that U1’s question was answered by U2, and it does not need to answer this question as if it was a dyadic interaction. Further annotated examples can be found in Table 3.
intent-slot annotations, yet there are important differences. Consider the example in Table 1 (from our new corpus, detailed in Section 2). In turn 1, we can identify that User 1 (U1) wants to know their appointment time. Before the social robot had time to answer, User 2 (U2) answered in turn 2. This obviously does not occur in a dyadic interaction, yet this understanding is essential for natural system behaviour. The SDS must determine that it should not repeat the answer to the question, so data must be collected to learn this. Other major differences exist too. For example, current DST corpora do not contain a concept of 'shared goals' (Eshghi and Healey, 2016). If two people approach a cafe counter, the barista must determine whether the two people are separate (two individuals wanting to get coffee), or together (two friends with the shared goal to get coffee) (Keizer et al., 2013). The interaction changes depending on this fact; it would be unusual to ask "are you paying together" to two individuals. Shared goals can commonly be identified through explicit dialogue. For example, the use of 'we' in "We are looking for the bathrooms". Similar to answering each other's questions, people may also ask questions on behalf of others. In our corpus, a person said "ARI, the person that I'm accompanying feels intimidated by you, and they'd like to know where they can eat".
In this paper, we present several contributions. (1) We collected a corpus of multi-party interactions between a social robot and patients with their companions in a hospital memory clinic. (2) This corpus was annotated for the standard "who says what to whom" tasks, but also for multi-party goal tracking and intent-slot recognition. We followed current DST annotation instructions, tweaked to enable annotation of multi-party phenomena (detailed in Section 2). (3) We then evaluated Large Language Models (LLMs) on these two new tasks using our collected corpus. Models were pre-trained, fine-tuned, or prompt engineered where applicable (detailed in Section 3). It is not possible to collect enormous corpora from patients in a hospital, so models were evaluated in zero-shot and few-shot settings. We found that the GPT-3.5-turbo model significantly outperformed others on both tasks when given a 'reasoning' style prompt.
## 2 Dataset and Tasks
For the initial data collection, we partnered with a hospital in Paris, France, and secured ethical approval as part of the EU SPRING project1. We then recorded, transcribed, translated (from French to English), anonymised, and annotated 29 multi-party conversations (774 turns). These MPCs were between patients of the memory clinic, their companion (usually a family member), and a social humanoid robot created by PAL Robotics called ARI (Cooper et al., 2020). We hired a professional translator to avoid machine translation errors, and to enable faster experimentation as we are not French speakers. Future work based upon the findings in this paper will be evaluated in both English and French.
Footnote 1: [https://spring-h2020.eu/](https://spring-h2020.eu/)
We used a wizard-of-oz setup as this task is new, and we required this data to design a multi-party SDS for use in the hospital. A robot operator was therefore controlling what ARI said by selecting one of 31 response options (task-specific answers and some common responses like "yes", "no", "please", "thank you", and "I don't know"). Following our previously published data collection design (Addlesee et al., 2023), each participant was given one or two goals, and asked to converse with ARI to try to achieve their goal. Both participants were given the same goals in some cases to elicit dialogues containing 'shared goal' behaviour. In order to encourage lexical diversity, we provided pictograms to give each participant their goals. For example, if we told the patient that they want a latte, they would likely use the specific word "latte" (Novikova et al., 2016), so we instead gave the participants pictograms as seen in the top-right of Figure 1. This worked as people didn't just ask for coffee when given this image; some asked for hot chocolate or herbal tea instead.
In this paper, we evaluated each model on both multi-party goal tracking, and multi-party intent-slot recognition. These are two related, yet distinct tasks. If ARI asked the user "Are you hungry?", and the user responded "yes", then the intent of that turn is an affirmation, but the user's goal is also established as wanting to eat. As explained in Section 1, standard DST annotation schemes are designed for dyadic interactions, which do not enable annotation of multi-party behaviours. Each turn is annotated with its intent and slot values where applicable, but goal annotations require both the goal and the user whose goal is being established. When a goal is detected in a dyadic interaction, no user information is needed as there is only a single
user. In multi-party interactions, multiple users can have multiple active goals. These goals may be different, they may be shared (see Table 2), users may answer each other's goals (see Table 1), and one user may provide another user's goal, for example by saying "My wife would love a coffee".
An annotated extract from an MPC in our collected corpus can be found in Table 2. In turn 1, U1 states that "we'd like a coffee", indicating that U1 and their companion U2 would _both_ like a coffee. This turn is annotated with two intents: greet (due to the "hello"), and request. This request intent has a slot value to indicate that the request is for a beverage - coffee. The goal tracking annotation signifies that a goal has been established in this turn with 'G'. The goal is shared by 'U1+U2', and their goal is to drink a coffee. In turn 2, ARI responds informing both users where the cafe is, hence the inform intent annotation. The goal tracking annotation is the same as turn 1, but starts with 'AG' (for 'answer-goal') instead of simply 'G'. This indicates that this goal has been answered, which is critical knowledge for the system to track which goals remain open. In this example, the goal is explicitly closed in turn 3, indicated by the corresponding 'CG' (close-goal) goal tracking annotation. Not all goals are explicitly closed by the user. A dialogue manager could decide to implicitly close an answered goal if the user does not reopen it within three turns, for example. We only annotate explicit goal closures, like the one in turn 3. There are two intents annotated in both turns 1 and 3 in Table 2, and multiple goal annotations can similarly exist, separated by a semicolon. For example, "I'm hungry but need the toilet first" simultaneously opens two goals. All of these annotations were completed using the ELAN tool (Brugman et al., 2004), and then mapped into JSON for model training2.
Footnote 2: Mapping code, annotated data, and training hyperparameters can be found here: [https://github.com/AddisseeHQ/mpgt-eval](https://github.com/AddisseeHQ/mpgt-eval).
With these two sets of annotations, we can evaluate various LLMs on two tasks: (1) multi-party intent-slot recognition; and (2) multi-party goal tracking. It is not possible to collect vast quantities of interactions with patients in the hospital, so these models must be able to learn from a corpus of limited size. We therefore decided to mask annotations in a randomised window selected from each MPC, providing the model with the surrounding context and speaker labels. That is, a random number of turns was selected in each MPC, and then the annotations were replaced by a '[MASK]' token. An example of this is shown in Table 3.
As the corpus size is limited, the window selection could potentially heavily impact model performance. We therefore randomised the selected window three times for each conversation and train/test split, and these _exact same_ windows were used to train and test each model. To clarify, all train/test splits and windows were randomised for multiple runs, but they were unchanged between each model. For example, run 1 with the 20/80 split in Section 4 for T5 contained the exact same test set, with the exact same window, as run 1 with the 20/80 split for DialogLED. This holds true for both tasks. Each masked window was bookended with a '[start]' and '[end]' tag to help the models learn this task too (Zhong et al., 2022). A shortened example from our corpus can be seen in Table 3.
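A minimal sketch of this masking step is given below. The turn representation as (speaker, utterance, annotation) tuples is an assumption for illustration and not the format of the released corpus or mapping code.

```python
# Sketch of the windowed masking: pick a random window of turns, replace
# their annotations with [MASK], and bookend the window with [start]/[end].
import random

def mask_window(mpc, rng=random):
    """mpc: list of (speaker, utterance, annotation) tuples."""
    i = rng.randrange(len(mpc))
    j = rng.randrange(i, len(mpc))              # inclusive window [i, j]
    lines, targets = [], []
    for k, (speaker, utterance, annotation) in enumerate(mpc):
        if k == i:
            lines.append("[start]")
        shown = "[MASK]" if i <= k <= j else annotation
        lines.append(f"{speaker}: {utterance} {shown}".strip())
        if i <= k <= j:
            targets.append(annotation)          # gold labels for the masks
        if k == j:
            lines.append("[end]")
    return "\n".join(lines), targets
```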
## 3 Experimental Procedure
We evaluated three different models (each detailed below): T5 (Raffel et al., 2020), DialogLM using LED (DialogLED) (Zhong et al., 2022), and GPT-3.5-turbo3. Each approach was evaluated in a zero-shot and few-shot setting, with various train/test splits. We could not provide more data to GPT-3.5-turbo due to context window size, but the train/test
Figure 1: A sample of the pictograms used to represent user goals, given to patients and companions. These elicited dialogues without restricting vocabulary.
splits for T5 and DialogLED were: 0/100 (zero-shot), 20/80, 50/50, and 80/20. This allowed us to determine how each model learned to do these tasks when given more training examples. As described in Section 2, we ran each experiment three times with randomised splits and windows, but these remained the same between models to avoid few-shot problems such as recency bias Zhao et al. (2021). We trained all the T5-Large and DialogLED models on a machine containing a 16 GB NVIDIA GeForce RTX 3080 Ti GPU with 64 GB RAM and an Intel i9-12900HK processor.
### T5-Large
Older GPT models (GPT-3 and below) are pre-trained with the next token prediction objective on huge corpora Radford et al. (2019); Brown et al. (2020), an inherently directional task. The creators of T5 added two more objectives and gave it the goal of minimising the combined loss function Raffel et al. (2020) across all three tasks. The two additional tasks were de-shuffling, and BERT-style de-masking Devlin et al. (2018). This latter pre-training task involves 'corrupting' tokens in the original text, which T5 must then predict. Importantly, this enabled T5 to work bidirectionally, becoming particularly good at using the surrounding context to predict tokens in corrupted sentences. This is not dissimilar to our task, in which the model must learn to use the surrounding MPC turns to predict the annotations that are masked. T5 also achieves state-of-the-art results on related tasks (Lee et al., 2021; Marselino Andreas et al., 2022), albeit fine-tuned on larger datasets.
We used T5-Large both in a zero-shot setting and fine-tuned with various train/test splits. T5 allows fine-tuning with a given named task like 'answer the question', or 'translate from French to German'. We used 'predict goals' and 'predict intent-slots' for goal tracking and intent-slot recognition, respectively, giving the same task names as input during testing. As the corpus is very small, there was no model performance boost beyond 3 epochs, which was expected Mueller et al. (2022).
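A sketch of a single fine-tuning step with such a task prefix, assuming the Hugging Face transformers implementation of T5, is given below; the joining of the gold annotations into one target string is our illustrative choice, not a description of the released training code.

```python
# Sketch of one T5 fine-tuning step with a named task prefix.
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

def training_step(masked_dialogue, gold_annotations, task="predict goals"):
    inputs = tokenizer(f"{task}: {masked_dialogue}",
                       return_tensors="pt", truncation=True)
    labels = tokenizer(" ; ".join(gold_annotations),
                       return_tensors="pt", truncation=True).input_ids
    loss = model(**inputs, labels=labels).loss
    return loss   # backpropagate with an optimiser of choice
```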
### DialogLM using LED (DialogLED)
MPCs reveal unique new communication challenges Addlesee et al. (2023), as detailed in Section 1, so some LLMs have been developed specifically for the multi-party domain Hu et al. (2019); Gu et al. (2021, 2022). Microsoft published DialogLM Zhong et al. (2022), a pre-trained LLM based upon UniLMv2 Bao et al. (2020), but specifically designed for multi-party tasks. Alongside the base model, they released two variations: DialogLM-sparse for long dialogues over 5,120 words, and DialogLM using LED (DialogLED), which outperformed the others. DialogLED builds on the Longformer-Encoder-Decoder (LED) Beltagy et al. (2020), an attention mechanism that scales linearly with sequence length. Transformer-based models typically scale quadratically with the sequence length, restricting their ability to process long dialogues.
DialogLED was pre-trained on five objectives designed specifically for MPCs, and the model's goal was to minimise the combined loss of all of these tasks. Their state-of-the-art results showed that their pre-training tasks did encourage the LLM to 'understand' multi-party interactions. The five
\begin{table}
\begin{tabular}{c l l l} \hline \hline & **User** & **Utterance** & **Intent-Slot Annotation** & **Goal Tracking Annotation** \\ \hline
1 & U1: & Hello, we’d like a coffee. Where can we go? & greet() ; request(beverage(coffee)) & G(U1+U2, drink(coffee)) \\
2 & ARI: & You have to enter the building behind you. & inform(directions(cafe)) & AG(U1+U2, drink(coffee)) \\
3 & U2: & Ok, well thank you very much. & acknowledge(); thank() & CG(U1+U2, drink(coffee)) \\ \hline \hline \end{tabular}
\end{table}
Table 2: A corpus example displaying shared goals with both intent-slot and goal tracking annotations.
\begin{table}
\begin{tabular}{c l l l} \hline \hline & **User** & **Masked Goal Tracking Utterance** & **Gold Annotation** \\ \hline
1 & ARI: & Hello, my name is ARI. How can I help you? & - \\ & & [start] & - \\
2 & U1: & My friend is intimidated by you, where can they eat? [MASK] & G(U2, eat()) \\
3 & ARI: & There’s a cafeteria on the ground floor, near the courtyard. [MASK] & AG(U2, eat()) \\ & & [end] & - \\
4 & U2: & My appointment is in room 17, where is it? G(U2, go-to(room_17)) & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: A corpus example illustrating the goal tracking task. This process was the same for intent-slot recognition, with the corresponding annotations. Note that U1 asks U2’s question, and this is reflected in the annotation.
tasks were: (1) speaker masking, the model has to predict who spoke; (2) turn splitting, the model has to recognise when two utterances are likely the same turn; (3) turn merging, the opposite of (2), where the model has to recognise when the turns were likely separate; (4) text infilling, the model has to predict masked tokens within the turn; and (5) turn permutation, the model has to correctly re-order jumbled turns.
We cloned their repository4 and added two new tasks: (6) goal masking, the model has to predict goal tracking annotations; and (7) intent-slot masking, the model has to predict intent-slot annotations. In the zero-shot setting, we simply ran the test set through base DialogLED. We then ran their, now modified, code to run our few-shot evaluations three times for each data split.
Footnote 4: [https://github.com/microsoft/DialogLM](https://github.com/microsoft/DialogLM)
### GPT-3.5-turbo
Larger LLMs are not inherently better at following a user's intent (Ouyang et al., 2022), as they have no incentive to help the user achieve their goal, only to generate realistic-looking outputs. This leads to significant problems, including the generation of false, biased, and potentially harmful responses. GPT-3 was therefore fine-tuned on prompts with human feedback to create InstructGPT (Ouyang et al., 2022). OpenAI later followed this same approach to create the now famous ChatGPT family of models. At the time of writing, GPT-4 is the most powerful of these models, but it is currently in a waiting list phase. OpenAI recommends their GPT-3.5-turbo model while waiting as the next best option. We therefore decided to evaluate this model on the same two tasks.
Unlike T5 or DialogLED, there is no way to fine-tune your own version of GPT-3.5-turbo, or to edit its pre-training steps. People instead mould the model's behaviour through prompt engineering (Lester et al., 2021; Wei et al., 2022; Weng, 2023). The newer GPT models allow developers to provide huge contexts, called prompts, containing instructions for the model to follow. GPT-3.5-turbo allows prompts of up to 4,096 tokens. Although these models have only exploded in popularity recently, there are many prompt 'styles' suggested online by conversation designers who are implementing these models in the real world. We have analysed this space and devised six prompt styles for the two tasks. In the zero-shot setting, only the prompt and the masked MPC is provided to the model. In the few-shot setting, we additionally provide the model with 7% of the corpus as examples. This is crucial to highlight. T5 and DialogLED were trained on 20% of the corpus, 50% of the corpus, and finally 80% of the corpus. GPT-3.5-turbo's maximum context size can only fit 7% of the corpus, less than the other models.
The prompt styles we used were the following (the actual prompts are included in Appendix A):
* **Basic**: This is our baseline prompt. It very simply tells the model what it is going to get as input, and what we want as output. It contains no further special instructions.
* **Specific**: GPT practitioners report that when prompts are more detailed and specific, performance is boosted (Ye et al., 2023).
* **Annotation**: For annotation tasks, we would give fellow humans annotation instructions. In this prompt, we provide the model with annotation instructions.
* **Story**: This model was pre-trained on a very large quantity of data, including novels, film scripts, journalistic content, etc... It may be possible that by phrasing the prompt like a story, performance may be boosted due to its likeness to its training data.
* **Role-play**: Similar to the story prompt, it is reported that these models are very good at role-playing5. People ask ChatGPT to pretend to be a therapist, a lawyer, or even alter-egos that have no safety limitations (Taylor, 2023). We tell GPT-3.5-turbo that it is a 'helpful assistant listening to a conversation between two people and a social robot called ARI'. Footnote 5: [https://github.com/f/awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)
* **Reasoning**: Finally, recent work suggests that these models improve in performance if you explain the reasoning for desired outputs (Fu et al., 2022). We therefore added one fictitious turn to this prompt, and explained the reasoning behind its annotation.
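A minimal sketch of the few-shot GPT-3.5-turbo call is given below, using the ChatCompletion interface available at the time of writing. The system prompt is passed in as a parameter rather than reproduced (the actual prompts are in Appendix A), `examples` stands for the ~7% of the corpus used as annotated few-shot examples, and the temperature setting is our illustrative choice.

```python
# Sketch of a few-shot GPT-3.5-turbo annotation call.
import openai

def annotate_goals(masked_dialogue, examples, system_prompt):
    messages = [{"role": "system", "content": system_prompt}]
    for masked, annotated in examples:             # few-shot example pairs
        messages.append({"role": "user", "content": masked})
        messages.append({"role": "assistant", "content": annotated})
    messages.append({"role": "user", "content": masked_dialogue})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,                             # illustrative choice
        messages=messages,
    )
    return response["choices"][0]["message"]["content"]
```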
## 4 Results
We evaluated T5, DialogLED, and GPT-3.5-turbo as described in Section 3 on multi-party goal tracking
and multi-party intent-slot recognition. Outputs were annotated as either 'exact', 'correct', or 'partial' to distinguish each model's performance beyond simple accuracy. Exact matches were strictly annotated, but slight differences are allowed if the annotation meaning remains unchanged. For example: 'G(U1, go-to(lift))' and 'G(U1, go-to(lifts))' (note the plural 'lifts'). Outputs were marked as exact if every [MASK] in the MPC was exact, and marked as correct if every [MASK] was more broadly accurate. For example, if the annotation contained 'drink(coffee)' and the model output 'drink(hot_drink)', we considered this correct. The output was marked as partially correct if at least 60% of the [MASK] tags were correctly annotated. This latter metric allows us to distinguish between models that generate nonsense, and those that roughly grasp the task. Our inter-annotator agreements were 0.765 and 0.771 for goal tracking and intent-slot recognition, respectively. These are less than 0.8, and this was due to the broad definition of 'correct'. We plan to design automatic metrics for our future work (see Section 5).
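For clarity, the aggregation of per-mask judgements into these per-MPC categories can be sketched as follows; the per-mask exact/correct decisions themselves were made by human annotators, so they are simply given as inputs here.

```python
# Sketch of how per-mask judgements aggregate into the per-MPC categories
# used in Tables 4 and 5.
def score_mpc(exact_flags, correct_flags, partial_threshold=0.60):
    """exact_flags / correct_flags: one boolean per [MASK] in the MPC."""
    n = len(correct_flags)
    return {
        "exact": all(exact_flags),
        "correct": all(correct_flags),
        "partial": sum(correct_flags) / n >= partial_threshold,
    }
```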
### MPC Goal Tracking Results
The goal tracking results can be found in Table 4. An ANOVA test (Fisher, 1992) indicated that there was an overall significant difference between the models' results. We therefore ran a Tukey HSD test (Tukey, 1949) that showed that the GPT-3.5-turbo model in the few-shot setting did significantly outperform all the other models.
Firstly, the T5-Large model performed poorly, even when it was trained on 80% of our corpus. Upon further analysis, it generated complete nonsense in the zero-shot setting, but did start to generate strings that looked reasonable with only 20% of the data. Given the 50/50 train/test split, T5 consistently replaced the [MASK] tokens, but did still hallucinate turns. When given 80% of the data as training data, the T5 model preserved the original dialogue and replaced the [MASK] tokens with goal annotations; they were just all completely wrong. This steady improvement as we increased the amount of training data suggests that T5 could be a viable option for similar tasks, just not where data is limited (such as our hospital use case).
The DialogLED model also generated nonsense in the zero-shot setting, but very quickly learned the task. Even with just 20% of the data used for training, DialogLED reliably preserved the original dialogue and replaced the [MASK] tokens with goal annotations. Most of the annotations were incorrect, for example 'G(U2, eat(ticket))', but DialogLED did correctly detect some goals opening, being answered, and being closed, achieving a non-zero partial score. Given more training data, DialogLED did begin to use the surrounding contextual dialogue turns more accurately, but almost every result contained an incorrect prediction. This was often the mis-detection of shared goals, or closing goals early. Like T5, DialogLED would need a larger training set to accurately complete this task. This model learned the task quickly, so may need fewer examples.
In the zero-shot setting, GPT-3.5-turbo roughly 'understood' the task, generating many partially correct outputs. With all the prompt styles, it did frequently reformat the dialogue. This was particularly true when using the role-play prompt: it would output all the goals per interlocutor, for example, rather than per turn. The worst zero-shot GPT-3.5-turbo prompt was the 'story' style, not even generating one partially correct output. This was due to its increased hallucination. The story prompt noticeably produced more fictitious turns, and also rephrased and removed turns in the original dialogue. We believe this is likely because a story scenario is naturally a fictitious topic. The 'reasoning' style prompt performed remarkably well, generating five times more correct outputs than the second-best prompt style, and generating 79.31% partially correct outputs, showing that it can grasp the concept of the task. The reasoning prompt commonly mis-identified shared goals, unfortunately.
In the few-shot setting, GPT-3.5-turbo's results improved significantly compared to every other approach. We would like to highlight again that each run's example prompts provided to the model were exactly the same for each prompt style. Performance differences were only due to the given prompt style. The 'reasoning' prompt once again outperformed the others across all metrics, generating correct outputs 62.32% of the time, and partially correct 94.20% of the time. In our future work (see Section 5), we plan to utilise this prompt style's impressive performance on limited data. The 'story' prompt was the only style to successfully attribute goals to other speakers, as in Table 3, but it still suffered from increased hallucination, which is not appropriate in a safety-critical
setting. We suspect that the other prompt styles failed to do this because of the rarity of this phenomenon in our corpus. We are eliciting more of these in ongoing experiments with a deployed system, not wizard-of-oz (Addlesee et al., 2023).
### MPC Intent-slot Recognition Results
The results for each model on the intent-slot recognition task can be found in Table 5. As with the goal tracking results, an ANOVA test (Fisher, 1992) indicated that there was an overall significant difference between the models' results. We therefore ran a Tukey HSD test (Tukey, 1949) that showed that the GPT-3.5-turbo model in the few-shot setting significantly outperformed all the other models.
As intent-slot annotations are well-established, T5 and DialogLED both started generating sensible-_looking_ outputs with only a few training examples. The T5 outputs were all incorrect again, however. DialogLED consistently improved as it was trained on progressively more data, annotating almost half of the MPCs partially correctly, and beginning to accurately annotate full MPCs. Given a larger corpus, we expect that DialogLED could potentially generate competitive results, but this is not the case for T5 in this setting with limited data.
GPT-3.5-turbo in the zero-shot setting also achieved higher partial scores, compared to the goal tracking results, due to the fact that intent-slot recognition is a more established task. Turns were commonly annotated with multiple gold intents, but this model tended to only output one per turn. For example: "Hello ARI, where is the cafe?" would only have the prediction 'greet', missing the request to locate the cafe entirely. This prevented the model from achieving higher correct scores.
In the few-shot setting, however, GPT-3.5-turbo significantly outperformed all the other models. The difference was remarkable: almost all of the predictions were partially correct, and the 'reasoning' prompts correctly annotated 70% of the MPCs. Other models tended to falter when anaphoric expressions couldn't be resolved with just the previous turn. They also struggled to identify the 'suggest' intent, for example when one person said "do you want to go to the toilet?". These were misclassified as request intents, likely due to their prominence in the corpus and their resulting influence on the results, given GPT-3.5-turbo's limited input context.
\begin{table}
\begin{tabular}{|c c c|l l l|} \hline
**Model** & **train/test \%** & **Prompt Style** & **Exact \%** & **Correct \%** & **Partial \%** \\ \hline T5 & 0/100 & - & 0 & 0 & 0 \\ T5 & 20/80 & - & \(0\pm 0\) & \(0\pm 0\) & \(0\pm 0\) \\ T5 & 50/50 & - & \(0\pm 0\) & \(0\pm 0\) & \(0\pm 0\) \\ T5 & 80/20 & - & \(0\pm 0\) & \(0\pm 0\) & \(0\pm 0\) \\ \hline \hline DialogLED & 0/100 & - & 0 & 0 & 0 \\ DialogLED & 20/80 & - & \(0\pm 0\) & \(0\pm 0\) & \(5.80\pm 1.45\) \\ DialogLED & 50/50 & - & \(0\pm 0\) & \(2.38\pm 2.38\) & \(1.19\pm 0.63\) \\ DialogLED & 80/20 & - & \(0\pm 0\) & \(0\pm 0\) & \(20\pm 11.55\) \\ \hline \hline GPT 3.5-turbo & 0/100 & Basic & 0 & 3.45 & 31.03 \\ GPT 3.5-turbo & 0/100 & Specific & 0 & 3.45 & 24.14 \\ GPT 3.5-turbo & 0/100 & Annotation & 0 & 6.90 & 44.83 \\ GPT 3.5-turbo & 0/100 & Story & 0 & 0 & 0 \\ GPT 3.5-turbo & 0/100 & Role-play & 0 & 0 & 6.90 \\ GPT 3.5-turbo & 0/100 & Reasoning & 3.45 & 34.48 & 79.31 \\ \hline GPT 3.5-turbo & 7/80* & Basic & 11.59 \(\pm\) 3.83 & 30.43 \(\pm\) 10.94 & 86.96 \(\pm\) 6.64 \\ GPT 3.5-turbo & 7/80* & Specific & 20.29 \(\pm\) 3.83 & 43.48 \(\pm\) 9.05 & 92.75 \(\pm\) 2.90 \\ GPT 3.5-turbo & 7/80* & Annotation & 14.49 \(\pm\) 5.80 & 28.99 \(\pm\) 3.83 & 82.61 \(\pm\) 4.35 \\ GPT 3.5-turbo & 7/80* & Story & 17.39 \(\pm\) 6.64 & 36.23 \(\pm\) 13.83 & 86.96 \(\pm\) 4.35 \\ GPT 3.5-turbo & 7/80* & Role-play & 18.84 \(\pm\) 7.25 & 46.38 \(\pm\) 12.38 & 92.75 \(\pm\) 5.22 \\ GPT 3.5-turbo & 7/80* & Reasoning & **27.54 \(\pm\) 1.45** & **62.32 \(\pm\) 9.50** & **94.20 \(\pm\) 5.80** \\ \hline \end{tabular}
\end{table}
Table 4: The final multi-party goal tracking results for each model in both the zero- and few-shot settings. *We could not fit more than 7% of the training examples in GPT-3.5-turbo’s context window. We therefore used fewer examples than with T5 and DialogLED. The same 80% test sets were still used to enable model comparison.
## 5 Conclusion and Future Work
Multi-party conversations (MPCs) elicit complex behaviours which do not occur in the dyadic interactions that today's dialogue systems are designed and trained to handle. Social robots are increasingly being expected to perform tasks in public spaces like museums and malls, where conversations often include groups of friends or family. Multi-party research has previously focused on speaker recognition, addressee recognition, and tweaking response generation depending on whom the system is addressing. While this work is vital, we argue that these collective "who says what to whom" tasks do not provide any incentive for the social robot to complete user goals, and instead encourage it to simply mimic what a good MPC _looks like_. In this paper, we have detailed how the tasks of goal tracking and intent-slot recognition differ in a multi-party setting, providing examples from our newly collected corpus of MPCs in a hospital. We found that, given limited data,'reasoning' style prompts enable GPT-3.5-turbo to perform significantly better than other models.
We found that other prompt styles also perform well, but prompts that are story-like increase model hallucination. With the introduction of prompt fine-tuning with human feedback (Ouyang et al., 2022), generative LLMs do now have some incentive to avoid misleading or harming the user, providing outputs prepended with caveats, but the issue is not solved. OpenAI claims that GPT-4 generates 40% fewer hallucinations than GPT-3 (Hern and Bhuiyan, 2023), but these models should still not be applied directly in a hospital or other safety-critical setting without further evaluation. In the hospital setting, users are more likely to be from vulnerable population groups, and are more likely to be older adults that are not familiar with the capabilities of today's models. Multiple researchers and hospital staff members are present when conducting our data collections, so that if hallucinations do occur, they can be quickly corrected. We will, therefore, be able to evaluate response grounding, Guidance6, and other hallucination prevention strategies to determine whether these models can ever be used safely in a high-risk setting. These further experiments will also elicit further MPCs that can be annotated for various multi-party tasks.
Footnote 6: [https://github.com/microsoft/guidance](https://github.com/microsoft/guidance)
User inputs must be processed on external
\begin{table}
\begin{tabular}{|c c c|l l l|} \hline
**Model** & **train/test \%** & **Prompt Style** & **Exact \%** & **Correct \%** & **Partial \%** \\ \hline T5 & 0/100 & - & 0 & 0 & 0 \\ T5 & 20/80 & - & \(0\pm 0\) & \(0\pm 0\) & \(0\pm 0\) \\ T5 & 50/50 & - & \(0\pm 0\) & \(0\pm 0\) & \(0\pm 0\) \\ T5 & 80/20 & - & \(0\pm 0\) & \(0\pm 0\) & \(0\pm 0\) \\ \hline \hline DialogLED & 0/100 & - & 0 & 0 & 0 \\ DialogLED & 20/80 & - & \(0\pm 0\) & \(0\pm 0\) & \(5.80\pm 2.90\) \\ DialogLED & 50/50 & - & \(0\pm 0\) & \(0\pm 0\) & \(38.10\pm 10.38\) \\ DialogLED & 80/20 & - & \(0\pm 0\) & \(13.33\pm 6.67\) & \(46.67\pm 6.67\) \\ \hline \hline GPT 3.5-turbo & 0/100 & Basic & 0 & 3.45 & 51.72 \\ GPT 3.5-turbo & 0/100 & Specific & 0 & 0 & 13.79 \\ GPT 3.5-turbo & 0/100 & Annotation & 0 & 3.45 & 20.69 \\ GPT 3.5-turbo & 0/100 & Story & 0 & 0 & 24.14 \\ GPT 3.5-turbo & 0/100 & Role-play & 0 & 0 & 20.69 \\ GPT 3.5-turbo & 0/100 & Reasoning & 0 & 27.59 & 82.76 \\ \hline GPT 3.5-turbo & 7/80* & Basic & \(17.39\pm 6.64\) & \(36.23\pm 12.88\) & \(97.10\pm 2.90\) \\ GPT 3.5-turbo & 7/80* & Specific & \(27.54\pm 1.45\) & \(60.87\pm 9.05\) & \(94.20\pm 1.45\) \\ GPT 3.5-turbo & 7/80* & Annotation & \(18.84\pm 1.45\) & \(40.58\pm 6.32\) & \(91.30\pm 4.35\) \\ GPT 3.5-turbo & 7/80* & Story & \(26.09\pm 4.35\) & \(47.83\pm 10.04\) & \(94.20\pm 3.83\) \\ GPT 3.5-turbo & 7/80* & Role-play & \(20.29\pm 3.83\) & \(49.27\pm 12.88\) & \(97.10\pm 1.45\) \\ GPT 3.5-turbo & 7/80* & Reasoning & \(\mathbf{37.68\pm 1.45}\) & \(\mathbf{69.57\pm 10.94}\) & \(\mathbf{100\pm 0}\) \\ \hline \end{tabular}
\end{table}
Table 5: The final multi-party intent-slot recognition results for each model in both the zero- and few-shot settings. *We could not fit more than 7% of the training examples in GPT-3.5-turbo’s context window. We therefore used fewer examples than with T5 and DialogLED. The same 80% test sets were still used to enable model comparison.
servers when using industry LLMs, like GPT-3.5-turbo and Google's Bard. For this reason, these specific models cannot be deployed in the hospital setting. Patients may reveal identifiable or sensitive information during our data collection, which we subsequently remove from the corpus. This data must stay contained within approved data-controlled servers in the SPRING project. In this paper, we have reported the remarkable performance of an industry LLM, when given limited data, compared to prior model architectures. We will analyse open and transparent instruction-tuned text generators [11], which are able to meet our data security requirements.
The accessibility of today's SDSs is critical when working with hospital patients [1]. Speech production differs between the 'average' user, and user groups that remain a minority in huge training datasets. For example, people with dementia pause more frequently and for longer durations mid-sentence due to word-finding problems [12, 13]. We are utilising knowledge graphs to ensure that SDSs are transparent, controllable, and more accessible for these user groups [1, 1], and we see the unification of large language models and knowledge graphs [23] as the near-term future of our field.
We plan to design and run subsequent experiments in both the hospital memory clinic, and a newly established mock waiting room in our lab. This space will allow us to collect additional MPCs with more than two people, replicating scenarios in which whole families approach a social robot. We plan to evaluate whether prompt engineering can work modularly for N users. For example, we could use GPT-4 to correct speaker diarization [10], then to handle multi-party goal tracking, and then to generate responses to the user. This experimental setup will allow us to quickly test new ideas, such as automatic prompt optimization [11] in the lab, maximising the benefit of patients' time in the hospital.
## Acknowledgements
This research was funded by the EU H2020 program under grant agreement no. 871245 ([https://spring-h2020.eu/](https://spring-h2020.eu/)). We would also like to thank our anonymous reviewers for their time and valuable feedback.
|
2305.02969 | A Modular Quantum Compilation Framework for Distributed Quantum
Computing | For most practical applications, quantum algorithms require large resources
in terms of qubit number, much larger than those available with current NISQ
processors. With the network and communication functionalities provided by the
Quantum Internet, Distributed Quantum Computing (DQC) is considered as a
scalable approach for increasing the number of available qubits for
computational tasks. For DQC to be effective and efficient, a quantum compiler
must find the best partitioning for the quantum algorithm and then perform
smart remote operation scheduling to optimize EPR pair consumption. At the same
time, the quantum compiler should also find the best local transformation for
each partition. In this paper we present a modular quantum compilation
framework for DQC that takes into account both network and device constraints
and characteristics. We implemented and tested a quantum compiler based on the
proposed framework with some circuits of interest, such as the VQE and QFT
ones, considering different network topologies, with quantum processors
characterized by heavy hexagon coupling maps. We also devised a strategy for
remote scheduling that can exploit both TeleGate and TeleData operations and
tested the impact of using either only TeleGates or both. The evaluation
results show that TeleData operations may have a positive impact on the number
of consumed EPR pairs, while choosing a more connected network topology helps
reduce the number of layers dedicated to remote operations. | Davide Ferrari, Stefano Carretta, Michele Amoretti | 2023-05-04T16:13:23Z | http://arxiv.org/abs/2305.02969v1 | # A Modular Quantum Compilation Framework
###### Abstract
**For most practical applications, quantum algorithms require large resources in terms of qubit number, much larger than those available with current NISQ processors. With the network and communication functionalities provided by the Quantum Internet, Distributed Quantum Computing (DQC) is considered as a scalable approach for increasing the number of available qubits for computational tasks. For DQC to be effective and efficient, a quantum compiler must find the best partitioning for the quantum algorithm and then perform smart remote operation scheduling to optimize EPR pair consumption. At the same time, the quantum compiler should also find the best local transformation for each partition. In this paper we present a modular quantum compilation framework for DQC that takes into account both network and device constraints and characteristics. We implemented and tested a quantum compiler based on the proposed framework with some circuits of interest, such as the VQE and QFT ones, considering different network topologies, with quantum processors characterized by heavy hexagon coupling maps. We also devised a strategy for remote scheduling that can exploit both TeleGate and TeleData operations and tested the impact of using either only TeleGates or both. The evaluation results show that TeleData operations may have a positive impact on the number of consumed EPR pairs, while choosing a more connected network topology helps reduce the number of layers dedicated to remote operations.**
_Index terms--_ Distributed Quantum Computing, Quantum Compilation, Quantum Internet.
## 1 Introduction
Noisy Intermediate-Scale Quantum (NISQ) processors are characterized by few hundreds of quantum bits (qubits) with non-uniform quality and highly constrained physical connectivity. Hence, the growing demand for large-scale quantum computers is motivating research on Distributed Quantum Computing (DQC) architectures [1] as a scalable approach for increasing the number of available qubits for computational tasks, and experimental efforts have demonstrated some of the building blocks for such a design [2]. Indeed, with the network and communications functionalities provided by the _Quantum Internet_[3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], remote quantum processing units (QPUs) can communicate and cooperate for executing computational tasks that each NISQ device cannot handle by itself.
In general, when moving from local to distributed quantum computing one faces two main challenges, namely, quantum algorithm partitioning and execution management [1]. To partition a monolithic quantum algorithm, a _quantum compiler_ must be used to find the best breakdown, i.e., the one that minimizes the number of gates that are applied to qubits stored at different devices. Such _remote gates_ can be implemented by means of three communication primitives that we denote as Teleport [14] (quantum state teleportation), Cat-Ent (cat-entanglement) and Cat-DisEnt (cat-disentanglement) [15]. These primitives require that an entangled state is consumed, and a new one must be distributed between the remote processors through the quantum link before another inter-processor operation can be executed. Through these primitives, one can perform two types of remote operations, namely TeleData and TeleGate [2, 16, 17], as shown in Fig. 1. The literature on quantum compilers [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28] focuses on qubit assignment and remote gate scheduling, while paying less attention to the integration of local routing.
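As a concrete illustration of the TeleGate primitive of Fig. 1b, the following Qiskit sketch realizes a remote CZ through Cat-Ent and Cat-DisEnt on a monolithic simulator, with the two halves of one EPR pair standing in for the communication qubits of two QPUs. The register names are illustrative, and this is not the internal representation used by the compiler described later.

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

# Data qubits |c> (control, on QPU 1) and |t> (target, on QPU 2);
# e1 and e2 are the communication qubits holding the shared EPR pair.
c = QuantumRegister(1, "c")
e1 = QuantumRegister(1, "e1")
e2 = QuantumRegister(1, "e2")
t = QuantumRegister(1, "t")
m1 = ClassicalRegister(1, "m1")
m2 = ClassicalRegister(1, "m2")
qc = QuantumCircuit(c, e1, e2, t, m1, m2)

# One EPR pair distributed over the quantum link (consumed by this TeleGate).
qc.h(e1)
qc.cx(e1, e2)

# Cat-Ent: make e2 a shared copy of the control qubit on QPU 2.
qc.cx(c, e1)
qc.measure(e1, m1)
qc.x(e2).c_if(m1, 1)  # classically controlled correction on QPU 2

# Remote CZs with control |c> can now be executed locally on QPU 2.
qc.cz(e2, t)

# Cat-DisEnt: disentangle the shared copy and return control of |c> to QPU 1.
qc.h(e2)
qc.measure(e2, m2)
qc.z(c).c_if(m2, 1)   # classically controlled correction on QPU 1
```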
Regarding execution management, in general, given a collection \(\mathcal{P}\) of quantum circuit instances to be executed, this collection should be divided into non-overlapping subsets \(\mathcal{P}_{i}\), such that \(\mathcal{P}=\cup_{i}\mathcal{P}_{i}\). One after the other, each subset must be assigned to the available QPUs. In other words, for each execution round \(i\), there exists a schedule \(S(i)\) that maps some quantum circuit instances to the quantum network. If
DQC is supported, some quantum circuit instances may be split into sub-circuit instances, each one to be assigned to a different QPU, as illustrated in Figure 2.
In this work, we focus on the first challenge, i.e., quantum algorithm partitioning. We present a modular quantum compilation framework for DQC that, for the first time, takes into account both network and device constraints and characteristics. We illustrate the experimental evaluation of a quantum compiler based on the proposed framework, using some circuits of interest (VQE, QFT, graph state preparation) and different network topologies, with quantum processors characterized by heavy hexagon coupling maps. The heavy-hexagon topology has been chosen by IBM [29] for its scalability and performance, offering reduced error-rates while affording the opportunity to explore error correcting codes [30, 31, 25, 32, 33]. We also devised a strategy for remote scheduling that can exploit both TeleGate and TeleData operations, and tested the impact of using either only TeleGates or both. The evaluation results show that TeleData operations may have a positive impact on the number of consumed EPR pairs, while choosing a more connected network topology helps reduce the number of layers dedicated to remote operations.
The paper is organized as follows. In Section 2, related works on quantum compiling for DQC are discussed. In Section 3, the proposed modular quantum compilation framework is illustrated in detail. In Section 4, the experimental evaluation of a Python-based implementation of the compiler is presented. Finally, Section 5 concludes the paper with a discussion of open questions and future work.
## 2 Related Work
Quantum compilation for DQC is characterized by two fundamental steps, _qubit assignment_ and _remote gate scheduling_.
Figure 1: **(1a)** Circuit representation of TeleData by means of the Teleport primitive, which allows one to move the quantum state of a data qubit \(|c\rangle\) to the second half of an entangled pair. While both the state of the entangled pair and the original qubit \(|c\rangle\) are now lost, multiple CZ acting on the teleported qubit can then be executed. **(1b)** Circuit representation of TeleGate by means of Cat-Ent and Cat-DisEnt primitives. After the Cat-Ent operation, the second half of the entangled pair acts as a shared copy (not an actual copy, due to the no cloning theorem) of the original \(|c\rangle\) control qubit. Multiple remote CZ with the same control qubit and different targets can be executed between Cat-Ent and Cat-DisEnt. It is worth noting that, between Cat-Ent and Cat-DisEnt, the \(|c\rangle\) control qubit is entangled with its shared copy and cannot be targeted by other gates.
Figure 2: Execution of multiple quantum circuit instances with \(k\) QPUs. For each execution round \(i\), a schedule \(S(i)\) maps some quantum circuit instances to the quantum network – each QPU receiving a quantum circuit \(\mathbf{P}_{i}^{j}\) that is either a monolithic one or a sub-circuit of a monolithic one. The classical outputs are accumulated into an output vector \(O\).
In DQC, qubit assignment is generally tackled as a partitioning problem. Specifically, for a given set of virtual qubits, one needs to choose a partition that maps sub-sets of logical qubits to processors, while minimizing the number of required interactions among different sub-sets. The main goal is to minimize the number of consumed _ebits_ - i.e., EPR pairs shared between QPUs - as they are the main bottleneck of distributed quantum computation.
Andres-Martinez and Heunen [18] use _cat-entanglement_ [15] to implement remote quantum gates. The chosen gate set contains every one-qubit gate and two two-qubit gates, namely the CNOT and the CZ gate (i.e., the controlled version of the Z gate). The authors consider no restriction on the ebit connectivity between QPUs. Then, they reduce the problem of distributing a circuit across multiple QPUs to hypergraph partitioning. The proposed approach is evaluated against five quantum circuits, including QFT. The proposed solution has some drawbacks, in particular that there is no way to customize the number of communication qubits of each QPU.
Sundaram et al. [19] present a two-step solution, where the first step is qubit assignment. Circuits are represented as edge-weighted graphs with qubits as vertices. The edge weights correspond to an estimation of the number of entanglement operations. The problem is then solved as a minimum k-cut, where partitions have roughly the same size. The second step is finding the smallest set of cat-entanglement operations that will enable the execution of all TeleGates. The authors state that, in a special setting, this problem can be reduced to a vertex-cover problem, allowing for a polynomial-time optimal solution based on integer linear programming. They also provide a \(O(\log n)\)-approximate solution, where \(n\) is the total number of global gates, for a generalized setting by means of a greedy search algorithm. In [20], the same authors extend their approach to the case of an arbitrary-topology network of heterogeneous quantum computers by means of a Tabu search algorithm.
In [21], by Daei et al., the circuit becomes an undirected graph with qubits as vertices, while edge weights correspond to the number of two-qubit gates between them. Then, the graph is partitioned using the Kernighan-Lin (K-L) algorithm for VLSI design [34], so that the number of edges between partitions is minimized. Finally, each graph partition is converted to a quantum circuit.
In [22], the authors represent circuits as bipartite graphs with two sets of vertices - one set for the qubits and one for the gates - and edges to encode dependencies of qubits and gates. Then, for the qubit assignment problem, they propose a partitioning algorithm via dynamic programming to minimize the number of TeleData operations.
Dadkhah et al. [23] propose a heuristic approach to replace the equivalent circuits in the initial quantum circuit. Then, they use a genetic algorithm to partition the placement of qubits so that the number of teleportations could be optimized for the communications of a DQC.
Nikahd et al. [24] exploit a minimum k-cut partitioning algorithm formulated as an ILP optimization problem, to minimize the number of remote interactions. They use a moving window and apply the partitioning algorithm to small sections of the circuit, thus the partition may change with the moving window by means of TeleData operations.
Cuomo et al. in [25] model the compilation problem with an Integer Linear Programming formulation. The formulation is inspired by the vast theory on dynamic network problems. The authors manage to define the problem as a special case of _quickest multi-commodity flow_. Such a result allows optimization to be performed by means of techniques coming from the literature, such as a _time-expanded_ representation of the distributed architecture.
Ovide et al. [26] investigate the performance of the qubit assignment strategy proposed by Baker et al. [27] on some circuits of interest, under the assumption of local and network all-to-all connectivity. In [27], qubit assignment is treated as a graph partitioning problem, under the assumption that a SWAP operation primitive exists to exchange data qubits between different QPUs; that is, it is not required to check whether free data qubits are available on the QPUs. Ovide et al. show that, in general, the wider the circuit, the higher the number of remote operations, although it highly depends on the specific circuit to be compiled.
## 3 Modular Quantum Compilation Framework
As mentioned in Section 1, there is a lack of a modular framework for compiling quantum circuits to DQC architectures. Such a framework should be circuit agnostic, i.e., able to compile any circuit to any suitable DQC architecture. Moreover, this framework should bridge the gap between local compilation and compilation for DQC. Current proposals from the literature tackle the problems of qubit assignment and remote gate scheduling but do not take into account the local connectivity of each QPU. Our proposal for a general-purpose quantum compilation framework is shown in Fig. 3.
The proposed quantum compilation framework takes as input a quantum circuit and a network configuration. As depicted in Fig. 4, the network configuration describes how QPUs are connected into the target DQC architecture, including quantum channels capacity, i.e., the number of communication qubits for each channel. The network configuration should include descriptions of the internal configurations of the QPUs, i.e., the coupling map and the set of available data qubits and communication qubits. The coupling map is a directed graph where each vertex corresponds to a qubit and directed edges determine the possibility of executing two-qubit gates between the connected qubits1. Fig. 5 shows an example of coupling map with 20 data qubits and 8 communication qubits, highlighted in blue.
Footnote 1: Specifically, the source and destination vertexes of a directed edge can be the control and target qubit respectively of a two-qubit gate. An edge could be undirected, meaning that both qubits can act as control or target.
Having these inputs, the first step in the framework regards the qubit assignment, as described in Sec. 3.1. Once a good qubit assignment has been found, the compiler proceeds to schedule remote gates accordingly and computes the local mapping of qubits assigned for each QPU, as detailed in Sec. 3.2. Finally, local routing must be performed while taking into account the previously scheduled remote gates, as explained in Sec. 3.3.
The output of the framework is, of course, a compiled circuit. This circuit is a single object containing all properly scheduled local and remote gates.
### Qubit Assignment
As mentioned in Sec. 1, the goal is to partition the circuit in order to minimize the communication cost, i.e., the number of remote operations and consequently the number of consumed EPR pairs. To this aim, a quantum circuit \(qc\) can be represented as an undirected weighted graph \(G_{qc}(V,E)\), as shown in Fig. 6, where each edge \(e\in E\) has weight \(W(e)\in\mathbb{N}\). The set of vertices \(V\) corresponds to the qubits in \(qc\) and the weight of each edge is equal to the number of two-qubit gates between the corresponding qubits.
The qubit assignment problem can then be treated as a graph partitioning problem where the objective is to compute a k-way partitioning such that the sum of edges' weights that straddle different partitions is minimized. Given \(k\) available QPUs, the result of k-way partitioning are \(k\) roughly equally sized circuit partitions. There are several algorithms available that can efficiently find a solution. In this work we used METIS's multilevel k-way partitioning2. This approach is preliminary and not optimal, as the QPUs would probably be underutilized and the circuit's qubits unnecessarily scattered through all QPUs. In fact, for each partition we check if moving a qubit to another partition would benefit the overall communication cost. Between all the useful moves found, we choose the best and iteratively continue to search for possible movements, until either all qubits have been moved one
Figure 4: DQC architecture comprising 3 QPUs as shown in Fig. 5. Each QPU is connected to the others and each QPU supports up to 4 _communication qubits_ per connection.
Figure 5: QPU configuration with 20 _data qubits_ and 8 _communication qubits_, inspired by IBM’s heavy hexagon devices [35].
Figure 3: Workflow of the proposed modular quantum compilation framework for DQC architectures.
time or no more good movements can be found. An example of improvement from the initial solution is depicted in Fig. 7.
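A minimal sketch of this qubit assignment step is given below. It assumes the PyPI `metis` bindings for METIS (including the `edge_weight_attr` graph attribute they read) and a simple circuit interface exposing `circuit.qubits` and a list of two-qubit gate endpoints; both are stand-ins for the actual data structures used in our implementation, and the greedy refinement loop described above is omitted.

```python
import networkx as nx
import metis  # PyPI bindings for the METIS library (assumed available)

def interaction_graph(circuit):
    """Build the weighted interaction graph G_qc: one vertex per qubit, and one
    edge per interacting pair, weighted by the number of two-qubit gates."""
    g = nx.Graph()
    g.add_nodes_from(circuit.qubits)
    for q_i, q_j in circuit.two_qubit_gates:  # assumed circuit interface
        if g.has_edge(q_i, q_j):
            g[q_i][q_j]["weight"] += 1
        else:
            g.add_edge(q_i, q_j, weight=1)
    return g

def initial_assignment(circuit, n_qpus):
    """Initial qubit assignment via METIS multilevel k-way partitioning."""
    g = interaction_graph(circuit)
    g.graph["edge_weight_attr"] = "weight"  # tell the bindings where weights live
    edgecuts, parts = metis.part_graph(g, nparts=n_qpus)
    # `edgecuts` is the total weight of edges straddling partitions, i.e. an
    # estimate of the remote two-qubit gates still to cover; `parts[i]` is the
    # QPU index assigned to the i-th vertex of g.
    return dict(zip(g.nodes, parts)), edgecuts
```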
### Remote Gate Scheduling
We designed a compilation pass to schedule remote gates for DQC architectures, to investigate the impact of using both TeleData and TeleGate operations. The main strategy, described in Alg. 1, requires three input items: the quantum circuit to distribute, the configuration of the network onto which such a circuit will be executed, and a suitable qubit assignment, as computed by a previous pass in the compilation framework. The pass scans the quantum circuit gate by gate and stops when it encounters a gate that, based on the current partitioning, involves qubits on different QPUs. The pass then searches for feasible TeleData operations through which the gate could be covered3 by teleporting one or both qubits onto a common QPU. TeleData operations are selected by taking into account the memory capacity of each QPU, all the while making sure that no data qubit storing valuable information gets overwritten by a teleportation. Finally, each possible TeleData is assigned a cost, which is given by Eq. 1:
Footnote 3: Here, “to cover” means to make the gate executable.
\[\frac{n_{EPR}}{n_{cov}}\,\frac{delay}{\bar{d}_{t}} \tag{1}\]
where \(n_{EPR}\) is the number of consumed EPR pairs, \(n_{cov}\) is the number of covered gates, which may include more gates than those that were to be covered originally - as shown in Fig. 8(b) - and \(delay\) is the time, measured in discrete intervals, that must be waited before actually executing the gate. The \(delay\) is estimated based on when the quantum links for entanglement generation were last used and when the gate should be executed. It may be the case that before executing a TeleData operation, one needs to wait for a previously scheduled one to complete. The \(delay\) is scaled with the mean decoherence time \(\bar{d}_{t}\) of the physical qubits.
```
1:functionSchedule
2:\(D\leftarrow\emptyset\)
3:\(covered\leftarrow\emptyset\)
4:for all\(g\in QC\)do
5:if\(g\notin D\)then
6:if\(g\) is localthen
7: put \(g\) into \(covered\) and \(D\)
8:else
9:\(TeleData\leftarrow\textsc{Find TeleData}(g,N,P)\)
10:\(TeleGate\leftarrow\textsc{Find TeleGate}(g,N,P)\)
11:if\(\textsc{Cost}(TeleData)<\textsc{Cost}(TeleGate)\)then
12: put \(TeleData\) into \(D\)
13:else
14: put \(TeleGate\) into \(D\)
15:endif
16: put \(g\) into \(covered\) and \(D\)
17: put extra covered gates into \(D\) and \(covered\)
18:endif
19:endif
20:endfor
21:endfunction
```
**Algorithm 1** Remote Gates Scheduler
The pass also searches for feasible TeleGate operations that could cover the gate, possibly together with additional gates (see Fig. 8). For example, gate \(g_{0}\) is covered by sharing qubits \(q_{1}\) and \(q_{4}\) with \(QPU_{2}\), using two TeleGates; by doing this, gates \(g_{1}\) and \(g_{2}\) are also covered. The same concept can be applied with TeleData operations.
The pass also compiles the same portion of the circuit by scheduling only TeleGate operations. It can then compute a cost for both compiled portions - one with TeleData and TeleGate, the other with just TeleGate - and select the strategy with the lowest cost. This time, the cost is simply the number of consumed EPR pairs. Finally, the pass resumes scanning gates in search of the next gate to cover.
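The cost model and the greedy selection described above can be summarized by the following sketch. The candidate dictionaries are placeholders for the compiler's internal candidate objects; only the quantities entering Eq. (1) and the final EPR-pair count are assumed.

```python
def candidate_cost(n_epr, n_covered, delay, mean_decoherence_time):
    """Cost of a candidate remote operation, following Eq. (1):
    (EPR pairs consumed / gates covered) * (estimated delay / mean decoherence time)."""
    return (n_epr / n_covered) * (delay / mean_decoherence_time)

def cheapest(candidates):
    """Greedy choice of Alg. 1: pick the candidate with the lowest Eq. (1) cost.
    Each candidate is a dict carrying the quantities entering Eq. (1)."""
    return min(
        candidates,
        key=lambda c: candidate_cost(c["n_epr"], c["n_covered"], c["delay"], c["mean_dt"]),
    )

def select_strategy(teledata_and_telegate_ops, telegate_only_ops):
    """Compare the two compiled versions of a circuit portion and keep the one
    that consumes fewer EPR pairs overall."""
    def epr_count(ops):
        return sum(op["n_epr"] for op in ops)
    if epr_count(teledata_and_telegate_ops) <= epr_count(telegate_only_ops):
        return teledata_and_telegate_ops
    return telegate_only_ops
```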
### Local Routing
The local routing pass takes as input a partitioned circuit with already scheduled remote operations and handles the local routing accordingly. It requires the partitioned circuit, the network configuration, and each QPU configuration, specifically the coupling maps, including the connections between data qubits and communication qubits.
The core strategy is straightforward. The pass scans the circuit and, for every gate that involves qubits not directly connected on their specific QPU, computes the shortest sequence of necessary SWAP gates. When it encounters a TeleData or TeleGate operation, it first checks whether the involved data qubits are in proximity of one of the available communication qubits, among those corresponding to the quantum links used by the remote operation. If not, it computes the shortest paths to the least recently used communication qubit. The least recently used communication qubit is chosen to avoid as much delay as possible in entanglement generation. At this stage of compilation, due to local SWAPs, the state of a data qubit may now reside on a communication qubit and vice versa. This is not necessarily an issue [36], but it is necessary to move the communication qubit back to its original position after the remote operation is completed and before it is used again. This is crucial in order not to lose the state of a data qubit physically stored at a communication qubit location, due to a new remote operation. An example of such an instance is shown in Fig. 9.
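A simplified sketch of this routing logic is given below, using networkx shortest paths over the coupling map. The `layout` dictionary, the circuit interfaces, and the choice of always swapping the moving qubit along the path are illustrative simplifications of the actual pass.

```python
import networkx as nx

def pick_communication_qubit(comm_qubits, last_used_time):
    """Choose the least recently used communication qubit on the relevant link,
    so that the next entanglement generation is delayed as little as possible."""
    return min(comm_qubits, key=lambda q: last_used_time.get(q, -1))

def route_towards(coupling_map, layout, virt_qubit, phys_dst):
    """Return the SWAPs needed to bring the state of `virt_qubit` adjacent to
    the physical qubit `phys_dst`, updating the virtual-to-physical `layout`.

    `coupling_map` is an undirected networkx graph over physical qubits and
    `layout` a dict {virtual qubit: physical qubit}; both are stand-ins for the
    compiler's internal data structures.
    """
    path = nx.shortest_path(coupling_map, layout[virt_qubit], phys_dst)
    swaps = []
    # Swap along the path, stopping once `virt_qubit` sits next to `phys_dst`.
    for a, b in zip(path[:-2], path[1:-1]):
        swaps.append((a, b))
        inverse = {phys: virt for virt, phys in layout.items()}
        if a in inverse:
            layout[inverse[a]] = b
        if b in inverse:
            layout[inverse[b]] = a
    return swaps
```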
Figure 8: Gates can be covered by either migrating one qubit to the other qubit’s QPU or both to a different one. This concept is valid for both TeleGate and TeleData operations. **(a)** Gate \(g_{0}\) is covered by sharing qubit \(q_{1}\) with \(QPU_{1}\) using one TeleGate. **(b)** Qubits \(q_{1}\) and \(q_{4}\) are shared with \(QPU_{2}\) using two TeleGates, gates \(g_{0}\), \(g_{1}\) and \(g_{2}\) are consequently covered.
Figure 7: **(a) Example of initial graph partitioning. There are three partitions, each holding 4 qubits, and 8 edges between different partitions. The total communication cost is equal to 8. White nodes represent available qubits that have not been utilized. **(b)** Graph partitioning where qubits have been moved to achieve a better solution. The total communication cost is now equal to 6.**
## 4 Evaluation
We implemented a quantum compiler based on the modular framework presented in Sec. 3. The compiler was tested against three classes of quantum circuits, namely, VQE, QFT and Graph state circuits (an example of graph state used is shown in Fig. 10). The aforementioned circuits were compiled for the DQC architectures illustrated in Fig. 4 and Fig. 11, comprised respectively of 3 and 5 QPUs, denoted as _Net-3_ and _Net-5_. To increase the number of data qubits available, the QPUs in Fig. 5 can be scaled up in a modular fashion [29]. In the following, _QPU-n_ denotes a QPU with \(n\) data qubits. For each quantum circuit, the tests concerned remote gate scheduling with only the TeleGate operation, as well as with both the TeleGate and TeleData operations. For each compiled circuit, the depth, the number of EPR pairs consumed and the layers dedicated to remote operations were used to analyze the results.
Fig. 12, 13 and 14 show compilation results of VQE, QFT and Graph circuits on Net-3 with QPU-21. It can be seen that exploiting TeleData operations alongside TeleGates tends to be more beneficial, in terms of total depth, especially for QFT circuits. Moreover, using TeleData operations can greatly reduce the number of EPR pairs consumed and the layers dedicated to remote operations for QFT circuits, while being slightly detrimental for VQE circuits. Regarding Graph state circuits, using TeleData operations causes a minimal increase in the remote-operation layers, opposed to a decrease in the number of EPR pairs consumed.
All figures show a slight increase in total circuit depth when the channel capacity changes from 2 to 4. At first glance, this may seem counterintuitive; it is probably an overhead caused by the local routing, which tries to use
Figure 11: DQC architecture comprising 5 QPUs. Each QPU is connected to at least 3 others QPUs and, depending on the QPUs topology, there may be more than 1 _communication qubit_ per connection.
Figure 10: Example of graph state used to create graph state circuits. This graph state has 6 qubits, but can be scaled up to an arbitrary large number of qubits.
Figure 9: Example of remote gate scheduling and local routing. Local gates are interlaced with Cat-Ent, Cat-DisEnt and TeleData operations as well as SWAP gates.
all available communication qubits, regardless of their distance from data qubits in the local coupling map. Interestingly, there seems to be no difference in the number of layers dedicated to remote operations with respect to the channel capacity. We suppose that, due to the low connectivity between data qubits and communication qubits on each QPU, local routing operations create an upstream bottleneck with deleterious effects despite the increase in channel capacity. Further investigations in this regard will be necessary.
Some tests were also made using Net-3 with QPU-63 devices, with the number of data qubits used by the circuits ranging from 80 to 140. The results are reported in Fig. 15, 16 and 17. While there is still not much of a difference when changing the channel capacity, the use of TeleData operations is greatly beneficial when distributing QFT circuits, which, judging from the number of EPR pairs consumed, appear to be the circuit class that depends most heavily on remote operations, among those tested.
By maintaining Net-3 but changing to QPU-125 devices, the compiler was tested on circuits with up to 250 qubits. At this stage, an interesting observation can be made from Fig. 18, 19 and 20. It seems that, for Graph state circuits, when the number of qubits grows and the data-qubit capacity of the network is topped up, the number of EPR pairs consumed remains unchanged while there is an almost unnoticeable increase in the number of layers for remote operations.
Finally, the total number of data qubits was further increased, by exploiting Net-5 with QPU-125 devices. Therefore, it was possible to compile circuits up to 600 qubits, as depicted in Fig. 21, 22 and 23. There are two results that stand out in these figures. Firstly, for VQE circuits, the results show that there is a slight increase in the layers of remote operations when the maximum number of qubits is reached and TeleData operations are employed. The opposite can be observed for Graph state circuits, where the number of remote
Figure 14: Results of Graph circuits compiled for the Net-3 architecture with QPU-21. The number of qubits varies from 40 to 50, while the channel capacity varies from 2 to 4.
Figure 12: Results of VQE circuits compiled for the Net-3 with QPU-21. The number of qubits varies from 40 to 50, while the channel capacity varies from 2 to 4.
Figure 13: Results of QFT circuits compiled for the Net-3 architecture with QPU-21. The number of qubits varies from 40 to 50, while the channel capacity varies from 2 to 4.
operations layers decreases, although marginally, when the maximum number of data qubits allowed by the network is reached. This trend goes against the observation made previously for the same type of circuits albeit with the Net-3 topology, which outlines the impact of different network topologies and suggests that choosing a more connected network is in fact beneficial.
Figure 16: Results of QFT circuits compiled for the Net-3 architecture with QPU-63. The number of qubits varies from 80 to 140, while the channel capacity varies from 2 to 6.
Figure 17: Results of Graph circuits compiled for the Net-3 architecture with QPU-63. The number of qubits varies from 80 to 140, while the channel capacity varies from 2 to 6.
Figure 18: Results of VQE circuits compiled for the Net-3 with QPU-125. The number of qubits varies from 150 to 250, while the channel capacity varies from 6 to 10.
Figure 15: Results of VQE circuits compiled for the Net-3 with QPU-63. The number of qubits varies from 80 to 140, while the channel capacity varies from 2 to 6.
## 5 Conclusion
In this work, we introduced a general-purpose modular quantum compilation framework for DQC that takes into account both network and device constraints and characteristics. We illustrated the experimental evaluation of a quantum compiler based on the proposed framework, using some circuits of interest (VQE, QFT, graph state preparation) characterized by different widths (up to 600 qubits). We considered different network topologies, with quantum processors characterized by heavy hexagon coupling maps. We also presented a strategy for remote scheduling that can exploit both
Figure 19: Results of QFT circuits compiled for the Net-3 architecture with QPU-125. The number of qubits varies from 150 to 250, while the channel capacity varies from 6 to 10.
Figure 21: Results of VQE circuits compiled for the Net-5 with QPU-125. The number of qubits varies from 300 to 600, while the channel capacity varies from 2 to 4.
Figure 22: Results of QFT circuits compiled for the Net-5 architecture with QPU-125. The number of qubits varies from 300 to 600, while the channel capacity varies from 2 to 4.
Figure 20: Results of Graph circuits compiled for the Net-3 architecture with QPU-125. The number of qubits varies from 150 to 250, while the channel capacity varies from 6 to 10.
TeleGate and TeleData operations, and tested the impact of using either only TeleGates or both operations. We observed that TeleData operations may have a positive impact on the number of consumed EPR pairs. Furthermore, we showed that choosing a more connected network topology helps reduce the number of layers dedicated to remote operations.
Regarding future work, we will focus on integrating noise-adaptive compilation strategies into the framework, both for local routing [37] and for remote gate scheduling. We shall then evaluate the impact of different strategies on the quality of computation results, which also depends on the selection of suitable metrics. To produce such metrics, we need to actually execute the compiled circuits, either by means of a quantum network simulator or on real hardware. In the first case, simulators with different levels of abstraction are already available, depending on how realistic the simulation needs to be. These simulations will be crucial to understand the impact that remote operations, and any resulting local routing overhead, have on the quality of the computation due to the effects of noise.
## Acknowledgement
The authors acknowledge financial support from the EU Flagship on Quantum Technologies through the project Quantum Internet Alliance (EU Horizon Europe, grant agreement no. 101102140). This research benefits from the HPC (High Performance Computing) facility of the University of Parma, Italy.
## Data Availability
All data and code required to reproduce all plots shown herein are available at [https://doi.org/10.5281/zenodo.7896589](https://doi.org/10.5281/zenodo.7896589).
|
2304.13595 | Conditional quantum thermometry -- enhancing precision by measuring less | Taking accurate measurements of the temperature of quantum systems is a
challenging task. The mathematical peculiarities of quantum information make it
virtually impossible to measure with infinite precision. In the present paper,
we introduce a generalized thermal state, which is conditioned on the pointer
states of the available measurement apparatus. We show that this conditional
thermal state outperforms the Gibbs state in quantum thermometry. The origin
for the enhanced precision can be sought in its asymmetry quantified by the
Wigner-Yanase-Dyson skew information. This additional resource is further
clarified in a fully resource-theoretic analysis, and we show that there is a
Gibbs-preserving map to convert a target state into the conditional thermal
state. We relate the quantum J-divergence between the conditional thermal state
and the same target state to quantum heat. | Akira Sone, Diogo O. Soares-Pinto, Sebastian Deffner | 2023-04-26T14:44:58Z | http://arxiv.org/abs/2304.13595v2 | # Conditional quantum thermometry - enhancing precision by measuring less
###### Abstract
Taking accurate measurements of the temperature of quantum systems is a challenging task. The mathematical peculiarities of quantum information make it virtually impossible to measure with infinite precision. In the present letter, we introduce a generalized thermal state, which is conditioned on the pointer states of the available measurement apparatus. We show that this conditional thermal state outperforms the Gibbs state in quantum thermometry. The origin for the enhanced precision can be sought in its asymmetry quantified by the Wigner-Yanase-Dyson skew information. This additional resource is further clarified in a fully resource-theoretic analysis, and we show that there is a Gibbs-preserving map to convert a target state into the conditional thermal state. Finally, we relate the quantum J-divergence between the conditional thermal state and the same target state to quantum heat.
Quantum metrology is the task of utilizing the peculiar properties of quantum systems, such as quantum coherence and entanglement, to achieve parameter estimation at precision beyond the classical limit [1; 2; 3; 4]. Hence, the quest to identify quantum states that are uniquely suited for metrological tasks with limited resources [5; 6] is of crucial practical relevance. In the following, we address this issue and impose the plausible and realistic condition that only the observable eigenstates of the measurement apparatus, aka pointer states [7; 8; 9; 10; 11], are available.
One of the most prominent metrological tasks is quantum thermometry of stationary states. In fact, quantum thermometry with Gibbs states has been well studied [12; 13; 14; 15; 16; 17; 18; 19; 20]. In this simplest scenario it is easy to see that the optimal measurement for estimating the inverse temperature \(\beta\) is the Hamiltonian. However, when the system size is very large and supports quantum correlations, even energy measurements are a challenging task [14; 15]. Therefore, it is desirable to find _better_ states whose corresponding optimal measurements are experimentally implementable, and which ideally even outperform Gibbs states in the low-temperature limit [21; 22; 23].
In this letter, we discuss a quantum state that indeed fulfills the aforementioned 'wishlist'. The _conditional thermal state_ (CTS) is constructed as a Gibbsian-distributed quantum state over the pointer basis corresponding to the available measurement apparatus. The CTS originally appeared in the one-time measurement approach to quantum work [24; 25; 26; 27; 28; 29] and the correspondingly tighter maximum work theorems. In the following, we demonstrate that the CTS outperforms the Gibbs state in quantum thermometry. To elucidate its properties further, we then show that it can be understood as a non-passive state with useful resources [30; 31; 32]. To this end, we first relate its asymmetry, which is quantified by the Wigner-Yanase-Dyson (WYD) skew information [33; 34; 35], to its QFI. Then, focusing on a system undergoing unitary evolution, we discuss the state convertibility between the exact final state and the CTS constructed by the pointer states given by the evolved post-measurement state. Finally, we demonstrate that their symmetric divergence, also known as quantum J-divergence, can be interpreted as quantum heat [36]. Our results demonstrate the usefulness of the CTS as a resource state from both fundamental and practical perspectives.
_Conditional thermal state._ We begin by establishing notions and notations. For the ease of the presentation, and to avoid clutter in the formulas, we work in units for which the Boltzmann constant \(k_{B}\) and the reduced Planck constant \(\hbar\) are \(k_{B}=\hbar=1\).
Figure 1: **Illustration of the concept**: The _conditional thermal state_ (CTS) describes the state of a thermal quantum system conditioned on the measurable pointer basis \(\{|\psi_{k}\rangle\}_{k=1}^{d}\) of an implementable measurement \(M\). The CTS maximizes the von Neumann entropy under the constraint that the ensemble average of the Hamiltonian is fixed.

The concept of the CTS is illustrated in Fig. 1. In a \(d\)-dimensional Hilbert space \(\mathcal{H}\), given a Hamiltonian \(H\) and the pointer states \(\{|\psi_{k}\rangle\}_{k=1}^{d}\) of an implementable measurement \(M\), the CTS is defined as
\[\rho_{\beta}\equiv\sum_{k=1}^{d}\frac{e^{-\beta\langle\psi_{k}|H|\psi_{k}\rangle}}{Z_{\beta}}\,|\psi_{k}\rangle\!\langle\psi_{k}|\,. \tag{1}\]
Here, \(Z_{\beta}\) is the normalization factor
\[Z_{\beta}\equiv\sum_{k=1}^{d}e^{-\beta\langle\psi_{k}\left|H\right|\psi_{k} \rangle}\,, \tag{2}\]
which can be interpreted as a generalized partition function [24; 25; 26; 27; 28]. The CTS is defined as the thermal state conditioned on the choice of the measurement implementable in the laboratory. In the supplementary material [37] we show that the CTS maximizes the von Neumann entropy [38] under the constraint that the ensemble average of the Hamiltonian is fixed. Note that when \(\left\{|\psi_{k}\rangle\right\}_{k=1}^{d}\) are the pointer states of \(H\), \(\rho_{\beta}\) becomes the Gibbs state \(\rho_{\beta}^{\rm eq}\equiv e^{-\beta H}/Z_{\beta}^{\rm eq}\), where \(Z_{\beta}^{\rm eq}\equiv\mathrm{tr}\left\{e^{-\beta H}\right\}\) is the standard canonical partition function. Therefore, the CTS can be regarded as a generalized thermal state.
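As a concrete illustration of Eqs. (1) and (2), the following minimal NumPy sketch builds the CTS from a Hamiltonian and a chosen pointer basis; the function name and interface are ours, introduced only for illustration.

```python
import numpy as np

def conditional_thermal_state(H, pointer_states, beta):
    """Build the CTS of Eq. (1): a mixture of pointer states |psi_k><psi_k|
    with Gibbs weights set by the conditional energies <psi_k|H|psi_k>."""
    # pointer_states: (d, d) array whose columns are the orthonormal |psi_k>
    energies = np.array([np.real(np.vdot(psi, H @ psi)) for psi in pointer_states.T])
    weights = np.exp(-beta * energies)
    Z = weights.sum()  # generalized partition function, Eq. (2)
    rho = sum(w * np.outer(psi, psi.conj()) for w, psi in zip(weights, pointer_states.T)) / Z
    return rho, Z

# Sanity check: with the energy eigenbasis as pointer basis, the CTS
# reduces to the Gibbs state e^{-beta H} / Z_eq.
H = np.diag([0.0, 1.0])
rho, Z = conditional_thermal_state(H, np.eye(2), beta=2.0)
gibbs = np.diag(np.exp(-2.0 * np.diag(H)))
gibbs /= np.trace(gibbs)
assert np.allclose(rho, gibbs)
```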
For later convenience, we also define the following conditional thermal separable state in a composite Hilbert space \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\)
\[\widetilde{\rho}_{\beta}\equiv\sum_{k=1}^{d}\frac{e^{-\beta\langle\psi_{k}|H|\psi_{k}\rangle}}{Z_{\beta}}\,|\psi_{k}\rangle\!\langle\psi_{k}|_{1}\otimes|\phi_{k}\rangle\!\langle\phi_{k}|_{2}\,, \tag{3}\]
where \(\left\{|\phi_{k}\rangle\right\}_{k=1}^{d}\) is an arbitrary set of orthonormal states of \(\mathcal{H}_{2}\). From these definitions, we also have \(\rho_{\beta}=\mathrm{tr}_{2}\left\{\widetilde{\rho}_{\beta}\right\}\), which will become important in our discussion of the relation between the quantum Fisher information (QFI) for quantum thermometry and the asymmetry measure of \(\rho_{\beta}\).
_Quantum thermometry._ An important figure of merit in quantum metrology is the QFI. Given a quantum state \(\rho_{\theta}\) parameterized by a parameter \(\theta\in\mathbb{R}\), the variance \((\delta\theta)^{2}\) of any unbiased estimator is bounded by the quantum Cramer-Rao bound (QCRB), \((\delta\theta)^{2}\geqslant 1/\mathcal{I}(\rho_{\theta};\theta)\), where \(\mathcal{I}(\rho_{\theta};\theta)\) is the QFI defined by \(\mathcal{I}(\rho_{\theta};\theta)\equiv-2\lim_{\epsilon\to 0}\partial_{\epsilon}^{2}\mathcal{F}(\rho_{\theta},\rho_{\theta+\epsilon})\)[39; 40; 41; 42; 43; 44]. Here, \(\mathcal{F}(\rho,\sigma)\equiv\left\|\sqrt{\rho}\sqrt{\sigma}\right\|_{1}^{2}\) is the quantum fidelity between two quantum states \(\rho\) and \(\sigma\)[45], and \(\left\|A\right\|_{1}\equiv\mathrm{tr}\left\{\sqrt{A^{\dagger}A}\right\}\) is the trace norm. We denote \(\partial^{m}/\partial x^{m}\) simply by \(\partial_{x}^{m}\).
For our present purposes, we now analyze the precision with which the inverse temperature \(\beta\) can be estimated from the CTS. This will also allow us to relate the QFI of the CTS to the WYD skew information \(I_{\alpha}(\rho_{\beta},H)\) contained by \(\rho_{\beta}\) with respect to the Hamiltonian \(H\), which quantifies the asymmetry of \(\rho_{\beta}\).
The QFI \(\mathcal{I}(\rho_{\beta};\beta)\) of the CTS \(\rho_{\beta}\) is given by [37]
\[\mathcal{I}(\rho_{\beta};\beta)=\partial_{\beta}^{2}\ln Z_{\beta}\,, \tag{4}\]
and the optimal measurement achieving the quantum Cramer-Rao bound is
\[M_{\rm opt}=\sum_{k=1}^{d}\langle\psi_{k}\left|H\right|\psi_{k}\rangle|\psi_{ k}\rangle\!\langle\psi_{k}|\,. \tag{5}\]
To compare the sensitivity of the CTS \(\rho_{\beta}\) and the Gibbs state \(\rho_{\beta}^{\rm eq}\), we define the QFI difference as \(\Delta\mathcal{I}_{\beta}\equiv\mathcal{I}(\rho_{\beta};\beta)-\mathcal{I}( \rho_{\beta}^{\rm eq};\beta)\). We have
\[\Delta\mathcal{I}_{\beta}=-\partial_{\beta}^{2}S(\rho_{\beta}||\rho_{\beta}^{ \rm eq})=\partial_{\beta}^{2}\left(\ln\frac{Z_{\beta}}{Z_{\beta}^{\rm eq}} \right)\,, \tag{6}\]
where \(S(\rho_{\beta}||\rho_{\beta}^{\rm eq})\equiv\mathrm{tr}\left\{\rho_{\beta}\ln \rho_{\beta}\right\}-\mathrm{tr}\left\{\rho_{\beta}\ln\rho_{\beta}^{\rm eq}\right\}\) denotes the quantum relative entropy [46] of \(\rho_{\beta}\) with respect to \(\rho_{\beta}^{\rm eq}\). The relative entropy measures the distinguishability of \(\rho_{\beta}\) and \(\rho_{\beta}^{\rm eq}\). Therefore, the condition that the CTS outperforms the Gibbs state in quantum thermometry at a certain temperature \(\beta_{0}\) is
\[\partial_{\beta}^{2}S(\rho_{\beta}||\rho_{\beta}^{\rm eq})\Big{|}_{\beta= \beta_{0}}<0\,. \tag{7}\]
Therefore, the curvature of the quantum relative entropy \(S(\rho_{\beta}||\rho_{\beta}^{\rm eq})\) with respect to \(\beta\) can be regarded as the criterion quantifying the performance of the CTS for quantum thermometry.
_Single-qubit example._ As a pedagogical example, we show the single-qubit case. Let \(\sigma_{x},\sigma_{y}\) and \(\sigma_{z}\) be the Pauli matrices. The Hamiltonian is \(H=\omega\sigma_{z}\), so that the eigenstates of \(H\) are \(\left|0\right\rangle=\left(1\ \ 0\right)^{T}\) and \(\left|1\right\rangle=\left(0\ \ 1\right)^{T}\). Therefore, \(Z_{\beta}^{\rm eq}=\mathrm{tr}\left\{e^{-\beta H}\right\}=2\cosh(\beta\omega)\). Considering the pointer states \(\left|\psi_{0}(\theta)\right\rangle=e^{i\frac{\theta}{2}\sigma_{x}}|0\rangle\) and \(\left|\psi_{1}(\theta)\right\rangle=e^{i\frac{\theta}{2}\sigma_{x}}|1\rangle\) with \(\theta\in\mathbb{R}\), the normalization factor of the CTS becomes \(Z_{\beta}(\theta)=2\cosh(\beta\omega\cos(\theta))\).
Therefore, we obtain from Eq. (6),
\[\Delta\mathcal{I}_{\beta}(\theta)=\omega^{2}\left(-1+\frac{\cos^{2}(\theta)}{ \cosh^{2}(\beta\omega\cos(\theta))}+\tanh^{2}(\beta\omega)\right). \tag{8}\]
In the low-temperature limit (\(\beta\omega\gg 1\)), we have
\[\Delta\mathcal{I}_{\beta}(\theta)\simeq\left(\frac{\omega\cos(\theta)}{\cosh( \beta\omega\cos(\theta))}\right)^{2}\geqslant 0\ \ \ \forall\,\theta\in\mathbb{R}\,. \tag{9}\]
This means that the CTS can outperform the Gibbs state for any choice of pointer states in the low-temperature limit. As an example, choose \(\theta=\pi/4\) and \(\omega=1\), which is illustrated in Fig. 2.
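The closed-form expressions in Eqs. (8) and (9) are easy to check numerically; the short sketch below, using the parameters quoted above, evaluates the QFI gain and its low-temperature approximation (the function name is ours, introduced only for illustration).

```python
import numpy as np

def delta_qfi(beta, omega, theta):
    """QFI gain of the CTS over the Gibbs state for H = omega * sigma_z, Eq. (8)."""
    c = np.cos(theta)
    return omega**2 * (-1.0 + c**2 / np.cosh(beta * omega * c)**2 + np.tanh(beta * omega)**2)

# Low-temperature limit, Eq. (9): the gain approaches a non-negative sech^2 term.
beta, omega, theta = 10.0, 1.0, np.pi / 4
low_T = (omega * np.cos(theta) / np.cosh(beta * omega * np.cos(theta)))**2
print(delta_qfi(beta, omega, theta), low_T)  # both ~1.4e-6 and non-negative
```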
_Asymmetry._ The natural question arises where exactly this enhanced performance of the CTS in thermometry originates. To this end, we now analyze the relation between the QFI of \(\rho_{\beta}\) and the asymmetry measure, which is quantified by the WYD skew information \(I_{\alpha}(\rho_{\beta},H)\) contained by \(\rho_{\beta}\) with respect to the Hamiltonian \(H\). For a quantum state \(\rho\), the WYD skew information is defined as \(I_{\alpha}(\rho,H)\equiv\mathrm{tr}\left\{\rho H^{2}\right\}-\mathrm{tr}\left\{\rho^{\alpha}H\rho^{1-\alpha}H\right\}\)
with \(0<\alpha<1\). With \(\mathrm{Var}_{\rho}\{H\}\equiv\mathrm{tr}\left\{\rho H^{2}\right\}-\left(\mathrm{ tr}\left\{\rho H\right\}\right)^{2}\) the variance of \(H\) with respect to \(\rho\), we can define the general variance (covariance) \(\mathrm{Cov}_{\rho}\{H,H\}\) as \(\mathrm{Cov}_{\rho}\{H,H\}\equiv\mathrm{Var}_{\rho}\{H\}-I_{\alpha}(\rho,H)\)[47; 48; 49].
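For completeness, the two quantities just defined can be evaluated numerically by diagonalizing \(\rho\); the sketch below is illustrative and assumes a Hermitian, positive semidefinite \(\rho\) given as a NumPy array.

```python
import numpy as np

def frac_power(rho, a):
    """rho**a for a positive semidefinite matrix, via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)
    return (vecs * vals**a) @ vecs.conj().T

def wyd_skew_information(rho, H, alpha=0.5):
    """I_alpha(rho, H) = tr(rho H^2) - tr(rho^alpha H rho^(1-alpha) H)."""
    return np.real(np.trace(rho @ H @ H)
                   - np.trace(frac_power(rho, alpha) @ H @ frac_power(rho, 1 - alpha) @ H))

def generalized_covariance(rho, H, alpha=0.5):
    """Cov_rho{H, H} = Var_rho{H} - I_alpha(rho, H)."""
    var = np.real(np.trace(rho @ H @ H) - np.trace(rho @ H)**2)
    return var - wyd_skew_information(rho, H, alpha)
```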
Hence, the QFI of \(\rho_{\beta}\) can be alternatively written as the covariance of the Hamiltonian \(H_{12}\equiv H\otimes\openone_{2}\) with respect to the conditional thermal separable state \(\widetilde{\rho}_{\beta}\) (3). By using the fact that \(\mathrm{Var}_{\widetilde{\rho}_{\beta}}\{H_{12}\}=\mathrm{Var}_{\rho_{\beta}} \{H\}\) and the asymmetry monotone of WYD skew information \(I_{\alpha}(\widetilde{\rho}_{\beta},H_{12})\geqslant I_{\alpha}(\rho_{\beta},H)\)[33; 34; 35], we obtain [37]
\[\mathcal{I}(\rho_{\beta};\beta)=\mathrm{Cov}_{\widetilde{\rho}_{\beta}}\{H_{1 2},H_{12}\}\leqslant\mathrm{Cov}_{\rho_{\beta}}\{H,H\}\,. \tag{10}\]
Note that when the chosen basis comprises the pointer states of the Hamiltonian, i.e., \(\rho_{\beta}=\rho_{\beta}^{\mathrm{eq}}\), we have \(I_{\alpha}(\rho_{\beta}^{\mathrm{eq}},H)=0\) because of \([\rho_{\beta}^{\mathrm{eq}},H]=0\). In this case, the upper bound becomes \(\mathrm{Var}_{\rho_{\beta}}\{H\}\), which is exactly the QFI of \(\rho_{\beta}\) for estimating \(\beta\), and Eq. (10) saturates.
These results demonstrate that the asymmetry of the CTS quantifies the ultimate precision limit of quantum thermometry. In our scenario, since the asymmetry resource contains the quantum coherence over the eigenstates of the Hamiltonian [33], from Eq. (10), the quantum coherence of the conditional thermal separable state \(\widetilde{\rho}_{\beta}\) can be engineered by choosing appropriate pointer states maximizing the QFI of \(\rho_{\beta}\).
_Resource-theoretic properties._ In the preceding section we have shown that the inherent asymmetry of the CTS can be exploited as a resource. Hence, we now continue with a more formal analysis of the resource-theoretic properties of the CTS arising from the state distinguishability with the Gibbs state. To this end, we consider a quantum system, which is initially prepared in a Gibbs state, \(\rho_{\beta}^{\mathrm{eq}}(0)\), and evolves under a time-dependent Hamiltonian \(H_{t}\) from \(t=0\) to \(t=\tau\) with a corresponding unitary \(U_{\tau}\). Our main interest is the state convertibility of the exact final state into the CTS by considering the information-theoretic properties of the averaged dissipative work.
Let us write \(\rho(\tau)\) for the exact final state, \(\rho(\tau)=U_{\tau}\rho_{\beta}^{\mathrm{eq}}(0)U_{\tau}^{\dagger}\). In this case, the exact averaged work \(\langle W\rangle\) is given by \(\langle W\rangle=\mathrm{tr}\left\{\rho(\tau)H_{\tau}\right\}-\mathrm{tr}\left\{\rho_{\beta}^{\mathrm{eq}}(0)H_{0}\right\}\). When \(\Delta F^{\mathrm{eq}}\) is the equilibrium free energy difference \(\Delta F^{\mathrm{eq}}\equiv-\beta^{-1}\ln(Z_{\beta}^{\mathrm{eq}}(\tau)/Z_{\beta}^{\mathrm{eq}}(0))\), the dissipative work is given by [50; 51]
\[\langle W_{\mathrm{dis}}\rangle\equiv\langle W\rangle-\Delta F^{\mathrm{eq}}= \beta^{-1}\,S(\rho(\tau)||\rho_{\beta}^{\mathrm{eq}}(\tau))\,. \tag{11}\]
We initially measure the system with \(H_{0}\equiv\sum_{k=1}^{d}E_{k}|E_{k}\rangle\!\langle E_{k}|\), where \(\{|E_{k}\rangle\}_{k=1}^{d}\) are the eigenstates of \(H_{0}\). The post-measurement state of the system after the evolution is \(U_{\tau}|E_{k}\rangle\). Thus, the CTS constructed from the post-measurement state for the final Hamiltonian \(H_{\tau}\equiv\sum_{k=1}^{d}E_{k}^{\prime}|E_{k}^{\prime}\rangle\!\langle E_{ k}^{\prime}|\) is given by [24]
\[\rho_{\beta}(\tau)=\sum_{k=1}^{d}\frac{e^{-\beta\langle E_{k}|U_{\tau}^{\dagger}H_{\tau}U_{\tau}|E_{k}\rangle}}{Z_{\beta}(\tau)}U_{\tau}|E_{k}\rangle\!\langle E_{k}|U_{\tau}^{\dagger}\,, \tag{12}\]
where \(Z_{\beta}(\tau)\) is again the conditional partition function.
In Ref. [24] it was shown that the dissipative work is lower bounded by
\[\langle W_{\mathrm{dis}}\rangle\geqslant\beta^{-1}S(\rho_{\beta}(\tau)||\rho_ {\beta}^{\mathrm{eq}}(\tau))\,. \tag{13}\]
Therefore, from Eqs. (11) and (13), we have
\[S(\rho(\tau)||\rho_{\beta}^{\mathrm{eq}}(\tau))\geqslant S(\rho_{\beta}(\tau) ||\rho_{\beta}^{\mathrm{eq}}(\tau))\,. \tag{14}\]
From Refs. [52; 53; 54], Eq. (13) becomes tight if and only if there exists a sufficiently large \(n_{0}\) and there exists a Gibbs-preserving map \(\mathcal{E}_{n}:\mathcal{H}^{\otimes n}\to\mathcal{H}^{\otimes n}\) for \(n\geqslant n_{0}\) such that
\[\mathcal{E}_{n}(\rho_{\beta}^{\mathrm{eq}}(\tau)^{\otimes n})=\rho_{\beta}^{ \mathrm{eq}}(\tau)^{\otimes n}\,,\ \mathcal{E}_{n}(\rho(\tau)^{\otimes n})=\Xi_{n}\,, \tag{15}\]
where for any \(\epsilon>0\), the quantum state \(\Xi_{n}\) satisfies
\[\frac{1}{2}\left\|\rho_{\beta}(\tau)^{\otimes n}-\Xi_{n}\right\|_{1}<\epsilon\,. \tag{16}\]
This means that the CTS is not just a mathematical object but can be achieved with an arbitrarily small error via the Gibbs-preserving map.
Also, from the quantum relative entropy of the exact final state \(\rho(\tau)\) with respect to the CTS \(\rho_{\beta}(\tau)\), \(S(\rho(\tau)||\rho_{\beta}(\tau))\), we obtain [37]
\[S(\rho(\tau)||\rho_{\beta}(\tau))+S(\rho_{\beta}(\tau)||\rho_{\beta}^{\mathrm{ eq}}(\tau))=S(\rho(\tau)||\rho_{\beta}^{\mathrm{eq}}(\tau))\,, \tag{17}\]
which we call _thermodynamic triangle equality_. Therefore, when \(S(\rho(\tau)||\rho_{\beta}^{\mathrm{eq}}(\tau))=S(\rho_{\beta}(\tau)||\rho_{ \beta}^{\mathrm{eq}}(\tau))\), we must have \(\rho(\tau)=\rho_{\beta}(\tau)\) (i.e. \(U_{\tau}^{\dagger}H_{\tau}U_{\tau}=H_{0}\)).
Furthermore, the symmetric divergence between \(\rho(\tau)\) and \(\rho_{\beta}(\tau)\), i.e., the quantum J-divergence defined as [55],
\[J(\rho(\tau),\rho_{\beta}(\tau))\equiv S(\rho(\tau)||\rho_{\beta}(\tau))+S(\rho _{\beta}(\tau)||\rho(\tau))\,, \tag{18}\]
is related to the concept of quantum heat [36]. This becomes obvious when the quantum J-divergence is written as [37]
\[J(\rho(\tau),\rho_{\beta}(\tau))=\beta\left(\langle W\rangle-\mathcal{W}_{0}( \rho_{\beta}(\tau))-\Delta E(\rho_{\beta}(\tau))\right). \tag{19}\]
Here, the internal energy change of \(\rho_{\beta}(\tau)\) is \(\Delta E(\rho_{\beta}(\tau))\equiv\operatorname{tr}\left\{\rho_{\beta}(\tau)\left(H_{\tau}-H_{0}\right)\right\}\), and the quantum ergotropy of \(\rho_{\beta}(\tau)\) with respect to \(H_{0}\) is \(\mathcal{W}_{0}(\rho_{\beta}(\tau))\equiv\operatorname{tr}\left\{\rho_{\beta}(\tau)H_{0}\right\}-\operatorname{tr}\left\{\Gamma\rho_{\beta}(\tau)\Gamma^{\dagger}H_{0}\right\}\), where \(\Gamma\equiv\sum_{k=1}^{d}|E_{k}\rangle\!\langle E_{k}|U_{\tau}^{\dagger}\) is the ergotropic transformation [56]. From the first law of thermodynamics, \(\beta^{-1}J(\rho(\tau),\rho_{\beta}(\tau))\) can be regarded as a heat, particularly the _quantum heat_, which has been discussed in the literature as the heat induced by the measurement [36]. In essence, the CTS \(\rho_{\beta}(\tau)\) is conditioned on the first energy measurement outcome; therefore, its relation to the quantum heat is consistent in this context. Also, note that when \(U_{\tau}\) is an adiabatic passage, then \(\rho(\tau)=\rho_{\beta}(\tau)=\rho_{\beta}^{\text{eq}}(\tau)\), so that \(\beta^{-1}J(\rho(\tau),\rho_{\beta}(\tau))=0\), which is consistent with zero heat exchange in the adiabatic process.
_Concluding remarks._ In conclusion, we have introduced the conditional thermal state, a thermal state conditioned on the pointer states, and demonstrated that it can outperform the Gibbs state in quantum thermometry as a useful resource state. We have also explored its resource-theoretic properties in terms of asymmetry, state convertibility, and its relation to quantum heat. From the practical perspective, these results could help experimentalists achieve better sensitivity in quantum thermometry under constraints on the implementable measurements. From the fundamental point of view, they provide an insightful, resource-theoretic perspective on the role of the conditional thermal state in quantum thermodynamic protocols. Finally, the present analysis also provides an additional _a posteriori_ motivation and justification for the one-time energy measurement approach to quantum work [24; 25; 26; 27; 28; 29].
We would like to thank Ryuji Takagi for helpful discussions. AS gratefully acknowledges startup funding supported by the University of Massachusetts, Boston. DOSP acknowledges the Brazilian funding agencies CNPq (Grant No. 307028/2019-4), FAPESP (Grant No. 2017/03727-0), and the Brazilian National Institute of Science and Technology of Quantum Information (INCT-IQ) Grant No. 465469/2014-0. S.D. acknowledges support from the John Templeton Foundation under Grant No. 62422.
|
2310.10103 | Navigation with Large Language Models: Semantic Guesswork as a Heuristic
for Planning | Navigation in unfamiliar environments presents a major challenge for robots:
while mapping and planning techniques can be used to build up a representation
of the world, quickly discovering a path to a desired goal in unfamiliar
settings with such methods often requires lengthy mapping and exploration.
Humans can rapidly navigate new environments, particularly indoor environments
that are laid out logically, by leveraging semantics -- e.g., a kitchen often
adjoins a living room, an exit sign indicates the way out, and so forth.
Language models can provide robots with such knowledge, but directly using
language models to instruct a robot how to reach some destination can also be
impractical: while language models might produce a narrative about how to reach
some goal, because they are not grounded in real-world observations, this
narrative might be arbitrarily wrong. Therefore, in this paper we study how the
``semantic guesswork'' produced by language models can be utilized as a guiding
heuristic for planning algorithms. Our method, Language Frontier Guide (LFG),
uses the language model to bias exploration of novel real-world environments by
incorporating the semantic knowledge stored in language models as a search
heuristic for planning with either topological or metric maps. We evaluate LFG
in challenging real-world environments and simulated benchmarks, outperforming
uninformed exploration and other ways of using language models. | Dhruv Shah, Michael Equi, Blazej Osinski, Fei Xia, Brian Ichter, Sergey Levine | 2023-10-16T06:21:06Z | http://arxiv.org/abs/2310.10103v1 | # Navigation with Large Language Models:
###### Abstract
Navigation in unfamiliar environments presents a major challenge for robots: while mapping and planning techniques can be used to build up a representation of the world, quickly discovering a path to a desired goal in unfamiliar settings with such methods often requires lengthy mapping and exploration. Humans can rapidly navigate new environments, particularly indoor environments that are laid out logically, by leveraging semantics -- e.g., a kitchen often adjoins a living room, an exit sign indicates the way out, and so forth. Language models can provide robots with such knowledge, but directly using language models to instruct a robot how to reach some destination can also be impractical: while language models might produce a narrative about how to reach some goal, because they are not grounded in real-world observations, this narrative might be arbitrarily wrong. Therefore, in this paper we study how the "semantic guesswork" produced by language models can be utilized as a guiding heuristic for planning algorithms. Our method, Language Frontier Guide (LFG), uses the language model to bias exploration of novel real-world environments by incorporating the semantic knowledge stored in language models as a search heuristic for planning with either topological or metric maps. We evaluate LFG in challenging real-world environments and simulated benchmarks, outperforming uninformed exploration and other ways of using language models.
navigation, language models, planning, semantic scene understanding
## 1 Introduction
Navigation in complex open-world environments is conventionally viewed as the largely geometric problem of determining collision-free paths that traverse the environment from one location to another. However, real-world environments possess _semantics_. Imagine navigating an airport to get to a terminal: your prior knowledge about the way such buildings are constructed provides extensive guidance, even if this particular airport is unfamiliar to you. Large language models (LLMs) and various language embedding techniques have been studied extensively as ways to interpret the semantics in user-specified _instructions_ (e.g., parsing "go to the television in the living room" and grounding it in a specific spatial location), but such models can provide much more assistance in robotic navigation scenarios by capturing rich semantic knowledge about the world. For instance, when looking for a spoon in an unseen house, the LLM can produce a "narrative" explaining why going towards a dishwasher may eventually lead you to find the spoon, and that the robot should prioritize that direction. This is similar to how a person might imagine different ways that the available subgoals might lie on the path to the goal, and start exploring the one for which this "narrative" seems most realistic. However, since language models are not _grounded_ in the real world, such models do not know the spatial layout of the robot's surroundings (e.g., there is a couch that the robot
needs to circumnavigate). To utilize the semantic knowledge in language models to aid in embodied tasks, we should not just blindly _follow_ the language model suggestions, but instead use them as proposals or navigational heuristics. In this paper, we study how that might be accomplished.
We study this idea in the context of visual navigation, where a robot is tasked with reaching a goal denoted by a natural language query \(q\) (see Fig. 1) in a _novel_ environment using visual observations. The robot has no prior experience in the target environment, and must explore the environment to look for the goal. While narratives generated by an LLM may not be sufficient for navigation by themselves, they contain useful cues that can be used to _inform_ or _guide_ the behavior of the underlying navigation stack for the language navigation task (e.g., by choosing between collision-free subgoal proposals that avoid the couch and lead to the ice tray). We show that this idea can be combined with frontier-based exploration, where the robot maintains a set of unvisited locations at its frontier, _grounds_ them in text using a vision-language model (VLM), and _scores_ the unvisited subgoals by using LLM reasoning.
We propose Language Frontier Guide, or LFG, a method for leveraging the reasoning capabilities of LLMs to produce a _search heuristic_ for guiding exploration of previously unseen real-world environments, combining the strengths of search-based planning with LLM reasoning. LFG is agnostic of the memory representation and planning framework, and can be combined with both (i) a geometric navigation pipeline, building a metric map of the environment for planning and using a hand-designed controller, as well as (ii) a learning-based navigation pipeline, building a topological map for planning and using a learned control policy, yielding a versatile system for navigating to open-vocabulary natural language goals. Our experiments show that LFG can identify and predict simple patterns in previously unseen environments to accelerate goal-directed exploration. We show that LFG outperforms other LLM-based approaches for semantic goal-finding in challenging real-world environments and on the Habitat ObjectNav benchmark.
## 2 Related Work
**Vision-based navigation:** Navigation is conventionally approached as a largely geometric problem, where the aim is to map an environment and use that map to find a path to a goal location [1]. Learning-based approaches can exploit patterns in the training environments, particularly by learning vision-based navigation strategies through reinforcement or imitation [2; 3; 4; 5; 6; 7]. Our work is also related to PONI [7], which uses a learned potential function to prioritize frontier points to explore; instead, we use a language model to rank these points. Notably, these methods do not benefit from prior semantic knowledge (e.g., from the web), and must rely entirely on patterns discovered from offline or online navigational data. Our aim is specifically to bring semantic knowledge into navigation, to enable robots to more effectively search for a goal in a new environment.
Figure 1: In contrast to methods that use LLM plans directly, Language Frontier Guide (LFG) uses a language model to _score_ subgoal candidates, and uses these scores to guide a heuristic-based planner.

**Semantic knowledge-guided navigation:** Prior knowledge about the semantics of indoor environments can provide significantly richer guidance. With the advent of effective open-vocabulary vision models [8; 9], some works have recently explored incorporating their semantic knowledge into models for navigation and other robotic tasks with the express aim of improving performance at _instruction following_[10; 11; 12; 13; 14]. In general within robotics, such methods have either utilized pre-trained vision-language representations [15; 16; 17], or used language models directly to make decisions [18; 19; 20; 21; 22; 23]. Our aim is somewhat different: while we also focus on language-specified goals, we are primarily concerned with utilizing the semantics in pre-trained language models to help a robot figure out how to actually reach the goal, rather than utilizing the language models to more effectively interpret a language instruction. While language models can output reasonable substeps for temporally extended tasks in some settings [24; 25], there is contradictory evidence about their ability to actually plan [26], and because they are unaware of the observations and layout in a particular environment, their "plans" depend entirely on the context that is provided to them. In contrast to prior work, our approach does not rely on the language model producing a _good_ plan, but merely a heuristic that can bias a dedicated planner to reach a goal more effectively. In this way, we use the language models more to produce _suggestions_ rather than actual plans.
**LLM-guided navigation:** Some works have sought to combine predictions from language models with either planning or probabilistic inference [27; 14], so as to not rely entirely on forward prediction from the language model to take actions. However, these methods are more aimed at filtering out _infeasible_ decisions, for example by disallowing actions that a robot is incapable of performing, and still focus largely on being able to interpret and process instructions, rather than using the language model as a source of semantic hints. In contrast, by incorporating language model suggestions as heuristics into a heuristic planner, our approach can completely override the language model predictions if they are incorrect, while still making use of them if they point the way to the goal.
Another branch of recent research [28; 29; 30] has taken a different approach to ground language models, by making it possible for them to read in image observations directly. While this represents a promising alternative approach to make language models more useful for embodied decision making, we believe it is largely orthogonal and complementary to our work: although vision-language models can produce more grounded inferences about the actions a robot should take, they are still limited only to _guessing_ when placed in unfamiliar environments. Therefore, although we use unbounded language-only models in our evaluation, we expect that our method could be combined with vision-language models easily, and would provide complementary benefits.
## 3 Problem Formulation and Overview
Our objective is to design a high-level planner that takes as input a natural language query \(q\) (e.g., "find the bedside table"), explores the environment in search of the queried object, and commands a low-level policy to control a robotic agent. To do this, we maintain an episodic memory of the environment \(\mathcal{M}\) in the form of either (i) a 2D map of the environment, where grid cells contain information about occupancy and semantic labels, or (ii) a topological map of the environment, where nodes contain images captured by the robot and corresponding object labels. One way to solve this task is Frontier-Based Exploration (FBE) [31], where a robot maintains a set of unexplored _frontiers_ in its memory, and explores randomly to reach the goal. _Can we do better with access to LLMs?_
We distill the language-guided exploration task to a heuristic-based search problem, where the robot must propose unvisited subgoals or waypoints, score them, and then use a search algorithm (e.g., A*) to plan a path to the goal. Thus, at the core of LFG is the task of _scoring_ subgoal proposals. Formally, let's assume we are given the task query \(q\), a partially explored environment stored in \(\mathcal{M}\), and a set \(\mathcal{S}\) of \(n\) textual subgoal proposals \(s_{1},s_{2},\ldots,s_{n}\) (e.g., "a corner with a dishwasher and refrigerator", "a hallway with a door", etc.). Our goal is to score these subgoal proposals with \(p(s_{i},q,\mathcal{M})\), the probability that the candidate \(s_{i}\in\mathcal{S}\) will lead to the goal \(q\) given the current state of the environment, described through \(\mathcal{M}\).
We posit that we can leverage the semantic reasoning capabilities of LLMs by prompting them to construct narratives about which semantic regions of the environment are most (and least) likely to lead to the goal. While the narrative itself might be ungrounded, since the LLM knows very little about the environment, reasoning over objects and semantic regions of the environment often generalizes very broadly. For example, even without seeing a new apartment, a human would _guess_
that the dining area is close to the kitchen. Hence, rather than directly using LLM scores for planning [23; 25], we incorporate them as a goal-directed _heuristic_ to inform the search process. This has two distinct advantages: (i) when the LLM is right, it nudges the search towards the goal, and when it is wrong (or uncertain), we can still default to the underlying FBE algorithm, allowing recovery from LLM failures, and (ii) it allows us to combine the signal from LLMs with other scores that may be more grounded, e.g. distance to subgoals, making the system more versatile.
## 4 LFG: Scoring Subgoals by Polling LLMs
Our aim in this section is to derive a scoring function from LLMs that takes a textual description of subgoal candidates \(s_{i}\) and the goal query \(q\) as inputs, and predicts task-relevant probability \(p(s_{i},q,\mathcal{M})\), conditioned on the episodic memory \(\mathcal{M}\). While we may obtain this from next-token likelihoods (or "logprobs"), they do not represent the desired task-relevant probability \(p(s_{i},q,\mathcal{M})\), and fail to assign similar scores, say, to different subgoals that are semantically similar but have different tokenizations (see our experiments in Section 6 for a comparison). Furthermore, most capable LLMs of today are available through APIs that do not expose the ability to query logprobs.1 And lastly, even if reliable logprobs were available, they are incompatible with chain-of-thought prompting [32], which we find to be crucial to success in spatial reasoning.
Footnote 1: Most notably, OpenAI’s Chat API for GPT-3.5 and GPT-4, Google’s PaLM API, and Anthropic’s Claude API all do not support logprobs.
To overcome these challenges, LFG uses a novel approach to extract task-relevant likelihoods from LLMs. Given candidate subgoal images, LFG uses a VLM to obtain a textual subgoal descriptor \(s_{i}\), which must be scored with the LLM. LFG _polls_ the LLMs by sampling the most likely subgoal \(n_{s}\) times, conditioned on a task-relevant prompt. We then use these samples to empirically estimate the likelihood of each subgoal. To get informative and robust likelihood estimates, we use a chain-of-thought prompting (CoT) technique [32], to improve the quality and interpretability of the scores, and use a combination of positive and negative prompts to gather unbiased likelihood estimates. Figure 2 outlines our scoring technique, with the full prompt provided in Appendix B. We now describe the details of our scoring technique.
**Structured query:** We rely on in-context learning by providing an example of a structured query-response pair to the LLM, and ask it to pick the most likely subgoal that satisfies the query. To sample a subgoal from \(\mathcal{S}\) using a language model, we prompt it to generate a structured response, ending with "Answer: i". This structure allows us to always sample a _valid_ subgoal, without having to ground LLM generations in the environment [24].
**Positives and negatives:** We find that only using positive prompts (e.g., "which subgoal is most likely to reach the goal") leads to likelihood estimates being uninformative for cases where the LLM is not confident about any subgoal. To overcome this, we also use negative prompts (e.g., "which subgoal is least likely to be relevant for the goal"), which allows us to score subgoals by eliminating/downweighting subgoals that are clearly irrelevant. We then use the difference between the positive and negative likelihoods to rank subgoals.
Figure 2: LFG scores subgoals with an empirical estimate of the likelihoods by sampling an LLM \(n_{s}\) times with both positive and negative prompts, and uses chain-of-thought to obtain reliable scores. These scores are used by a high-level planner as _heuristics_ to guide search. For full prompts, see Appendix B.
**Chain-of-thought prompting:** A crucial component of getting interpretable and reliable likelihood estimates is to encourage the LLM to _justify_ its choice via chain-of-thought prompting. As demonstrated in prior works, CoT elicits interpretability and reasoning capabilities in LLMs, and while we don't explicitly use the generated reasoning in our approach (a promising direction for future work), we find that CoT improves the quality and consistency of the likelihood estimates. It also helps maintain interpretability, making it easier to understand why the LFG-equipped agent takes certain decisions.
In summary, we score subgoals by sampling the LLM multiple times and empirically estimating the likelihood of each subgoal. We use a combination of positive and negative prompts to get unbiased likelihood estimates, and use chain-of-thought prompting to improve the quality and interpretability of the scores (Figure 2). We will now discuss how these scores can be incorporated into a navigation system as _search heuristics_.
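To make the procedure concrete, the sketch below shows one way to implement the polling step; `query_llm` stands in for whatever chat API is available, and the prompt text and answer parsing are illustrative rather than the exact prompts used here (those are given in Appendix B).

```python
import re
from collections import Counter

def poll_subgoal_scores(subgoals, goal, query_llm, n_samples=8, positive=True):
    """Empirically estimate subgoal likelihoods by sampling the LLM n_samples
    times and counting which subgoal index it answers with. `query_llm(prompt)
    -> str` is a placeholder for the chat API; the prompt is illustrative."""
    listing = "\n".join(f"{i}. {s}" for i, s in enumerate(subgoals))
    ask = "most likely to lead to" if positive else "least likely to be relevant for"
    prompt = (f"You are exploring a new environment looking for a {goal}.\n"
              f"Candidate subgoals:\n{listing}\n"
              f"Reason step by step, then state which subgoal is {ask} the goal, "
              f"ending your reply with 'Answer: <index>'.")
    votes = Counter()
    for _ in range(n_samples):
        match = re.search(r"Answer:\s*(\d+)", query_llm(prompt))
        if match:
            votes[int(match.group(1))] += 1
    total = max(sum(votes.values()), 1)
    return [votes[i] / total for i in range(len(subgoals))]
```

Calling this once with `positive=True` and once with `positive=False` yields the two empirical distributions that play the roles of \(\text{LLM}_{\text{pos}}\) and \(\text{LLM}_{\text{neg}}\) in the next section.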
## 5 LLM Heuristics for Goal-Directed Exploration
Given the LLM scoring pipeline outlined in the previous section, our key insight is that we can incorporate these scores in a search-based planning pipeline to _heuristically_ guide the search process. We instantiate LFG using frontier-based exploration (FBE) and LLM scores generated via polling.
**FBE:** This method maintains a "map" of the seen parts of the environment, which may be geometric [33] or topological [34], and a frontier separating it from the unexplored parts. By navigating to the _nearest_ point of the frontier, the robot explores new areas of the environment until it finds the goal object or completes exploration without finding it. A standard FBE implementation is presented in Algorithm 2 in black text. The robot maintains either a 2D metric map of its surroundings, or a topological map whose nodes are comprised of the robot's visual observations and edges represent paths taken in the environment. Additionally, we also store semantic labels corresponding to objects detected in the robot's observations, which are used to ground the observations in text.
Figure 3: Overview of LFG for language-guided exploration. Based on the pose and observations, LFG builds an episodic memory (topological or metric), which is used by the heuristic-based exploration policy to rank adjacent clusters, or subgoal frontiers. Navigation to the subgoal frontier is completed by a low-level policy.

At a fixed re-planning rate, the high-level planner computes its frontier \(f_{i}\) (Line 10), and picks the frontier point that is _closest_ to the current location, i.e., maximizing the distance score (Line 16), and then navigates to this frontier (Line 21). At any point in this process, if the agent's semantic detector detects an object of the same category as the query \(q\), it navigates directly to this object and the trajectory ends.
**Incorporating LLM scores:** Our method, LFG, extends FBE by using an additional search heuristic obtained by polling LLMs for semantic "scores". The modifications to FBE are marked in purple in Algorithm 2. After enumerating the frontiers, LFG uses the semantic labels from a VLM [35] to _ground_ subgoal images at each frontier \(f_{i}\) (Line 11). These images are converted into textual strings, and form the subgoal candidates \(s_{i}\) that can be scored using Algorithm 1. We associate each frontier point \(f_{i}\) with the nearest object cluster \(c_{i}\) (Line 17), and compute LLM scores for each point as follows:
\[h(f_{i},q)=w_{p}\cdot\text{LLM}_{\text{pos}}(c_{i})-w_{n}\cdot\text{LLM}_{\text {neg}}(c_{i})-\text{dist}(f_{i},p), \tag{1}\]
where \(p\) is the current position of the agent, and \(w_{p},w_{n}\) are hyperparameters (see Appendix A.1). We then choose the frontier with the highest score to be the next subgoal (Line 21), navigate to it using a local controller, and repeat the planning process. Algorithm 2 outlines the general recipe for integrating LLM scores as a planning heuristic. Please see Appendix A for specific instantiations of this system with geometric and topological maps, and more details about the referenced subroutines.
```
Data: o_0, goal language query q
 1  subgoal ← None
 2  while not done do
 3      o_t ← getObservation()
 4      episodicMemory ← mappingModule(o_t)
 5      if q in semanticMap then
 6          subgoal ← getLocation(episodicMemory, q)

 8      else
 9          if numSteps % τ == 0 then
10              // replanning
11              location ← getCurrentLocation()
12              frontier ← getFrontier(episodicMemory)
13              objectClusters ← getSemanticLabels(episodicMemory, frontier)
14              LLM_pos, LLM_neg ← ScoreSubgoals(objectClusters)
15              scores ← []
16              for point in frontier do
17                  distance ← DistTo(location, point)
18                  scores[point] ← -distance
19                  closestCluster ← getClosestCluster(objectClusters, point)
20                  i ← clusterID(closestCluster)
21                  if dist(closestCluster, point) < δ then
22                      // incorporate language scores
23                      scores[point] ← w_p * LLM_pos[i] - w_n * LLM_neg[i] - distance

25              subgoal ← argmax(scores)

27          numSteps ← numSteps + 1
28      goTo(subgoal)
```
**Algorithm 2:** Instantiating LFG for Goal-Directed Exploration
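In Python, the frontier-scoring step of Algorithm 2 (i.e., Eq. (1)) might look as follows; the weights \(w_p\), \(w_n\) and the threshold \(\delta\) are hyperparameters from Appendix A.1, so the values below are placeholders rather than the tuned settings.

```python
import numpy as np

def select_frontier(frontier_pts, robot_pos, llm_pos, llm_neg,
                    cluster_of, cluster_dist, w_p=1.0, w_n=1.0, delta=2.0):
    """Score each frontier point with Eq. (1) and return the index of the best one.
    llm_pos / llm_neg are the polled likelihoods per object cluster; cluster_of[i]
    is the id of the cluster nearest to frontier i and cluster_dist[i] its distance.
    w_p, w_n, delta are illustrative placeholders, not the tuned values."""
    scores = []
    for i, pt in enumerate(frontier_pts):
        dist = float(np.linalg.norm(np.asarray(pt) - np.asarray(robot_pos)))
        score = -dist                                # plain FBE: closest frontier wins
        if cluster_dist[i] < delta:                  # grounded frontier: add LLM heuristic
            c = cluster_of[i]
            score = w_p * llm_pos[c] - w_n * llm_neg[c] - dist
        scores.append(score)
    return int(np.argmax(scores)), scores
```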
## 6 System Evaluation
We now evaluate the performance of LFG for the task of goal-directed exploration in real-world environments, and benchmark its performance against baselines. We instantiate two systems with LFG: a real-world system that uses a topological map and a learned control policy, and a simulated agent that uses a geometric map and a deterministic control policy. Our experiments show that both these systems outperform existing LLM-based exploration algorithms by a wide margin, owing to the high quality scores incorporated as search heuristics.
### Benchmarking ObjectNav Performance
We benchmark the performance of LFG for the task of object-goal navigation on the Habitat ObjectNav Challenge [36], where the agent is placed into a simulated environment with photo-realistic graphics, and is tasked with finding a query object from one of 10 categories (e.g., "toilet", "bed", "couch" etc.). The simulated agent has access to egocentric RGBD observations and accurate pose information. We run 10 evaluation episodes per scene and report two metrics: the average success rate, and success weighted by optimal path length (SPL), the default metrics for the benchmark. Since LFG requires no training, we do not use the training scenes from HM3D.
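For reference, SPL is computed following the standard definition from the embodied-navigation literature (success weighted by the ratio of shortest-path length to actual path length); the sketch below states that definition, which comes from the broader benchmark literature rather than from this paper.

```python
import numpy as np

def spl(successes, shortest_lengths, path_lengths):
    """SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i), with S_i the binary success,
    l_i the shortest-path length to the goal, and p_i the length actually traveled."""
    s = np.asarray(successes, dtype=float)
    l = np.asarray(shortest_lengths, dtype=float)
    p = np.asarray(path_lengths, dtype=float)
    return float(np.mean(s * l / np.maximum(p, l)))

print(spl([1, 1, 0], [5.0, 8.0, 6.0], [7.5, 8.0, 14.0]))  # ~0.56
```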
We compare to three classes of published baselines: (i) learning-based baselines that learn navigation behavior from demonstrations or online experience in the training scenes [37] on up to 2.5B frames of experience, (ii) search-based baselines [33; 38], and (iii) LLM-based baselines that do not use the training data directly, but leverage the semantic knowledge inside foundation models to guide embodied tasks [18; 39].
Evaluating LFG on the HM3D benchmark, we find that it significantly outperforms search and LLM-based baselines (Table 1). Greedy LLM struggles on the task due to several LLM planning failures, causing the episodes to fail. LFG significantly outperforms the vanilla FBE baseline by leveraging semantic priors from LLMs to score subgoals intelligently. Comparing to learning-based baselines, we find that LFG outperforms most of them and closely matches the state-of-the-art on the task, proving the competence of our polling and heuristic approach. Figure 4 shows an example of the LFG agent successfully reaching the goal by using chain-of-thought and negative prompting.
L3MVN [39], which uses a combination of LLMs and search, performs slightly better than FBE, but is unable to fully leverage the semantics in LLMs. While being similar to LFG, it suffers from two key limitations: (i) it uses a small language model (GPT-2), which likely does not contain strong semantic priors for the agent to leverage, and (ii) it uses a simple likelihood-based scoring scheme, which we show below is not very effective. Another closely related work, LGX [18], uses a variant of greedy LLM scoring, and hence fails to perform reliably on the benchmark.

Figure 4: **Qualitative example of a negative score influencing the agent's decision.** LFG discourages the agent from exploring the bedroom and living room, leading to fast convergence toward the goal, whereas FBE fails. The CoT reasoning given by the LLM is shown in purple, justifying its score.

Figure 5: **Qualitative example of LFG in real.** LFG reasons about floor plans in the environment it is searching. In this apartment experiment, LFG believes that a bathroom is more likely to be found near a bedroom rather than a kitchen, and guides the robot towards the bedroom, successfully reaching the goal.
Probing deeper into the strong performance of LFG, we ablated various components of our scoring pipeline and studied the change in performance. Note that LGX (Greedy) and L3MVN (No CoT, Logprobs) can be seen as ablations of LFG. Table 2 shows that modifying both the prompting and scoring mechanisms used by LFG leads to large drops in performance. Most notably, scoring via polling (\(+7.8\%\)) and CoT (\(+6.6\%\)) are both essential to the strong performance of LFG. Furthermore, we find that using only positive prompts also hurts performance (\(-4.7\%\)). Popular approaches for using LLMs for planning are significantly outperformed by LFG: Greedy (\(-14.5\%\)) and Logprobs (\(-8.5\%\)).
**Setup:** For these experiments, we mimic the semantic mapping pipeline of the best-performing baseline on the benchmark [33; 38], and integrate LFG with the geometric map. The simulated agent builds a 2D semantic map of its environment, where grid cells represent both occupancy and semantic labels corresponding to objects detected by the agent. Prior work has shown that state-of-the-art vision models, such as DETIC, work poorly in simulation due to rendering artifacts [33]; hence, we use ground-truth semantic information for all simulated baselines to analyze navigation performance under perfect perception.
### Real-world Exploration with LFG
To show the versatility of the LFG scoring framework, we further integrated it with a heuristic-based exploration framework that uses topological graphs for episodic memory [34]. We compare against two published baselines: a language-agnostic FBE baseline [40], and an LLM-based baseline that uses the language model to greedily pick the frontier [18].
We evaluate this system in two challenging real-world environments: a cluttered cafeteria and an apartment building (shown in Figures 3 and 5). In each environment, the robot is tasked to reach an object described by a textual string (e.g., "kitchen sink" or "oven"), and we measure the success rate and efficiency of reaching the goal. Episodes that take longer than 30 minutes are marked as failure. While we only tested our system with goal strings corresponding to the 20,000 classes supported by our object detector [35], this can be extended to more flexible goal specifications with the rapid progress in vision-language models.
We find that the complexity of real-world environments causes the language-agnostic FBE baseline to _time out_, i.e., the robot is unable to reach the goal by randomly exploring the environment. Both LLM baselines are able to leverage the stored semantic knowledge to guide the exploration in novel environments, but LFG achieves 16% better performance. Figure 5 shows an example rollout in a real apartment, where the robot uses LFG to reach the toilet successfully.
**Setup:** We instantiate LFG in the real-world using a previously published topological navigation framework [34] that builds a topological map of the environment, where nodes correspond to the robot's visual observations and edges correspond to paths traversed in the environment. This system relies on omnidirectional RGB observations and does not attempt to make a dense geometric map of the environment. To obtain "semantic frontiers" from the omnidirectional camera, we generate \(n_{v}=4\)_views_ and run an off-the-shelf object detector [35] to generate rich semantic labels describing objects in these views. The robot maintains a topological graph of these views and semantic labels, and picks the frontier view with the highest score (Algorithm 2, Line 21) according to LFG. The robot then uses a Transformer-based policy [6; 41] to reach this subgoal. For more implementation details, see Appendix A.3.
## 7 Discussion
We presented LFG, a method for utilizing language models for _semantic guesswork_ to help navigate to goals in new and unfamiliar environments. The central idea in our work is that, while language models can bring to bear rich semantic understanding, their ungrounded inferences about how to perform navigational tasks are better used as suggestions and heuristics rather than plans. We formulate a way to derive a heuristic score from language models that we can then incorporate into a planning algorithm, and use this heuristic planner to reach goals in new environments more effectively. This way of using language models benefits from their inferences when they are correct, and reverts to a more conventional unguided search when they are not.
**Limitations and future work:** While our experiments provide a validation of our key hypothesis, they have a number of limitations. First, we only test in indoor environments in both sim and real, yet the role of semantics in navigation likely differs drastically across domains - e.g., navigating a forest might implicate semantics very differently than navigating an apartment building. Exploring the applicability of semantics derived from language models in other settings would be another promising and exciting direction for future work. Second, we acknowledge that multiple requests to cloud-hosted LLMs with CoT are slow and require an internet connection, severely limiting the extent of real-world deployment of the proposed method. We hope that ongoing advancements in quantizing LLMs for edge deployment and fast inference will address this limitation.
#### Acknowledgments
This research was partly supported by AFOSR FA9550-22-1-0273 and DARPA ANSR. The authors would like to thank Bangguo Yu, Vishnu Sashank Dorbala, Mukul Khanna, Theophile Gervet, and Chris Paxton, for their help in reproducing baselines. The authors would also like to thank Ajay Sridhar, for supporting real-world experiments, and Devendra Singh Chaplot, Jie Tan, Peng Xu, and Tingnan Zhang, for useful discussions in various stages of the project.
|
2303.08674 | Speech Signal Improvement Using Causal Generative Diffusion Models | In this paper, we present a causal speech signal improvement system that is
designed to handle different types of distortions. The method is based on a
generative diffusion model which has been shown to work well in scenarios with
missing data and non-linear corruptions. To guarantee causal processing, we
modify the network architecture of our previous work and replace global
normalization with causal adaptive gain control. We generate diverse training
data containing a broad range of distortions. This work was performed in the
context of an "ICASSP Signal Processing Grand Challenge" and submitted to the
non-real-time track of the "Speech Signal Improvement Challenge 2023", where it
was ranked fifth. | Julius Richter, Simon Welker, Jean-Marie Lemercier, Bunlong Lay, Tal Peer, Timo Gerkmann | 2023-03-15T14:58:40Z | http://arxiv.org/abs/2303.08674v1 | # Speech Signal Improvement Using Causal Generative Diffusion Models
###### Abstract
In this paper, we present a causal speech signal improvement system that is designed to handle different types of distortions. The method is based on a generative diffusion model which has been shown to work well in scenarios with missing data and non-linear corruptions. To guarantee causal processing, we modify the network architecture of our previous work and replace global normalization with causal adaptive gain control. We generate diverse training data containing a broad range of distortions. This work was performed in the context of an "ICASSP Signal Processing Grand Challenge" and submitted to the non-real-time track of the "Speech Signal Improvement Challenge 2023", where it was ranked fifth.
Julius Richter, Simon Welker, Jean-Marie Lemercier, Bunlong Lay, Tal Peer, Timo Gerkmann

Signal Processing (SP), Universität Hamburg, Germany

**Index Terms**: Speech signal improvement, universal speech enhancement, diffusion models, causal processing
## 1 Introduction
High-quality voice communication requires clear audio and natural-sounding speech. However, there are numerous factors that can degrade speech signals including background noise, room acoustics, transmission errors, limited bandwidth, and codec artifacts. Prior works on improving speech signals have typically studied each type of distortion separately. However, there has been some recent interest in developing universal approaches that address a broader range of distortions. These approaches typically use generative modeling, which works particularly well in scenarios with missing data and non-linear corruptions [1].
In this paper, we present our diffusion-based speech enhancement method submitted to the non-real-time track of the "Speech Signal Improvement Challenge 2023" [2] as part of the "ICASSP Signal Processing Grand Challenges". The proposed model extends our previous work [3], incorporating significant modifications in the network architecture to meet the causality requirement and to output super wideband speech. Furthermore, we devise a data corruption approach to generate diverse training data resembling distortions observed in the blind data. Strong variations in loudness are compensated by using causal adaptive gain control.
In the challenge's subjective test, our proposed method yields a final score of 0.445 using ITU-T P863.2 and P.804 with mean opinion scores (MOSs) of 2.570 (Overall), 2.998 (Signal), 3.765 (Noise), 3.330 (Coloration), 3.674 (Discontinuity), 3.435 (Loudness) and 4.241 (Reverberation). In the subjective test based on ITU-T P835 our method yields a final score of 0.495 with MOSs of 2.842 (Overall), 3.119 (Signal), and 3.682 (Background). It is interesting to note that, as our approach is generative in nature, it is capable of achieving excellent performance for moderately distorted inputs, while it may generate phonetic confusions and insertions if the input distortions are too strong.
## 2 Proposed System
Fig. 1 shows an overview of our proposed system for speech signal improvement at training and inference time. The diffusion process (Sec. 2.1) is the core of the method and is accompanied by other processing blocks including the short-time Fourier transform (STFT), causal automatic gain control (AGC) (Sec. 2.3), and a data corruption model \(D\) used to simulate various distortion types (Sec. 3.1).
### Diffusion process
The diffusion process \(\{\mathbf{x}_{t}\}_{t=0}^{T}\) for speech enhancement is essentially the same as in our previous works [3, 4], where \(\mathbf{x}_{t}\) is the state of the process at time step \(t\). During training, the forward process moves from clean speech \(\mathbf{x}_{0}:=\mathbf{x}\) to corrupted speech \(\mathbf{y}\), while increasing amounts of Gaussian noise are gradually added. At inference, the corresponding reverse process [5] is used to progressively remove the corruption and therefore generate an estimate of \(\mathbf{x}\) starting from \(\mathbf{x}_{T}\sim\mathcal{N}_{\text{C}}(\mathbf{x}_{T};\mathbf{y},\sigma_{t}^ {2})\). This reverse process involves the _score function_ of \(\mathbf{x}_{t}\), i.e., the gradient of its log-probability. Functioning as a prior for clean speech, it is unavailable at inference time and is thus approximated by a trained deep neural network (DNN) \(\mathbf{s}_{\theta}\) called the _score model_. We make no further adaptations to this process as it is defined for every STFT bin independently, and is thus inherently causal as long as \(\mathbf{s}_{\theta}\) is implemented with a causal network architecture.
### Network architecture
We use a modified version of NCSN++ [5] for score estimation. The network is an encoder-decoder architecture based on 2D convolutions,
Figure 1: Proposed system: (a) At training, a corruption model \(D\) generates \(y\) from clean speech \(x\). The forward diffusion moves from the clean spectrogram \(\mathbf{x}_{0}\) to the corrupted \(\mathbf{y}\), while Gaussian noise is gradually added. The score model \(\mathbf{s}_{\theta}\) is learned using score matching. (b) At inference, the data is normalized with causal adaptive gain control (AGC) and the reverse diffusion uses the trained score model.
taking complex spectrograms \(\mathbf{x}_{t}\) and \(\mathbf{y}\) and the process time \(t\) as input. Real and imaginary parts are considered as separate channels and the convolutions are performed over time and frequency.
We apply the following modifications to the architecture to meet the causality constraint: **(1)** Padding in the 2D convolutions is modified so that the convolution along the time-dimension is causal; **(2)** Batch normalization is replaced with cumulative group normalization, aggregating statistics recursively; **(3)** Downsampling in the time dimension is performed with strided convolutions and corresponding upsampling with transposed strided convolutions. Up- and downsampling in the frequency dimension are realized with finite impulse response filters, as in [3]. **(4)** All attention layers as well as the progressive growing path are removed.
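A minimal PyTorch sketch of modification (1) is given below; it reflects our reading of the description (left-only padding along the time axis of a 2D convolution over frequency and time) and is not the authors' released implementation. The class name `CausalConv2d` is introduced here only for illustration.

```python
import torch.nn as nn

class CausalConv2d(nn.Module):
    """2D convolution over (freq, time) that is causal along the time axis:
    the time dimension is padded on the left only, so frame t never sees frame t+1."""
    def __init__(self, in_ch, out_ch, kernel_size=(3, 3)):
        super().__init__()
        k_f, k_t = kernel_size
        # nn.ZeroPad2d takes (left, right, top, bottom) for the (time, freq) axes
        self.pad = nn.ZeroPad2d((k_t - 1, 0, k_f // 2, k_f // 2))
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size)

    def forward(self, x):        # x: (batch, channels, freq, time)
        return self.conv(self.pad(x))
```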
### Automatic gain control
To match the unit-scale training condition of the score model, i.e., normalization of corrupted speech \(y\) (see Fig. 1a), we use a causal AGC system before feeding the mixture to the diffusion process, and again after enhancement to maximize loudness, as part of the signal improvement task (see Fig. 1b). To this end, we recursively track the maximum value per magnitude frame averaged over the frequency bins to normalize the spectrogram in a causal manner. We start tracking when speech activity is first detected, using a voice activity detection method based on a causal speech presence probability estimator [6]. The speech probability is fed through an ideal low-pass filter to avoid having high-frequency noise bursts produce false positives. Voice activity is then assumed if the speech presence probability is higher than a threshold \(\tau=0.8\), for a duration of \(100\)ms. When discovering a larger maximum than the previous one, we smooth the normalization with an exponential ramp going from the old value to the new one. Finally, to avoid clipping, we use a causal compressor from the pedalboard library1.
Footnote 1: [https://github.com/spotify/pedalboard](https://github.com/spotify/pedalboard)
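A minimal sketch of the gain tracking described above, operating on magnitude frames with precomputed voice-activity flags. The single exponential ramp factor is a simplification; the threshold and low-pass logic of the actual system are omitted, and all names are illustrative.

```python
import numpy as np

def causal_agc(mag_frames, vad_flags, ramp=0.9):
    """mag_frames: (T, F) magnitude spectrogram; vad_flags: (T,) booleans.
    Recursively track the running maximum of the per-frame mean magnitude,
    starting at the first active frame, and divide each frame by it."""
    gains = np.ones(len(mag_frames))
    running_max, active = None, False
    for t, (frame, voiced) in enumerate(zip(mag_frames, vad_flags)):
        active = active or voiced
        if not active:
            continue                                  # leave frames before speech onset unscaled
        level = frame.mean()
        if running_max is None:
            running_max = level
        elif level > running_max:
            # smooth the jump to the newly discovered maximum with an exponential ramp
            running_max = ramp * running_max + (1.0 - ramp) * level
        gains[t] = 1.0 / max(running_max, 1e-8)
    return mag_frames * gains[:, None]
```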
## 3 Experimental setup
### Dataset
We use the VCTK corpus [7] as the clean speech dataset and resample all utterances from 48kHz to our processing sampling frequency 32kHz. Using the audiomentations library2, we simulate several corruptions observed in the blind data, namely stationary and non-stationary noise, reverberation, clipping, gain reduction, packet loss and lossy speech coding (the last two implemented by us). For each clean utterance, a random corruption chain is chosen among plausible candidates, e.g. \(\{\mathrm{Reverb}\rightarrow\mathrm{Noise}\rightarrow\mathrm{PacketLoss}\}\), with the parameters of each corruption chosen randomly as well. We use the QUT corpus [8] as the environmental noise dataset and take room impulse responses from the DNS challenge [9] for reverberation.
Footnote 2: [https://github.com/iver56/audiomentations](https://github.com/iver56/audiomentations)
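Schematically, the corruption pipeline draws one chain per utterance and applies its transforms in order. In the sketch below the transform functions are placeholders rather than the actual audiomentations classes, and the candidate chains are illustrative.

```python
import random

# Placeholder corruption stubs: each maps a waveform (numpy array) to a corrupted waveform.
def add_noise(x):     return x   # stub: stationary / non-stationary noise
def add_reverb(x):    return x   # stub: convolution with a room impulse response
def clip_signal(x):   return x   # stub: amplitude clipping
def drop_packets(x):  return x   # stub: packet loss

# Plausible chains, mirroring the kinds of pipelines observed in the blind data.
CANDIDATE_CHAINS = [
    [add_reverb, add_noise, drop_packets],
    [add_noise, clip_signal],
    [add_reverb, add_noise],
]

def corrupt(clean):
    chain = random.choice(CANDIDATE_CHAINS)        # pick one corruption chain at random
    y = clean
    for transform in chain:
        y = transform(y)                           # parameters are drawn inside each transform
    return y
```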
### Hyperparameters and training configuration
All processing is performed at \(f_{s}=32\)kHz and we upsample the processed files to the original \(48\)kHz frequency. We use an STFT with a 638-point Hann window and \(160\)-point hop, which guarantees the global latency to be below \(20\)ms, as we use purely causal processing. We set the lowest two frequency bins of the spectrogram to zero, to remove the DC offset and low frequency noise. The diffusion process hyperparameters are identical to those in [3]. We train the score model \(\mathbf{s}_{\theta}\) with the denoising score matching objective [5] using the Adam optimizer with mini-batch size of 8 and exponential moving average over the parameters with a factor of \(0.999\). Training takes around two days on two NVIDIA RTX A6000 GPUs.
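As a quick sanity check of the stated latency budget (window length and hop divided by the sampling rate):

```python
fs = 32000            # Hz
win = 638             # Hann window length in samples
hop = 160             # hop size in samples
print(1000 * win / fs)   # ~19.9 ms of algorithmic latency, below the 20 ms budget
print(1000 * hop / fs)   # 5 ms between successive output frames
```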
## 4 Results
In addition to the challenge's subjective evaluation [2], we use DNSMOS P.835 [10] to evaluate our method on the 500 files in the blind set. Fig. 2 depicts histograms of DNSMOS scores: (a) and (b) show the speech quality (SIG) and overall quality (OVRL) of the improved files compared to the corrupted files; (c) shows SIG with and without AGC.
Computational complexity: The network has about 55.7M parameters, and inferring one frame on a CPU (Intel Core i7-7800X @ 3.50GHz) takes 0.89 s. Please note that the model is designed to run on a GPU, on which inference is orders of magnitude faster (0.02 s per frame with an NVIDIA GeForce RTX 2080 Ti).
## 5 Conclusion
In this work, we have built upon our previous work on diffusion-based speech enhancement, with the novel contributions of making the system causal and training the model on different distortion types. The proposed method was ranked fifth in the non-real-time track of the "Speech Signal Improvement Challenge 2023".
|
2308.14891 | Mass formula for non-ordinary curves in one dimensional families | This paper is about one dimensional families of cyclic covers of the
projective line in positive characteristic. For each such family, we study the
mass formula for the number of non-ordinary curves in the family. We prove two
equations for the mass formula: the first relies on tautological intersection
theory; and the second relies on the $a$-numbers of non-ordinary curves in the
family. Our results generalize the Eichler--Deuring mass formula for
supersingular elliptic curves; they also generalize some theorems of Ibukiyama,
Katsura, and Oort about supersingular curves of genus $2$ that have an
automorphism of order $3$ or order $4$. We determine the mass formula in many
new cases, including linearized families of hyperelliptic curves of every genus
and all families of cyclic covers of the projective line branched at four
points.
keywords: curve, hyperelliptic curve, cyclic cover, Jacobian, mass formula,
cycle class, tautological ring, Hodge bundle, intersection theory, Frobenius,
non-ordinary, $p$-rank, $a$-number. | Renzo Cavalieri, Rachel Pries | 2023-08-28T20:27:08Z | http://arxiv.org/abs/2308.14891v3 | # Mass formula for non-ordinary curves in one dimensional families
###### Abstract.
This paper is about one dimensional families of cyclic covers of the projective line in positive characteristic. For each such family, we study the mass formula for the number of non-ordinary curves in the family. We prove two equations for the mass formula: the first relies on the \(a\)-numbers of non-ordinary curves in the family; and the second relies on tautological intersection theory. Our results generalize the Eichler-Deuring mass formula for supersingular elliptic curves; they also generalize some theorems of Ibukiyama, Katsura, and Oort about supersingular curves of genus 2 that have an automorphism of order 3 or order 4. We determine the mass formula in many new cases, including linearized families of hyperelliptic curves of every genus and all families of cyclic covers of the projective line branched at four points. For certain families of curves of genus 3 to 7, we determine the exact rate of growth of the number of non-ordinary curves in the family as a linear function of the characteristic \(p\).
keywords: curve, hyperelliptic curve, cyclic cover, Jacobian, mass formula, cycle class, tautological ring, Hodge bundle, intersection theory, Frobenius, non-ordinary, \(p\)-rank, \(a\)-number.
MSC20: primary 11G20, 14C17, 14H10, 14H40, 14N35; secondary 11G10, 14G15, 14H37
Cavalieri acknowledges support from Simons Collaboration Grant 420720 and NSF grant DMS - 2100962. Pries was partially supported by NSF grant DMS - 22-00418. The authors would like to thank John Voight for helpful conversations.
Let \(\overline{\mathcal{M}}_{g}\) denote the moduli space of stable curves of genus \(g\). We define the mass formula for \(\mathcal{F}\) as the degree of the zero dimensional class obtained by intersecting \(V_{g-1}\) with the curve in \(\overline{\mathcal{M}}_{g}\) determined by the family \(\mathcal{F}\).
When the generic curve in the family is ordinary, this degree gives a weighted count of the non-ordinary curves in \(\mathcal{F}\). Each curve \(X\) is weighted not only with the usual reciprocal of the size of its automorphism group, but also with the multiplicity of intersection between \(\mathcal{F}\) and \(V_{g-1}\) at \(X\).
We prove two results about the mass formula in Theorems 3.3 and 3.4. The first reinterprets the mass formula in terms of the \(a\)-numbers of the non-ordinary curves in the family; and the second in terms of a tautological intersection number. Thus, because \(\mathcal{F}\) is (covered by) a family of cyclic admissible covers of a rational curve, the mass formula admits a natural evaluation using a tautological intersection theory computation from [COS].
We then determine the mass formula for important one dimensional families of curves. These families generalize the Legendre family in several different ways and also generalize earlier work for genus \(2\) from [13]. Specifically, we determine the mass formula for non-ordinary curves for:
* Linearized one dimensional families of hyperelliptic curves of genus \(g\) for any \(g\geq 2\), see Corollary 5.4.
* Families of cyclic covers of the projective line branched at \(4\) points, for any degree \(d\) and any inertia type, see Corollary 6.1.
We now give a more complete description of the results in the paper.
### Non-ordinary curves
Let \(k\) be an algebraically closed field of characteristic \(p\). Let \(X\) be a (smooth, connected, projective) curve of genus \(g\) defined over \(k\). Let \(J_{X}\) be the Jacobian of \(X\).
The curve \(X\) is _ordinary_ if the number of \(p\)-torsion points in \(J_{X}(k)\) is exactly \(p^{g}\). The ordinary condition is equivalent to the action of \(V\) on \(H^{0}(X,\Omega^{1})\) being invertible, where \(V\) denotes the semi-linear Verschiebung morphism.
If \(X\) is not ordinary, then its \(a\)-number \(\alpha_{X}\) is positive; the \(a\)-number is the co-rank of \(V\) on \(H^{0}(X,\Omega^{1})\). Equivalent definitions are described in Section 2.1.
### Cyclic covers of the projective line
Let \(d\geq 2\) be an integer with \(p\nmid d\).
We suppose that \(X\) admits a cyclic degree \(d\) cover \(h:X\to\mathbb{P}^{1}\) with \(n\geq 4\) branch points. A discrete invariant of \(h\) is the _inertia type_, which is an \(n\)-tuple \(a=(a_{1},a_{2},\dots,a_{n})\) of integers \(a_{i}\) with \(0<a_{i}<d\) for \(1\leq i\leq n\) and \(\sum_{i=1}^{n}a_{i}\equiv 0\bmod d\). Then \(X\) admits an affine equation of the form
\[y^{d}=\prod_{i=1}^{n-1}(x-t_{i})^{a_{i}}, \tag{2}\]
where \(t_{1}=0\), \(t_{2}=1\) and \(t_{3},\dots,t_{n-1}\) are pairwise distinct elements of \(k-\{0,1\}\). The inertia type determines the genus \(g\) of \(X\), see Lemma 2.3.
Let \(\tau\in\operatorname{Aut}(X)\) be an automorphism of order \(d\) such that the quotient curve \(X/\langle\tau\rangle\) is \(\mathbb{P}^{1}\). With respect to (2), we can choose \(\tau:(x,y)\mapsto(x,\zeta_{d}y)\), where \(\zeta_{d}\) is a primitive \(d\)th root of unity. Let \(\operatorname{Aut}(X,\tau)\) denote the normalizer of \(\tau\) in \(\operatorname{Aut}(X)\).
The curve \(X\) in (2) is ordinary for a generic choice of \(\vec{t}=(t_{3},\dots,t_{n-1})\) only under certain necessary conditions on the inertia type and the congruence of \(p\) modulo \(d\). In [11], under very mild hypotheses, Bouw proved that these necessary conditions are sufficient for \(X\) in (2) to be ordinary for a generic choice of \(\vec{t}\). More generally, the inertia type and the congruence of \(p\) modulo \(d\) place restrictions on \(\alpha_{X}\).
### Main result
We study one dimensional families of cyclic covers of a rational curve. We fix the degree \(d\) and the inertia type \(a\). Let \(\mathcal{A}_{d,a}\) denote the moduli space of admissible \(\mu_{d}\)-covers of a rational curve with inertia type \(a\). A one dimensional family \(\mathcal{F}\) of admissible \(\mu_{d}\)-covers of a rational curve is, by definition, a morphism \(\phi_{\mathcal{F}}:\mathcal{A}_{d,a}^{\mathcal{F}}\to\mathcal{A}_{d,a}\) for some smooth, proper, irreducible, one
dimensional Deligne-Mumford (DM) stack \(\mathcal{A}_{d,a}^{\mathcal{F}}\). When the general target curve of the family is smooth, we abbreviate this as a _one dimensional family of \(\mu_{d}\)-covers of \(\mathbb{P}^{1}\)_.
Let \(\overline{\mathcal{M}}_{g}\) be the moduli space of stable curves of genus \(g\). There is a forgetful morphism
\[\phi_{d,a}^{\mathcal{F}}:\mathcal{A}_{d,a}^{\mathcal{F}}\to\overline{ \mathcal{M}}_{g}, \tag{3}\]
that records the source curve of the cover. We denote its image by \(\mathcal{M}_{d,a}^{\mathcal{F}}\) and its degree by \(\delta_{d,a}^{\mathcal{F}}\).
Let \(\mathbb{E}_{g}\to\overline{\mathcal{M}}_{g}\) denote the Hodge bundle and let \(\lambda_{1}\) denote its first Chern class. For a one dimensional family \(\mathcal{F}\) of \(\mu_{d}\)-covers of \(\mathbb{P}^{1}\) with inertia type \(a\), we denote by \(C_{d,a}^{\mathcal{F}}(\lambda_{1})\) the degree of the pullback of \(\lambda_{1}\) via the morphism \(\phi_{d,a}^{\mathcal{F}}\).
The locus of non-ordinary curves in \(\overline{\mathcal{M}}_{g}\), denoted \(V_{g-1}\), is a codimension one cycle determining a degree one class in the Chow ring of \(\overline{\mathcal{M}}_{g}\). Hence we can define a mass formula for the one dimensional family \(\mathcal{F}\) as the intersection number of the classes of \(V_{g-1}\) and the curve \(\mathcal{M}_{d,a}^{\mathcal{F}}\) in \(\overline{\mathcal{M}}_{g}\).
For a prime \(p\nmid d\), we define the _mass formula_ for the family \(\mathcal{F}\) to be
\[\mu(\mathcal{F},p)=\int_{\overline{\mathcal{M}}_{g}}[\mathcal{M}_{d,a}^{ \mathcal{F}}]\cdot[V_{g-1}]. \tag{4}\]
When the generic source curve in \(\mathcal{F}\) is ordinary, (4) provides a weighted count of the non-ordinary curves in \(\mathcal{F}\). Each curve \(X\) is weighted by the multiplicity \(m_{X}\) of the intersection of \(\mathcal{M}_{d,a}^{\mathcal{F}}\) and \(V_{g-1}\) at \(X\), and by the reciprocal of the size of its automorphism group. In Proposition 3.2, we prove that the multiplicity \(m_{X}\) equals the product of the \(a\)-number \(\alpha_{X}\) of \(X\) and the index of \(\operatorname{Aut}(X,\tau)\) in \(\operatorname{Aut}(X)\).
As a result, we provide two ways to evaluate the mass formula \(\mu(\mathcal{F},p)\): the first is in terms of the \(a\)-numbers and automorphism groups of the non-ordinary curves in the family; and the second in terms of the intersection number \(C_{d,a}^{\mathcal{F}}(\lambda_{1})\) and the degree \(\delta_{d,a}^{\mathcal{F}}\). We collect both results in the following statement.
**Main Theorem**.: _Let \(\mathcal{F}\) be a one dimensional family of \(\mu_{d}\)-covers of a rational curve with inertia type \(a\). Suppose \(p\nmid d\) is a prime such that the generic point of the characteristic \(p\) fiber of \(\mathcal{M}_{d,a}^{\mathcal{F}}\) represents an ordinary curve. Then_
\[\mu(\mathcal{F},p)\overset{\text{Thm. 3.3}}{=}\sum_{[X]}\frac{\alpha_{X}}{\#\operatorname{Aut}(X,\tau)}\overset{\text{Thm. 3.4}}{=}(p-1)\frac{C_{d,a}^{\mathcal{F}}(\lambda_{1})}{\delta_{d,a}^{\mathcal{F}}}, \tag{5}\]
_where the sum in the middle term is over the isomorphism classes of non-ordinary curves \(X\) in the characteristic \(p\) fiber of the family \(\mathcal{M}_{d,a}^{\mathcal{F}}\)._
In Lemma 5.1, we show that (5) specializes to the Eichler-Deuring mass formula in the case of the Legendre family, where \(d=2\), \(n=4\), and \(a=(1,1,1,1)\).
The proofs of the two equalities in the Main Theorem are in Section 3. In [COS, Theorem 1.3], Cavalieri, Owens, and Somerstep found an explicit formula for the class \(\lambda_{1}\) on \(\mathcal{A}_{d,a}\) as a linear combination of boundary divisors; see Section 4. This allows us to explicitly evaluate the right hand side of (5) for many families \(\mathcal{F}\). With additional information about the \(a\)-numbers, we can then extract enumerative information about the number of non-ordinary curves in \(\mathcal{F}\).
We note that Theorem 3.4 holds in greater generality than stated in this paper, see Remark 3.5.
### Applications
Section 5 contains our first application which is for linearized families of hyperelliptic curves of every genus \(g\geq 2\).
**Corollary 1.1** (Corollary 5.4).: _Let \(g\geq 2\) and \(p\) be odd. Suppose \(h(x)\) is a separable polynomial of degree \(2g+1\). Under the mild condition that the generic curve of the characteristic \(p\) fiber of \(\mathcal{F}_{h(x)}:y^{2}=h(x)(x-t)\) is ordinary, the mass formula of the family is \(\mu(\mathcal{F}_{h(x)},p)=(p-1)g/4\)._
For the proof of Corollary 5.4, we study the intersection of the linearized family \(y^{2}=h(x)(x-t)\) with the boundary of \(\overline{\mathcal{M}}_{g}\). This calculation is combinatorial in nature, and it can be completed independently of the prime \(p\). We explain in Remark 5.5 why it seems hard to prove Corollary 5.4 using the more typical approach with the Cartier operator.
In our second application, Corollary 6.1, we determine the mass formula for any family of \(\mu_{d}\)-covers of \(\mathbb{P}^{1}\) branched at four points. This is a natural class of examples to consider because \(\mathcal{F}\) coincides with a one dimensional moduli space \(\mathcal{A}_{d,a}\); we suppress the symbol \(\mathcal{F}\) from the notation. In this case, [COS, Theorem 1.2] gives an explicit numerical formula for the degree of \(\lambda_{1}\). Also, the degree \(\delta_{d,a}\) of the map \(\phi_{d,a}:\mathcal{A}_{d,a}\to\mathcal{M}_{d,a}\) equals the number of different labelings of the branch points that are compatible with the inertia type \(a\), as described in Lemma 2.9.
For example, consider the family of curves of genus \(d-1\) given by:
\[X_{t}:y^{d}=x(x-1)(x-t). \tag{6}\]
In Example 6.2, we show that the mass formula for the non-ordinary curves in the family (6) is \((p-1)(d^{2}-1)/72d^{2}\), if \(d\geq 5\) with \(\gcd(d,6)=1\), and \(p\equiv 1\bmod d\).
Then in Section 6.2, we study 14 families of cyclic covers associated with special Shimura varieties; these occur for curves of genus \(1\) to \(7\), with the genus \(1\) case being the Legendre family. The advantage of working with these families is that we have complete knowledge of the \(a\)-numbers that occur for the non-ordinary curves. As a result, in Corollary 6.4, we determine both the mass formula and the explicit rate of growth of the number of non-ordinary curves in each of the 14 families as a linear function of the prime \(p\). Here are two cases of Corollary 6.4:
**Corollary 1.2**.: _For \(p\equiv 1\bmod d\), the rate of growth of the number of isomorphism classes of non-ordinary curves in the family \(X_{t}:y^{d}=x(x-1)(x-t)\) is: \((p-1)/30\) if \(d=5\) and is \((p-1)/21\) if \(d=7\)._
In developing the material in this paper, it was very helpful for us to compare our results with two theorems of Ibukiyama, Katsura and Oort [13] about two families of curves of genus \(g=2\). In Sections 7.3 and 8.2, we show that our mass formula agrees with the results of [13] in these cases. For this, we need some additional information about families of curves whose automorphism group contains a dihedral group; see Sections 7 and 8. For example, we prove the following result which specializes when \(d=3\) to [13].
**Corollary 1.3** (Corollary 7.1).: _Suppose \(d\geq 3\) is odd, \(a=(1,1,d-1,d-1)\), and \(p\) is a prime with \(p\nmid d\). For the family \(X_{t}:y^{d}=x(x-1)(x-t)^{d-1}\) of curves of genus \(d-1\), the mass formula for the non-ordinary curves in the family is \((p-1)(d^{2}-1)/2^{5}d^{2}\)._
Corollary 8.1 contains a similar result for the family \(X_{t}:y^{d}=x(x-1)^{d-1}(x-t)^{d/2}\) with \(d\geq 4\) and \(d\) even which specializes when \(d=4\) to [13].
For most families of cyclic covers of \(\mathbb{P}^{1}\), it is not feasible to obtain complete information about the \(a\)-numbers of the non-ordinary curves in the family. Indeed, in certain cases, this is related to an open question about hypergeometric differential equations; see Remark 7.5. The \(a\)-number is usually viewed as an invariant of the \(p\)-torsion group scheme (or de Rham cohomology) of a curve in positive characteristic. Our results show that the \(a\)-numbers of curves in a family distill subtle information about the geometry of the family in the moduli space of curves.
Section 9 contains a brief discussion about other research results on mass formulas in positive characteristic, which are not closely connected with this paper.
## 2. Background
In this section, we provide some background material aimed at making this work more self-contained and accessible. We introduce: the \(a\)-number of a curve; cyclic covers of the projective line; moduli spaces of curves; the Hodge bundle on the moduli space of curves and its first Chern
class; and the cycle class of the non-ordinary locus. Much of this material is well known in one of the two communities that might be reading this work, but perhaps not in both.
### The \(a\)-number
Suppose \(A\) is a principally polarized abelian variety of dimension \(g\) defined over an algebraically closed field \(k\) of characteristic \(p\). Let \(A[p]\) denote its \(p\)-torsion group scheme. In this paper, \(A\) is usually the Jacobian \(J_{X}\) of a smooth curve \(X\) of genus \(g\).
The \(p\)_-rank_\(s_{A}\) of \(A\) is the integer \(s\) such that \(\#A[p](k)=p^{s}\). The abelian variety \(A\) is _ordinary_ if and only if \(s_{A}=g\).
The \(a\)_-number_\(\alpha_{A}\) of \(A\) is \(\alpha_{A}:=\dim_{k}\!\operatorname{Hom}(\alpha_{p},A[p])\) where the group scheme \(\alpha_{p}\) is the kernel of Frobenius on the additive group \(\mathbb{G}_{a}\). The \(a\)-number \(\alpha_{A}\) equals the dimension of \(\operatorname{Ker}(F)\cap\operatorname{Ker}(V)\) on the Dieudonne module of \(A[p]\), where \(F\) is the Frobenius morphism and \(V\) is the Verschiebung morphism. It equals the number of generators of \(A[p]\) as a module under \(F\) and \(V\).
If \(A\) is not ordinary, then \(\alpha_{A}>0\), because there is a non-trivial local-local summand of \(A[p]\) on which \(V\) is nilpotent. More generally \(0<\alpha_{A}+s_{A}\leq g\).
By definition, \(A\) is _superspecial_ if \(\alpha_{A}=g\); this property is equivalent to \(A\) being isomorphic to a product of \(g\) supersingular elliptic curves. An abelian variety \(A\) is _supersingular_ if it is isogenous to a product of \(g\) supersingular elliptic curves. In general, superspecial implies supersingular, which implies \(p\)-rank \(0\), which implies non-ordinary; the converse statements are all false for \(g\geq 3\).
For a curve \(X\), the \(p\)-rank \(s_{X}\) and the \(a\)-number \(\alpha_{X}\) are that of its Jacobian. These can be computed using the action of the Verschiebung morphism \(V\) on the vector space of holomorphic differentials. The \(p\)-rank \(s_{X}\) is the stable rank of \(V\) on \(H^{0}(X,\Omega^{1})\). By [10, Equation 5.2.8], \(\alpha_{X}=g-\operatorname{rank}(V)\).
The action of \(V\) on \(H^{0}(X,\Omega^{1})\) is given by the Cartier-Manin matrix, which is the matrix for the modified Cartier operator \(C\). Here \(C\) is a \(1/p\)-linear map which trivializes exact \(1\)-forms and satisfies \(C(f^{p-1}df)=df\) for any function \(f\).
### Cyclic covers of \(\mathbb{P}^{1}\)
Let \(d\geq 2\). Fix a primitive \(d\)-th root of unity \(\zeta_{d}\). Let \(\mu_{d}=\langle\zeta_{d}\rangle\).
Let \(p\) be a prime with \(p\nmid d\). The material in this subsection is most familiar when working over the complex numbers, but holds equally well when working with the characteristic \(p\) reduction of the curves and the moduli spaces.
#### 2.2.1. The inertia type
Fix an inertia type of length \(n\) for \(d\), namely an \(n\)-tuple \(a=(a_{1},\dots,a_{n})\) with \(0<a_{i}<d\) such that \(\sum_{i=1}^{n}a_{i}\equiv 0\bmod d\) and \(\gcd(a_{1},\dots,a_{n})=1\). A \(\mu_{d}\)-cover of \(\mathbb{P}^{1}\) with inertia type \(a\) consists of a datum \((X,\tau,B)\), which we first define complex analytically.
**Definition 2.1**.: Let \(X\) be a smooth curve of genus \(g\) and \(\tau\in\operatorname{Aut}(X)\) an automorphism of order \(d\). Identifying \(\tau\) with the action of the chosen generator \(\zeta_{d}\) of \(\mu_{d}\) defines a \(\mu_{d}\)-action on \(X\). Assume that the orbit space is isomorphic to \(\mathbb{P}^{1}\). Then the projection function \(h:X\to\mathbb{P}^{1}\) is called a \(\mu_{d}\)_-cover of \(\mathbb{P}^{1}\)_. Denote by \(B\) a labeling of the branch points of \(h\). Let \(y\) be a general point of \(\mathbb{P}^{1}\). For \(1\leq i\leq n\), let \(\rho_{i}\) be a small loop based at \(y\) winding once around the \(i\)-th branch point. Let \(x\) be an inverse image of \(y\). If \(\tilde{\rho}_{i,x}\) denotes the lift of \(\rho_{i}\) based at \(x\), then the end point of \(\tilde{\rho}_{i,x}\) equals \(\tau^{a_{i}}(x)\) for some integer \(0\leq a_{i}<d\). The vector \(a=(a_{1},\dots,a_{n})\) is called the _inertia type_ of the \(\mu_{d}\)-cover \(h\).
Over an algebraically closed field \(k\) of characteristic \(p\), the inertia type of a \(\mu_{d}\)-cover is well-defined and has the same attributes. To define the inertia type, one can either lift the cover \(h\) to characteristic \(0\), or use Grothendieck's theorem about the prime-to-\(p\) fundamental group of \(\mathbb{P}^{1}-B\), or work exclusively in characteristic \(p\) by replacing loops with isomorphisms of fiber functors.
**Remark 2.2**.: For any \(\ell\) in \(\mu_{d}^{*}\), the automorphism \(\tau^{\ell}\) produces the same projection function \(h:X\to\mathbb{P}^{1}\); however, the inertia type of the corresponding cover is correspondingly multiplied by \(\ell\). Similarly, a labeling of the branch points is necessary to define the inertia type. For this reason, the \(\mu_{d}\)-action on \(X\) (or equivalently the choice of automorphism \(\tau\)) and the labeling of the branch
points of \(h\) are part of the datum defining a cyclic cover. The inertia type of a cyclic cover does not depend on the choice of the pre-image \(x\) of \(y\).
#### 2.2.2. Isomorphisms and automorphisms
Two \(\mu_{d}\)-covers with inertia type \(a\) are _isomorphic_ if there exists a commutative diagram
\[\begin{array}{ccc}X_{1}&\xrightarrow{\ \varphi\ }&X_{2}\\ {\scriptstyle h_{1}}\downarrow&&\downarrow{\scriptstyle h_{2}}\\ \mathbb{P}^{1}&\xrightarrow{\ F\ }&\mathbb{P}^{1}\end{array} \tag{7}\]
where \(\varphi\) is a \(\mu_{d}\)-equivariant isomorphism, and \(F\) is an automorphism of \(\mathbb{P}^{1}\) that maps the \(i\)-th branch point of \(h_{1}\) to the \(i\)-th branch point of \(h_{2}\) for each \(1\leq i\leq n\).
An _automorphism_ of a \(\mu_{d}\)-cover \(h\) is the datum of a commutative diagram:
\[\begin{array}{ccc}X&\xrightarrow{\ \varphi\ }&X\\ {\scriptstyle h}\downarrow&&\downarrow{\scriptstyle h}\\ \mathbb{P}^{1}&=&\mathbb{P}^{1}\end{array} \tag{8}\]
where \(\varphi\) is a \(\mu_{d}\)-equivariant automorphism of \(X\), or, in other words, an automorphism of \(X\) that preserves the fibers and commutes with \(\tau\).
Let \(\operatorname{Aut}(X,\tau)\) denote the normalizer of \(\tau\) in \(\operatorname{Aut}(X)\). The structure and size of \(\operatorname{Aut}(X,\tau)\) do not depend on the choice of \(\tau\). In many cases, \(\operatorname{Aut}(X,\tau)=\langle\tau\rangle\). The automorphism group of the cyclic cover \((X,\tau,B)\) coincides with \(\operatorname{Aut}(X,\tau)\).
#### 2.2.3. The genus and signature
Suppose \(h:X\to\mathbb{P}^{1}\) is a \(\mu_{d}\)-cover with inertia type \(a\). Then \(X\) has an affine equation of the form (2). The genus \(g\) of \(X\) is the dimension of \(H^{0}(X,\Omega^{1})\). For \(0\leq j\leq d-1\), let \(L_{j}\) be the \(j\)-th eigenspace for the action of \(\mu_{d}\) on \(H^{0}(X,\Omega^{1})\) and let \(f_{j}=\dim(L_{j})\). Then \(f_{0}=0\), since a \(\mu_{d}\)-invariant form would be pulled-back from the base and there are no holomorphic one-forms on \(\mathbb{P}^{1}\). The data of \(\vec{f}=(f_{1},\dots,f_{d-1})\) is the _signature_ of \(h\).
**Lemma 2.3**.: _Suppose \(h:X\to\mathbb{P}^{1}\) is a \(\mu_{d}\)-cover with inertia type \(a\)._
1. _(Riemann-Hurwitz formula) The genus_ \(g\) _of_ \(X\) _satisfies_ \[g=1+\frac{d(n-2)}{2}-\frac{1}{2}\sum_{i=1}^{n}\gcd(d,a_{i}).\]
2. _See, e.g.,_ [1, Lemma 4.3] _or_ [10, Lemma 2.7, §3.2]. _For_ \(x\in\mathbf{Q}\)_, let_ \(\langle x\rangle\) _denote its fractional part. If_ \(1\leq j\leq d-1\)_, then_ \[f_{j}=-1+\sum_{i=1}^{n}\left\langle\frac{-ja_{i}}{d}\right\rangle.\]
The hypotheses on \(a\) imply that \(g\) is an integer. If all entries of \(a\) are relatively prime to \(d\), then \(g=(n-2)(d-1)/2\).
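Both formulas in Lemma 2.3 are easy to evaluate in practice. The following is a minimal sketch (Python, standard library only); the two test cases are the Legendre inertia type \(d=2\), \(a=(1,1,1,1)\) and the inertia type of the family (6) for \(d=5\).

```python
from fractions import Fraction
from math import gcd, floor

def frac_part(x):
    # fractional part <x> = x - floor(x) of a rational number
    return x - floor(x)

def genus(d, a):
    """Riemann-Hurwitz for the cyclic cover y^d = prod (x - t_i)^{a_i} of P^1 (Lemma 2.3(1))."""
    n = len(a)
    return 1 + Fraction(d * (n - 2), 2) - Fraction(sum(gcd(d, ai) for ai in a), 2)

def signature(d, a):
    """Dimensions f_j of the mu_d-eigenspaces of holomorphic one-forms (Lemma 2.3(2))."""
    return [int(-1 + sum(frac_part(Fraction(-j * ai, d)) for ai in a)) for j in range(1, d)]

print(genus(2, (1, 1, 1, 1)), signature(2, (1, 1, 1, 1)))   # 1, [1]: the Legendre family
print(genus(5, (1, 1, 1, 2)), signature(5, (1, 1, 1, 2)))   # 4 = d - 1, [2, 1, 1, 0]
```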
**Remark 2.4**.: A \(\mu_{d}\)-cover \(h:X\to\mathbb{P}^{1}\) can also be interpreted as a twisted stable map from a marked orbifold \(\mathbb{P}^{1}\) to the stack \(B\mu_{d}\), classifying principal \(\mu_{d}\)-bundles. With this perspective, the dimensions \(f_{j}\) can also be computed with an orbifold Riemann-Roch computation (see [1, Theorem 7.2.1]).
#### 2.2.4. Generically ordinary
The Frobenius map acts on the set of eigenspaces \(\{L_{j}\mid 0\leq j\leq d-1\}\) via the multiplication-by-\(p\) map on the indices.
We state a result of Bouw only in the special case of \(n=4\) branch points.
**Proposition 2.5**.: _(Special case of [1, Theorem 6.1]) Let \(X_{t}:y^{d}=x^{a_{1}}(x-1)^{a_{2}}(x-t)^{a_{3}}\) be a family of \(\mu_{d}\)-covers of \(\mathbb{P}^{1}\) branched at \(4\) points. Then the generic curve \(X\) in the family is ordinary if and only if the dimension \(f_{j}\) is constant for each orbit of \(\{L_{j}\mid 0\leq j\leq d-1\}\) under Frobenius._
**Remark 2.6**.: If \(p\equiv 1\bmod d\), then \(X_{t}\) is ordinary for a generic choice of \(t\); this is because the orbits of Frobenius on the set of eigenspaces \(\{L_{j}\mid 1\leq j\leq d-1\}\) all have cardinality \(1\), so the condition that the dimension \(f_{j}\) of \(L_{j}\) is constant within each orbit is vacuous.
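The criterion of Proposition 2.5 amounts to checking that the signature is constant on the orbits of multiplication by \(p\) modulo \(d\). A minimal sketch, reusing the `signature` function from the sketch above; the two examples are only illustrative.

```python
def frobenius_orbits(d, p):
    """Orbits of j -> p*j mod d on {1, ..., d-1}."""
    seen, orbits = set(), []
    for j in range(1, d):
        if j in seen:
            continue
        orbit, k = [], j
        while k not in seen:
            seen.add(k)
            orbit.append(k)
            k = (p * k) % d
        orbits.append(orbit)
    return orbits

def generically_ordinary(d, a, p):
    """Bouw's criterion (Proposition 2.5): f_j constant on each Frobenius orbit."""
    f = signature(d, a)                       # f[j-1] = f_j, from the sketch above
    return all(len({f[j - 1] for j in orbit}) == 1 for orbit in frobenius_orbits(d, p))

print(generically_ordinary(5, (1, 1, 1, 2), 11))  # True: p = 1 mod d, orbits are singletons
print(generically_ordinary(5, (1, 1, 1, 2), 7))   # False: f_j is not constant on the orbit [1, 2, 4, 3]
```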
### Moduli spaces, admissible covers and cyclic Hurwitz curves
Let \(\mathcal{M}_{g}\) (resp. \(\overline{\mathcal{M}}_{g}\)) denote the moduli space of smooth (resp. stable) curves of genus \(g\). Let \(\mathcal{M}_{0,n}\) (resp. \(\overline{\mathcal{M}}_{0,n}\)) denote the moduli space of smooth (resp. stable) curves of genus \(0\) with \(n\) marked points.
Let \(\mathcal{A}_{g}\) denote the moduli space of principally polarized abelian varieties of dimension \(g\). The Torelli morphism \(T:\mathcal{M}_{g}\to\mathcal{A}_{g}\) sends a curve of genus \(g\) to its Jacobian; it is an embedding. The Torelli morphism can be extended to \(T:\overline{\mathcal{M}}_{g}\to\tilde{\mathcal{A}}_{g}\), where \(\tilde{\mathcal{A}}_{g}\) is a toroidal compactification of \(\mathcal{A}_{g}\).
Let \(\mathcal{A}_{d,a}\) denote the stack of admissible \(\mu_{d}\)-covers of a rational curve \(R\) branched at \(n\) labeled points and having inertia type \(a\). This is a compactification of the moduli space for isomorphism classes of (families of) \(\mu_{d}\)-covers of \(\mathbb{P}^{1}\); roughly speaking, the boundary points parameterize projection maps obtained from automorphisms of order \(d\) on nodal curves whose orbit spaces are nodal rational curves; see [10, page 57] for a precise definition.
**Remark 2.7**.: In reality, by \(\mathcal{A}_{d,a}\) we denote the stack of twisted stable maps from an orbifold rational curve to \(B\mu_{d}\) with inertia at the orbifold points given by \(a\), as defined in [1]. The authors show that this stack is smooth and it is the normalization of the admissible cover space of Harris and Mumford [10]. This distinction is technical and will not appear in an evident fashion in what we do here, but it is important for the intersection theory to run smoothly.
There is a morphism
\[\pi=\pi_{d,a}:\mathcal{A}_{d,a}\to\overline{M}_{0,n},\]
where the isomorphism class of a cover \(h:X\to R\) of a rational curve \(R\) is sent to the labeled set of \(n\) branch points of \(h\). The map \(\pi\) is a bijection on closed points, but \(\mathcal{A}_{d,a}\) is a \(\mu_{d}\)-gerbe over \(\overline{M}_{0,n}\). This roughly means that it has generic isotropy equal to \(\mu_{d}\). Therefore the degree of \(\pi\) is \(1/d\), in the sense that \(\pi_{*}([\mathcal{A}_{d,a}])=[\overline{M}_{0,n}]/d\).
There is a morphism
\[\phi=\phi_{d,a}:\mathcal{A}_{d,a}\to\overline{\mathcal{M}}_{g},\]
where the isomorphism class of a cover \(h:X\to R\) is sent to the isomorphism class of the curve \(X\) of genus \(g\). Let \(\mathcal{M}_{d,a}\) denote the image of \(\mathcal{A}_{d,a}\) in \(\overline{\mathcal{M}}_{g}\) with its reduced induced structure.
Consider the map of \(DM\) stacks
\[\phi_{d,a}:\mathcal{A}_{d,a}\to\mathcal{M}_{d,a}. \tag{9}\]
We denote by \(\delta_{d,a}\) the degree of the map \(\phi_{d,a}\) onto its image.
**Definition 2.8**.: A _one dimensional family_\(\mathcal{F}\) of admissible \(\mu_{d}\)-covers of a rational curve with inertia type \(a\) is a morphism \(\varphi_{\mathcal{F}}:\mathcal{A}_{d,a}^{\mathcal{F}}\to\mathcal{A}_{d,a}\), where \(\mathcal{A}_{d,a}^{\mathcal{F}}\) is a smooth, proper, irreducible, one dimensional Deligne-Mumford (DM) stack.
For such a one dimensional family \(\mathcal{F}\), we obtain a map \(\phi_{d,a}^{\mathcal{F}}=\phi_{d,a}\circ\varphi_{\mathcal{F}}:\mathcal{A}_{d,a }^{\mathcal{F}}\to\overline{\mathcal{M}}_{g}\). We denote its image by \(\mathcal{M}_{d,a}^{\mathcal{F}}\). We denote by \(\delta_{d,a}^{\mathcal{F}}\) the degree of \(\phi_{d,a}^{\mathcal{F}}\) onto its image, so that
\[\phi_{d,a}^{\mathcal{F}}{}_{*}([\mathcal{A}_{d,a}^{\mathcal{F}}])=\delta_{d,a} ^{\mathcal{F}}[\mathcal{M}_{d,a}^{\mathcal{F}}]. \tag{10}\]
**Lemma 2.9**.: _Consider a moduli space of admissible covers \(\mathcal{A}_{d,a}\), where the inertia vector has length \(n\geq 4\). Suppose \(g\geq 2\). The degree \(\delta_{d,a}\) of the map \(\phi_{d,a}:\mathcal{A}_{d,a}\to\mathcal{M}_{d,a}\) equals the cardinality of the set \(S_{d,a}\), defined as follows:_
\[S_{d,a}=\{(\ell,\sigma)\in\mu_{d}^{*}\times S_{n}\mid\ell a=\sigma(a)\}. \tag{11}\]
Proof.: Let \(X\) be a generic point in the image of \(\phi_{d,a}\). Let \(h:X\to R\) be an admissible cover in \(\phi_{d,a}^{-1}(X)\), corresponding to the quotient of \(X\) by the subgroup generated by an automorphism \(\tau\) of \(X\) of order \(d\). The set of labeled branch points for \(h\) gives a generic point of \(\overline{M}_{0,n}\). Since \(n\geq 4\), there is no non-trivial automorphism of \(\mathbb{P}^{1}\) that fixes each of the branch points of \(h\). The other inverse images of \(X\) under \(\phi_{d,a}\) therefore correspond to the same geometric map \(h:X\to R\), but with possibly different labelings of the branch points. The set \(S_{d,a}\) indexes the set of all possible labelings of branch points compatible with the inertia type \(a\). Here \(\ell\) corresponds to the power of the automorphism \(\tau\) that produces the same geometric projection function, and \(\sigma\) gives a permutation of the labelings of the branch points of \(\tau\) that produce a labeling of the branch points of \(\tau^{\ell}\) of inertia type \(a\).
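The set \(S_{d,a}\) is small enough to enumerate directly. The following is a minimal sketch (Python, standard library only); the examples use inertia types for \(d=5\) that appear later in the paper, and Lemma 2.9 is only applied with \(g\geq 2\).

```python
from itertools import permutations
from math import gcd

def delta(d, a):
    """|S_{d,a}| from (11): pairs (ell, sigma) with gcd(ell, d) = 1 and ell*a = sigma(a) mod d."""
    n = len(a)
    count = 0
    for ell in (l for l in range(1, d) if gcd(l, d) == 1):
        scaled = tuple((ell * ai) % d for ai in a)
        count += sum(1 for sigma in permutations(range(n))
                     if tuple(a[i] for i in sigma) == scaled)
    return count

print(delta(5, (1, 1, 1, 2)))   # 6: only ell = 1 works, with the 3! relabelings of the 1's
print(delta(5, (1, 1, 4, 4)))   # 8: ell = 1 and ell = 4 each contribute 4 relabelings
```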
### The Hodge bundle and Chern classes
The _Hodge bundle_\(\mathbb{E}_{g}\to\mathcal{A}_{g}\) is a locally free sheaf of rank \(g\) on \(\mathcal{A}_{g}\). For \(A/S\) an abelian scheme over \(S\), the sections of \(\mathbb{E}\) are given by \(e^{*}\Omega^{1}_{A/S}\) on \(S\), where \(\Omega^{1}_{A/S}\) is the relative sheaf of differentials and \(e:S\to A\) is the identity section. This construction is compatible with pullbacks. By [10], the Hodge bundle extends to a locally free sheaf \(\mathbb{E}\) on a smooth toroidal compactification \(\tilde{A}_{g}\).
The pullback \(\mathbb{E}_{g}\to\overline{\mathcal{M}}_{g}\) is a rank \(g\) vector bundle on \(\overline{\mathcal{M}}_{g}\). Over the locus of smooth curves \(\mathcal{M}_{g}\), the fiber of \(\mathbb{E}_{g}\) over a moduli point \(X\) is naturally identified with the vector space of holomorphic one-forms \(H^{0}(X,\Omega^{1})\). Because such an identification works in families, it provides patching data and identifies the bundle on \(\mathcal{M}_{g}\); thus it is customary to say that the Hodge bundle \(\mathbb{E}_{g}\) is the vector bundle whose fibers are the holomorphic one-forms on the parameterized curves. An earlier reference for the extension to the boundary \(\overline{\mathcal{M}_{g}}\smallsetminus\mathcal{M}_{g}\) is Mumford's work [12].
The Chern classes of \(\mathbb{E}_{g}\) are defined over \(\mathbb{Z}\) and can be studied over any field \(k\). They yield classes in the Chow ring \(A^{*}(\tilde{\mathcal{A}}_{g})\). We are especially interested in the first Chern class
\[\lambda_{1}:=c_{1}(\mathbb{E}_{g})\in A^{*}(\tilde{\mathcal{A}}_{g}). \tag{12}\]
We also denote by \(\lambda_{1}\in A^{*}(\mathcal{M}_{g})\) its pullback via the Torelli morphism. The class \(\lambda_{1}\) may be represented by a codimension one cycle given by the divisor of a meromorphic section of \(\det(\mathbb{E}_{g})\).
The theory of Chern classes of a vector bundle is very rich (see [11]). We recall only a few properties that we will use. In general, the first Chern class of a line bundle \(L\) may be represented by the divisor of a meromorphic section of \(L\). Given \(E\) (resp. \(E_{i}\), for \(i=1,2\)) vector bundles on \(Y\) of ranks \(r\) (resp. \(r_{i}\)) and \(f:X\to Y\) a flat morphism:
**pull-back:** the first Chern class commutes with pull-back
\[c_{1}(f^{*}(E))=f^{*}(c_{1}(E))\in A^{1}(X); \tag{13}\]
**tensor products:** the first Chern class of a tensor product of vector bundles is the following dot product:
\[c_{1}(E_{1}\otimes E_{2})=[c_{1}(E_{1}),c_{1}(E_{2})]\cdot[r_{2},r_{1}]=c_{1}( E_{1})r_{2}+c_{1}(E_{2})r_{1}. \tag{14}\]
In particular, the first Chern class is additive for tensor products of line bundles.
**dual:** the sign of the first Chern class is reversed by dualization:
\[c_{1}(E^{\vee})=-c_{1}(E). \tag{15}\]
The Hodge bundle on the moduli space \(\mathcal{A}_{d,a}\) of admissible covers is the pull-back via \(\phi_{d,a}\) of the Hodge bundle on \(\overline{\mathcal{M}}_{g}\). In light of (13), we omit \(\phi^{*}\) from the notation and denote by \(\mathbb{E}_{g}\) the Hodge bundle on \(\mathcal{A}_{d,a}\), and by \(\lambda_{1}\) its first Chern class.
**Definition 2.10**.: Let \(\mathcal{F}\) be a one dimensional family of admissible \(\mu_{d}\)-covers. Then
\[C^{\mathcal{F}}_{d,a}(\lambda_{1}):=\int_{\mathcal{A}^{\mathcal{F}}_{d,a}} \varphi^{*}_{\mathcal{F}}(\lambda_{1}).\]
### Cycle class of the non-ordinary locus
Let \(V_{g-1}\) denote the locus on \(\mathcal{A}_{g}\) of abelian varieties having \(p\)-rank \(s\leq g-1\). This is the same as the locus \(T_{1}\) of abelian varieties having \(a\)-number \(\alpha\geq 1\).
By [1, Section 9, page 625], the cycle class of \(V_{g-1}\) is \([V_{g-1}]=(p-1)\lambda_{1}\). We briefly summarize the argument. The locus \(V_{g-1}\) is given by the vanishing of the map
\[\det(V):\det(\mathbb{E}_{g})\to\det(\mathbb{E}_{g}^{(p)}).\]
By tensoring with \(\det(\mathbb{E}_{g})^{\vee}\), this is given by the zero locus of the section \(s_{V}:\mathcal{O}\to\det(\mathbb{E}_{g})^{\vee}\otimes\det(\mathbb{E}_{g}^{(p)})\). Therefore, we must compute \(c_{1}(\det(\mathbb{E}_{g})^{\vee}\otimes\det(\mathbb{E}_{g}^{(p)}))\). It follows from the definition of \(\mathbb{E}_{g}^{(p)}\) that \(c_{1}(\mathbb{E}_{g}^{(p)})=p\lambda_{1}\). Applying properties (14), (15), we then obtain
\[[V_{g-1}]=c_{1}(\det(\mathbb{E}_{g})^{\vee}\otimes\det(\mathbb{E}_{g}^{(p)})) =-\lambda_{1}+p\lambda_{1}=(p-1)\lambda_{1}. \tag{16}\]
## 3. Main Theorems
In this section, we prove the main theorem stated in the introduction. The two equalities composing the statement of the main theorem are proven individually in Theorems 3.3, 3.4.
### The mass formula and the multiplicity
We first recall our definition of the mass formula.
**Definition 3.1**.: Let \(\mathcal{F}\) be a one dimensional family of admissible \(\mu_{d}\)-covers with inertia type \(a\). We define the _mass formula_ to be
\[\mu(\mathcal{F},p)=\int_{\overline{\mathcal{M}}_{g}}[\mathcal{M}^{\mathcal{F} }_{d,a}]\cdot[V_{g-1}]. \tag{17}\]
When the source curve of the generic point of the family is ordinary, then \(V_{g-1}\) is dimensionally transversal to \(\mathcal{M}^{\mathcal{F}}_{d,a}\) and the mass formula is the degree of a zero dimensional cycle supported on the set of non-ordinary curves of \(\mathcal{F}\). Denoting by \(m_{X}\) the multiplicity of intersection of the cycles \(\mathcal{M}^{\mathcal{F}}_{d,a}\), \(V_{g-1}\) at the curve \(X\in\overline{\mathcal{M}}_{g}\), we have
\[\mu(\mathcal{F},p)=\sum_{[X]\in\mathcal{M}^{\mathcal{F}}_{d,a}\cap V_{g-1}} \frac{m_{X}}{\#\mathrm{Aut}(X)}. \tag{18}\]
We determine the multiplicity \(m_{X}\) in terms of the \(a\)-number \(\alpha_{X}\) of \(X\), and the index of the normalizer of \(\tau\) in \(\mathrm{Aut}(X)\).
**Proposition 3.2**.: _If \(X\in\mathcal{M}^{\mathcal{F}}_{d,a}\), then \(m_{X}=\alpha_{X}[\mathrm{Aut}(X):\mathrm{Aut}(X,\tau)]\)._
Proof.: If \(X\not\in V_{g-1}\), then \(m_{X}=0\) and \(\alpha_{X}=0\). Suppose \(X\in\mathcal{M}^{\mathcal{F}}_{d,a}\cap V_{g-1}\), and \(h\) is an inverse image of \(X\) via \(\phi^{\mathcal{F}}_{d,a}\). Consider the following diagram
\[\begin{array}{ccccccc}h&\in&\mathcal{A}^{\mathcal{F}}_{d,a}\xrightarrow{\phi^{\mathcal{F}}_{d,a}}\mathcal{M}^{\mathcal{F}}_{d,a}&\subset&\overline{\mathcal{M}}_{g,n}&\ni&X\\ &&\downarrow&&\downarrow&&\downarrow\\ \underline{h}&\in&A^{\mathcal{F}}_{d,a}\xrightarrow{\phi^{\mathcal{F}}_{d,a}}M^{\mathcal{F}}_{d,a}&\subset&\overline{M}_{g,n}&\ni&\underline{X}\end{array}\]
where the objects in the second row are the coarse moduli spaces of the DM stacks appearing in the first row, and the vertical arrows are the coarse moduli maps. Given any object in the top row, we denote its coarse image by adding an underline.
Since the hypersurface \(\underline{V}_{g-1}\) is a Cartier divisor, the coarse multiplicity \(m_{\underline{X}}\) may be computed in \(A^{\mathcal{F}}_{d,a}\) by pulling back a local equation \(f\) of the hypersurface \(\underline{V}_{g-1}\), and computing the order of vanishing at \(\underline{h}\). Here \(f\) is a local equation for the determinant of the Cartier operator. Thus by definition \(m_{\underline{X}}=\alpha_{X}\), the \(a\)-number of the curve \(X\).
In order to lift the coarse multiplicity to obtain the stacky multiplicity \(m_{X}\), we observe that the moduli point \(X\in\overline{\mathcal{M}}_{g}\) is isomorphic to the global quotient stack \([pt./\operatorname{Aut}(X)]\). Therefore the degree of the coarse moduli map restricted to \(X\) is \(1/\#\operatorname{Aut}(X)\). Similarly \(h\cong[pt./\operatorname{Aut}(X,\tau)]\), and the degree of the coarse moduli map restricted to \(h\) equals \(1/\#\operatorname{Aut}(X,\tau)\). We then obtain
\[\frac{m_{X}}{\#\operatorname{Aut}(X)}=\frac{\alpha_{X}}{\#\operatorname{Aut} (X,\tau)}, \tag{19}\]
from which the statement of the proposition follows.
### First main result
First, we evaluate the mass formula in terms of \(a\)-numbers of curves in the family.
**Theorem 3.3**.: _For \(\mathcal{F}\) a one dimensional family of cyclic covers of a rational curve, and \(p\) a prime such that the generic curve in the characteristic \(p\) fiber \(\mathcal{F}_{p}\) is ordinary, we have:_
\[\mu(\mathcal{F},p)=\sum_{[X]}\frac{\alpha_{X}}{\#\operatorname{Aut}(X,\tau)},\]
_where the sum is over the isomorphism classes of non-ordinary curves \(X\) in \(\mathcal{F}_{p}\)._
Proof.: By Proposition 3.2,
\[\frac{m_{X}}{\#\operatorname{Aut}(X)}=\frac{\alpha_{X}[\operatorname{Aut}(X):\operatorname{Aut}(X,\tau)]}{\#\operatorname{Aut}(X)}=\frac{\alpha_{X}}{\#\operatorname{Aut}(X,\tau)}.\]
Substituting this in (18) completes the proof.
### Second main result
Next we evaluate the mass formula in terms of tautological intersection numbers of \(\overline{\mathcal{M}}_{g}\).
**Theorem 3.4**.: _For \(\mathcal{F}\) a one dimensional family of \(\mu_{d}\)-covers of a rational curve with inertia type \(a\), and \(p\) a prime such that the generic curve in the characteristic \(p\) fiber \(\mathcal{F}_{p}\) is ordinary, we have:_
\[\mu(\mathcal{F},p)=(p-1)\frac{C^{\mathcal{F}}_{d,a}(\lambda_{1})}{\delta^{ \mathcal{F}}_{d,a}}, \tag{20}\]
_where the symbols on the right hand side are as in Definitions 2.8, 2.10._
Proof.: By (17), \(\mu(\mathcal{F},p)=\int_{\overline{\mathcal{M}}_{g}}[\mathcal{M}^{\mathcal{F}} _{d,a}]\cdot[V_{g-1}]\). The intersection \([\mathcal{M}^{\mathcal{F}}_{d,a}]\cdot[V_{g-1}]\) determines a class in the Chow ring of \(\overline{\mathcal{M}}_{g}\). We compute its degree.
By [10, Theorem 2.3], every component of \(V_{g-1}\) has dimension \(3g-4\). Furthermore, the generic point of each component of \(V_{g-1}\) represents a smooth curve by [1, Lemma 3.2]. By [1, Section 9, page 625], the class of the non-ordinary locus \([V_{g-1}]\) on \(\mathcal{A}_{g}\) is \((p-1)\lambda_{1}\). So the class \([V_{g-1}]\) on \(\overline{\mathcal{M}}_{g}\) equals \((p-1)\lambda_{1}\in A^{1}(\overline{\mathcal{M}}_{g})\).
Since \(\phi^{\mathcal{F}}_{d,a\ *}([\mathcal{A}^{\mathcal{F}}_{d,a}])=\delta^{\mathcal{F}}_{d,a}[ \mathcal{M}^{\mathcal{F}}_{d,a}]\), in the Chow ring of \(\overline{\mathcal{M}}_{g}\) we have:
\[[\mathcal{M}^{\mathcal{F}}_{d,a}]\cdot[V_{g-1}]=\frac{\phi^{\mathcal{F}}_{d,a \ *}([\mathcal{A}^{\mathcal{F}}_{d,a}])}{\delta^{\mathcal{F}}_{d,a}}\cdot(p-1) \lambda_{1} \tag{21}\]
It thus suffices to show that the degree of \(\phi_{d,a\ *}^{\mathcal{F}}([\mathcal{A}_{d,a}^{\mathcal{F}}])\cdot\lambda_{1}\) equals \(C_{d,a}^{\mathcal{F}}(\lambda_{1})\). In order to compute the degree of this class, consider the commutative diagram
\[\begin{array}{ccc}\mathcal{A}^{\mathcal{F}}_{d,a}&\xrightarrow{\ \phi^{\mathcal{F}}_{d,a}\ }&\overline{\mathcal{M}}_{g}\\ {\scriptstyle\tilde{c}}\downarrow&&\downarrow{\scriptstyle c}\\ \mathrm{pt.}&=&\mathrm{pt.}\end{array}\]
where \(\tilde{c}\) and \(c\) are constant functions.
Pulling out coefficients and using the projection formula, one obtains
\[c_{*}\big(\phi_{d,a\ *}^{\mathcal{F}}([\mathcal{A}_{d,a}^{\mathcal{F}}])\cdot\lambda_{1}\big)=c_{*}\phi_{d,a\ *}^{\mathcal{F}}\left([\mathcal{A}_{d,a}^{\mathcal{F}}]\cdot\lambda_{1}\right)=\tilde{c}_{*}\left(\lambda_{1}\right)=C_{d,a}^{\mathcal{F}}(\lambda_{1})[pt.].\]
This completes the proof because the degrees of the classes are the same.
**Remark 3.5**.: Observe that the definitions and results in this section apply more generally. Any proper one dimensional family of curves \(\mathcal{F}\) determines a class \([\mathcal{C}^{\mathcal{F}}]\in A_{1}(\overline{\mathcal{M}}_{g})\), and one can define a mass formula for \(\mathcal{F}\) as the degree of the class \([\mathcal{C}^{\mathcal{F}}]\cdot[V_{g-1}]\). Then it is immediate to see that Theorem 3.4 holds. Since in this work we only explore applications for families of cyclic covers of rational curves, we chose to restrict to that case in the statements.
## 4. Evaluating the class for cyclic covers
In this section, we review results from [COS]. Let \(d\geq 2\) and \(n\geq 4\). Recall that \(a=(a_{1},\ldots,a_{n})\) is a tuple of integers with \(0\leq a_{i}<d\), whose sum is congruent to \(0\) modulo \(d\).
### Cyclic covers branched at four points
A natural class of examples is given by moduli spaces of admissible cyclic covers that are one dimensional, i.e., when the covers are branched at exactly four points. Since the base of the family is the fundamental class of the space of admissible covers, we lighten the notation by suppressing the superscript \(\mathcal{F}\).
In this situation, \(\mathcal{A}_{d,a}\) denotes the moduli space of (admissible) \(\mu_{d}\)-covers of \(\mathbb{P}^{1}\) branched at \(n=4\) points with inertia type \(a\). In this case, [COS] gives an explicit formula evaluating the degree of \(\lambda_{1}\).
**Theorem 4.1**.: _[COS, Theorem 1.2] When \(n=4\), the degree of \(\lambda_{1}\) on \(\mathcal{A}_{d,a}\) is_
\[C_{d,a}(\lambda_{1})=\frac{1}{12d^{2}}\left(d^{2}-\sum_{i=1}^{4}\gcd^{2}(a_{i },d)+\sum_{i=1}^{3}\gcd^{2}(a_{i}+a_{4},d)\right). \tag{22}\]
This theorem is proven using Atiyah-Bott localization to obtain linear relations between the degree of \(\lambda_{1}\) and the degrees of some zero-dimensional boundary strata in the spaces of admissible covers. Alternatively, one may deduce this formula from a stacky Grothendieck-Riemann-Roch computation, as done in [1, Proposition 10.20] in the hyperelliptic case.
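For concreteness, (22) together with Lemma 2.9 makes the right hand side of the Main Theorem mechanical to evaluate when \(n=4\). The following is a minimal sketch (Python), reusing the `delta` function from the sketch following Lemma 2.9; the two test cases match the values quoted in Example 6.2 and Corollary 1.3 for \(d=5\).

```python
from fractions import Fraction
from math import gcd

def C_lambda1(d, a):
    """Degree of lambda_1 on A_{d,a} for n = 4 branch points, via (22)."""
    a1, a2, a3, a4 = a
    numerator = (d**2 - sum(gcd(ai, d)**2 for ai in a)
                 + sum(gcd(ai + a4, d)**2 for ai in (a1, a2, a3)))
    return Fraction(numerator, 12 * d**2)

# Mass formula per unit (p - 1): C_{d,a}(lambda_1) / delta_{d,a}
mass_ratio = lambda d, a: C_lambda1(d, a) / delta(d, a)

print(mass_ratio(5, (1, 1, 1, 2)))   # 1/75  = (d^2 - 1)/(72 d^2) for d = 5, cf. Example 6.2
print(mass_ratio(5, (1, 1, 4, 4)))   # 3/100 = (d^2 - 1)/(32 d^2) for d = 5, cf. Corollary 1.3
```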
### Cyclic covers with more than four branch points
When the base of the family is not all of \(\mathcal{A}_{d,a}\), we use the following result from [COS] in order to evaluate the class \(\lambda_{1}\). First we define some notation.
Suppose \(J\) is a subset of \([n]=\{1,\ldots,n\}\) with \(2\leq|J|\leq n-2\). Then \(\Delta_{J}\) denotes the boundary divisor whose generic point represents a \(\mu_{d}\)-cover of a rational curve with two irreducible components, intersecting in one node, with the branch points labeled by \(J\) on one component, and the branch points labeled by \(J^{c}\) on the other. Note that \(\Delta_{J}=\Delta_{J^{c}}\) and therefore summing over all allowed subsets of \([n]\) one counts each divisor twice.
We need some other definitions. Consider the universal curve \(\pi:\mathcal{C}_{g,n}\to\overline{\mathcal{M}}_{g,n}\) and the section \(\sigma_{i}\) of \(\pi\) whose image on an \(n\)-marked curve is the \(i\)th marked point. Define \(\psi_{i}=c_{1}(\sigma_{i}^{*}(\omega_{\pi}))\), where
\(\omega_{\pi}\) is the relative dualizing sheaf. Furthermore, by identifying \(\mathcal{C}_{g,n}\) with \(\overline{\mathcal{M}}_{g,n+1}\), we can view \(\pi=\pi_{n+1}\) as the projection map that forgets the data of the last marked point. Define \(\kappa_{1}=\pi_{n+1,*}(\psi_{n+1}^{2})\).
**Theorem 4.2**.: _[_COS_, Theorem 1.3]_ _The class \(\lambda_{1}\) on \(\mathcal{A}_{d,a}\) is equivalent to the following tautological expression:_
\[\lambda_{1}=\frac{1}{24d}\left(\sum_{J\in\mathcal{P}([n])}\gcd^{2}\left(\sum_{j\in J}a_{j},d\right)\Delta_{J}\right), \tag{23}\]
_where \(J\) is a subset of \(\{1,\ldots,n\}\), and_
* _if_ \(2\leq|J|\leq n-2\)_, then_ \(\Delta_{J}\) _denotes the boundary divisor described above;_
* _if_ \(J=\{j\}\) _or_ \(J=[n]\smallsetminus\{j\}\) _for some_ \(1\leq j\leq n\)_, then_ \(\Delta_{J}:=-\psi_{j}\)_;_
* _if_ \(J=\phi\) _or_ \(J=[n]\)_, then_ \(\Delta_{J}:=\kappa_{1}\)_._
## 5. Linearized families of hyperelliptic curves of every genus
In Section 5.1, we show that our mass formula generalizes the Eichler-Deuring mass formula. Then in Section 5.3, we determine the mass formula for a one dimensional family of hyperelliptic curves of every genus \(g\geq 2\).
### The Legendre family as an example
Suppose \(d=2\), \(n=4\), and \(a=(1,1,1,1)\). Then the family (2) is the Legendre family of elliptic curves:
\[E_{t}:y^{2}=x(x-1)(x-t). \tag{24}\]
**Lemma 5.1**.: _In the case of the Legendre family (24), Theorems 3.3 and 3.4 specialize to the Eichler-Deuring mass formula (1)._
Proof.: Note that \(C_{d,a}^{\mathcal{F}}(\lambda_{1})=1/4\) by Theorem 4.1. We compute \(\delta_{2,(1,1,1,1)}=6\): one of the ramification points is the marked point on the elliptic curve and there are \(3!\) labelings of the other three ramification points. By Theorem 3.4, \(\mu(\mathcal{F},p)=(p-1)/24\).
In the Legendre family (24), \(\tau\) is the hyperelliptic involution, which commutes with every automorphism of the elliptic curve; so \(\operatorname{Aut}(E_{t},\tau)=\operatorname{Aut}(E_{t})\). Also, if the elliptic curve \(E_{t}\) is non-ordinary, then its \(a\)-number is \(\alpha_{E_{t}}=1\). By Theorem 3.3, \(\mu(\mathcal{F},p)=\sum_{[E]}\frac{1}{\#\operatorname{Aut}(E)}\), where the sum is over the isomorphism classes of non-ordinary elliptic curves \(E\) over \(k\). Equating these two expressions for \(\mu(\mathcal{F},p)\) yields (1).
There are many proofs of the Eichler-Deuring mass formula; our proof of the Main Theorem is most similar to the one in [13, Corollary 12.4.6, page 358]. Other proofs can be deduced from: the separability of the Deuring polynomial [12, Theorem 4.1, chapter 13]; a comparison of the \(\ell\)-adic etale Euler characteristic of the modular curve \(Y_{0}(p)\) in characteristic \(0\) and characteristic \(p\); or a computation of the constant term of the weight two Eisenstein series on \(\Gamma_{0}(p)\).
**Remark 5.2**.: We explain how the number of isomorphism classes of supersingular elliptic curves determines (and is determined by) (1), given some additional information about supersingular elliptic curves with extra automorphisms, following [12, Section 13.4]. This perspective is useful to review before we introduce the material in Sections 7.3 and 8.2.
If \(p=2\), the elliptic curve \(E^{\prime}:y^{2}+y=x^{3}\) is supersingular and \(\#\operatorname{Aut}(E^{\prime})=24\); there is a unique isomorphism class of supersingular elliptic curve over \(\bar{\mathbb{F}}_{2}\)[12, Example 13.3.2 and Section 3.6.2 case 2]. If \(p=3\), the elliptic curve \(E^{\prime\prime}:y^{2}=x^{3}+x\) is supersingular and \(\#\operatorname{Aut}(E^{\prime\prime})=12\)[12, Example 13.3.3 and Section 3.5.2 case 2]. These two cases are both compatible with (1).
For \(p\geq 5\), the number of isomorphism classes of supersingular elliptic curves is \(\lfloor p/12\rfloor+\epsilon_{p}\) where \(\epsilon_{p}=0,1,1,2\) when \(p\equiv 1,5,7,11\) mod \(12\) respectively. To see this, one uses the Cartier operator to find a polynomial whose roots are the values of \(t\) for which the elliptic curve \(y^{2}=x(x-1)(x-t)\)
is supersingular. This polynomial has degree \((p-1)/2\) and is separable by a result of Igusa. There are typically \(6\) values of \(t\) for which the curves are in the same isomorphism class, as determined by the \(j\)-invariant.
Suppose \(p\geq 5\). The elliptic curve \(y^{2}=x^{3}-x\) (\(j\)-invariant \(1728\)) has \(4\) automorphisms; it is supersingular if and only if \(p\equiv 3\bmod 4\). The elliptic curve \(y^{2}=x^{3}+1\) (\(j\)-invariant \(0\)) has \(6\) automorphisms; it is supersingular if and only if \(p\equiv 2\bmod 3\). These are the only elliptic curves with more than \(2\) automorphisms. So there are exactly \(\lfloor p/12\rfloor\) supersingular elliptic curves \(E\) with \(\#{\rm Aut}(E)=2\). Combining all this data yields (1).
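The degree-\((p-1)/2\) polynomial of Remark 5.2 can be written down explicitly: applying the recipe of Remark 5.5 below with \(g=1\), it is, up to sign, the coefficient of \(x^{p-1}\) in \((x(x-1)(x-t))^{e}\) with \(e=(p-1)/2\), namely the classical Deuring polynomial \(\sum_{i}\binom{e}{i}^{2}t^{i}\). A minimal sketch (assuming sympy is available) checking its degree and separability for one small, illustrative prime:

```python
from sympy import symbols, binomial, Poly, gcd

p = 13
e = (p - 1) // 2
t = symbols('t')

# Deuring polynomial: up to sign, the coefficient of x^(p-1) in (x(x-1)(x-t))^e
H_expr = sum(binomial(e, i)**2 * t**i for i in range(e + 1))
H = Poly(H_expr, t, modulus=p)
dH = Poly(H_expr.diff(t), t, modulus=p)

print(H.degree())     # (p - 1)/2 = 6
print(gcd(H, dH))     # a constant, so H is separable mod p (Igusa)
```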
### Linearized families of hyperelliptic curves
**Definition 5.3**.: Suppose \(h(x)\in k[x]\) is a separable polynomial of degree \(2g+1\). Let \(\mathcal{F}_{h(x)}\) denote the one dimensional family of hyperelliptic curves of genus \(g\) whose generic fiber is given by the affine equation
\[X_{t}:y^{2}=h(x)(x-t). \tag{25}\]
We intend the base of this family to be \(\mathbb{P}^{1}\): the fibers over the values of \(t\) coinciding with a root of \(h(x)\) are the stable hyperelliptic curves arising from the admissible cover compactification of the family.
Throughout this section, we assume that \(h(x)\) is generic in that its roots do not satisfy any \({\rm PGL}_{2}(k)\) symmetry. We also assume that the generic curve in the characteristic \(p\) fiber of \(\mathcal{F}_{h(x)}\) is ordinary. In a conversation with Achter, we heard that he and Katz have the following expectation:
* for small \(p\), it is possible that there exists \(h(x)\) such that the generic curve in \(\mathcal{F}_{h(x)}\) is not ordinary;
* for \(p\) sufficiently large, perhaps \(p>2g\), one is cautiously optimistic that the generic curve in \(\mathcal{F}_{h(x)}\) is ordinary for every separable \(h(x)\in k[x]\) of degree \(2g+1\).
### The mass formula for linearized families of hyperelliptic curves
Here is our first application of the mass formula.
**Corollary 5.4**.: _Suppose \(h(x)\in k[x]\) is a generic separable polynomial of degree \(2g+1\) as in Definition 5.3. Suppose \(p\) is a prime such that the generic curve in the characteristic \(p\) fiber of \(\mathcal{F}_{h(x)}:y^{2}=h(x)(x-t)\) is ordinary. Then the mass formula is \(\mu(\mathcal{F}_{h(x)},p)=(p-1)g/4\)._
The proof of Corollary 5.4 is in Section 5.4.
**Remark 5.5**.: We explain how the result \((p-1)g/4\) in Corollary 5.4 can be predicted, but not finalized, using earlier results about the Cartier operator.
By Theorem 3.3, \(\mu(\mathcal{F},p)=\sum_{[X]}\frac{\alpha_{X}}{\#{\rm Aut}(X,\tau)}\). Here \(\tau\) is the hyperelliptic involution, which is in the center of \({\rm Aut}(X)\). So \({\rm Aut}(X,\tau)={\rm Aut}(X)\) and any automorphism of \(X_{t}\) descends to an automorphism of \(\mathbb{P}^{1}\) that fixes the roots of \(h(x)(x-t)\). Since \(g\geq 2\), there are at least \(6\) roots and so \({\rm Aut}(X_{t})\) is trivial for the generic choice of \(t\).
Suppose \(p\geq 2g-1\). Let \(e=(p-1)/2\). Let \(H(x)=(h(x)(x-t))^{e}\), which is a polynomial in \(k[t,x]\) of degree \((2g+2)e\) in the variable \(x\) and degree \(e\) in the variable \(t\). Let \(c_{\ell}\) be the coefficient of \(x^{\ell}\) in \(H(x)\), for \(0\leq\ell\leq(2g+2)e\). The coefficient \(c_{\ell}\) is a polynomial in \(k[t]\); it has degree \(e\) in the variable \(t\) for a general choice of \(h(x)\), and for \(0\leq\ell\leq(2g+1)e\).
By [20, page 381], there is a basis for \(H^{0}(X_{t},\Omega^{1})\) such that the matrix \(M\) for the Cartier operator is the \(g\times g\) matrix whose \((i,j)\)th entry is \(c_{pi-j}\). The roots of \(\det(M)\) are the values of \(t\) such that \(X_{t}\) is non-ordinary. By the hypothesis \(p\geq 2g-1\), every entry of \(M\) is a polynomial of degree \(e\) in \(t\), and so \(\det(M)\) is a polynomial of degree \(eg\) in the variable \(t\). If \(t_{0}\) is a root of \(\det(M)\), the \(a\)-number of \(X_{t_{0}}\) is the multiplicity of the root.
For a general choice of \(h(x)\), one expects that \(X_{t_{0}}\) does not have extra automorphisms for each non-ordinary curve in the family. Thus, for \(p\) sufficiently large and \(h(x)\) sufficiently general, this
gives a rationale for the equality \(\mu(\mathcal{F}_{h(x)},p)=eg/2=(p-1)g/4\), but the details of this method seem difficult to analyze.
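The recipe sketched in Remark 5.5 is straightforward to carry out in a computer algebra system. The following minimal sketch assumes sympy is available; the prime and the polynomial \(h(x)\) are illustrative choices, not ones taken from the paper.

```python
from sympy import symbols, expand, Poly, Matrix

p = 11                        # an odd prime with p >= 2g - 1
g = 2
x, t = symbols('x t')
h = x**5 + 3*x + 1            # an illustrative separable polynomial of degree 2g + 1
e = (p - 1) // 2

H = expand((h * (x - t))**e)            # H(x) = (h(x)(x - t))^e, coefficients in Z[t]
c = lambda ell: H.coeff(x, ell)         # coefficient of x^ell, a polynomial in t

# Cartier-Manin matrix of Remark 5.5: (i, j) entry is c_{p*i - j}, 1 <= i, j <= g
M = Matrix(g, g, lambda i, j: c(p * (i + 1) - (j + 1)))
detM = Poly(expand(M.det()), t).trunc(p)   # determinant as a polynomial in t, coefficients mod p

# For a generic h the degree equals e*g = (p - 1)*g/2; the roots are the non-ordinary
# fibers, each with multiplicity equal to the a-number of that fiber.
print(detM.degree(), e * g)
```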
### Proof of mass formula for linearized hyperelliptic families
In the proof of Corollary 5.4, we use the following facts about tautological classes on admissible covers and \(\overline{M}_{0,n}\).
**Lemma 5.6**.: _Let \(g\geq 2\) and \(d=2\). Let \(a=(1,\ldots,1)\) be a tuple of length \(n=2g+2\). Consider the space of hyperelliptic admissible covers \(\mathcal{A}_{2,a}\) and the tautological morphism \(\phi_{2,a}:\mathcal{A}_{2,a}\to\overline{M}_{0,2g+2}\) of degree \(1/2\). Then:_
1. _On the universal family of hyperelliptic covers, consider a (marked) ramification point_ \(r\) _and its corresponding branch point_ \(b\)_. The class_ \(\psi_{r}\) _is restricted from_ \(\overline{M}_{g,2g+2}\)_, whereas the class_ \(\psi_{b}\) _is in_ \(A^{1}(\overline{M}_{0,2g+2})\)_. We have:_ \[\psi_{r}=\frac{1}{2}\phi_{2,a}^{*}\psi_{b}.\]
2. _Suppose_ \(i\in[n]\)_. Choose two distinct values_ \(j,k\in[n]-\{i\}\)_. Then the class_ \(\psi_{i}\in A^{1}(\overline{M}_{0,n})\) _is represented by the following linear combination of boundary divisors:_ \[\psi_{i}=\sum_{\begin{array}{c}I\subset[n]\text{ such that }i\in I\\ j,k\not\in I\end{array}}\Delta_{I}.\]
3. _The following relation holds in_ \(A^{1}(\overline{M}_{0,n})\)_:_ \[\kappa_{1}=\sum_{1\leq i\leq n}\psi_{i}-\Delta_{tot},\] _where_ \(\Delta_{tot}=\frac{1}{2}\sum_{2\leq|J|\leq n-2}\Delta_{J}\) _denotes the sum of all boundary divisors._
Proof.: The first statement is the specialization to the degree \(2\) setting of [10, Lemma 1.17]. The second statement follows from the initial condition \(\psi_{i}=0\) on \(\overline{M}_{0,\{i,j,k\}}\) and the relation for \(\psi\) classes when pulled back via forgetful morphisms [10, Lemma 1.3.1]. The third statement follows from the initial condition \(\kappa_{1}=0\) on \(\overline{M}_{0,3}\) and the relation for \(\kappa\) classes when pulled back via forgetful morphisms [10, Lemma 2.2.3]. A published reference for the last two statements is [1, Formulas (1.9), (1.10)], but it is harder to recover the exact form we need from the more general treatment there.
Proof of Corollary 5.4.: Write \(\mathcal{F}=\mathcal{F}_{h(x)}\). By Theorem 3.4, \(\mu(\mathcal{F},p)=(p-1)C_{d,a}^{\mathcal{F}}(\lambda_{1})/\delta_{d,a}^{ \mathcal{F}}\). Since the polynomial \(h(x)\) is generic, there is no automorphism of \(\mathbb{P}^{1}\) that preserves the set of roots of \(h(x)\); it follows that \(\delta_{d,a}^{\mathcal{F}}=1\). It thus suffices to prove that \(C_{d,a}^{\mathcal{F}}(\lambda_{1})=g/4\).
We compute the intersection numbers of the family \(\mathcal{F}\) with the tautological divisors appearing on the right hand side of (23). Let \(b_{i}\) for \(1\leq i\leq 2g+1\) denote the roots of \(h(x)\). Then:
1. If \(2\leq|J|\leq 2g\), then \(\mathcal{F}\cdot\Delta_{J}=0\) unless \(J\) or \(J^{c}\) is \(\{i,2g+2\}\) for some \(1\leq i\leq 2g+1\). The reason is that the only branch point that moves in the linearized hyperelliptic family is the last one. So the only time this family hits a boundary divisor is when the last branch point specializes to one of the others.
2. If \(J=\{i,2g+2\}\) for some \(1\leq i\leq 2g+1\), then \(\mathcal{F}\cdot\Delta_{J}=\frac{1}{2}[pt]\). This intersection occurs when \(t\) specializes to the branch point \(b_{i}\). The singular curve has two components, with genera \(0\) and \(g-1\), intersecting in two ordinary double points; on each component, these two points are an orbit for the restriction of the hyperelliptic involution. The coefficient \(1/2=2/(2\cdot 2)\) accounts for the hyperelliptic involution on each component, and the two ways to identify the two points on one component with the two points on the other.
3. If \(i\neq 2g+2\), then \(\mathcal{F}\cdot\psi_{i}=\frac{1}{2}[pt]\). To see this, we first use Lemma 5.6, (1) to work in \(A^{1}(\overline{M}_{0,2g+2})\). We use Lemma 5.6(2) to replace \(\psi_{i}\) by a linear combination of boundary divisors. We choose \(j=2g+2\) and choose \(k\) arbitrarily. Then the only boundary divisor \(\Delta_{I}\) intersecting \(\mathcal{F}\) is when \(I=[n]-\{k,2g+2\}\). This is because the branch points labeled by \(2g+2\) and \(k\) must be on the second component. Since the branch point labeled with \(2g+2\) is \(t\) and \(t\) can only specialize to one other branch point, all the other branch points must be on the first component. The result then follows from part (2).
4. \(\mathcal{F}\cdot\psi_{2g+2}=\frac{2g-1}{2}[pt]\). The proof begins in the same way as for part (3). In this case, we choose \(j=1\) and \(k=2\). If a boundary divisor \(\Delta_{I}\) in the expression of \(\psi_{2g+2}\) intersects \(\mathcal{F}\), then the marked branch point \(t\) must be on a component alone with only one other marked branch point \(b_{i}\), which must be different from \(b_{1}\) and \(b_{2}\). There are \(2g-1\) possible choices for \(i\in\{3,\ldots 2g+1\}\) and therefore \(2g-1\) divisors \(\Delta_{I}\) intersecting \(\mathcal{F}\), with \(I=\{i,2g+2\}\). The result then follows from part (2).
5. \(\mathcal{F}\cdot\kappa_{1}=\frac{2g-1}{2}[pt]\). To see this, note that \(\kappa_{1}=\psi_{2g+2}+\sum_{i=1}^{2g+1}\psi_{i}-\Delta_{tot}\) by Lemma 5.6(3). By part (3), \(\mathcal{F}\cdot\sum_{i=1}^{2g+1}\psi_{i}=\frac{2g+1}{2}[pt]\). By part (1), \(\mathcal{F}\cdot\Delta_{tot}=\sum_{1\leq i\leq 2g+1}\mathcal{F}\cdot \Delta_{\{i,2g+2\}}\), which equals \(\frac{2g+1}{2}[pt]\) by part (2). By part (4), \(\mathcal{F}\cdot\psi_{2g+2}=\frac{2g-1}{2}[pt]\).
Finally, we substitute these intersection numbers in (23). The coefficient \(\gcd^{2}(\sum_{i\in J}a_{i},d)\) equals \(1\) if \(|J|\) is odd and equals \(4\) if \(|J|\) is even. For \(1\leq s\leq 2g+2\), let \(C_{s}=\sum_{|J|=s}\mathcal{F}\cdot\Delta_{J}\). By definition, \(C_{s}=C_{2g+2-s}\). By part (1), \(C_{s}=0\) for \(3\leq s\leq 2g-1\). So
\[\int_{\mathcal{F}}\lambda_{1}=\frac{1}{24}\cdot\frac{2}{2}(4C_{2}+C_{1}+4C_{0 }).\]
By part (1), \(C_{2}=\frac{2g-1}{2}\). By parts (3)-(4), \(C_{1}=(-1)\frac{(2g-1)+(2g+1)}{2}=-2g\). By part (5), \(C_{0}=\frac{2g+1}{2}\). This yields that \(\int_{\mathcal{F}}\lambda_{1}=g/4\).
## 6. Covers branched at four points
As our second application, we compute the mass formula for any family of cyclic covers of \(\mathbb{P}^{1}\) branched at four points.
**Corollary 6.1**.: _Let \(d\geq 2\) and \(n=4\). Let \(a=(a_{1},\ldots,a_{4})\) be an inertia type for \(d\). Let \(\mathcal{F}=\mathcal{A}_{d,a}\) be the one dimensional family of cyclic \(\mu_{d}\)-covers of a rational curve with inertia type \(a\). Let \(p\) be a prime such that the generic curve in the characteristic \(p\) fiber \(\mathcal{F}_{p}\) is ordinary. Then_
\[\mu(\mathcal{F}_{d,a,p})=(p-1)C_{d,a}(\lambda_{1})/\delta_{d,a}, \tag{26}\]
_where the formula for \(C_{d,a}(\lambda_{1})\) is in (22) and the formula for \(\delta_{d,a}\) is in Lemma 2.9._
Proof.: Immediate from Theorems 3.4 and 4.1.
In the following sections, we illustrate Corollary 6.1 for some well-chosen examples.
### An example
We consider the family \(X_{t}:y^{d}=x(x-1)(x-t)\), which has inertia type \(a=(1,1,1,d-3)\).
**Example 6.2**.: Suppose \(d\geq 5\) with \(\gcd(d,6)=1\) and \(a=(1,1,1,d-3)\). Suppose \(p\equiv 1\bmod d\). Then the mass formula for the number of isomorphism classes of non-ordinary curves in the family \(X_{t}:y^{d}=x(x-1)(x-t)\) is \(\mu(d,a,p)=(p-1)(d^{2}-1)/(72\cdot d^{2})\).
Proof.: The condition \(p\equiv 1\bmod d\) implies that the source curve of the generic point of \(\mathcal{A}_{d,a}\) is ordinary. The hypotheses on \(d\) and \(a\) imply that \(C_{d,a}(\lambda_{1})=(d^{2}-1)/12d^{2}\) by Theorem 4.1. By Lemma 2.9, \(\delta_{d,a}=6\). Thus the result follows from Corollary 6.1.
From Example 6.2, we obtain some information about the number of isomorphism classes of non-ordinary curves of the form \(X_{t}:y^{d}=x(x-1)(x-t)\). Note that \(\#\mathrm{Aut}(X_{t},\tau)=d\) for a generic choice of \(t\). This is because any \(\sigma\in\mathrm{Aut}(X_{t},\tau)\) descends to an automorphism of \(\mathbb{P}^{1}\) that fixes \(\infty\) and stabilizes \(\{0,1,t\}\).
If \(X_{t}\) is non-ordinary, then \(1\leq\alpha_{X}\leq d-1\). This shows that the number of isomorphism classes of non-ordinary curves in the family grows like \(n_{d,a}(p-1)\) where \(\frac{d+1}{72\cdot d}\leq n_{d,a}\leq\frac{d^{2}-1}{72\cdot d}\). More information about the \(a\)-numbers of non-ordinary curves in the family would give sharper bounds on \(n_{d,a}\).
### Special families
In this section, we describe some families of cyclic covers of \(\mathbb{P}^{1}\) branched at four points for which we have more information about the \(a\)-numbers of the non-ordinary curves in the family. As a result, we find the rate of growth of the number of non-ordinary curves in these families as a linear function of \(p\). There are \(10\) families in Corollary 6.4 for which this result is new, specifically those with \(g\geq 3\).
The family \(\mathcal{M}_{d,a}\) is called _special_ if the image of the family under the Torelli morphism is open and dense in a component of the associated Deligne-Mostow Shimura variety. Moonen proved that there are exactly \(20\) families of \(\mu_{d}\)-covers of \(\mathbb{P}^{1}\) that are special (up to equivalence); we label these as \(M[r]\) for \(1\leq r\leq 20\). Of these, \(14\) are one dimensional.
For a one dimensional Deligne-Mostow Shimura variety, there are exactly two Newton polygons that can occur; the generic one is called \(\mu\)_-ordinary_ and the other is called _basic_. Here is the key feature we use about each of the \(14\) one dimensional special Moonen families: there is only one option for the \(a\)-number at the basic points of the family (depending on the congruence of \(p\) modulo \(d\)). We call this \(a\)-number \(\alpha_{\nu}\). In particular, if \(p\equiv 1\bmod d\), and if \(X\) is a non-ordinary curve in a one dimensional special Moonen family, then \(\alpha_{X}=\alpha_{\nu}\) is known.
**Notation 6.3**.: Suppose \(p\equiv 1\bmod d\). For each Moonen family \(M[r]\) that is one dimensional:
let \(\delta_{d,a}=\deg(\mathcal{A}_{d,a}\to\mathcal{M}_{d,a})\);
let \(\zeta_{d,a}\) be the size of \(\mathrm{Aut}(X,\tau)\) for a generic curve \(X\) in the family;
let \(\alpha_{\nu}\) be the \(a\)-number of the basic Ekedahl-Oort type on \(\mathcal{M}_{d,a}\); and
let \(C=C_{d,a}(\lambda_{1})\) be as in (22).
**Corollary 6.4**.: _For the one dimensional special Moonen families \(\mathcal{F}\): the table below includes the data of \(C=C_{d,a}(\lambda_{1})\) and \(\delta_{d,a}\). The mass formula is \(\mu(\mathcal{F},p)=(p-1)C/\delta_{d,a}\)._
_The table below also includes the data of \(\zeta_{d,a}\) and \(\alpha_{\nu}\). For \(p\equiv 1\bmod d\), the number of isomorphism classes of non-ordinary curves in \(\mathcal{F}\) has linear rate of growth \((p-1)n_{d,a}\), where \(n_{d,a}=\zeta_{d,a}C/\alpha_{\nu}\delta_{d,a}\) is given below._
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline _Label_ & \(d\) & \(a\) & \(g\) & \(C\) & \(\delta_{d,a}\) & \(\zeta_{d,a}\) & \(\alpha_{\nu}\) & \(n_{d,a}\) \\ \hline \(M[1]\) & \(2\) & \((1,1,1,1)\) & \(1\) & \(1/4\) & \(6\) & \(2\) & \(1\) & \(1/12\) \\ \hline \(M[3]\) & \(3\) & \((1,1,2,2)\) & \(2\) & \(2/9\) & \(8\) & \(12\) & \(2\) & \(1/6\) \\ \hline \(M[4]\) & \(4\) & \((1,2,2,3)\) & \(2\) & \(1/8\) & \(4\) & \(8\) & \(2\) & \(1/8\) \\ \hline \(M[5]\) & \(6\) & \((2,3,3,4)\) & \(2\) & \(1/9\) & \(4\) & \(12\) & \(2\) & \(1/6\) \\ \hline \(M[7]\) & \(4\) & \((1,1,1,1)\) & \(3\) & \(1/8\) & \(24\) & \(16\) & \(1\) & \(1/12\) \\ \hline \(M[9]\) & \(6\) & \((1,3,4,4)\) & \(3\) & \(1/18\) & \(2\) & \(6\) & \(2\) & \(1/12\) \\ \hline \(M[11]\) & \(5\) & \((1,3,3,3)\) & \(4\) & \(2/25\) & \(6\) & \(5\) & \(2\) & \(1/30\) \\ \hline \(M[12]\) & \(6\) & \((1,1,1,3)\) & \(4\) & \(1/12\) & \(6\) & \(6\) & \(1\) & \(1/12\) \\ \hline \(M[13]\) & \(6\) & \((1,1,2,2)\) & \(4\) & \(1/9\) & \(4\) & \(12\) & \(2\) & \(1/6\) \\ \hline \(M[15]\) & \(8\) & \((2,4,5,5)\) & \(5\) & \(1/16\) & \(2\) & \(8\) & \(2\) & \(1/8\) \\ \hline \(M[17]\) & \(7\) & \((2,4,4,4)\) & \(6\) & \(4/49\) & \(6\) & \(7\) & \(2\) & \(1/21\) \\ \hline \(M[18]\) & \(10\) & \((3,5,6,6)\) & \(6\) & \(3/50\) & \(2\) & \(10\) & \(2\) & \(3/10\) \\ \hline \(M[19]\) & \(9\) & \((3,5,5,5)\) & \(7\) & \(2/27\) & \(6\) & \(9\) & \(2\) & \(1/18\) \\ \hline \(M[20]\) & \(12\) & \((4,6,7,7)\) & \(7\) & \(1/18\) & \(2\) & \(12\) & \(2\) & \(1/6\) \\ \hline \end{tabular}
**Remark 6.5**.:
1. The family \(M[1]\) is the Legendre family. The family \(M[3]\) (resp. \(M[4]\)) is studied in detail in Section 7.3 (resp. Section 8.2) with no congruence condition. The families \(M[5]\) and \(M[3]\) have the same image in \(\mathcal{M}_{2}\).
2. The computation of \(n_{d,a}\) is new for the remaining \(10\) families.
3. Corollary 1.2 is immediate for \(d=5\) from the data for \(M[11]\). This is because the inertia type \((1,3,3,3)\) for \(d=5\) is equivalent to the inertia type \((1,1,1,2)\), which gives the equation \(y^{5}=x(x-1)(x-t)\), after changing the \(5\)th root of unity and relabeling the branch points. Similarly, Corollary 1.2 is immediate for \(d=7\) from the data for \(M[17]\).
4. Note that \((p-1)n_{d,a}\) is not always an integer; this is due to the fact that there are a few exceptional points in the family such that the curve has a larger automorphism group.
Proof.: The value of \(C\) can be calculated from Theorem 4.1. The value of \(\delta_{d,a}\) can be calculated from Lemma 2.9.
Every automorphism in \(\operatorname{Aut}(X,\tau)\) descends to an automorphism of \(\mathbb{P}^{1}\) that stabilizes \(\{0,1,\infty,t\}\) and is compatible with the inertia type \(a\). We calculate the value of \(\zeta_{d,a}\) for \(M[3],M[4],M[5]\) in Sections 7.3 and 8.2. For the other families, one can compute that \(\operatorname{Aut}(X,\tau)=\langle\tau\rangle\) for a generic value of \(t\), except for \(M[7]\) and \(M[13]\).
The Newton polygons for the Moonen families are listed in [19]. From this, we determine the value of \(\alpha_{\nu}\). By Remark 2.6, if \(p\equiv 1\bmod d\), then the generic point of \(\mathcal{M}_{d,a,p}\) represents an ordinary curve, so the hypotheses of Theorems 3.3 and 3.4 are satisfied and the result follows.
## 7. A family of hyperelliptic curves with dihedral action
In this section, we provide an example for every even genus. In Section 7.3, we show that this material generalizes results when \(g=2\) from [11] and [12].
Throughout the section, suppose \(d\geq 3\) is odd and \(a=(1,1,d-1,d-1)\). Suppose \(p\) is an odd prime with \(p\nmid d\). For \(t\in k-\{0,1\}\), we consider the family of curves
\[X_{t}:y^{d}=x(x-1)(x-t)^{d-1}. \tag{27}\]
The genus of \(X_{t}\) is \(d-1\).
### The mass formula for \(a=(1,1,d-1,d-1)\)
**Corollary 7.1**.: _Suppose \(d\geq 3\) is odd, \(a=(1,1,d-1,d-1)\), and \(p\nmid 2d\). The mass formula for the non-ordinary curves in the family \(\mathcal{F}\) given by \(X_{t}:y^{d}=x(x-1)(x-t)^{d-1}\) is_
\[\mu(\mathcal{F},p)=(p-1)(d^{2}-1)/2^{5}d^{2}.\]
Proof.: The signature of the \(\mu_{d}\)-cover is given by the dimensions \(f_{n}=1\) if \(1\leq n\leq d-1\). To see this, we compute
\[f_{n}=-1+2\langle\frac{-n}{d}\rangle+2\langle\frac{-n(d-1)}{d}\rangle=-1+2 \frac{d-n}{d}+2\frac{n}{d}=1. \tag{28}\]
By Proposition 2.5, the curve \(X_{t}\) is ordinary for the generic value of \(t\). By Lemma 2.9, \(\delta_{d,a}=8\).
By Theorem 4.1, \(C_{d,a}(\lambda_{1})=(d^{2}-4+2d^{2}+1)/12d^{2}=(d^{2}-1)/4d^{2}\). The result then follows from Theorem 3.4.
In order to use Corollary 7.1 to estimate the number of non-ordinary curves in the family (27), we need more information about the automorphism groups and \(a\)-numbers.
### The automorphism group and \(a\)-number
**Lemma 7.2**.: _Let \(d\geq 3\) be odd. Let \(X_{t}:y^{d}=x(x-1)(x-t)^{d-1}\)._
1. _[_13_, Lemma 2.1]_ _Then_ \(X_{t}\) _is hyperelliptic. In particular,_ \(X_{t}\) _is birationally equivalent to the curve_ (29) \[Y^{2}=W^{2d}+(2-4t)W^{d}+1,\] _with the hyperelliptic involution_ \(\iota\) _given by_ \(\iota(Y,W)=(-Y,W)\)_._
2. _The curves_ \(X_{t_{1}}\) _and_ \(X_{t_{2}}\) _are isomorphic if and only if either_ \(t_{2}=t_{1}\) _or_ \(t_{2}=1-t_{1}\)_._
3. _If_ \(t\neq 1/2\)_, then_ \(\operatorname{Aut}(X_{t},\tau)\simeq C_{2}\times D_{d}\)_, where_ \(D_{d}\) _is the dihedral group of order_ \(2d\)_._
Proof.:
1. We include the proof for the convenience of the reader. Write (i) \(Z=W^{d}+1-2t\). One can check that (29) is true if and only if (ii) \((Y+Z)(Y-Z)=4t(1-t)\). Let \(x=(Y+W^{d}+1)/2\) and note that \(x-t=(Y+Z)/2\). Then (ii) implies that \((Y-Z)/2=t(1-t)/(x-t)\). Write (iii) \(Z=(Y+Z)/2-(Y-Z)/2=(x-t)+t(t-1)/(x-t)\). Substituting (iii) for \(Z\) in (i) and multiplying by \((x-t)^{d}\) yields the equation \[(x-t)^{d}W^{d}+(x-t)^{d}(1-2t)=(x-t)^{d-1}((x-t)^{2}+t(t-1)).\] Let \(y=(x-t)W\), then the equation simplifies to \(y^{d}=(x-t)^{d-1}x(x-1)\).
2. By part (1), \(X_{t_{1}}\) and \(X_{t_{2}}\) are isomorphic if and only if the hyperelliptic curves \(Y^{2}=f_{1}(W)\) and \(Y^{2}=f_{2}(W)\) are isomorphic, where \(f_{i}(X)=W^{2d}+(2-4t_{i})W^{d}+1\). An isomorphism between hyperelliptic curves commutes with the hyperelliptic involution, and thus descends to a fractional linear transformation \(\gamma\). Without loss of generality, we can suppose \(\gamma\) is an affine linear transformation, because the map \(W\mapsto 1/W\) preserves the roots of \(f_{1}(W)\). Thus \(X_{t_{1}}\simeq X_{t_{2}}\) if and only if there exists a map \(\gamma(W)=aW+b\) such that \(f_{1}(W)=f_{2}(\gamma(W))/a^{2d}\). This is only possible if \(b=0\), \(a^{2d}=1\) and \((2-4t_{2})/a^{d}=2-4t_{1}\). If \(f_{1}(W)\neq f_{2}(W)\), this implies that \(2-4t_{1}=-(2-4t_{2})\) and so \(t_{2}=1-t_{1}\). Conversely, if \(t_{2}=1-t_{1}\), then the map \(\gamma(W)=-W\) provides the isomorphism.
3. By part (1), \(X_{t}\) is isomorphic to \(Y^{2}=f(W)\) where \(f(W)=W^{2d}+(2-4t)W^{d}+1\). Note that \(\langle\iota\rangle\simeq C_{2}\) and \(\iota\) is in the center of \(\operatorname{Aut}(X_{t})\). The order \(d\) automorphism \(\tau(x,y)=(x,\zeta_{d}y)\) acts on the hyperelliptic model via \(\tau(Y,W)=(Y,\zeta_{d}W)\). The automorphism \(\gamma(Y,W)=(Y/W^{d},1/W)\) has order \(2\). A short computation shows that \(\gamma\tau\gamma^{-1}=\tau^{-1}\). Thus \(\operatorname{Aut}(X_{t},\tau)\) contains a subgroup isomorphic to \(C_{2}\times D_{d}\). Arguments similar to those in part (2) show that these are the only automorphisms that normalize \(\langle\tau\rangle\) unless \(t=1/2\).
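The change of variables in part (1) is easy to check symbolically. The following SymPy sketch, for one hypothetical small odd value of \(d\), substitutes \(Y=\sqrt{W^{2d}+(2-4t)W^{d}+1}\), sets \(x=(Y+W^{d}+1)/2\) and \(y=(x-t)W\) as in the proof, and confirms that \(y^{d}-x(x-1)(x-t)^{d-1}\) vanishes identically.

```python
from sympy import symbols, sqrt, expand, simplify

d = 5                                 # any odd d >= 3; a small hypothetical value keeps the check fast
W, t = symbols('W t')

rhs = W**(2*d) + (2 - 4*t)*W**d + 1   # right-hand side of the hyperelliptic model (29)
Y = sqrt(rhs)                         # a branch of Y on the model Y^2 = rhs

x = (Y + W**d + 1) / 2                # substitution from the proof of Lemma 7.2(1)
y = (x - t) * W

residue = expand(y**d - x*(x - 1)*(x - t)**(d - 1))
print(simplify(residue))              # expected output: 0
```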
We do not have complete information about the \(a\)-number of each non-ordinary curve \(X_{t}\) in the family, but we can show that it is even. We thank Everett Howe and Jen Paulhus for conversations about this.
We follow [14, Section 4] to define two curves \(Z_{+1,t}\) and \(Z_{-1,t}\) of genus \((d-1)/2\) that are quotient curves of \(X_{t}\). For a positive integer \(n\), let \(P_{n}(S)\in\mathbf{Z}[S]\) be the polynomial such that \(P_{n}(S+S^{-1})=S^{n}+S^{-n}\). Then \(P_{0}(S)=2\), \(P_{1}(S)=S\), \(P_{2}(S)=S^{2}-2\). These satisfy the recurrence relation \(P_{n+2}(S)=S\cdot P_{n+1}(S)-P_{n}(S)\). One can check that \(P_{n}(S)\) is an odd (resp. even) function when \(n\) is odd (resp. even).
For \(\epsilon=1,-1\), let \(Z_{\epsilon,t}\) be the hyperelliptic curve \(v^{2}=(u+2\epsilon)P_{d}(u)+(2-4t)\); it has genus \((d-1)/2\).
**Lemma 7.3**.: _With \(d\) and \(p\) as above, consider \(X_{t}:y^{d}=x(x-1)(x-t)^{d-1}\) over \(k=\bar{\mathbf{F}}_{p}\)._
1. _[_13_, Theorem 4.2]_ _There is an isomorphism_ \(\operatorname{Jac}(X_{t})\simeq\operatorname{Jac}(Z_{+1,t})\times\operatorname{ Jac}(Z_{-1,t})\) _of abelian varieties without polarization._
2. _The curves_ \(Z_{+1,t}\) _and_ \(Z_{-1,t}\) _are isomorphic._
3. _In particular, the_ \(a\)_-number of_ \(X_{t}\) _is even._
Proof.:
1. This is proved in [11, Theorem 4.2] over \(\mathbb{C}\). The proof still holds when working over an algebraically closed field of characteristic \(p\) when \(p\nmid 2d\).
2. The curve \(Z_{\epsilon,t}\) has equation \(v^{2}=m_{\epsilon}(u)\) where \(m_{\epsilon}(u)=(u\cdot P_{d}(u)+2-4t)+2\epsilon P_{d}(u)\). Since \(u\cdot P_{d}(u)\) is even and \(P_{d}(u)\) is odd, it follows that \(m_{-1}(u)=m_{+1}(-u)\). So the curves \(Z_{+1,t}\) and \(Z_{-1,t}\) are isomorphic by the change of variables \(u\mapsto-u\).
3. This is immediate from part (2) since the \(a\)-numbers of \(Z_{+1,t}\) and \(Z_{-1,t}\) are the same and the \(a\)-number is an additive invariant.
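The parity argument in part (2) can also be checked with a few lines of SymPy, computing \(P_{d}\) from its recurrence and comparing \(m_{+1}(-u)\) with \(m_{-1}(u)\); the value of \(d\) below is a hypothetical example.

```python
from sympy import symbols, expand

u, t = symbols('u t')

def P(n):
    """P_n defined by P_0 = 2, P_1 = S, P_{n+2} = S*P_{n+1} - P_n (with S = u here)."""
    a, b = 2, u
    for _ in range(n):
        a, b = b, expand(u*b - a)
    return a

d = 7                                   # any odd d; small hypothetical value

def m(eps):
    """Right-hand side of the quotient curve Z_{eps,t} : v^2 = (u + 2*eps)P_d(u) + (2 - 4t)."""
    return expand((u + 2*eps)*P(d) + (2 - 4*t))

print(expand(m(-1) - m(+1).subs(u, -u)))   # expected output: 0
```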
**Corollary 7.4**.: _The rate of growth of the number of non-ordinary curves in the family \(X_{t}:y^{d}=x(x-1)(x-t)^{d-1}\) over \(k=\bar{\mathbf{F}}_{p}\) is \(n_{d,a}(p-1)\) where \((d+1)/2^{3}d\leq n_{d,a}\leq(d^{2}-1)/2^{4}d\)._
Proof.: The generic size of \(\operatorname{Aut}(X_{t},\tau)\) is \(4d\) from Lemma 7.2. If \(X_{t}\) is non-ordinary, then \(2\leq\alpha_{X}\leq d-1\) by Lemma 7.3. The result follows from the mass formula in Corollary 7.1.
**Remark 7.5**.: We do not have much information about the \(a\)-number \(\alpha_{X}\), other than that it is even. There is a basis for \(H^{0}(X,\Omega^{1})\) for which the matrix for \(V\) has exactly one non-zero entry in each row and column. Each entry is a polynomial \(H_{n,p}(t)\) in \(t\) which is closely related to a hypergeometric function \(F(\frac{n}{d},\frac{d-n}{d},1;t)\). This yields \((d-1)/2\) polynomials, each repeated twice.
Using Igusa's approach, one can show that each polynomial \(H_{n,p}(t)\) is separable. (See [11, Lemma 2.3] for the details when \(p\equiv\pm 1\bmod d\).) However, it is not clear if there are repeated roots among the \((d-1)/2\) different polynomials. This appears to be an open problem about hypergeometric functions.
### Comparison of genus \(2\) case with earlier work
We show that the \(d=3\) case of Corollary 7.1 is compatible with earlier work of [10] and [11].
Let \(p\geq 5\), \(d=3\) and \(a=(1,1,2,2)\). Consider the family of genus \(2\) curves
\[X_{t}:y^{3}=x(x-1)(x-t)^{2}. \tag{30}\]
By Corollary 7.1, the mass formula for this family is \(\mu(\mathcal{F},p)=(p-1)/(4\cdot 9)\).
This mass formula is not stated in [10] and [11], although almost all the information needed to do so is contained in those papers. We describe the approach to proving this case found in [10, Proposition 3.2, Theorem 3.3]; see also [11, Theorems 2.6, 2.7].
This family of curves \(X_{t}\) of genus \(g=2\) is characterized by having \(S_{3}\subset\operatorname{Aut}(X_{t})\). In [10, Section 1.3], the authors parametrize this family in a different way: for \(\alpha\neq 0,1\),
\[C_{\alpha}:y_{1}^{2}=(x_{1}^{3}-1)(x_{1}^{3}-\alpha). \tag{31}\]
By [10, Proposition 1.10], if \(C_{\alpha}\) is not ordinary, then it is superspecial. On this particular family \(C_{\alpha}\) (or \(X_{t}\)), the following properties are all equivalent (being superspecial, being supersingular, having \(p\)-rank \(0\), and being non-ordinary); this is a very unusual situation.
**Proposition 7.6**.: _[_10_, Proposition 3.2]_ _Let \(p\geq 5\). The number of isomorphism classes of supersingular curves in the family (31) is_
\[N(3,(1,1,2,2),p)=\begin{cases}(p-1)/6&\text{if }p\equiv 1\bmod 6\\ (p+1)/6&\text{if }p\equiv 5\bmod 6.\end{cases}\]
Proof.: We briefly sketch the proof. Let \(w=\lfloor(p-1)/3\rfloor\), so \(w=(p-1)/3\) when \(p\equiv 1\bmod 3\) and \(w=(p-2)/3\) when \(p\equiv 2\bmod 3\). Consider the polynomial
\[G(z)=\sum_{j=0}^{w}\binom{(p-1)/2}{\lfloor(p+1)/6\rfloor+j}\binom{(p-1)/2}{j} z^{j}.\]
The authors prove that the Cartier-Manin matrix of \(C_{\alpha}\) is a scaling of a diagonal or anti-diagonal \(2\times 2\) matrix by the constant \(G(\alpha)\). This shows that \(C_{\alpha}\) is not ordinary (in fact, superspecial) if and only if \(\alpha\) is a root of \(G(z)\), [2, Proposition 1.8]. Using Igusa's strategy, they prove that the roots of \(G(z)\) are distinct, and \(\alpha=0\) and \(\alpha=1\) are not roots [2, Proposition 1.14]. The number of values of \(\alpha\) such that \(C_{\alpha}\) is supersingular is thus \(\deg(G(z))=w\).
The value \(\alpha=-1\) is handled separately because \(C_{-1}\) has extra automorphisms. When \(\alpha=-1\), then \(C_{-1}\) is supersingular if and only if \(p\equiv 5\bmod 6\), [2, Proposition 1.11]. By [2, Lemma 1.5], \(C_{\alpha_{1}}\simeq C_{\alpha_{2}}\) if and only if either \(\alpha_{2}=\alpha_{1}\) or \(\alpha_{2}=1/\alpha_{1}\). This divides the count by \(2\) when \(p\equiv 1\bmod 6\), or when \(p\equiv 5\bmod 6\) and \(\alpha\neq-1\).
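For a concrete prime, the count in Proposition 7.6 can be reproduced directly from \(G(z)\). The short sketch below, with a hypothetical choice of \(p\), builds \(G(z)\) modulo \(p\), confirms that it is separable of degree \(w\) (so the non-ordinary values of \(\alpha\) in \(\bar{\mathbf{F}}_{p}\) are exactly its \(w\) roots), and prints the resulting number of isomorphism classes.

```python
from math import comb
from sympy import symbols, Poly

p = 13                                  # hypothetical prime p >= 5
w = (p - 1) // 3                        # equals floor((p - 1)/3)
half = (p - 1) // 2
shift = (p + 1) // 6                    # equals floor((p + 1)/6)

z = symbols('z')
G = Poly(sum(comb(half, shift + j) * comb(half, j) * z**j for j in range(w + 1)),
         z, modulus=p)

# G is separable (gcd with its derivative is constant), so it has exactly w distinct roots
print(G.degree(), G.gcd(G.diff(z)).degree())    # expect: w and 0

classes = (p - 1) // 6 if p % 6 == 1 else (p + 1) // 6
print(classes)                                  # isomorphism classes of supersingular curves
```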
There is a small error in [2, Theorem 3.3], because the family (31) does not specialize to [2, case (5)], which consists of curves whose reduced automorphism group is \(S_{4}\). Once this is corrected, their work yields the same mass formula \((p-1)/(4\cdot 9)\), as we explain below.
Let \(\epsilon_{3}=1-\left(\frac{-3}{p}\right)\), which equals \(0\) if \(p\equiv 1\bmod 6\) and equals \(2\) if \(p\equiv 5\bmod 6\).
**Proposition 7.7**.: _Let \(p\geq 5\). The information in [2] yields the mass formula \((p-1)/(4\cdot 9)\) for the non-ordinary curves in the family (31)._
Proof.: Each curve \(C_{\alpha}\) in the family (31) is hyperelliptic. Also, if \(C_{\alpha}\) is non-ordinary, then its Cartier-Manin matrix is the zero matrix. This implies that the multiplicity \(m_{C_{\alpha}}\) is two. For this reason, the mass formula simplifies as \(\sum_{[C]}\frac{1}{\#\mathrm{redAut}(C,\tau)}\), where the sum is over the isomorphism classes of non-ordinary curves \(C\) in the family (31).
For the family (31): let \(R_{n}\) be the number of isomorphism classes of non-ordinary curves \(C_{\alpha}\) such that \(\#\mathrm{redAut}(C_{\alpha},\tau)=n\); let \(R_{\infty}\) be the number of isomorphism classes of non-ordinary singular curves in (the closure of) the family in \(\overline{\mathcal{M}}_{2}\).
The boundary of the family (31) contains a singular curve \(S\) composed of the join of two elliptic curves, each having equation \(y^{2}=x^{3}-1\). The curve \(S\) is supersingular if and only if \(p\equiv 5\bmod 6\). So \(R_{\infty}=\epsilon_{3}/2\). Also \(\#\mathrm{redAut}(S)=6^{2}\), because each elliptic curve has \(6\) automorphisms and there is an automorphism transposing the two elliptic curves.
Suppose \(p=5\). Then there is a unique non-ordinary curve in the family by [2, Theorem 3.3(II)]; an alternative equation for this curve is \(y^{2}=x^{5}-x\). It has reduced automorphism group \(\mathrm{PGL}_{2}(5)\simeq S_{5}\) of order \(120\). The normalizer of a \(3\)-cycle in \(S_{5}\) has order \(12\). So \(\#\mathrm{redAut}(X,\tau)=12\). This yields the mass formula \((1/12)+(1/36)\), which equals \((p-1)/(4\cdot 9)\).
Suppose \(p\geq 7\). By [15], the only possibilities for \(\mathrm{redAut}(C_{\alpha})\) are \(S_{3}\) or \(D_{6}\). The case \(S_{4}\) does not occur - this is a typo in [2]. Every subgroup of order \(3\) is normal in \(S_{3}\) and in \(D_{6}\), so \(\mathrm{redAut}(C_{\alpha},\tau)=\mathrm{redAut}(C_{\alpha})\). This shows that \(R_{n}=0\) unless \(n=6\) or \(n=12\).
When \(\alpha=-1\), then \(C_{\alpha}\) specializes to [2, Case (4)]; it has reduced automorphism group \(D_{6}\) and is supersingular if and only if \(p\equiv 5\bmod 6\). So \(R_{12}=\epsilon_{3}/2\).
By Proposition 7.6, the total number of supersingular curves \(C_{\alpha}\) with \(\mathrm{redAut}(C_{\alpha})\simeq S_{3}\) is \(R_{6}=(p-1)/6-\epsilon_{3}/3\). This equals \((p-1)/6\) if \(p\equiv 1\bmod 6\) and equals \((p-5)/6\) if \(p\equiv 5\bmod 6\).
In conclusion, this yields the mass formula:
\[\frac{R_{6}}{6}+\frac{R_{12}}{12}+\frac{R_{\infty}}{36}=\frac{p-1}{36}+\frac{\epsilon_{3}}{2}\left(-\frac{1}{9}+\frac{1}{12}+\frac{1}{36}\right)=\frac{p-1}{36} \tag{32}\]
## 8. Another family of hyperelliptic curves with dihedral action
Let \(d\geq 4\) be even and \(a=(1,d/2,d/2,d-1)\). We consider the family of curves
\[X_{t}:y^{d}=x(x-1)^{d/2}(x-t)^{d/2}. \tag{33}\]
Over the two branch points \(x=1\) and \(x=t\), the fiber of \(\alpha:X\to\mathbb{P}^{1}\) contains \(d/2\) points, each having an inertia group of size \(2\). Write \(d_{1}=d/2\). By Lemma 2.3, the genus of \(X_{t}\) is \(d_{1}\).
### The mass formula for \(a=(1,d/2,d/2,d-1)\)
**Corollary 8.1**.: _Suppose \(d\geq 4\) with \(d\) even and \(a=(1,d/2,d/2,d-1)\). Suppose \(p\nmid d\). The mass formula for the non-ordinary curves in the family \(X_{t}:y^{d}=x(x-1)^{d/2}(x-t)^{d/2}\) is_
\[\mu(d,(1,d/2,d/2,d-1),p)=\begin{cases}(p-1)/2^{5}&\text{if $d\equiv 0\bmod 4$}\\ (p-1)(d^{2}+4)/(2^{5}d^{2})&\text{if $d\equiv 2\bmod 4$}.\end{cases}\]
Proof.: The signature of the \(\mu_{d}\)-cover is given by the dimensions \(f_{n}=0\) if \(0\leq n\leq d-1\) is even and \(f_{n}=1\) if \(0\leq n\leq d-1\) is odd. To see this, we compute
\[f_{n}=-1+\langle\frac{-n}{d}\rangle+\langle\frac{-n(d-1)}{d}\rangle+2\langle\frac{-nd/2}{d}\rangle=2\langle\frac{n}{2}\rangle.\]
By Proposition 2.5, the curve \(X_{t}\) is ordinary for a generic value of \(t\). By Lemma 2.9, \(\delta_{d,a}=4\).
Write \(d_{1}=d/2\). Let \(d_{2}:=\gcd(d_{1}+1,d)\). Then \(d_{2}=1\) if \(d\equiv 0\bmod 4\) and \(d_{2}=2\) if \(d\equiv 2\bmod 4\). The result follows from Theorem 3.4 once we use Theorem 4.1 to compute:
\[C_{d,a}(\lambda_{1})=(d^{2}-(2+2d_{1}^{2})+(d^{2}+2d_{2}^{2}))/12d^{2}= \begin{cases}1/8&\text{if $d\equiv 0\bmod 4$}\\ (d_{1}^{2}+1)/8d_{1}^{2}&\text{if $d\equiv 2\bmod 4$}.\end{cases}\]
For this family, one can check that \(X_{t}\) is hyperelliptic and that \(\#\text{Aut}(X,\tau)=2d\), unless \(t=-1\) in which case \(\#\text{Aut}(X,\tau)=4d\). We do not have results on the \(a\)-number of \(X_{t}\) in general.
### Comparison of genus \(2\) case with previous work
We show that the \(d=4\) case of Corollary 8.1 is compatible with earlier work of [13].
Let \(d=4\) and \(a=(1,2,2,3)\). Consider the family of curves
\[X_{t}:y^{4}=x(x-1)^{2}(x-t)^{2}. \tag{34}\]
By Corollary 8.1, the mass formula for this family is \(\mu(\mathcal{F},p)=(p-1)/2^{5}\). This mass formula is not stated in [13], although almost all the information needed to do so is contained in that paper, as we explain below.
This family of genus \(2\) curves is called Case (3) in [14] and [13]. A generic curve \(X\) in the family is characterized by having \(\text{Aut}(X)\simeq D_{4}\) and \(\text{redAut}(X)\simeq C_{2}\times C_{2}\). The family can also be parametrized as:
\[Y^{2}=X(X^{2}-1)(X-\lambda)(X-1/\lambda),\]
or
\[C_{\beta}:Y_{1}^{2}=X_{1}(X_{1}^{2}-1)(X_{1}^{2}-\beta), \tag{35}\]
where \(\beta=(\lambda+1)^{2}/(\lambda-1)^{2}\).
For the equation (35), an automorphism of order \(4\) is given by \(\tau(X_{1},Y_{1})=(-X_{1},iY_{1})\). Note that \(\tau^{2}\) is the hyperelliptic involution.
By [13, Proposition 1.10], if \(X_{t}\) is not ordinary, then it is superspecial, and thus supersingular.
**Proposition 8.2**.: _[_13_, Proposition 3.2]_ _Write \(p=8k+\epsilon\) where \(\epsilon\in\{1,3,5,7\}\). The number of isomorphism classes of supersingular curves in the family is \(k\) if \(\epsilon\in\{1,3\}\) and is \(k+1\) if \(\epsilon\in\{5,7\}\)._
Proof.: We briefly summarize the proof from [13]. Write \(h(X)=\sum_{j=0}^{\lfloor p/4\rfloor}\binom{(p-1)/2}{\lfloor(p+1)/4\rfloor+j}\binom{(p-1)/2}{j}X^{j}\). By [13, Proposition 1.9], \(C_{\beta}\) is not ordinary (in fact, supersingular) if and only if \(\beta\) is a root of \(h(X)\). By [13, Proposition 1.14], the roots of \(h(X)\) are distinct. Thus the number of non-ordinary curves in the family is \(\deg(h(X))=\lfloor p/4\rfloor\).
Then \(C_{\beta_{1}}\simeq C_{\beta_{2}}\) if and only if either \(\beta_{2}=\beta_{1}\) or \(\beta_{2}=1/\beta_{1}\). This divides the number of supersingular curves by \(2\). The case \(\beta=-1\) is handled separately; when \(\beta=-1\), the curve is supersingular if and only if \(p\equiv 5,7\bmod 8\).
Recall that \(\epsilon_{3}=1-\left(\frac{-3}{p}\right)\), which is \(0\) if \(p\equiv 1\bmod 6\) and is \(2\) if \(p\equiv 5\bmod 6\). Let \(\epsilon_{1}=1-\left(\frac{-1}{p}\right)\), which is \(0\) if \(p\equiv 1\bmod 4\) and is \(2\) if \(p\equiv 3\bmod 4\). Let \(\epsilon_{2}=1-\left(\frac{-2}{p}\right)\), which is \(0\) if \(p\equiv 1,3\bmod 8\) and is \(2\) if \(p\equiv 5,7\bmod 8\).
**Proposition 8.3**.: _Let \(p\geq 7\). The information in [10] yields the mass formula \((p-1)/2^{5}\) for the non-ordinary curves in the family (34)._
Proof.: Each curve \(C_{\beta}\) in the family (35) is hyperelliptic. Also, if \(C_{\beta}\) is non-ordinary, then its Cartier-Manin matrix is the zero matrix. This implies that the multiplicity \(m_{C_{\beta}}\) is two. For this reason, the mass formula simplifies as \(\sum_{[C]}\frac{1}{\#\mathrm{redAut}(C,\tau)}\), where the sum is over the isomorphism classes of non-ordinary curves \(C\) in the family (35).
Let \(R_{n}\) be the number of isomorphism classes of non-ordinary curves \(C_{\beta}\) in the family such that \(\#\mathrm{redAut}(C_{\beta})=n\). Let \(R_{\infty}\) be the number of isomorphism classes of non-ordinary singular curves in (the closure of) the family in \(\overline{\mathcal{M}}_{2}\).
The boundary of the family (35) contains a singular curve \(S\) composed of the join of two elliptic curves. These each have equation \(y^{2}=x^{3}-x\), which is non-ordinary if and only if \(p\equiv 3\bmod 4\). So \(R_{\infty}=\epsilon_{1}/2\). Also \(\#\mathrm{redAut}(S)=4^{2}\), because each elliptic curve has \(4\) automorphisms and there is an automorphism transposing the two elliptic curves.
Suppose \(p\geq 7\). By [11], the only possibilities for \(\mathrm{redAut}(C_{\beta})\) are \(C_{2}\times C_{2}\), \(D_{6}\), or \(S_{4}\). This shows that \(R_{n}=0\) unless \(n=4,12,\mathrm{or}\ 24\). In the latter two cases, there is an automorphism of order \(3\) that does not normalize \(\langle\tau\rangle\).
When \(\mathrm{redAut}(C_{\beta})\simeq D_{6}\), we can identify \(\tau\) with a reflection in \(D_{6}\); the normalizer of \(\langle\tau\rangle\) in \(D_{6}\) is the Klein-\(4\) group, of order \(4\). When \(\mathrm{redAut}(C_{\beta})\simeq D_{6}\), this shows that \(\#\mathrm{redAut}(C_{\beta},\tau)=4\). When \(\mathrm{redAut}(C_{\beta})\simeq S_{4}\), we can identify \(\tau\) with a product of two disjoint transpositions in \(S_{4}\). The normalizer of \(\langle\tau\rangle\) in \(S_{4}\) has order \(8\). When \(\mathrm{redAut}(C_{\beta})\simeq S_{4}\), this shows that \(\#\mathrm{redAut}(C_{\beta},\tau)=8\).
**Claim:**\(R_{4}=k-\epsilon_{3}/2\); \(R_{12}=\epsilon_{3}/2\); and \(R_{24}=\epsilon_{2}/2\).
**Proof of claim:** Following [10], we see that:
The curve \(C_{\beta}\) has \(\mathrm{redAut}(C_{\beta})=D_{6}\) (called Case (4)) if and only if \(\beta=9\) (corresponding to \(\lambda=2\)); By [10, Proposition 1.11], \(C_{9}\) is supersingular if and only if \(p\equiv 5\bmod 6\). So \(R_{12}=\epsilon_{3}/2\).
The curve \(C_{\beta}\) has \(\mathrm{redAut}(C_{\beta})=S_{4}\) (called Case (5)) if and only if \(\beta=-1\) (corresponding to \(\lambda=i\)); By [10, Proposition 1.12], \(C_{-1}\) is supersingular if and only if \(p\equiv 5,7\bmod 8\). So \(R_{24}=\epsilon_{2}/2\).
By Proposition 8.2, the total number of supersingular curves in the family (35) is \(k+\epsilon_{2}/2\). Thus the number of supersingular curves with \(\mathrm{redAut}(C_{\beta})\simeq C_{2}\times C_{2}\) is \(k-\epsilon_{3}/2\). This equals the formula found in [10, Theorem 3.3], \(R_{4}=\frac{p-1}{8}-\frac{\epsilon_{1}}{8}-\frac{\epsilon_{2}}{4}-\frac{ \epsilon_{3}}{2}\). In conclusion, this yields the mass formula:
\[\frac{R_{4}}{4}+\frac{R_{12}}{4}+\frac{R_{24}}{8}+\frac{R_{\infty}}{16}=\frac{1}{4}\left(\frac{p-1}{8}-\frac{\epsilon_{1}}{8}-\frac{\epsilon_{2}}{4}-\frac{\epsilon_{3}}{2}\right)+\frac{\epsilon_{3}/2}{4}+\frac{\epsilon_{2}/2}{8}+\frac{\epsilon_{1}/2}{16}=\frac{p-1}{2^{5}}.\]
## 9. Related work
In recent work, the Eichler-Deuring formula was generalized in other ways. For example, the papers [20], [11], [12] provide mass formulas for supersingular abelian varieties of dimensions \(2\) and \(3\) and supersingular abelian varieties with real multiplication. These papers take an adelic perspective, building on the work of Ekedahl [1], and study the mass of arithmetic quotients of certain double coset spaces. Our paper generalizes the Eichler-Deuring formula to a different class of abelian varieties using a different kind of proof.
In other related work [20], Yu studies the _basic locus_ of Shimura varieties of PEL-type, as defined by Kottwitz [14]. He proves a comparison formula between the geometric mass \(\sum\frac{1}{\#\operatorname{Aut}(A)}\) of the basic locus and the arithmetic mass of a double coset space, but states that it is difficult to compute either one. For families of \(\mu_{d}\)-covers of \(\mathbb{P}^{1}\), the image under the Torelli morphism is contained in a Shimura variety \(S\) of PEL-type for the semi-simple algebra \(\mathbf{Q}[\zeta_{d}]\). The results in this paper do not follow from [20, Theorem 4.6] because the Torelli locus typically has positive codimension in \(S\).
|
2308.05744 | PlankAssembly: Robust 3D Reconstruction from Three Orthographic Views
with Learnt Shape Programs | In this paper, we develop a new method to automatically convert 2D line
drawings from three orthographic views into 3D CAD models. Existing methods for
this problem reconstruct 3D models by back-projecting the 2D observations into
3D space while maintaining explicit correspondence between the input and
output. Such methods are sensitive to errors and noises in the input, thus
often fail in practice where the input drawings created by human designers are
imperfect. To overcome this difficulty, we leverage the attention mechanism in
a Transformer-based sequence generation model to learn flexible mappings
between the input and output. Further, we design shape programs which are
suitable for generating the objects of interest to boost the reconstruction
accuracy and facilitate CAD modeling applications. Experiments on a new
benchmark dataset show that our method significantly outperforms existing ones
when the inputs are noisy or incomplete. | Wentao Hu, Jia Zheng, Zixin Zhang, Xiaojun Yuan, Jian Yin, Zihan Zhou | 2023-08-10T17:59:34Z | http://arxiv.org/abs/2308.05744v1 | # PlankAssembly: Robust 3D Reconstruction from Three Orthographic Views with Learnt Shape Programs
###### Abstract
In this paper, we develop a new method to automatically convert 2D line drawings from three orthographic views into 3D CAD models. Existing methods for this problem reconstruct 3D models by back-projecting the 2D observations into 3D space while maintaining explicit correspondence between the input and output. Such methods are sensitive to errors and noises in the input, thus often fail in practice where the input drawings created by human designers are imperfect. To overcome this difficulty, we leverage the attention mechanism in a Transformer-based sequence generation model to learn flexible mappings between the input and output. Further, we design shape programs which are suitable for generating the objects of interest to boost the reconstruction accuracy and facilitate CAD modeling applications. Experiments on a new benchmark dataset show that our method significantly outperforms existing ones when the inputs are noisy or incomplete.
## 1 Introduction
In this paper, we tackle a long-standing problem in computer-aided design (CAD), namely 3D object reconstruction from three orthographic views. In today's product design and manufacturing industry, 2D engineering drawings are commonly used by designers to realize, update, and share their ideas, especially during the initial design stages. But to enable further analysis (_e.g_., finite element analysis) and manufacturing, these 2D designs must be manually realized as 3D models in CAD software. Therefore, if a method can automatically convert the 2D drawings into 3D models, it would greatly facilitate the design process and improve overall efficiency.
As the most popular way to describe an object in 2D drawings, an orthographic view is the projection of the object onto the plane that is perpendicular to one of the three principal axes (Figure 1). Over the past few decades, 3D reconstruction from three orthographic views has been extensively studied, with significant improvements in terms of the types of applicable objects and computational efficiency [25, 10, 19, 37, 38, 28, 18, 21, 8, 9]. However, to the best of our knowledge, these techniques have not enjoyed wide adoption in CAD software and commercial products.
Among the challenges faced by existing methods in practice, their sensitivity to errors and missing components in the drawings is arguably the most critical one. To understand this issue, we note that almost all existing methods follow a standard procedure for 3D reconstruction, which consists of the following steps: **(i)** generate 3D vertices from 2D vertices; **(ii)** generate 3D edges from 3D vertices;
Figure 1: An illustration of various input drawings. **From top to bottom**: clean inputs, noisy inputs, and visible inputs. We use solid and dashed lines to represent the visible and hidden lines, respectively. In noisy line drawings (the second row), we use blue lines to represent noisy lines and highlight the missing lines using the red circle.
**(iii)** generate 3D faces from 3D edges; and **(iv)** construct 3D models from 3D faces (see Figure 6 for an illustration). One main benefit of following the pipeline is that all solutions that match the input views can be found, as it establishes explicit correspondences between entities in the 3D model and those in the drawing. But in practice, rather than making an extra effort to perfect the drawings, designers would deem a drawing good enough as long as it conveys their ideas. Hence, some entities may be erroneous or missing. As a result, the aforementioned pipeline often fails to find the desired solution.
To overcome this difficulty, therefore, it is necessary to reason about the 2D drawings in a more holistic manner, and enable more flexible mappings between the input and output. Recently, Transformer [29] has become the standard architecture in many NLP and CV tasks. It is particularly effective in sequence-to-sequence (seq2seq) problems, such as machine translation, where reasoning about the context and soft alignment between the input and output are critical. Motivated by this, we convert our problem into a seq2seq problem and propose a Transformer-based deep learning method. Intuitively, the self-attention modules allow the model to capture the intent of the product designers even if their drawings are imperfect, and the cross-attention modules enable flexible mappings between geometric entities in the 2D drawing and 3D model.
Another benefit of employing learned representations and soft alignments for geometric entities is that one is free to choose how the 3D model is constructed. This provides us with opportunities to incorporate domain knowledge in our method to boost its performance. To illustrate this, we focus on a specific type of product, _cabinet furniture_, in this paper. As illustrated in Figure 2, a cabinet is typically built by arranging and attaching a number of planks (_i.e_., wooden boards) together in a 3D modeling software. To this end, we develop a simple domain-specific language (DSL) based around declaring planks and then attaching them to one another, so that each cabinet can be represented by a program. Finally, given the input orthographic views, we train the Transformer-based model to predict the program associated with the cabinet.
To systematically evaluate the methods, we build a new benchmark dataset consisting of more than 26,000 3D cabinet models for this task. Most of them are created by professional interior designers using commercial 3D modeling software. Extensive experiments show that our method is much more robust to imperfect inputs. For example, the traditional method achieves an F1 score of \(8.20\%\) when \(30\%\) of the lines are corrupted or missing in the input drawings, whereas our method achieves an F1 score of \(90.14\%\).
In summary, the contributions of this work are: **(i)** To the best of our knowledge, we are the first to use deep generative models in the task of 3D CAD model reconstruction from three orthographic views. Compared to existing methods, our model learns a more flexible mapping between the input and output, thus being more robust to noisy or incomplete inputs. **(ii)** We propose a new network design that learns shape programs to assemble planks into 3D cabinet models. Such a design not only improves reconstruction accuracy but also facilitates downstream applications such as CAD model editing.
## 2 Related Work
**3D reconstruction from three orthographic views.** Studies on recovering 3D models from three orthographic views date back to the 70s and 80s [13, 22, 32, 25, 10]. An early survey on this topic appears in [31]. According to [31], to obtain 3D objects in the boundary representation (B-rep) format, existing methods follow a four-stage scheme in which 3D vertices, edges, faces, and blocks are gradually built upon the results of previous steps. As mentioned before, a key strength of the framework is that all possible solutions that exactly match the input views can be found.
Subsequent methods for this task [37, 38, 28, 18, 21, 8, 9] also follow the same procedure and focus on extending the methods' applicable domain to cover more types of objects. For example, Shin and Shin [28] developed a method to reconstruct objects composed of planar and limited quadric faces, such as cylinders and tori, that are parallel to one of the principal axes. To remove the restriction placed on the axes of curved surfaces, Liu _et al_. [21] designed an algorithm that combines the geometric properties of conics with affine properties. Later, Gong _et al_. [9] proposed to recognize quadric surface features via hint-based pattern matching in the Link-Relation Graph (LRG), expanding the applicable domain to cover objects like those with interacting quadric surfaces. However, all these methods assume clean inputs and, as we will show in the experiment section, could easily break down in the presence of errors and noises.
Recently, Han _et al_. [12] also trained a deep network to reconstruct a 3D model from three orthographic views. However, their method takes raster images as input and produces results in the format of unstructured point clouds,
Figure 2: An illustration of cabinet design in a 3D modeling software.
which are of little use in CAD modeling applications. In contrast, our method directly uses vectorized line drawings as input and generates structured CAD models as output.
**Deep generative models for CAD.** With the availability of large-scale CAD datasets such as ABC [17] and Fusion 360 Gallery [34], a line of recent work trains deep networks to generate structured CAD data in the form of 2D sketches [33, 7, 27] or 3D models [35, 14, 11, 36]. These methods all cast it as a sequence generation problem, but differ in the DSLs used to produce the output. Our idea to generate cabinet furniture by assembling plank models together is inspired by ShapeAssembly [15], which learns to generate objects as hierarchical 3D part graphs. However, unlike the above studies, which focus on the generative models themselves, we propose to use generative models to build effective and efficient method for 3D CAD model reconstruction from three orthographic views.
## 3 A Simple Assembly Language for Cabinets
In this section, our goal is to define a domain-specific language (DSL) for the shapes of interest (_i.e_., cabinet furniture). With this language, each cabinet model can be represented by a shape program, which will later be converted into a sequence of tokens as the output of a Transformer-based seq2seq model.
We define our DSL in the way that the resulting shape program resembles how a human designer builds the model in 3D modeling software. As shown in Figure 2, a cabinet is typically assembled by a list of plank models. In practice, most planks are axis-aligned cuboids. Therefore, we use cuboid as the only data type in our language. In Section 6, we discuss how our approach may be extended to accommodate more complex shapes (_e.g_., a plank with a non-rectangular profile).
An axis-aligned cuboid has six degrees of freedom (DOF), which correspond to the starting and ending coordinates along the three axes:
\[\mathtt{Cuboid}(x_{\min},y_{\min},z_{\min},x_{\max},y_{\max},z_{\max}). \tag{1}\]
In practice, instead of specifying the numerical values for all the coordinates, human designers frequently use the _attachment_ operation. As a form of geometric constraints, the benefit of using attachment is at least two-fold: _First_, it enables users to quickly specify the location of a plank without explicitly calculating (some of) its coordinates; _Second_, it facilitates future edits as any changes made to a plank will be automatically propagated to the others. Take Figure 2 as an example. When adding a plank (highlighted in blue), a designer may attach its four sidefaces to existing planks (including the invisible bounding box), while specifying the distances to the top and bottom in numerical values.
Our language supports specifying the plank coordinates via either numerical values or attachment operation by adopting the Union structure commonly used in programming languages (_e.g_., C++). As shown in the figure to the right, each of the six coordinates in Eq. (1) can either take a numerical value or be a pointer to the corresponding coordinate of another cuboid (to which it attaches to). Figure 3 shows an example cabinet incrementally constructed by imperatively executing the program commands (Program 1).
_Shape program as a DAG._ Alternatively, we may interpret the shape program as a directed acyclic graph (DAG). Note that each plank model consists of six faces, where each face corresponds to exactly one DOF in the axis-aligned cuboid (_i.e_., \(x_{\min},y_{\min},z_{\min},x_{\max},y_{\max},z_{\max}\)). Therefore, each program can be characterized by a graph \(\mathcal{G}=\{\mathcal{F},\mathcal{E}\}\), whose vertices \(\mathcal{F}=\{f_{1},\ldots,f_{|\mathcal{F}|}\}\) represent the faces of plank models and whose edges \(\mathcal{E}=\{e_{1},\ldots,e_{|\mathcal{E}|}\}\) represent attachment relationships between faces. Each directed edge \(e_{i\to j}\) is an ordered pair of vertices \((f_{i},f_{j})\), indicating the \(i\)-th face \(f_{i}\) attaches to the \(j\)-th face \(f_{j}\). We assume that each face can attach to at most one another face; that is, the out-degree of any face \(f_{i}\) is at most one. Further, the edges \(\mathcal{E}\) can be represented by an adjacency matrix \(A\in\mathbb{R}^{|\mathcal{F}|\times|\mathcal{F}|}\). Specifically, \(A_{ij}\) is 1 if \(f_{i}\) directs to \(f_{j}\), and 0 otherwise.
Figure 3: An illustration of how a simple cabinet is incrementally constructed by executing the shape program commands. The corresponding shape program is shown in Program 1.
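To make the cuboid-and-attachment representation concrete, here is a minimal Python sketch; the class and field names are our own illustration and not the paper's implementation, and a real system would also need cycle checks and the full program semantics.

```python
from dataclasses import dataclass, field
from typing import List, Tuple, Union

# Each of the six DOFs is either a numeric coordinate or a pointer (plank index, face index);
# faces are ordered as (x_min, y_min, z_min, x_max, y_max, z_max).
Coordinate = Union[float, Tuple[int, int]]

@dataclass
class Cuboid:
    faces: List[Coordinate]

@dataclass
class Cabinet:
    planks: List[Cuboid] = field(default_factory=list)

    def resolve(self, plank_id: int, face_id: int) -> float:
        """Follow attachment pointers until a numeric coordinate is reached."""
        value = self.planks[plank_id].faces[face_id]
        while isinstance(value, tuple):
            value = self.planks[value[0]].faces[value[1]]
        return value

# Tiny example: a left side board, and a shelf whose x_min attaches to the board's x_max.
cabinet = Cabinet()
cabinet.planks.append(Cuboid([0.0, 0.0, 0.0, 18.0, 300.0, 700.0]))
cabinet.planks.append(Cuboid([(0, 3), 0.0, 340.0, 582.0, 300.0, 358.0]))
print([cabinet.resolve(1, k) for k in range(6)])   # [18.0, 0.0, 340.0, 582.0, 300.0, 358.0]
```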
## 4 The PlankAssembly Model
As shown in Figure 1, we assume the input consists of three orthographic projections of the object, namely, the front view, top view, and side view: \(\mathcal{V}=\{V_{F},V_{T},V_{S}\}\). Each view can be regarded as a planar graph of 2D edges and node points where the edges meet. We use solid lines to represent visible edges and dashed lines to represent hidden edges. Our goal is to reconstruct a 3D cabinet model described by the shape program or the equivalent DAG \(\mathcal{G}\).
In this paper, we cast 3D reconstruction as a seq2seq problem. In Sections 4.1 and 4.2, we describe how to encode the input views \(\mathcal{V}\) and the shape program \(\mathcal{G}\) as 1D sequences \(\mathcal{V}^{\text{seq}}\) and \(\mathcal{G}^{\text{seq}}\), respectively. Then, we introduce the design of our PlankAssembly model, which adopts a Transformer-based encoder-decoder architecture to learn the probability distribution \(p(\mathcal{G}^{\text{seq}}\mid\mathcal{V}^{\text{seq}})\), in Section 4.3. Finally, we present implementation details in Section 4.4.
### Input Sequences and Embeddings
For the input conditions, we first order the 2D edges in \(\mathcal{V}\) by the views. Each 2D edge is written as \((x_{1},y_{1},x_{2},y_{2})\), where we order its two endpoints from lowest to highest by the \(x\)-coordinate, followed by the \(y\)-coordinate (if \(x_{1}=x_{2}\)). Then, we order a set of 2D edges by \(x_{1}\), followed by \(x_{2}\), \(y_{1}\), and \(y_{2}\). Next, we flatten all the edges into a 1D sequence \(\mathcal{V}^{\text{seq}}=\{v_{1},\ldots,v_{N_{v}}\}\). Note that, since each 2D edge has four DOFs (_i.e_., the \(x\)- and \(y\)-coordinates of two endpoints), the length of \(\mathcal{V}^{\text{seq}}\) is \(N_{v}=4N_{\text{edge}}\), where \(N_{\text{edge}}\) is the total number of 2D edges in all three orthographic views.
We embed the \(i\)-th token \(v_{i}\) as:
\[E(v_{i})=E_{\text{value}}(v_{i})+E_{\text{view}}(v_{i})+E_{ \text{edge}}(v_{i})\\ +E_{\text{coord}}(v_{i})+E_{\text{type}}(v_{i}), \tag{2}\]
where the value embedding \(E_{\text{value}}\) indicates the quantized coordinate value of the token, the view embedding \(E_{\text{view}}\) indicates which view (_i.e_., the front, top, or side view) the 2D edge is from, the edge embedding \(E_{\text{edge}}\) indicates the relative position of the 2D edge in the corresponding view, and the coordinate embedding \(E_{\text{coord}}\) indicates the relative position of the coordinate in the corresponding 2D edge. Finally, we use a type embedding \(E_{\text{type}}\) to indicate whether the 2D edge is visible or hidden. In this paper, we quantize the coordinate values into 9-bit integers and use learned 512-D embeddings for each term in Eq. (2).
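A minimal PyTorch-style sketch of this input embedding is shown below; Eq. (2) is simply a sum of five learned lookup tables. The layer sizes for the positional terms are illustrative assumptions (only the 9-bit value vocabulary and the 512-D width come from the text).

```python
import torch
import torch.nn as nn

class LineEmbedding(nn.Module):
    """Sum of the five learned embeddings in Eq. (2); sizes are partly assumed."""
    def __init__(self, dim=512, n_values=512, max_edges=128):
        super().__init__()
        self.value = nn.Embedding(n_values, dim)    # quantized coordinate (9-bit -> 512 bins)
        self.view = nn.Embedding(3, dim)            # front / top / side view
        self.edge = nn.Embedding(max_edges, dim)    # index of the edge within its view
        self.coord = nn.Embedding(4, dim)           # position of the coordinate within the edge
        self.line_type = nn.Embedding(2, dim)       # visible or hidden line

    def forward(self, value, view, edge, coord, line_type):
        return (self.value(value) + self.view(view) + self.edge(edge)
                + self.coord(coord) + self.line_type(line_type))

emb = LineEmbedding()
tokens = {k: torch.zeros(1, 8, dtype=torch.long)
          for k in ['value', 'view', 'edge', 'coord', 'line_type']}
print(emb(**tokens).shape)   # torch.Size([1, 8, 512])
```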
### Output Sequences and Embeddings
To generate the shape program sequentially, we need to map the graph \(\mathcal{G}\) to a sequence \(\mathcal{G}^{\text{seq}}\). This requires us to define a vertex order \(\pi\) on \(\mathcal{G}\): We first sort the vertices topologically, ensuring that direct successors are listed before their corresponding direct predecessors. Then, vertices that are not directly connected are ordered by the coordinate values. This gives us a sorted graph \(\mathcal{G}^{\pi}\) whose vertices \(\mathcal{F}^{\pi}\) follow the order \(\pi\).
Since we would like to capture the modeling process and facilitate future editing, we prioritize attachment relationships over geometric entities. Similar to the input sequence encoding, we flatten \(\mathcal{G}^{\pi}\) to obtain a 1D sequence \(\mathcal{G}^{\text{seq}}\). The \(i\)-th element of the sequence \(\mathcal{G}^{\text{seq}}\) can be obtained as:
\[g_{i}=\begin{cases}f_{i}^{\pi},&\text{if }A_{ij}^{\pi}=0,\forall j,\\ e_{i\to j}^{\pi},&\text{if }A_{ij}^{\pi}=1.\end{cases} \tag{3}\]
Further, we use two special tokens, [SOS] and [EOS], to indicate the start and end of the output sequence, respectively.
For the inputs to the decoder of our model, we embed the token \(g_{i}\) using the associated face \(f_{i}^{\pi}\) as follows:
\[E(g_{i})=E(f_{i}^{\pi})=E_{\text{value}}(f_{i}^{\pi})+E_{\text{ plank}}(f_{i}^{\pi})+E_{\text{face}}(f_{i}^{\pi}). \tag{4}\]
The value embedding \(E_{\text{value}}\) indicates the quantized coordinate value, which is shared for the input and output sequences. The plank embedding \(E_{\text{plank}}\) indicates the location of the corresponding plank in the cabinet model, and the face embedding \(E_{\text{face}}\) indicates the relative position of the face within the plank.
If the token corresponds to an edge \(e_{i\to j}^{\pi}\) in \(\mathcal{G}\), we identify the face \(f_{j}^{\pi}\) to which the current face \(f_{i}^{\pi}\) attaches, and use the same value embedding as \(f_{j}^{\pi}\).
### Model Design
To tackle this seq2seq problem, we factorize the joint distribution over the output sequence into a series of conditional distributions:
\[p(\mathcal{G}^{\text{seq}}\mid\mathcal{V}^{\text{seq}})=\prod_{t}p\left(g_{t} \mid\mathcal{G}^{\text{seq}}_{<t},\mathcal{V}^{\text{seq}}\right). \tag{5}\]
Here, since \(g_{t}\) may take the form of either a geometric entity (_i.e_., \(f_{i}^{\pi}\)) or an attachment relationship (_i.e_., \(e_{i\to j}^{\pi}\)), we need to generate a probability distribution over a fixed-length vocabulary set (of quantized coordinate values) _plus_ a variable-length set of tokens \(\mathcal{G}^{\text{seq}}_{<t}\) in the output sequence.
The former distribution is a categorical distribution, which is commonly used in classification tasks. Let \(\mathbf{h}_{t}\) be the hidden feature obtained by the decoder at time \(t\), we project it to the size of the vocabulary via a linear layer, which is then normalized to form a valid distribution:
\[p_{\text{vocab}}(g_{t}\mid\mathcal{G}^{\text{seq}}_{<t},\mathcal{V}^{\text{ seq}})=\mathrm{softmax}\left(\mathrm{linear}\left(\mathbf{h}_{t}\right)\right). \tag{6}\]
To generate a distribution over the output sequence \(\mathcal{G}^{\text{seq}}_{<t}\) at time \(t\), we adopt the Pointer Networks [30]. Specifically, we first use a linear layer to predict a pointer. The pointer is then compared with the hidden features of all former steps
via dot-product. Finally, a distribution over the output sequence is obtained via a softmax layer:
\[p_{\text{attach}}(g_{t}\to g_{k}\mid\mathcal{G}_{<t}^{\text{seq}}, \mathcal{V}^{\text{seq}})=\\ \operatorname{softmax}_{k}\left(\operatorname{linear}\left(\mathbf{ h}_{t}\right)^{T}\mathbf{h}_{<t}\right). \tag{7}\]
Instead of directly comparing these two distributions, we follow Pointer-Generator Networks [26] and introduce an attachment probability \(w_{t}\) to weight these two distributions. The attachment probability \(w_{t}\) is obtained via a linear layer and a sigmoid function \(\sigma(\cdot)\): \(w_{t}=\sigma\left(\operatorname{linear}\left(\mathbf{h}_{t}\right)\right)\). Thus, the final distribution is the concatenation of the two weighted distributions:
\[p(g_{t}\mid\mathcal{G}_{<t}^{\text{seq}},\mathcal{V}^{\text{seq}})= \operatorname{concat}\big{\{}(1-w_{t})\cdot p_{\text{vocab}},w_{t}\cdot p_{ \text{attach}}\big{\}}. \tag{8}\]
Finally, given a training set, the parameters of the model can be learned by maximizing the conditional distributions Eq. (8) via a standard cross-entropy loss.
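The output head in Eqs. (6)-(8) can be written compactly. The PyTorch-style sketch below is a schematic of the pointer-generator distribution; the layer names and sizes are our own illustrative choices, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerGeneratorHead(nn.Module):
    """Schematic of Eqs. (6)-(8): vocabulary distribution, pointer distribution, and a gate."""
    def __init__(self, dim=512, vocab_size=512):
        super().__init__()
        self.to_vocab = nn.Linear(dim, vocab_size)   # logits for Eq. (6)
        self.to_pointer = nn.Linear(dim, dim)        # query used in Eq. (7)
        self.to_gate = nn.Linear(dim, 1)             # attachment probability w_t

    def forward(self, h_t, h_prev):
        # h_t: (B, dim) current decoder feature; h_prev: (B, T_prev, dim) earlier features
        p_vocab = F.softmax(self.to_vocab(h_t), dim=-1)                    # Eq. (6)
        scores = torch.einsum('bd,btd->bt', self.to_pointer(h_t), h_prev)  # dot products
        p_attach = F.softmax(scores, dim=-1)                               # Eq. (7)
        w = torch.sigmoid(self.to_gate(h_t))                               # gate w_t
        return torch.cat([(1 - w) * p_vocab, w * p_attach], dim=-1)        # Eq. (8)

head = PointerGeneratorHead()
p = head(torch.randn(2, 512), torch.randn(2, 7, 512))
print(p.shape, float(p.sum(dim=-1)[0]))   # torch.Size([2, 519]) and a value close to 1.0
```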
**Network architecture.** We use the standard Transformer blocks [29] as the basic blocks of our PlankAssembly model. Given the input embeddings \(\{E(v_{1}),E(v_{2}),\ldots\}\), the encoder encodes them into contextual embeddings. At decoding time \(t\), the decoder produces hidden feature \(\mathbf{h}_{t}\) based on the contextual embeddings and the decoder inputs \(\{E(g_{1}),E(g_{2}),\ldots\}\). We use 6 Transformer layers for both the encoder and the decoder. Each layer has a feed-forward dimension of 1024 and 8 attention heads. The network architecture is summarized in Figure 4.
### Implementation Details
**Training.** We implement our models with PyTorch Lightning [1]. We use 6 Transformer layers for both the encoder and the decoder. Each layer has a feed-forward dimension of 1024 and 8 attention heads. The network is trained for 400K iterations on four NVIDIA RTX 3090 GPU devices. We use Adam optimizer [16] with a learning rate of \(10^{-4}\). The batch size is set to 16 per GPU.
**Inference.** At inference time, we take several steps to ensure valid predictions from our model. First, we observed that two attaching faces must correspond to the opposite DOFs on the same axis, in order to avoid any spatial conflicts. For example, the \(x_{\min}\) token of one plank can only point to the \(x_{\max}\) token of another plank, and vice versa. Thus, we mask all invalid positions during inference. Second, we filter out the predicted planks with zero volume.
## 5 Experiments
### Experimental Setup
**Dataset.** We create a large-scale benchmark dataset for this task, taking advantage of access to a large repository of cabinet furniture models from Kujiale1, an online 3D modeling platform in the interior design industry. Most models in the repository are created by professional designers using commercial parametric modeling software, and are used for real-world production.
Footnote 1: [http://kujiale.com](http://kujiale.com)
Several rules are used to filter the data: (i) We remove duplicated 3D models based on the similarity of the three orthographic views; (ii) We exclude models with fewer than four planks, more than 20 planks, or more than 300 edges in total. The remaining data is randomly split into three parts: 24039 for training, 1329 for validation, and 1339 for testing. To synthesize the three orthographic views, we use the HLRBRep_Algo API from pythonOCC [3], which is built upon the Open CASCADE Technology modeling kernel [2].
For our task, we need to parse each parametric cabinet model into a shape program. We first obtain the planks by extracting the geometric entities in the cabinet model. Note that in the parametric modeling software, a plank is typically created by first drawing a 2D profile and then applying
Figure 4: Network architecture. Our model takes the line sequences as input and outputs the shape program sequence auto-regressively. At each time step \(t\), the Transformer decoder outputs an attachment distribution \(p_{\text{attach}}\) over the previously predicted outputs, a vocabulary distribution \(p_{\text{vocab}}\), and an attachment probability \(w_{t}\). The final distribution \(p\) is obtained by concatenation of the two weighted distributions.
the extrusion command. Thus, we categorize the faces of each plank into _sideface_ or _endface_, depending on whether they are along the direction of the extrusion or not. Then, given a pair of faces from two different planks, we consider that an attachment relationship exists if (i) the two faces are within a distance threshold of 1mm, and (ii) the pair consists of one sideface and one endface. Finally, a directed edge from the endface to the sideface is added in \(\mathcal{G}\).
**Evaluation metrics.** To evaluate the quality of the 3D reconstruction results, we use three standard metrics: precision, recall, and F1 score. Specifically, for a cabinet model, we use Hungarian matching to match the predicted planks and the ground truth planks. A prediction is considered a true positive if its 3D intersection-over-union (IOU) with one ground truth is greater than \(0.5\).
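The matching-based metrics can be sketched as follows; here `iou3d` is a placeholder for a 3D box-IoU routine and is not part of the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def evaluate(pred_planks, gt_planks, iou3d, thr=0.5):
    """Precision/recall/F1 with Hungarian matching; a prediction is a true positive if IoU > thr."""
    iou = np.array([[iou3d(p, g) for g in gt_planks] for p in pred_planks])
    rows, cols = linear_sum_assignment(-iou)            # maximize the total matched IoU
    tp = int(np.sum(iou[rows, cols] > thr))
    precision = tp / max(len(pred_planks), 1)
    recall = tp / max(len(gt_planks), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return precision, recall, f1
```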
### Comparison to Traditional Methods
In this section, we systematically compare our approach with the traditional methods for 3D reconstruction from three orthographic views. Since no implementation of the traditional pipeline is publicly available, we reimplement the pipeline by closely following prior work [25, 28]. Recall that, starting from the input views, the traditional pipeline generates 3D vertices, 3D edges, 3D faces, and 3D blocks step by step. Then, solutions are found by enumerating all combinations of the candidate blocks and checking if their 2D projections match the input views.
To make the pipeline suitable for reconstructing assembly models like cabinets, we introduce two minor adjustments to it. _First_, in the traditional pipeline, two blocks that share a common face are merged together, which leads to mismatching between projections and input line drawings. In our implementation, we simply omit the original merging operation. _Second_, the blocks generated by the traditional pipeline correspond to the minimal closed spaces in 3D. During the evaluation, they cannot be directly matched with ground truth planks through bipartite matching. Instead, we group the blocks as a single prediction if they overlap with the same ground-truth plank model. Note that the proposed adjustments slightly favor the traditional approach, as we use ground truth information to merge the blocks.
Moreover, in cases where the traditional approach generates multiple solutions that all satisfy the inputs, we randomly select one as the final output.
**Experiment on varying input noise levels.** We first study the performance of both methods on imperfect inputs. In this experiment, we inject varying levels of noise into the input views. We consider two types of errors commonly seen in real-world drawings: missing lines and inaccurate endpoints. Specifically, we randomly select a percentage of 2D edges in the input views. For each selected edge, we either delete it or randomly perturb its endpoints along the edge direction. Figure 5 reports the F1 scores of both methods as the percentage varies from \(0\%\) to \(30\%\).
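A rough sketch of this corruption procedure is given below; the magnitude of the endpoint perturbation (`max_shift`, a fraction of the edge length) is an assumption, as it is not specified in the text.

```python
import random

def corrupt_view(edges, ratio, max_shift=0.1):
    """Delete or perturb a fraction `ratio` of 2D edges; each edge is ((x1, y1), (x2, y2))."""
    out = []
    for p1, p2 in edges:
        if random.random() >= ratio:
            out.append((p1, p2))         # edge left untouched
        elif random.random() < 0.5:
            continue                     # missing line: drop the edge
        else:
            # inaccurate endpoints: shift each endpoint along the edge direction
            dx, dy = p2[0] - p1[0], p2[1] - p1[1]
            s1 = random.uniform(-max_shift, max_shift)
            s2 = random.uniform(-max_shift, max_shift)
            out.append(((p1[0] + s1 * dx, p1[1] + s1 * dy),
                        (p2[0] + s2 * dx, p2[1] + s2 * dy)))
    return out
```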
Note that, when applied to our dataset, a major bottleneck of the traditional pipeline is the re-projection verification step in which all possible combinations of candidate blocks are projected to 2D to check if they match the input views. The reason is two-fold. _First_, such a match typically does not exist for noisy inputs. This is illustrated in Figure 6. When the input views contain errors, the 3D blocks generated by the traditional pipeline are often incomplete. Consequently, none of the combinations would exactly match the input views, resulting in a failure in the final solid reconstruction step. _Second_, even on clean inputs, the verification may take a long time due to a large number of possible combinations of the candidate blocks.
Thus, for completeness, we compare two variants of traditional pipeline in Figure 5. In _the first variant_, we enforce the re-projection verification step during reconstruction. On clean inputs, this variant achieves a slightly higher F1 score (\(94.07\%\)) than ours (\(91.75\%\)). However, this variant fails to produce a solution for 518 cases (out of 1339 test cases) in a reasonable time (5 minutes) due to exponential search complexity. Furthermore, this variant is not applicable to noisy inputs.
In _the second variant_, we ignore the re-projection verification step and directly use the union of blocks as the solution. As shown in Figure 5, it achieves an F1 score of \(90.67\%\) on clean input. And its performance degrades quickly as the input noise level increases. On the \(30\%\) noise level, this variant only produces results on 54 objects and has an F1 score of \(8.20\%\).
In contrast, our approach is much more robust to input noises. Specifically, its performance only slightly drops from \(91.75\%\) to \(90.14\%\) as the noise level increases from \(0\%\) to \(30\%\), verifying the key advantage of our method over traditional ones. In terms of inference time, our method takes about \(0.63\) seconds per sample on a single RTX 3090 GPU device.
**Experiment on inputs with visible edges only.** In real
Figure 5: Comparison on varying input noise levels.
world design practice, it is common for designers to omit the hidden edges of line drawings. Although it is still easy for humans to infer the 3D model, the traditional approach is likely to fail since the inputs are highly incomplete. To further demonstrate the robustness of our method, we conduct an experiment in which only the visible parts of the line drawings are used as input. For this experiment, we remove all invisible edges in the training set and follow the same protocol as in Section 4.4 to train our network from scratch.
As shown in Table 1, the traditional approach performs poorly on this task, with very low recall and F1 score. This is expected because, on average, invisible edges account for about \(48\%\) of the edges in a line drawing. Meanwhile, our method is robust to the incomplete inputs, achieving an F1 score of \(82.62\%\).
**Qualitative results.** Figure 7 visualizes some 3D reconstruction results of the two methods. In the _first and second rows_, we show results with clean inputs. Our method correctly reconstructs all four objects, whereas the traditional pipeline fails on the last two objects. Specifically, for the first object in the second row, the traditional pipeline produces multiple solutions, and an incorrect one is selected as the final output. And for the second object, it fails to produce any result within the time budget (5 minutes).
In the _third to fifth rows_, we show results from inputs with noise levels \(10\%\), \(20\%\), and \(30\%\), respectively. As one can see, the performance of traditional pipeline degrades quickly. In particular, it fails to find any valid blocks for the two cases with noise level \(30\%\). In contrast, our method correctly reconstructs all six objects in the three rows.
Finally, in the _sixth row_, we show two cases from inputs with visible edges only. Again, the traditional approach performs poorly in these cases, whereas our method correctly recovers both objects.
### Ablation Studies
Next, we investigate the effect of several design choices we made in the PlankAssembly model.
**Ablation study on the input.** First, we study the performance of our model with different types of inputs. In PlankAssembly, we directly use the sequence of 2D edges as input. Here, we consider two alternatives:
_Image_: Many deep networks for 3D reconstruction use raster images as input. Inspired by Atlas [23], we replace the Transformer encoder in PlankAssembly with a CNN-based feature extractor to construct a 3D feature volume from posed images. Specifically, we use ResNet50-FPN [20] to extract 2D features from each view. Then, we aggregate the 2D features into a 3D feature volume with known poses and use a 3D CNN to refine the 3D features. The Transformer decoder takes the flattened features as input and outputs the shape program.
_Sideface_: Given the vectorized 2D line drawings, it is also possible to extract 2D sidefaces (_i.e_., rectangles that correspond to the 3D sidefaces of the plank models) and use the sequence of sidefaces as input. Intuitively, this allows the seq2seq model to leverage explicit correspondences between the input and output. To this end, we design a set of heuristic rules to extract sidefaces in each view: First, we use the polygonize API from Shapely [4] to construct minimal closed polygons from each line drawing. Then, each sideface is represented by a polygon's axis-aligned bounding box (AABB). Here, we exclude AABBs whose short side is larger than \(\epsilon\) (those AABBs typically correspond to the profile faces of the planks). We also merge
| Methods | Precision | Recall | F1 score |
| --- | --- | --- | --- |
| [25, 28] | **99.64** | 26.47 | 39.31 |
| Ours | 84.12 | **82.05** | **82.62** |

Table 1: Comparison on inputs with visible edges only.
Figure 6: A step-by-step illustration of the traditional approach on clean input **(top)** and noisy input **(bottom)**. The traditional pipeline accurately reconstructs the object when the input line drawings are free of noise. However, it fails to recover all the 3D blocks in the presence of noises, resulting in a failure of the re-projection verification in the final solid reconstruction step.
sidefaces recurrently if their short sides are adjacent. We set \(\epsilon=50\)mm in our experiments.
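A simplified sketch of this extraction heuristic, using Shapely as mentioned above (the merging of adjacent sidefaces is omitted):

```python
from shapely.geometry import LineString
from shapely.ops import polygonize

def extract_sidefaces(edges, eps=50.0):
    """Return AABBs of the minimal closed polygons whose short side is at most eps (in mm)."""
    polygons = polygonize([LineString(e) for e in edges])   # minimal closed polygons of the view
    boxes = []
    for poly in polygons:
        minx, miny, maxx, maxy = poly.bounds                # axis-aligned bounding box
        if min(maxx - minx, maxy - miny) <= eps:            # exclude boxes with too long a short side
            boxes.append((minx, miny, maxx, maxy))
    return boxes
```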
Figure 8 (left) shows the performance of our method w.r.t. different types of inputs. As one can see, using raster images as inputs results in lower F1 scores across all noise levels, possibly because the features extracted from the line drawings are very sparse when treated as images. Further, the model achieves similar accuracies on clean inputs when using lines or sidefaces. However, the performance of the model using sidefaces is more sensitive to noises in the input views. This again shows that attempts to establish explicit correspondences between input and output hurt the methods' robustness - a phenomenon already seen in the comparison to traditional methods.
**Ablation study on the output sequence.** In PlankAssembly, we use shape programs as the outputs. In this experiment, we compare this choice to PolyGen [24], a popular approach to generate geometric models in the form of \(n\)-gon meshes. Similar to our method, PolyGen adopts a Transformer-based architecture and proceeds by generating a set of 3D vertices, which are then connected to form 3D faces. To obtain the planks in the cabinet models, we borrow the block generation step in the traditional pipeline to construct closed solids from the predicted faces.
The results are shown in Figure 8 (right). Our approach outperforms PolyGen, especially at high noise levels. Besides, our method runs about six times faster than PolyGen (\(0.63\)_vs_\(3.61\) seconds per sample), partly because PlankAssembly directly generates planks as the output and has a shorter output sequence (solids _vs_ vertices + faces).
Figure 9 compares the 3D models generated by our method and PolyGen. One notable issue with PolyGen is that since it generates each face separately, there is no guarantee that the faces will form closed solids (_i.e._, planks). For example, in the second row of Figure 9, one face predicted
Figure 8: Ablation studies on the input sequence **(left)** and the output sequence **(right)**.
Figure 7: Qualitative results. **Rows 1-2**: clean inputs. **Rows 3-5**: Noisy inputs with noise level \(10\%\), \(20\%\), and \(30\%\), respectively. **Row 6**: Inputs with visible parts only. We use red boxes to indicate incorrect reconstructions.
by PolyGen has a non-rectangular shape, leading to missing planks after the solid construction step.
Another benefit of leveraging domain-specific language over general geometric forms such as \(n\)-gons is that the generated shapes can better support user edits in the CAD modeling software. Specifically, given the attachment relationships predicted by our method, a cabinet model may undergo global scaling operations (Figure 10 (a-b)) or local editing operations (Figure 10 (c-d)) while maintaining the correct topology.
### Failure Cases
Figure 11 illustrates two most common failure modes of our PlankAssembly model. In the first example, the network makes incorrect predictions for attachments. In the second example, the reconstructed 3D model is incomplete because the stop token is predicted too early, which is a known issue with auto-regressive models.
## 6 Discussion
This paper advocates a _generative approach_ to 3D CAD model reconstruction from three orthographic views. Two lessons can be learned from our experiments: _First_, compared to finding explicit correspondences between the 2D line drawings and 3D models, the attention mechanism plays a key role in the deep network's robustness to the imperfect inputs. _Second_, incorporating domain knowledge in the generative model benefits both the reconstruction and downstream applications.
One may argue that our experiments are limited to cabinet furniture, a special type of CAD model. However, we emphasize that our main idea and the lessons learned are general and can be applied to any CAD model. For example, prior work such as DeepCAD [35] has developed neural networks which are able to generate CAD command sequences suitable for mechanical parts. Unlike cabinet furniture, mechanical parts often have non-rectangular profiles (but fewer blocks). It is thus relatively straightforward to extend our approach to such domains.
A more challenging scenario is one attempting to apply our data-driven approach to domains where large-scale CAD data is unavailable or even nonexistent, such as buildings or complex mechanical equipment. Besides, our current approach does not consider other information available in CAD drawings, such as layers, text, symbols, and annotations. Recently, several methods have been proposed for panoptic symbol spotting in CAD drawings [6, 39, 5]. We believe that such information is also vital for 3D reconstruction from complex CAD drawings.
## Acknowledgements
This work was supported in part by the Key R&D Program of Zhejiang Province (2022C01025). Jian Yin is supported by the National Natural Science Foundation of China (U1911203, U2001211, U22B2060), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key-Area Research and Development Program of Guangdong Province (2020B0101100001).
Figure 11: Failure cases. We highlight the incorrectly reconstructed planks.
Figure 10: Example of simple user edits. **Top:** models reconstructed by PlankAssembly. **Bottom:** edited models.
Figure 9: Qualitative comparison with PolyGen [24]. Input views are omitted. For PolyGen, we show results both before and after the solid construction step. Some incorrectly reconstructed faces are highlighted in red. |
2301.06746 | Filaments in the OMC-3 cloud and uncertainties in estimates of filament
profiles | Filaments are an important part of star-forming interstellar clouds. Their
properties hold clues to their formation mechanisms and role in the
star-formation process. We compare the properties of filaments in the Orion
Molecular Cloud 3 (OMC-3), as seen in mid-infrared (MIR) absorption and
far-infrared (FIR) dust emission. We calculated optical depth maps of the OMC-3
filaments based on the MIR absorption seen in Spitzer data and FIR dust
emission observed with Herschel and the ArT\'eMiS instrument. The widths of the
selected OMC-3 filament segments are in the range 0.03-0.1 pc, with similar
average values seen in both MIR and FIR analyses. Compared to the widths, the
individual parameters of the fitted Plummer functions are much more uncertain.
The asymptotic power-law index has typically values p~3 but with a large
scatter. Modelling shows that the FIR observations can systematically
overestimate the filament widths. The effect is potentially tens of per cent at
column densities above N(H$_2$) ~ $10^{22}$ cm$^{-2}$ but is reduced in more
intense radiation fields, such as the Orion region. Spatial variations in dust
properties could cause errors of similar magnitude. In the MIR analysis, dust
scattering should generally not be a significant factor, unless there are
high-mass stars nearby or the dust MIR scattering efficiency is higher than in
the tested dust models. Thermal MIR dust emission can be a more significant
source of error, especially close to embedded sources. The analysis of
interstellar filaments can be affected by several sources of systematic error,
but mainly at high column densities and, in the case of FIR observations, in
weak radiation fields. The widths of the OMC-3 filaments were consistent
between the MIR and FIR analyses and did not reveal systematic dependence on
the angular resolution of the observations. | M. Juvela, E. Mannfors | 2023-01-17T08:10:15Z | http://arxiv.org/abs/2301.06746v1 | # Filaments in the OMC-3 cloud and uncertainties in estimates of filament profiles +
###### Abstract
Context:Filamentary structures are an important part of star-forming interstellar clouds. The properties of filaments hold clues to their formation mechanisms and their role in the star-formation process.
Aims:We compare the properties of filaments in the Orion Molecular Cloud 3 (OMC-3), as seen in mid-infrared (MIR) absorption and far-infrared (FIR) dust emission. We also wish to characterise some potential sources of systematic errors in filament studies.
Methods:We calculated optical depth maps of the OMC-3 filaments based on the MIR absorption seen in _Spitzer_ data and FIR dust emission observed with _Herschel_ and the ArTeshis instrument. We then compared the filament properties extracted from the data. Potential sources of error were investigated more generally with the help of radiative transfer models.
Results:The widths of the selected OMC-3 filament segments are in the range 0.03-0.1 pc, with similar average values seen in both MIR and FIR analyses. Compared to the widths, the individual parameters of the fitted Plummer functions are much more uncertain. The asymptotic power-law index has typically values \(p\sim 3\) but with a large scatter. Modelling shows that the FIR observations can systematically overestimate the filament widths. The effect is potentially tens of per cent at column densities above \(N({\rm H_{2}})\sim 10^{22}\) cm\({}^{-2}\) but is reduced in more intense radiation fields, such as the Orion region. Spatial variations in dust properties could cause errors of similar magnitude. In the MIR analysis, dust scattering should generally not be a significant factor, unless there are high-mass stars nearby or the dust MIR scattering efficiency is higher than in the tested dust models. Thermal MIR dust emission can be a more significant source of error, especially close to embedded sources.
Conclusions:The analysis of interstellar filaments can be affected by several sources of systematic error, but mainly at high column densities and, in the case of FIR observations, in weak radiation fields. The widths of the OMC-3 filaments were consistent between the MIR and FIR analyses and did not reveal any systematic dependence on the angular resolution of the observations.
## 1 Introduction
Filaments are an important structural element of the interstellar medium (ISM) and are intimately linked to the formation of new stars (Andre et al., 2010; Hacar et al., 2022). Star-forming filaments have been investigated with observations of thermal dust emission, recently with data from the _Herschel_ Space Observatory in particular (Pilbratt et al., 2010). The measurements have been used to estimate the typical column-density and mass-per-length values of star-forming filaments, the width of filaments, and parametric representations for the filament profiles. The questions of the universality of filament widths and the reliability of filament-property estimates are still topical (Howard et al., 2021; Panopoulou et al., 2022). In _Herschel_ studies, for clouds within distances of 0.5 kpc, the typical filament full-width at half maximum (FWHM) widths are of the order of 0.1 pc, which could point to a common formation mechanism, either through a single event or as a more dynamical accretion process (Arzoumanian et al., 2019; Hacar et al., 2022). However, some higher FWHM values have also been found in _Herschel_ studies, often correlated with larger source distances and thus lower linear resolution (Hennemann et al., 2012; Panopoulou et al., 2022). The estimated filament widths appear to be tied to the range of angular scales that are probed by the observations, which is qualitatively in agreement with the idea of a hierarchical structure of the ISM, even in filaments. Indeed, _Herschel_ filaments themselves often reside in even larger and often elongated structures. On the other hand, observations at higher angular resolution, such as with the Atacama Large Millimeter/submillimeter Array (ALMA) interferometer, have resulted in much smaller values and, especially in case of line observations, in the detection of 'fibres' with sizes a factor of several below the canonical 0.1 pc value (Hacar et al., 2018; Schmiedeke et al., 2021). The full picture of ISM structure can only be obtained by covering a large range of size scales, which may also require the combination of different tracers and observational methods.
The mass-per-length values and the shapes of the filament profiles are central observational parameters. Star formation is concentrated in the most massive filaments, and a clear majority of young stellar objects are found in super-critical filaments (Andre et al., 2010; Konyves et al., 2015). The critical value is typically calculated as \(M_{\rm crit}=2c_{\rm s}^{2}/G\), according to the idea of filaments as isothermal infinite cylinders (Stodolkiewicz, 1963; Ostriker, 1964). The same model predicts for the filament profiles an asymptotic behaviour \(\rho(r)\propto r^{-p}\) with \(p=4\). The observed profiles tend to be shallower, \(p\sim\)2 (Arzoumanian et al., 2011), which might be explained by deviations from isothermal conditions and the influence of magnetic fields, external pressure, or the dynamic growth of filaments (Fischera & Martin, 2012;
Kashiwagi & Tomisaka 2021). To arrive at any firm conclusions, the exponent \(p\) should be measured to a precision of a few tenths. So far, most estimates have been based on observations of dust emission, especially from the large _Herschel_ surveys, but might suffer from some bias caused by radial variations in dust temperature and opacity.
At high column densities, the mid-infrared (MIR) absorption provides an interesting alternative for estimating column densities (Butler & Tan 2012; Kainulainen & Tan 2013). First, it is independent from dust temperature variations, which can influence the analysis of dust emission in different ways depending on the presence of internal and external radiation sources. Second, the MIR absorption should be less affected by dust evolution when compared to the sub-millimetre and far-infrared (FIR) dust emission (Roy et al. 2013; Juvela et al. 2015b). Third, the existing MIR observations, especially with the _Spitzer_ (Werner et al. 2004) and Wide-field Infrared Survey Explorer (WISE) (Wright et al. 2010) satellites, can provide an angular resolution (\(\sim 2-6\arcsec\)) that is better than that of _Herschel_ (\(\sim 18\arcsec\) at \(250\,\mu\)m) or even large ground-based single-dish radio telescopes. However, MIR observations have their own complications, such as the unknown level of foreground emission and the potential effects of nearby or embedded radiation sources.
In this paper we examine Orion Molecular Cloud 3 (OMC-3) and its filaments using FIR and MIR observations. OMC-3 is located in the Orion A cloud, at the northern end of the integral-shaped filament (e.g. Stutz & Kainulainen 2015). We adopt for the cloud a distance of \(d=400\,\)pc, which is within the uncertainties of the estimates that were given by Großschedl et al. (2018) based on _Gaia_ observations. Their Table 1 summarises earlier distance estimates for different parts of the Orion A cloud.
Orion A was mapped by the _Herschel_ satellite at 70-500 \(\mu\)m wavelengths. These data have been used in many studies of the dust properties, structure, and star formation in the region (e.g. Roy et al. 2013; Lombardi et al. 2014; Stutz & Kainulainen 2015; Furlan et al. 2016). Sadavoy et al. (2016) estimated values \(\beta=1.7-1.8\) for the dust opacity spectral index in an area that included OMC-3. Individual cores exhibited a larger dispersion in \(\beta\) values, which could be related to dust evolution or to observational effects from temperature gradients (cf. Juvela & Ysard 2012; Juvela et al. 2015a), while \(\beta\) appears to be systematically lower at millimetre wavelengths (Schnee et al. 2014; Lowe et al. 2022).
The filamentary structure of Orion A has been studied extensively with observations of the dust continuum emission, dust polarisation, and molecular and atomic lines, from extended structures down to milliparsec scales (e.g. Kainulainen et al. 2017; Pattle et al. 2017; Wu et al. 2018; Kong et al. 2018; Tanabe et al. 2019; Goicoechea et al. 2020; Salas et al. 2021). Hacar et al. (2018) identified in combined ALMA and IRAM N\({}_{2}\)H\({}^{+}\) line data narrow fibres with \(FWHM\sim 0.035\,\)pc. These structures were thus narrower than typical \(\sim\)0.1 pc filaments seen in _Herschel_ studies with lower-resolution (\(\sim 20-40\arcsec\)) continuum observations (e.g. Arzoumanian et al. 2011; Rivera-Ingraham et al. 2016). Schuller et al. (2021) combined _Herschel_ data with ground-based continuum observations from the ArTeMiS instrument1 at the Atacama Pathfinder Experiment (APEX) telescope (Reveret et al. 2014; Gusten et al. 2006). Although the angular resolution of the combined data was \(\sim 8\arcsec\), the estimated filament widths in Orion A were 0.06-0.11 pc, close to the earlier _Herschel_ results and larger than the fibres. For the filaments in the OMC-3 region in particular (in both its eastern and western parts), the widths were \(\sim\)0.06 pc. The OMC-3 filaments are dense, with line masses \(\sim 200\,M_{\odot}\,\)pc\({}^{-1}\) (Schuller et al. 2021). Based on dust polarisation data from the HAWC+ instrument of the Stratospheric Observatory for Infrared Astronomy (SOFIA), Li et al. (2022) concluded that the filaments are also magnetically supercritical.
Footnote 1: ARchitectures de bolometres pour des TÉlescopes à grand champ de vue dans le domaine sub-MIllimétrique au Sol; [http://www.apex-telescope.org/instruments/pi/artemis/](http://www.apex-telescope.org/instruments/pi/artemis/)
The OMC-3 region was analysed further by Mannfors et al. (2022) using a combination of _Herschel_ data and an independent set of ArTeMiS observations. In this paper we compare these FIR data with a new analysis of the MIR absorption seen in _Spitzer_ data. With the help of radiative transfer models, we also more generally investigate potential sources of systematic errors that could bias our estimates of the filament masses and profiles, in the case of both MIR and FIR observations.
The contents of the paper are as follows. In Sect. 2 we present the methods that are used to derive column densities from observations of dust extinction and emission, and in Sect. 2.3 we describe the procedures we use to fit the filament profiles with an analytical function. Section 2.4 explains the radiative transfer modelling that is used to study potential sources of bias in the filament analysis. In Sect. 3 we present the main results, including the analysis of the OMC-3 filaments and, in Sect. 4, the analysis of synthetic filament observations. In Sect. 5 we discuss the observational results and the systematic errors that may affect the OMC-3 data and, more generally, observations of similar high-column-density filaments. The final conclusions are listed in Sect. 6.
## 2 Methods
### Mid-infrared absorption
Mid-infrared observations provide a way to measure cloud mass distribution, provided that there is sufficient background surface brightness and the column density is high enough to result in measurable MIR extinction. The total observed surface brightness, \(I^{\rm obs}\), towards the cloud depends on its optical depth, \(\tau\), as
\[I^{\rm obs}=I^{\rm true}+\Delta I=I^{\rm fg,true}+(I^{\rm ext,true}-I^{\rm fg,true})\times e^{-\tau}+\Delta I. \tag{1}\]
We have explicitly included a correction, \(\Delta I\), which is needed if one does not have absolute surface brightness measurements and the zero point of the intensity scale is thus uncertain. In the equation, \(I^{\rm fg,true}\) is the amount of true emission that originates in front of the source, and \(I^{\rm ext,true}\) is the true value of the extended background, against which the cloud is seen in absorption. The corresponding observed value of \(I^{\rm ext,obs}\) can be estimated by interpolating the observed surface brightness over the target. The optical depth can then be calculated as
\[\tau=-\ln\left(\frac{I^{\rm true}-I^{\rm fg,true}}{I^{\rm ext,true}-I^{\rm fg,true}}\right)=-\ln\left(\frac{I^{\rm obs}-I^{\rm fg,obs}}{I^{\rm ext,obs}-I^{ \rm fg,obs}}\right). \tag{2}\]
This requires a separate estimate of the foreground component. In Butler & Tan (2012), \(I^{\rm fg}\) was estimated by assuming that parts of the target cloud are so optically thick that none of the background radiation comes through. The minimum surface brightness is then also an estimate for \(I^{\rm fg}\). Because \(I^{\rm obs}\) and \(I^{\rm fg,obs}\) are affected by the same correction \(\Delta I\), this correction term disappears, and the optical depth can be estimated even with an arbitrary zero-point offset in the data. However, unless peak optical depths are very high, the minimum surface brightness provides
only an upper limit of the true foreground component. In real observations, the non-constant background and the effect of local radiation sources add to the overall uncertainty.
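For reference, once the extended and foreground components have been estimated, the optical-depth calculation of Eq. (2) reduces to a per-pixel operation (a minimal NumPy sketch; the point-source masking and the interpolation of \(I^{\rm ext}\) are assumed to be done separately):

```python
import numpy as np

def mir_optical_depth(I_obs, I_ext, I_fg):
    """8 um optical depth from Eq. (2); inputs are surface-brightness maps on the same pixel grid."""
    ratio = (I_obs - I_fg) / (I_ext - I_fg)
    ratio = np.where(ratio > 0.0, ratio, np.nan)   # tau is undefined where I_obs <= I_fg
    return -np.log(ratio)
```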
### Far-infrared dust emission
The observed FIR emission from the target cloud is
\[I_{\nu}=\int B_{\nu}(T)e^{-\tau_{\nu}}d\tau_{\nu} \tag{3}\]
and thus depends via the Planck function, \(B_{\nu}\), on the dust temperature, \(T\), and optical depth, \(\tau_{\nu}\), along the line of sight. In most practical work, the medium is assumed to be homogeneous and optically thin, which gives the simple relationship
\[I_{\nu}=B_{\nu}(T)(1-e^{\tau_{\nu}})\approx B_{\nu}(T)\tau_{\nu}. \tag{4}\]
The optical depth can be calculated from the modified blackbody (MBB) function above and with further assumptions of the dust absorption coefficient \(\kappa_{\nu}\), optical depths can be converted to column density. Optical depth is obtained from Eq. (4), once the dust temperature \(T\) has been first calculated by fitting the same Eq. (4) to multi-frequency observations. In the optically thin case, the temperature determination does not depend on the absolute value of the opacity \(\kappa_{\nu}\), only on its frequency dependence. At FIR wavelengths, this is usually written assuming a power-law dependence,
\[\kappa(\nu)=\kappa(\nu_{0})\,\,(\nu/\nu_{0})^{\beta}. \tag{5}\]
We used either \(\beta=1.8\) (OMC-3 observations) or \(\beta=2.0\) (radiative transfer models).
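As an illustration, a single-temperature fit of Eqs. (4)-(5) can be written as below (a sketch assuming optically thin emission and intensities in cgs units; the conversion of the fitted optical depth to column density via \(\kappa_{\nu}\) is omitted):

```python
import numpy as np
from scipy.optimize import curve_fit

H, K, C = 6.626e-27, 1.381e-16, 2.998e10         # Planck constant, Boltzmann constant, c (cgs)

def mbb(nu, tau250, T, beta=1.8, nu0=1.2e12):    # nu0 corresponds to 250 um
    """Optically thin MBB of Eqs. (4)-(5): I_nu = B_nu(T) tau250 (nu/nu0)**beta."""
    B_nu = 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))
    return B_nu * tau250 * (nu / nu0)**beta

wave_um = np.array([160.0, 250.0, 350.0, 500.0]) # Herschel bands used in the fits
nu = C / (wave_um * 1.0e-4)                      # frequencies in Hz
I_nu = mbb(nu, 1.0e-3, 15.0)                     # synthetic test spectrum for one pixel
(tau250, T), _ = curve_fit(mbb, nu, I_nu, p0=[5.0e-4, 20.0])
```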
Column densities are estimated based on _Herschel_ 160-500 \(\mu\)m data. In OMC-3, with the assumed dust opacity, the column density reaches maximum values above \(N(\mathrm{H_{2}})=10^{23}\,\mathrm{cm^{-2}}\), which corresponds to a 160 \(\mu\)m optical depth of \(\tau=0.14\). The assumption of optically thin emission therefore seems valid. There can be some optically thick regions, but only at scales below the _Herschel_ resolution. Saturation of short-wavelength emission could lower the estimated colour temperatures and lead to higher column-density estimates. However, even when optical depths are not negligible, the use of the approximate form of Eq. (4) may still be preferred over the full equation.
We calculated a low-resolution map (LR map) by convolving the _Herschel_ data to a common 41'' resolution and deriving a column-density map at the same resolution. An alternative high-resolution map (HR map) is calculated following Palmeirim et al. (2013). We use the convolution kernels presented in Aniano et al. (2011), and the nominal resolution of the resulting column-density map is 20''.
Mannfors et al. (2022) presented ArTeMiS observations of the OMC-3 field. The 350 \(\mu\)m map is centred at RA=5\({}^{\mathrm{h}}\)35\({}^{\mathrm{m}}\)20\({}^{\mathrm{s}}\) Dec=\(-5^{\circ}\)1'31'' (J2000), and it covers an area slightly larger than 9.3' \(\times\) 11.7' at an angular resolution of 8.5''. The noise level is \(\sim\)0.2 Jy beam\({}^{-1}\), and the signal-to-noise ratio exceeds 100 in the brightest part of the filament. The main filament is equally well visible in the simultaneously observed ArTeMiS 450 \(\mu\)m map. However, because the 450 \(\mu\)m map is slightly smaller (not covering the filament segment A) and there are no _Herschel_ observations at this wavelength, the 450 \(\mu\)m data are not used in this paper. The third column-density map is based on a combined (feathered2) _Herschel_ and ArTeMiS 350 \(\mu\)m surface brightness map. The map has a nominal resolution of 10'', although the temperature information is available only at a lower resolution. In the following, we refer to this as the AR map.
Footnote 2: [https://github.com/radio-astro-tools/uvcombine](https://github.com/radio-astro-tools/uvcombine)
The analysis with a single MBB is common, but an observed spectrum is never going to precisely match any single MBB function, because \(\beta\) and \(T\) are not constant in the clouds. The effects of the line-of-sight temperature variations are well known (Shetty et al. 2009; Malinen et al. 2011; Juvela & Ysard 2012), and we return to these questions in Sect. 4.1 and in a future paper. With a sufficient number of frequency channels and data with high signal-to-noise ratio, the observations can be modelled as a sum of several temperature components, thus reducing the bias associated with the single-temperature assumption. Such an analysis could be more sensitive to other assumptions, such as the dust opacity spectral indices. Nevertheless, methods such as point process mapping (PPMAP) (Marsh et al. 2015) and inverse Abel transform (Roy et al. 2014; Bracco et al. 2017) have been successfully applied to many dust continuum observations. Of these, PPMAP sets fewer requirements on the symmetry of the modelled object but is computationally more expensive. Howard et al. (2019) compared the MBB and PPMAP methods for filaments in the Taurus molecular cloud. The PPMAP method resulted in some 30% decrease in the estimated filament FWHM values (Gaussian fits) and, perhaps surprisingly, in a significant reduction in the estimated line masses. However, that analysis also made use of the longer-wavelength SCUBA-2 observations, in addition to the _Herschel_ 160-500 \(\mu\)m data.
### Filament profile fitting
We fit filament profiles with a Plummer-type function:
\[P(r;N_{0},R,p,\Delta r)=N_{0}[1+((r-\Delta r)/R)^{2}]^{(1-p)/2} \tag{6}\]
(Whitworth & Ward-Thompson 2001; Arzoumanian et al. 2011). Here \(r\) is the distance from the filament centre, in the direction perpendicular to the filament path. In the case of OMC-3 data, the paths are defined by parametric splines through a number of hand-picked points. The free parameters of the fit are the peak column density, \(N_{0}\), the size of the central flat part of the profile, \(R\), the power-law index, \(p\), and the sideways adjustment, \(\Delta r\). The parameter \(N_{0}\) is related to the central density of the filament but also depends on the values of \(R\) and \(p\) as well as the scaling between column density and mass (Arzoumanian et al. 2011). The parameter \(\Delta r\) allows a shift if the local filament centre does not perfectly align with the spline description of the filament. By allowing the shift \(\Delta r\) in individual profiles, one also avoids the artificial widening that would result from imperfect alignment of the profiles.
The actual model fitted to OMC-3 data includes a linear background and the final convolution of the model to the resolution of fitted column-density data,
\[F(r;N_{0},R,p,\Delta r,A,B)=\mathrm{Con}(P(r;N_{0},R,p,\Delta r))+A+B\cdot r. \tag{7}\]
The fit was done independently for each extracted 1D profile. This includes the replacement of the 2D-convolution of an image with a 1D-convolution of individual profiles. This is exact only if the 2D filament does not change along its length, at the scale of a single beam. This also assumes that the filament shape is close to Gaussian, because only in the case of a Gaussian filament convolved with a Gaussian beam are the 1D and 2D results identical. Ideally, one would build a 2D model of the entire region (with a global model for the background as well) that would
be convolved in 2D during the fitting. However, the 1D approximation of Eq. (7) is sufficiently accurate and, of course, is much faster to calculate (cf. Mannfors et al., 2022).
The fitting of Eq. (7) with six free parameters was done both with a normal least-squares routine and with a Markov chain Monte Carlo (MCMC) routine, the latter providing the full posterior probability distributions for the estimated parameters. The MCMC fitting uses uninformative priors, except for forcing positive values for \(N_{0}\) and \(R\). We also required \(p>1\), since \(p=1\) corresponds to a flat column-density profile. Appendix A shows one test on the expected accuracy of the parameter estimates as a function of the noise level.
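A least-squares version of the fit of Eq. (7) can be sketched as follows (the MCMC variant and the error weighting are omitted; the offsets and the beam FWHM are in the same angular units):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def filament_model(r, N0, R, p, dr, A, B, beam_fwhm=20.0):
    """Eq. (7): Plummer profile (Eq. 6) plus linear background, convolved with a Gaussian beam."""
    plummer = N0 * (1.0 + ((r - dr) / R)**2)**((1.0 - p) / 2.0)
    sigma = beam_fwhm / 2.355 / (r[1] - r[0])                    # beam sigma in sample steps
    return gaussian_filter1d(plummer, sigma) + A + B * r

r = np.linspace(-90.0, 90.0, 181)                                # offsets from the filament centre
profile = filament_model(r, 5.0, 15.0, 3.0, 0.0, 0.5, 0.0)       # synthetic test profile
popt, _ = curve_fit(filament_model, r, profile, p0=[3.0, 10.0, 2.5, 0.0, 0.0, 0.0],
                    bounds=([0, 0, 1, -30, -np.inf, -np.inf], np.inf))
N0, R, p = popt[:3]
fwhm = 2.0 * R * np.sqrt(2.0**(2.0 / (p - 1.0)) - 1.0)           # FWHM from the fitted R and p
```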
### Radiative transfer models
We used a simple cloud model to study how the extracted filament parameters might be affected by different error sources. In the case of FIR emission, these include especially the line-of-sight temperature variations that cause radially varying bias in the column-density maps. In the case of MIR analysis, errors may be introduced by dust scattering and local thermal dust emission.
The cloud model was discretised onto a Cartesian grid of 200\({}^{3}\) cells, with a cell size of 0.0116 pc. The size of the cloud model is thus 2.32 pc or 20\({}^{\prime}\) for a distance of 400 pc. The model consists of a single linear filament, with a density profile matching the Plummer function with \(R\)=0.0696 pc and \(p\)=3. These correspond to a filament FWHM value of 0.14 pc or 72\({}^{\prime\prime}\) at the 400 pc distance. Tests are carried out with different values of the filament column density, and observing the filament mostly in a direction perpendicular to its main axis.
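The model filament can be constructed directly on this Cartesian grid (a NumPy sketch; the density is in arbitrary units before it is scaled to give the desired peak column density):

```python
import numpy as np

N, cell = 200, 0.0116                        # number of cells per axis and cell size [pc]
R, p = 0.0696, 3.0                           # Plummer parameters of the model filament
x = (np.arange(N) - N / 2 + 0.5) * cell      # cell-centre coordinates [pc]
X, Y = np.meshgrid(x, x, indexing='ij')
r = np.sqrt(X**2 + Y**2)                     # distance from the filament axis (axis along z)
rho_2d = (1.0 + (r / R)**2)**(-p / 2.0)      # Plummer density profile
rho = np.repeat(rho_2d[:, :, None], N, axis=2)   # density cube, constant along the filament
```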
We used different dust models for the emission and scattering calculations. FIR emission is calculated using the core-mantle-mantle (CMM) dust that is included in the Heterogeneous dust Evolution Model for Interstellar Solids (THEMIS) (Jones et al., 2013; Kohler et al., 2015; Ysard et al., 2016). Although the dust model does not contain large aggregates or ice mantles, it is appropriate for dense environments. As an example of a more evolved dust population, we use in some tests the THEMIS AMMI model, which consists of aggregate grains with ice mantles.
Because the CMM model does not include small grains, the calculations of MIR thermal dust emission were performed with the Compiegne et al. (2011) dust model (in the following, COM), which is appropriate for a diffuse medium. The thermal emission should come mainly from outer filament layers that consist of relatively pristine material. This is not necessarily true for the MIR scattering, which originates in regions where the optical depth at MIR wavelengths, rather than at optical wavelengths, reaches unity. Dust scattering is therefore also calculated using the CMM dust model. Differences in the scattering properties of different dust models are discussed, for example, in Ysard et al. (2016) and Juvela et al. (2020).
The model filament is illuminated by an isotropic background according to the Mathis et al. (1983) model of the interstellar radiation field (ISRF). In part of the calculations, we included an additional point source, which was modelled as a \(T=15700\) K blackbody with a total luminosity of 590 solar luminosities (similar to a B5V star). The source was placed at a distance of 0.23 pc from the top end of the filament (in the orientation used in the figures) and at a distance of 0.93 pc from the filament axis. The viewing angle was varied such that the source is in front of the filament, behind the filament, or to the left of the filament. Since the amount of scattered light is directly proportional to the illumination, these results can be easily scaled for any source luminosity.
The radiative transfer calculations were performed with the SOC program (Juvela, 2019), which gives 200\(\times\)200 pixel maps of the dust emission at 8, 160, 250, 350, and 500 \(\mu\)m and maps of scattered light at 8 \(\mu\)m. The pixel size matches the model resolution and is 0.0116 pc or 6\({}^{\prime\prime}\) for the assumed 400 pc distance. Since the effect of stochastic grain heating is small at long wavelengths, the 160-500 \(\mu\)m dust emission was calculated assuming equilibrium between the radiation field and the grain temperatures. The stochastic nature of the grain temperatures was naturally taken into account in the calculations of the MIR dust emission. The SOC program is based on Monte Carlo simulations. The number of simulated photon packages was selected so that the noise is a fraction of one per cent in the FIR maps and \(\sim 1\%\) or less in the computed maps of scattered light (pixel-to-pixel).
The FIR surface brightness maps were further convolved with Gaussian beams. In most tests, this was set to \(FWHM\)=24\({}^{\prime\prime}\) and the column-density maps were calculated at the same resolution. The simulated MIR data were used at the full model resolution, because the model pixels are already larger than, for example, the _Spitzer_ resolution at distances below 1.2 kpc.
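The convolution of the synthetic surface-brightness maps is a plain Gaussian smoothing (a sketch with Astropy, using the 6'' pixels and the 24'' beam quoted above):

```python
from astropy.convolution import Gaussian2DKernel, convolve

def convolve_to_beam(image, beam_fwhm_arcsec=24.0, pix_arcsec=6.0):
    """Convolve a synthetic map (2D array) to the target Gaussian beam."""
    sigma_pix = beam_fwhm_arcsec / 2.355 / pix_arcsec   # FWHM -> sigma, in pixels
    return convolve(image, Gaussian2DKernel(sigma_pix), boundary='extend')
```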
## 3 Results from OMC-3 observations
### OMC-3 filament in MIR absorption
We used Eq.(2) and _Spitzer_ data to calculate an 8 \(\mu\)m optical-depth map for the OMC-3 field. We started by creating a mask for those filament regions that are clearly visible in absorption (Fig. 1a). The extended component \(I^{\rm ext}\) was calculated by replacing the masked pixels with interpolated values. In practice, this was done by convolving the map with a Gaussian beam with \(FWHM\)=40\({}^{\prime\prime}\), where the convolution ignored as inputs all pixels inside the filament mask or inside the manually created masks for point sources (Fig. 1b-c). Comparison of MIR and _Herschel_ data showed that the MIR surface brightness rises towards the north, and the MIR data are affected by local radiation sources, whose effect changes rapidly as a function of the sky position. In Fig. 1b, the masks have already been reduced to an area, where the relationship between the MIR absorption and the FIR-based column densities appeared consistent. The increased MIR emission towards the equatorial north is caused mainly by the NGC 1977 open cluster (its northern sub-cluster), but the closest B stars are still some 4 arcmin or 0.5 pc (projected distance) north of the area included in Fig. 1 (Getman et al., 2019; Megeath et al., 2022).
The level of foreground emission is not known. Butler and Tan (2012) estimated the quantity corresponding to \(I^{\rm fg}+\Delta I\) statistically as
\[I^{\rm fg}+\Delta I=\langle J^{\rm obs}(I^{\rm obs}<I^{\rm obs,min}+2\sigma) \rangle-2\sigma, \tag{8}\]
where \(I^{\rm obs,min}\) is the minimum observed surface brightness and \(\sigma\) is the noise. In the OMC-3 region, the extended surface brightness, and therefore probably also the foreground component, increases strongly towards both north and south. We set \(I^{\rm fg}\) equal to the minimum observed surface brightness, which is found close to the centre of the area shown in Fig. 1. This may lead to some overestimation of the \(\tau\) values at that position (and a few undefined values), which needs to be taken into account in the subsequent analysis. On the other hand, the selected value may underestimate the foregrounds further in the south and the north.
The calculated map of the 8 \(\mu\)m optical depth is shown in Fig. 1d. Our main interest is in the shape of the filaments, and
scaling of \(\tau\) to column density is not needed. However, with an 8 \(\mu\)m dust opacity of 7.5 cm\({}^{2}\) g\({}^{-1}\), the values obtained for the clean parts of the filaments (without strong point-source contamination) are within a factor of \(\sim\)2 of those derived from the _Herschel_ data (when compared at 20'' resolution). This scaling is used in some subsequent plots where column-density units are used.
Figure 1 shows the MIR data and the derived 8 \(\mu\)m optical depth map. The strongest MIR absorption is found within a y-shaped region, which can be interpreted as one main filament on the west side and one side filament extending towards the north-east. For further analysis, we selected from this area four isolated filament segments that show the deepest MIR absorption (i.e. correspond to the highest column densities) and are not significantly contaminated by emission from embedded or nearby stars. The segments are labelled with letters A-D in Fig. 1a.
The column-density maps of the segments were then extracted as 2D images, where the filaments are aligned vertically. We fitted each individual row as well as the median profile of each filament segment with Eq. (7). The model includes a linear background, the Plummer profile with a possible shift in the direction perpendicular to the filament length, and convolution with a Gaussian beam with FWHM=2.0'' (approximating the _Spitzer_ beam size). The results are shown in Fig. 3 for the filament segment A, based on data within [-90'', +90''] of the filament centre. Corresponding plots for the three other segments can be found in Appendix B.
The figures also show the filament FWHM values that were calculated from the parameters of the Plummer fits. The \(FWHM\) values are usually much more robust than the values of the individual parameters \(R\) and \(p\) (Suri et al., 2019). There is no significant correlation between the fitted filament column density (parameter \(N_{0}\)) and the FWHM. If the filament is not well defined (or well matched by the assumed functional form), the parameter \(N_{0}\) does not accurately represent the central column density, because of partial degeneracy with the fitted linear background. The median FWHM value of the filament is \(\sim\)0.05 pc.
### OMC-3 filaments in FIR emission
For comparison with the MIR results, we analysed the column-density maps obtained from _Herschel_ and the combined _Herschel_ and ArTeMiS data. We extracted column densities for the same short filament segments as in the MIR analysis and fitted these with the model of Eq. (7).
We used the LR and HR column-density maps estimated with _Herschel_ 160-500 \(\mu\)m data and the AR map that also makes
Figure 1: MIR observations of the OMC-3 field. Frame (a) shows the observed 8 \(\mu\)m surface brightness. Frame (b) shows the masks used, where the value 1 corresponds to the chosen filament region (shown with dashed contours in the other frames) and the value -1 corresponds to masked stars. Frame (c) shows the estimated extended emission, \(I^{\rm ext}\), and frame (d) the resulting 8 \(\mu\)m optical depth. Frame (a) also indicates the four filament fragments that were chosen for further analysis (solid white lines with labels A-D).
Figure 3: Results of the Plummer fits to OMC-3 filament segment A that was measured using MIR extinction. Frame (c) shows the filament column densities (units 10\({}^{-12}\) cm\({}^{-2}\)) as a 2D image, the filament running vertically in the plot. The top row contains the median profile. Frame (a) shows the fit residuals. Frame (d) shows the parameter estimates along the filament. The shaded regions correspond to different percentile ranges of MCMC samples: [1, 99]% in dark grey, [10, 90]% in light grey, and [25, 75]% in red. The MCMC median values are plotted as solid white curves. The values from separate \(\chi^{2}\) minimisation are plotted with dashed black curves. The vertical dashed light green lines (frame b and frame d) show the median of the parameter values in the individual least-squares fits, while the white half-circles at the top of frame (d) correspond to the fit to the median profile. Frame (b) shows histograms of the parameter distributions based on MCMC samples over all profiles.
Figure 2: Two-dimensional images of the four OMC-3 filament segments that are marked in Fig. 1a. Each frame shows one segment that is extracted from the column-density map so that the filament runs vertically at the centre of each frame. The green areas correspond to pixels that are masked because of point sources.
use of ArTeMiS data. Figures 4 and 5 show the results for the first filament segment A, using the HR column-density map (angular resolution 20\({}^{\prime\prime}\)) and the AR map (angular resolution 10\({}^{\prime\prime}\)). The corresponding plots for the other filament segments are shown in Appendix C. The values of \(p\) are of the order of \(\sim\)3, but with significant scatter. The higher-resolution maps result in lower FWHM values, with the exception of segment A.
The above results apply to fits to data within \(|r|<90^{\prime\prime}\) of the filament centre. To test the sensitivity to the extent of the fitted region, the analysis was repeated by varying the maximum distance from \(r_{\rm max}=60^{\prime\prime}\) to \(r_{\rm max}=210^{\prime\prime}\). Figure 6 shows the resulting median parameter values for the four filament segments. In addition to the HR and AR data (20\({}^{\prime\prime}\) and 10\({}^{\prime\prime}\) resolutions), we include here the results using the LR (41\({}^{\prime\prime}\) resolution) column-density map.
The FWHM values obtained from different column-density maps are relatively consistent. The results are similar for the LR and AR maps, and HR data result in only slightly lower values. We also repeated the LR and HR analysis using the column-density maps provided by the Gould Belt Survey, where the maps have resolutions of 36.3\({}^{\prime\prime}\) (Roy et al. 2013) and 18.2\({}^{\prime\prime}\) (Polychroni et al. 2013). The 36.3\({}^{\prime\prime}\) resolution maps showed no noticeable differences to our results with the LR map. The 18.2\({}^{\prime\prime}\) maps resulted in slightly higher FWHM values that match our LR and AR results more closely. These small differences could be caused by differences in the background subtraction (which in the case of the HR map was done close to the filament), the convolution kernels, and even the assumed \(\beta\) values (\(\beta=2\) in Polychroni et al. 2013).
One clear outlier is segment A in the AR map (Fig. 6g), where the \(FWHM\) value increases with increasing \(r_{\rm max}\). However, the values appear to be affected by the masked area, which corresponds to the edge of the ArTeMiS coverage (Fig. 5). In the better-defined end of segment A, the FWHM values also drop close to the general \(\sim 0.05\) pc level. Segment B also shows larger FWHMs in the LR and AR maps than in the HR map. The widths are smaller in the high-density end and larger in the low-density end, the median in this case picking the larger value.
The MIR data result in FWHM estimates that are surprisingly close to the values derived from dust emission. However, while all emission maps give values \(p\sim 3\) for the power-law index, the MIR results show a much larger scatter. The values are high (\(p>4\)) for the B and C segments, and, as shown in Figs. B.1-B.2, the values are consistently high along the full length of these segments.
## 4 Analysis of synthetic filament observations
In this section we use the cloud model of Sect. 2.4 to examine sources of potential bias in the dust emission and extinction observations. With the parameters used in the simulations (0.0116 pc pixels, \(R=6\) pixels, and \(p=3.0\)), the model filament has a FWHM size of 0.14 pc or some 72\({}^{\prime\prime}\) at a distance of 400 pc (Sect. 2.4). The synthetic observations were made using a Gaussian beam with \(FWHM=24^{\prime\prime}\). The beam size and filament properties are roughly similar to those found in some _Herschel_ studies (e.g. Arzoumanian et al. 2011; Rivera-Ingraham et al. 2016; Panopoulou et al. 2022) but the simulations are not intended to directly replicate the OMC-3 observations. In particular, we examine a wider range of models with maximum column densities ranging from \(N({\rm H_{2}})=10^{21}\) cm\({}^{-2}\) to \(N({\rm H_{2}})=10^{24}\) cm\({}^{-2}\).
### Bias in FIR observations
We look first at FIR observations of a filament illuminated by an isotropic external radiation field. Figure 7 shows the true optical depth profiles and the profiles estimated from synthetic surface brightness maps for filaments of different column density.
The results are almost correct up to \(N({\rm H_{2}})\sim 10^{22}\) cm\({}^{-2}\), although \(\tau\) is increasingly underestimated. When the column density reaches \(N({\rm H_{2}})=3\cdot 10^{22}\) cm\({}^{-2}\), the estimated peak \(\tau\) is some 75% of the correct value and \(p\) is overestimated by \(\sim\)4%. The filament FWHM is overestimated by \(\sim\)30%, and the effect increases with increasing column density. The results were not sensitive to the assumed beam size.
Figure 4: Plummer fits of OMC-3 filament segment A, using the HR column-density map (angular resolution 20\({}^{\prime\prime}\)). Frame (c) shows the filament segment as a 2D image (the top row containing the median profile), frame (a) the fit residuals, frame (d) the parameter estimates along the filament, and frame (b) the parameter histograms (cf. description in Fig. 3). The column-density map has an angular resolution of 20\({}^{\prime\prime}\), and the fitted area is [-90\({}^{\prime\prime}\), +90\({}^{\prime\prime}\)] in the cross-filament direction.
Figure 5: Plummer fits of OMC-3 filament segment A. The figure is the same as Fig. 4 but uses the AR column-density map that is based on combined _Herschel_ and ArTeMiS data and has an angular resolution of 10\({}^{\prime\prime}\).
The MBB fits were made with \(\beta=2.0\), which is close to the actual value in the dust model. At \(N(\rm H_{2})=10^{22}\,\rm cm^{-2}\), the use of \(\beta=1.8\) or \(\beta=2.2\) would change the column-density estimates by -22% or +27%, respectively, while the FWHM is affected only at the \(\sim\)1% level.
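To make the role of the assumed \(\beta\) concrete, the following sketch fits a single-temperature MBB to noiseless test intensities in four bands between 160 and 500 \(\mu\)m. The input temperature and optical depth are arbitrary illustrative values rather than those of the model filament, and the fit is a simplified stand-in for the actual map-based analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8        # SI constants

def planck(nu, T):
    # Planck function B_nu(T), converted to MJy/sr
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T)) * 1e20

def mbb(nu, T, tau250, beta):
    # optically thin modified blackbody, optical depth scaled to 250 um
    return planck(nu, T) * tau250 * (nu / (c / 250e-6))**beta

wave = np.array([160.0, 250.0, 350.0, 500.0]) * 1e-6   # band wavelengths [m]
nu = c / wave

# illustrative test spectrum generated with T = 15 K, tau(250 um) = 1e-3, beta = 2.0
I_obs = mbb(nu, 15.0, 1.0e-3, 2.0)

for beta in (1.8, 2.0, 2.2):
    (T_fit, tau_fit), _ = curve_fit(lambda x, T, tau: mbb(x, T, tau, beta),
                                    nu, I_obs, p0=(15.0, 1.0e-3))
    print(f"beta = {beta}: T = {T_fit:.2f} K, tau(250 um) = {tau_fit:.2e}")
# a lower assumed beta gives a higher T and a lower tau (and vice versa),
# qualitatively reproducing the shifts quoted above
```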
Figure 8 shows how the bias in the \(p\), \(R\), and FWHM estimates increases with column density. For the normal ISRF and fits to the \(|r|<0.58\,\rm pc\) area, the FWHM is overestimated by a factor of two at \(N(\rm H_{2})=10^{23}\,\rm cm^{-2}\). The bias in \(p\) is 20% at \(N(\rm H_{2})=10^{23}\,\rm cm^{-2}\) and more than 50% at \(N(\rm H_{2})=3\cdot 10^{23}\,\rm cm^{-2}\). The fractional errors in \(R\) are larger, but even these become significant only beyond \(N(\rm H_{2})=3\cdot 10^{22}\,\rm cm^{-2}\).
Figure 8 also shows results for a radiation field that is a factor of \(\chi=10\) or \(\chi=100\) stronger at all frequencies.3 This increases the temperature contrast in the filament but also the average temperature (\(\sim\)7 K for the \(N(\rm H_{2})=10^{22}\,\rm cm^{-2}\) and \(\chi=10\) case). Therefore, a change from \(\chi=1\) to \(\chi=10\) decreases the systematic errors by a factor of two, and for \(\chi=100\) the FWHM estimates remain accurate up to \(N(\rm H_{2})=10^{23}\,\rm cm^{-2}\).
Footnote 3: More luminous sources would intrinsically have higher UV luminosity, but the UV-to-IR ratio of the radiation field at the filament location can still be much lower, depending on the intervening extinction.
In many observations, fits can only be done using a limited sky area. Figure 8 also shows results for fits within the smaller \(r_{\rm max}=0.35\,\rm pc\) area. This has no effect on the FWHM values, but the errors in \(p\) increase. While the observed column-density profile can be fitted with a Plummer function (with parameters different from the true values), it does not match the Plummer profile exactly, since the result depends on \(r_{\rm max}\). In the case of noiseless synthetic observations, a wider area always results in more accurate estimates. The situation can be different in real observations if the signal in the filament wings is dominated by noise and emission from unrelated structures.
A change in inclination changes the line-of-sight optical depths but, unlike a true increase of the filament column density, does not affect the temperatures. Appendix D confirms that the inclination has only a minor effect on the extracted filament parameters.
When the model includes a discrete radiation source, the parameter estimates depend on the distance to the source. Figure 9 shows the true and the recovered profiles at four positions along a model filament. The point source is located behind the filament but, because the filament is optically thin for FIR emission, the results are similar if the source were in front of the filament. At \(N(\rm H_{2})=10^{22}\,\rm cm^{-2}\) and \(N(\rm H_{2})=3\cdot 10^{22}\,\rm cm^{-2}\), the FWHM errors increase towards the point source location but the parameter \(p\) is much less affected. Figure 9 demonstrates the central flattening of the recovered \(\tau\) profiles, and, as suggested by Fig. 8, the Plummer fits do not follow the actual profile. Schuller et al. (2021) reached qualitatively similar results, although in their case the effects were smaller because of the stronger radiation field (\(G_{0}\)=1000).
Figure 10 shows how the parameter estimates vary along the filament in the cases of isotropic illumination (\(\chi=1\) or \(\chi=10\)) and the sum of an isotropic field (\(\chi=1\)) and a point source. The figure confirms the rapid increase of bias at column densities above \(N(\rm H_{2})=10^{22}\,\rm cm^{-2}\). The \(p\) values are more sensitive to
Figure 6: Parameters \(p\) (upper frames) and FWHM (lower frames) in fits to OMC-3 LR, HR, and AR maps (based on dust emission) and the MIR maps. Each frame shows results for the four filament segments, A-D, the symbols corresponding to median and the inter-quartile range of the least-squares parameters over the filament length.
Figure 7: Profiles of optical depth, \(\tau\), and dust temperature in the case of model filaments illuminated by an isotropic external field. The black curves show the true optical-depth profiles and the dashed red curves (fully overlapping the black curves) the Plummer functions fitted to those profiles. The blue and dashed cyan lines are, respectively, the optical depth profile derived from synthetic 160-500 \(\mu\)m surface brightness observations and the Plummer fit to those data. The values of \(p\) and FWHM (in parsecs, in the plot marked as \(\theta\)) are shown for both the true profile (left side, black font) and the estimated (right side, blue font) \(\tau\) profiles. The analysis assumes \(\beta=2.0\), but FWHM values for \(\beta=1.8\) and \(\beta=2.2\) are also shown in cyan and magenta, respectively. The assumed beam size is \(24^{\prime\prime}\). The dust temperature profiles (solid red curves) are cross-sections of the 3D model and are shown at full model resolution.
point-source illumination from one side, while FWHM is more affected when the source is along the line of sight towards the filament. The small bias at the filament ends is caused by these being subjected to the full unattenuated external field.
The FIR results can also be affected by changes in dust properties. These could be related to changes in the grain sizes and optical properties, following the formation of larger aggregates and ice mantles (Ossenkopf & Henning 1994; Ormel et al. 2011; Jones et al. 2016). Figure 11 compares calculations with uniform dust properties to two two-component models. The single-component models consist of COM, CMM, or AMMI dust. The AMMI model tends to result in the largest systematic errors, especially in the FWHM. The 250 \(\mu\)m opacity of AMMI is five times higher than for COM, and a similar difference also exists at shorter wavelengths, where dust absorbs energy. The differences are thus caused mainly by changes in the optical depth, since the spectral index of AMMI dust (\(\beta\sim 2.02\)) is similar to the value \(\beta=2.0\) that was used in the MBB analysis.
We tested two cases with spatially varying dust properties. The first two-component model consists of CMM dust and a modified CMM, where \(\beta\) is decreased from the original 160-500 \(\mu\)m spectral index \(\beta\sim 1.97\) down to \(\beta=1.5\). The abundance of the modified dust is calculated as tanh(\(n/[10^{5}\,{\rm cm^{-3}}]\)), so that the filament centre consists entirely of the modified dust. This shows the effect of a change in the spectral index, without a net change in the opacity. The results show systematic but relatively minor variations in the filament parameters (Fig. 11).
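For concreteness, the density-dependent mixing of the two dust populations can be written out directly (a transcription of the tanh formula above; the density values below are just examples):

```python
import numpy as np

def modified_fraction(n_H2):
    # relative abundance of the modified dust component vs local density [cm^-3]
    return np.tanh(n_H2 / 1.0e5)

for n in (1e3, 1e4, 1e5, 1e6):
    print(f"n = {n:.0e} cm^-3 -> fraction = {modified_fraction(n):.2f}")
# ~0.01, 0.10, 0.76, 1.00: the filament centre is essentially pure modified dust
```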
The second two-component model consists of COM dust in the outer parts and AMMI dust in the inner part. The relative abundance of the AMMI component follows the same density dependence as above. The spectral indices are different (\(\sim\)2.02 and \(\sim\)1.83 for AMMI and COM, respectively) but there is a larger difference in the absolute opacities. While the pure AMMI model led to the largest \(R\) and FWHM estimates, the COM+AMMI combination leads to the largest \(p\) values. This illustrates qualitatively the potential effects from spatial dust property variations. However, the quantitative results will also be
Figure 8: Estimated \(p\) and \(R\) values and the corresponding FWHM as a function of the peak column density of isotropically illuminated model filaments. The black, blue, and red curves correspond, respectively, to observations of model filaments in a normal ISRF, \(\chi=1\), and in stronger fields with \(\chi=10\) and \(\chi=100\). The solid lines with circles are fits to the model profile at \(|r|<0.58\,{\rm pc}\), while the dashed lines with squares are fits to a narrower region, \(|r|<0.35\,{\rm pc}\). The horizontal dotted lines indicate the true parameter values in the models.
Figure 10: Variation in FIR-estimated parameters along model filaments in cases of: isotropic illumination (frames a-d); a 590 \(L_{\odot}\) point source at \(\Delta y=2.09\,{\rm pc}\) and 0.93 pc behind the filament (frames e-h); and to one side of the filament (frames i-l). The plotted quantities are the ratio between the estimated and true FIR optical depths (\(\tau/\tau_{b}\)), the Plummer parameters (\(R\) and \(p\)), and the filament FWHM calculated based on these. Each frame shows results for three model filaments with peak column densities \(N({\rm H_{2}})=10^{22}\,{\rm cm^{-2}}\) (blue lines), \(N({\rm H_{2}})=3\cdot 10^{22}\,{\rm cm^{-2}}\) (cyan lines), and \(N({\rm H_{2}})=10^{23}\,{\rm cm^{-2}}\) (red lines). The dashed lines in frames (a)-(d) correspond to cases with a higher isotropic radiation field (\(\chi=10\)); all other cases include an isotropic field with \(\chi=1\). True values are plotted with dashed black lines.
Figure 9: Selected optical-depth profiles along model filaments that are illuminated by a point source. The filament column density is \(N({\rm H_{2}})=3\cdot 10^{22}\,{\rm cm^{-2}}\) (frame a) or \(N({\rm H_{2}})=1\cdot 10^{23}\,{\rm cm^{-2}}\) (frame b). The black curves show the true optical depths, the blue curves the optical depths estimated from FIR observations, and the dashed red curves the Plummer fits to those optical depths. The cross-sections are selected from positions \(\Delta y=\)0.46, 0.93, 1.39, and 1.86 pc along the filament, when the point source is located at the position \(\Delta y\)=2.09 pc along the filament and a distance 0.93 pc behind the filament. The curves from top to bottom are in order of decreasing distance to the point source (increasing order of \(\Delta y\)), and the parameters are listed in the same order (FWHM in units of parsec).
sensitive to the radial position and steepness of the transition in dust properties.
### Bias in MIR observations
The filament models of Sect. 4.1 were also used to examine how the MIR analysis is affected by in situ dust scattering and emission. Radiative-transfer calculations provide the surface brightness due to 8 \(\mu\)m dust scattering with the CMM dust model and the thermal emission from stochastically heated grains with the COM dust model. We concentrate here on the systematic effects. Appendix E examines further some effects related to observational noise.
#### 4.2.1 Effect of MIR scattering
For a filament with \(N(\mathrm{H_{2}})=3\cdot 10^{23}\) cm\({}^{-2}\), the scattering results in errors of less than \(\sim\)1% in the estimated \(\tau(8\,\mu\mathrm{m})\). This remains true even if the isotropic radiation field is increased to \(\chi=10\) or the default point-source luminosity is increased by a factor of 50. The calculations assume an intensity \(I^{\mathrm{bg}}\)=10 MJy sr\({}^{-1}\) for the background sky. This is similar to the OMC-3 field but still a relatively low value. If the background is higher, the effects from scattering in the cloud would be further reduced.
The importance of scattering increases with increasing column density. For \(\tau(8\,\mu\mathrm{m})<1\), the intensity of the scattered light follows the column-density profile. If the filament is optically thick, the scattered light will peak on either side of the column-density peak, with a potentially larger impact on the parameter estimates. However, Fig. 12 shows that for a filament with \(N(\mathrm{H_{2}})=10^{24}\) cm\({}^{-2}\), the maximum optical-depth errors remain below 10%, both for an isotropic field \(\chi=10\) and for a point source with a luminosity 50 times the default value. If the isotropic radiation field is increased to \(\chi=50\), the errors exceed 20% for a \(N(\mathrm{H_{2}})=10^{24}\) cm\({}^{-2}\) filament. If the column density is increased further by a factor of three (to rather extreme values), the errors would exceed 60%. The bias is determined mainly by the optical depth. The optical depth depends on the column density but also on the dust properties and would be more than two times higher for the AMMI dust than for the CMM dust that was used in Fig. 12.
Figure 13 shows the effect of scattering on the filament parameters in the case of a \(N(\mathrm{H_{2}})=10^{24}\) cm\({}^{-2}\) filament illuminated by an isotropic field and a foreground point source. For the default radiation-field values, the bias caused by light scattering is \(\sim\)1% or less. When the isotropic field is increased to \(\chi=20\), the errors in \(p\) and FWHM rise to a few per cent. If the point source is made 50 times stronger, the errors exceed \(\sim\)15%, but only in a small area closest to the point source.
In summary, the scattering in the filament has only a minor effect on the parameter estimation. The effects become visible only if the local radiation field is very strong, the filament has a very high column density, and the background surface brightness is low.
#### 4.2.2 Effects of MIR dust emission
The effects of the 8 \(\mu\)m thermal emission from stochastically heated grains (\(I^{\mathrm{bg}}\)) were examined using the COM dust model. Figure 14 shows maps of the dust emission for a \(N(\mathrm{H_{2}})=3\cdot 10^{23}\) cm\({}^{-2}\) filament. An isotropic radiation field (\(\chi\)=1) results in emission at a level of \(I_{\nu}(8\,\mu\mathrm{m})\sim 0.5\) MJy sr\({}^{-1}\). This is not completely negligible if the background sky brightness is low. The 590 \(L_{\odot}\) point source has a larger effect, which ranges from less than 1 MJy sr\({}^{-1}\) far from the source to \(\sim\)100 MJy sr\({}^{-1}\) close to the source. For the column density of \(N(\mathrm{H_{2}})=10^{23}\) cm\({}^{-2}\), the surface brightness remains at a similar level, but the morphology is different. The surface brightness follows more closely the column-density distribution, and, for a source behind the filament (Fig. 14 frame c), the intensity peaks towards the centre of the filament, with only a minor dip at \(|\Delta x|<0.05\) pc.
Because the thermal emission is stronger and more extended than the scattered light, it could even affect the observer's
Figure 11: FIR-estimated filament parameters for single-dust models (COM, CMM, or AMMI) and two models with spatial dust-property variations. In CMM(2), the transition is from normal CMM dust in the outer parts to modified dust with \(\beta=1.5\) in the inner part. For COM/AMMI, the transition is from COM dust to AMMI dust. The filament column density is \(N(\mathrm{H_{2}})=3\cdot 10^{22}\) cm\({}^{-2}\), and the filament is illuminated by an isotropic radiation field (frames a-d) or with a 590 \(L_{\odot}\) point source at \(\Delta y\)=2.09 pc and 0.93 pc to one side of the filament (frames e-h). The plotted parameters are the estimated optical depth relative to its true value (\(\tau/\tau_{0}\)), the parameters \(R\) and \(p\) of the fitted Plummer functions, and the resulting filament FWHM estimate.
Figure 12: Modelled effect of scattering on the MIR optical-depth estimates towards the filament centre. Frame (a) shows the model optical depth, which corresponds to a peak column density of \(N(\mathrm{H_{2}})=10^{24}\) cm\({}^{-2}\) with the CMM dust model. The other frames show the relative error in the \(\tau(8\mu\mathrm{m})\) estimates. Frame (b) includes only an isotropic radiation field with \(\chi=10\). Frames (c)-(e) show the errors when a point source, with a luminosity 50 times the default value, is included at \(\Delta y=2.09\) pc and 0.93 pc behind the filament (frame c), in front of the filament (frame d), or to one side (frame e).
estimate of \(I^{\rm bg}\). We calculated alternative \(\tau\) maps, where the median value of the thermal emission \(I^{\rm bg}\) from stochastically heated grains (in the area visible in Fig. 14) was added to the original \(I^{\rm bg}\). The added component is not truly part of the sky background, because it originates within the source itself and preferentially on the observer's side of the source.
Figure 15 shows the results for \(N({\rm H_{2}})=3\cdot 10^{23}\,{\rm cm^{-2}}\), when the point source is towards one side of the filament. Because the observed sky brightness varies along the \(\Delta y\) coordinate, the \(\tau\) profiles do not drop to zero in the filament wings. This effect is mostly eliminated by the linear background component that is part of the fitted profile function (Eq. (7)). Nevertheless, the parameter estimates vary by up to 50% with the distance to the point source. Both \(p\) and FWHM are more overestimated near the point source, although \(p\) drops sharply at the position closest to the point source.
Figure 15 also shows results for a stronger isotropic field with \(\chi=10\), where \(I^{\rm bg}\) has to take into account the extended emission. The optical depths are now underestimated more, and especially \(p\) shows a large bias. Nevertheless, the filament FWHM is overestimated only by some 25%.
Other cases with \(N({\rm H_{2}})=3\cdot 10^{23}\,{\rm cm^{-2}}\) but different radiation fields are shown in Appendix F. If the point source is directly in front of or behind the filament, its effect is amplified, as more line-of-sight material is heated. The parameter values are then lower at \(\Delta y\gtrsim 1\), and the filament disappears close to the projected point-source location, when the dip caused by the background extinction is filled by thermal emission. This happens even earlier for models of lower column density, because the lower opacity reduces the background absorption more than it reduces the thermal emission.
In Appendix F we examine a model with lower column density, \(N({\rm H_{2}})=3\cdot 10^{22}\,{\rm cm^{-2}}\), and higher background intensity, \(I^{\rm bg}=100\,{\rm MJy\,sr^{-1}}\). There, the filament parameters are generally recovered accurately. However, the errors in \(p\) and FWHM still reach 30% close to the point source and, if the source is along the line of sight, the filament disappears as an absorption feature.
## 5 Discussion
We have examined the estimation of the filament properties with observations of MIR extinction and FIR dust emission. In Sect. 5.1, we discuss the observational results on the OMC-3 field. In Sect. 5.2, we concentrate on the radiative transfer models and, based on the models, the systematic errors that may affect the filament observations.
### OMC-3 filament parameters
We analysed four filament segments, named A-D, in the OMC-3 cloud. Based on MIR absorption, the median FWHM widths
Figure 14: Maps of thermal dust emission \(I^{\rm bg}\) calculated for the \(N({\rm H_{2}})=3\cdot 10^{23}\,{\rm cm^{-2}}\) filament model. Frame (a) corresponds to isotropic illumination with \(\chi=1\), but the surface-brightness values are multiplied by 100 just for plotting (the same colour bar applies to all frames). The other frames include both the isotropic field and a point source located at \(\Delta y=2.09\,{\rm pc}\) and at a distance of 0.93 pc in front, behind, or to the left of the filament centre axis (frames b, c, and d, respectively).
Figure 13: Modelled effect of scattered light on the filament parameters derived from MIR observations. The filament column density is \(N({\rm H_{2}})=10^{24}\,{\rm cm^{-2}}\), and it is illuminated by an isotropic radiation field and a foreground point source. The blue curves correspond to the case of \(\chi=1\) for the isotropic component and the point source with the nominal luminosity. The cyan and red curves show, respectively, the results when both radiation-field components are scaled by a factor of 20 or 100. The dashed black lines show the correct values of the parameters.
Figure 15: Estimated filament parameters along the \(N({\rm H_{2}})=3\cdot 10^{23}\,{\rm cm^{-2}}\) model filament, when the surface brightness includes 8 \(\mu m\) thermal dust emission. The filament is illuminated by an isotropic background (\(\chi=1\)) and a point source at \(\Delta y=2.09\,{\rm pc}\) and a distance 0.94 pc to one side of the filament (cf. Fig. 14d). Red curves correspond to calculations with the true value of \(I^{\rm bg}=10\,{\rm MJy\,sr^{-1}}\) and blue curves to a case where the median surface brightness from Fig. 14d is added to the estimate of \(I^{\rm bg}\). The black curves are for the case of a \(\chi=10\) isotropic radiation field (no point source), with a similarly adjusted \(I^{\rm bg}\) estimate.
are \(\sim\)0.04 pc, with little dependence on the fitted cross-filament extent \(r_{\rm max}\). The values were on average consistent between the four segments, but there were also significant variations along the filaments. These are discussed further in Sect. 5.3. The analysis of FIR emission also gave median FWHM \(\sim\) 0.03-0.05 pc, but, in the case of segment B, up to \(\sim\)0.1 pc. FIR emission was analysed using column-density maps with angular resolutions from 10'' to 41''. There were only small differences between the different map versions, the HR version (20'' resolution) resulting in the smallest values, with median values of 0.02-0.04 pc in Fig. 6. Fits to the median profiles gave values that were similar to the median parameter values along the filaments.
Previous _Herschel_ studies have typically found filament widths of \(\sim\)0.1 pc. Arzoumanian et al. (2011) reported a narrow distribution of 0.10 \(\pm\) 0.03 pc in the cloud IC5146, at a distance of 460 pc. These (deconvolved) widths were based on Gaussian fits, and column-density maps at \(\sim 37\arcsec\) resolution and 250 \(\mu\)m surface brightness maps at \(\sim 18\arcsec\) resolution gave similar results. Rivera-Ingraham et al. (2016) analysed 29 filaments in 13 separate fields at \(d=100-500\) pc distances, the analysis of column-density maps of 41'' resolution resulting in widths of 0.13 \(\pm\) 0.05 pc. In the above papers, the fits were done to the mean (or median) radial profile of an entire filament that was detected with automated methods, using DisPerSE (Sousbie 2011) in the case of Arzoumanian et al. (2011) and getfilaments (Men'shchikov 2013) in the case of Rivera-Ingraham et al. (2016). Arzoumanian et al. (2019) extended _Herschel_ studies to 599 filaments in eight regions at 140-460 pc distances, the distribution again peaking at \(\sim 0.1\) pc with an interquartile range of 0.07 pc. However, based on the _Herschel_ data in Arzoumanian et al. (2019), Panopoulou et al. (2022) concluded that the filament width estimates also depend on the source distance and appear to be 4-5 times the beam size. Thus, the widths would increase from less than 0.1 pc in the closest fields (e.g. Taurus and Ophiuchus) to almost 0.3 pc at \(\sim\)800 pc (the IC5146 cloud). This scaling appeared to hold for regions of very different types, from the low-density Polaris field to the Orion-B cloud with active star formation. Some distance dependence was also noted in Rivera-Ingraham et al. (2016).
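The deconvolved widths quoted in these studies are generally obtained by subtracting the beam in quadrature from a Gaussian fit. The generic sketch below (not the exact procedure of any one paper) illustrates why this correction is small once the filament is a few beams wide, and large (and uncertain) when the observed width approaches the beam.

```python
import numpy as np

def deconvolved_fwhm(fwhm_obs, fwhm_beam):
    # Gaussian beam subtraction in quadrature
    return np.sqrt(fwhm_obs**2 - fwhm_beam**2)

beam = 0.07   # pc, roughly a 37 arcsec beam at ~400 pc
for obs in (0.10, 0.15, 0.30):
    print(obs, deconvolved_fwhm(obs, beam))
# 0.10 -> 0.071, 0.15 -> 0.133, 0.30 -> 0.292 pc
```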
Although deconvolution by a larger telescope beam increases uncertainties, with perfect observations of perfect Plummer profiles, the FWHM estimates should not depend on the resolution of the observations. Indeed, in our analysis, the factor of 20 range in the angular scales is not associated with corresponding systematic changes in the estimated filament widths. The average FWHM were at or below 0.05 pc for all maps, although _Herschel_ data could in some cases lead to higher values of FWHM\(\sim\)0.1 pc (segments A and B, Fig. 6). The values are lower than those reported in Panopoulou et al. (2022) for fields at similar distances (including Orion B with FWHM\(\simeq\)0.15 pc), and do not match the distance/resolution dependence reported there. In all the above studies, the column densities were based on the modelling of dust emission with a single-component MBB. Howard et al. (2019) and Howard et al. (2021) used the PPMAP method (Marsh et al. 2015) to analyse _Herschel_ and SCUBA-2 observations of filaments in the Taurus and Ophiuchus clouds. They noted that the use of the PPMAP method, which takes temperature variations into account, resulted in a reduction in the estimated filament widths. Our results were also roughly constant with respect to the size of the fitted cross-filament area. Therefore, even if the extent of the fitted area typically increases with the target distance, that should not necessarily lead to larger FWHM estimates.
Filaments in the OMC-3 region were already investigated by Schuller et al. (2021), who used _Herschel_ 160-500 \(\mu\)m and ArTeMiS 350 \(\mu\)m and 450 \(\mu\)m data (different observations from the ArTeMiS data analysed in Mannfors et al. (2022)). The ArTeMiS surface-brightness data and _Herschel_ temperature information at 18.2'' resolution were combined into a column-density map with 8'' resolution. This resulted in filament FWHM estimates of 0.06\(\pm\)0.02 pc. These values are similar to our AR results, where (ignoring the outlying values of segment A) the median values range from 0.04 pc to \(\sim\)0.1 pc (Fig. 6g). While Schuller et al. (2021) studied long, automatically detected filaments, our analysis is limited to short segments at the highest column densities.
In addition to FWHM, the values of the asymptotic power-law index, \(p\), of the fitted Plummer function are of interest. The _Herschel_ and combined _Herschel_ and ArTeMiS data typically gave values \(p=2-5\), with some variations depending on the extent of the fitted area. These suggest that nearby cloud structures are affecting the fits in the tails of the profile function (cf. Sect. 5.3). Compared to FWHM, the individual \(p\) and \(R\) values could be measured less precisely. Especially the MIR \(p\) values showed a large scatter, which could be caused in part by changes in the local MIR radiation field. The MIR \(p\) estimates are similar for different values of \(r_{\rm max}\), which suggests that individual point sources within that area do not have a strong effect on the results.
### Systematic errors in filament parameters
We used radiative transfer simulations to study error sources in the analysis of MIR and FIR observations. In the following we discuss their relative importance and the potential effects on the OMC-3 results.
#### 5.2.1 MIR observations
Dust scattering and thermal dust emission are potential error sources in the analysis of MIR extinction. They depend on the local radiation field and are significant at high column densities. We assumed in the tests a background level of 10 MJy sr\({}^{-1}\). However, if the background level is higher, the effects of local emission and scattering will be reduced.
For dust scattering, errors reached tens of per cent only if the column density is \(N({\rm H}_{2})\sim 10^{24}\) cm\({}^{-2}\) or higher. The intensity of the isotropic background also had to be a factor of \(\chi=100\) above the normal ISRF or the luminosity of a point source at \(\sim\)1 pc distance had to be of the order of \(\sim 10^{4}\)L\({}_{\odot}\). Thus, scattering should not be a significant source of errors in the OMC-3 field. Scattering would tend to decrease the \(\tau\) estimates, increase the \(p\) estimates, and lead to overestimations of the filament FWHM (Fig. 13). These effects are more pronounced close to point sources, especially if the source is on the line of sight towards the filament. The presence of such point sources would be evident in the observed maps, although their effect can extend over distances of several parsecs.
The MIR scattering depends strongly on the grain properties. The so-called coreshine, which is observed at 3-4 \(\mu\)m wavelengths towards many dense cores, has indicated surprisingly strong MIR scattering. This requires strong dust evolution relative to diffuse clouds (Steinacker et al. 2010; Pagani et al. 2010; Juvela et al. 2012c; Steinacker et al. 2014b; Lefevre et al. 2014). Lefevre et al. (2016) investigated the 8 \(\mu\)m scattering towards the pre-stellar, high-column-density core of L 183, where the estimated intensity of the scattered light was hundreds of kJy sr\({}^{-1}\).
This corresponds to the scattering in a more or less normal ISRF, and the high scattering efficiency could be explained by large aggregate grains. The L 183 observations (and the possibility of very large aggregates) suggest that the scattered signal could be stronger than in our simulations. If the scattered light in L 183 is at a level of \(\sim 0.1\) MJy sr\({}^{-1}\), scattering could cause noticeable errors in MIR observations of high-mass star-forming regions, where the radiation fields are much stronger.
In our models, the local thermal dust emission was a more significant factor than the scattering. For the high-column-density filament with \(N({\rm H}_{2})\sim 10^{24}\) cm\({}^{-2}\), the emission effects were tens of per cent, both in the case of the normal ISRF and in the case of the 590 \(L_{\odot}\) point source. This led to the filament optical depths being underestimated and the individual filament parameters (\(p\), \(R\), FWHM) being overestimated. The magnitude of the effects depends strongly on the presence of local radiation sources (Fig. 13). If there is a point source on the line of sight towards the filament, the thermal emission can completely mask the MIR absorption (Appendix Sect. F). Quantitatively, these effects are sensitive to the filament column density, the level of the background surface brightness, and the location of the point source relative to the filament (Fig. 14). In the comparison to the models, one must also take into account that observations tend to underestimate the true column densities.
Scattering is not only a source of errors but can itself be used to study filaments at high angular resolution. At near-infrared wavelengths the scattering is stronger (in absolute terms and relative to the in situ thermal emission) but the larger optical depths complicate the analysis (Juvela et al. 2012b; Malinen et al. 2013). Scattering may also be measurable at MIR wavelengths, above the MIR absorption (e.g. Steinacker et al. 2014b; Lefevre et al. 2014). However, this requires a low background sky brightness, such as found at high Galactic latitudes (Steinacker et al. 2014a).
#### 5.2.2 FIR observations
The far-infrared analysis of Sect. 3.2 is affected especially by the bias of the column-density estimates, which is caused by temperature variations in the source and the analysis using the single-temperature MBB model. Appendix G shows that these effects can be easily demonstrated even without complex modelling.
In the radiative-transfer simulations of Sect. 4, the column-density estimates vary depending on the data resolution (beam sizes), the analysis method, the temperatures, and the dust optical properties. It can be instructive to compare the peak optical depths obtained at different resolutions and with different methods. Table 1 lists values for the \(N({\rm H}_{2})=10^{22}\) cm\({}^{-2}\) model. Since the filament FWHM is quite large, \(\sim 72^{\prime\prime}\), the peak values of the \(40^{\prime\prime}\) and \(20^{\prime\prime}\) maps differ only slightly. Shorter wavelengths are more sensitive to warm dust, tend to bias the temperatures more upwards, and result in lower \(\tau\) values. In Table 1 the effect is the opposite for \(\beta=1.8\), because the simulation used a dust model with \(\beta\sim 1.95\). A modest increase in the assumed \(\beta\) (to a value above the \(\beta\) in the simulations) can even negate the natural tendency to underestimate the column densities. When real observations are analysed, the precise value of \(\beta\) is unknown. However, because \(\beta\) affects all column-density estimates similarly, its effect on the observed filament profiles is limited - as long as \(\beta\) does not change significantly as a function of the filament radius.
The bias of the column-density estimates increases with column density and exceeded a factor of two at \(N({\rm H}_{2})=10^{22}\) cm\({}^{-2}\) (Fig. 7). Since these errors are correlated with the column density, they also affect the observed filament profiles. The parameters \(p\), \(R\), and FWHM are all biased upwards. Unlike in MIR observations, a stronger isotropic radiation field decreases the errors by reducing the amount of very cold dust. In the normal ISRF, the systematic errors of the filament FWHM reach 50% when the column density exceeds \(N({\rm H}_{2})=10^{23}\) cm\({}^{-2}\). In a \(\chi=10\) field, the errors are less than half of this, and they are further halved in a \(\chi=100\) field. This means that the errors can be of similar magnitude in a low-mass star-forming region (low column density and low radiation field) as in a high-mass star-forming region (high column density and high radiation field). The errors in \(R\) and \(p\) also depend on both the column density and the radiation field, and can reach 50% for a \(N({\rm H}_{2})=10^{23}\) cm\({}^{-2}\) filament in the normal ISRF (Fig. 10). A point source has a similar effect, the errors increasing close to the source and with only a small dependence on the source location (on the line of sight vs to one side of the filament).
In the MBB analysis, the maximum column density of the OMC-3 filaments is \(N({\rm H}_{2})\sim 10^{23}\) cm\({}^{-2}\), but the true column density could be even a few times higher. Because the quiescent parts of the filament are likely to be subjected to a radiation field of at most \(\chi\sim 100\) (Mannfors et al. 2022), the effects of MIR scattering are likely to be insignificant. The models predict a more significant role for the MIR dust emission. Figure 15 indicated a \(\sim\)25% effect for an isotropic radiation field with \(\chi=10\). This requires the extended thermal emission to also be taken into account in the \(I^{\rm bg}\) estimates. In observations this happens automatically (to some accuracy), and the local thermal emission need not be separated from other contributions to \(I^{\rm bg}\). The fact that the OMC-3 FIR and MIR observations resulted in similar FWHM estimates also suggests that the systematic errors of the MIR analysis are unlikely to amount to tens of per cent.
Filaments are located inside dense clouds, which significantly reduces the UV flux that reaches the filament. Most of the 8 \(\mu\)m emission would then originate in extended regions, and, unlike in our simple models, the emission would be mostly
| Case | Beam [\({}^{\prime\prime}\)] | \(\beta\) | \(\tau^{\rm max}(250\,\mu{\rm m})\) [\(10^{-3}\)] | Rel. error [%] |
| --- | --- | --- | --- | --- |
| true | 6 | – | 1.49 | 0.00 |
| MBB | 6 | 1.8 | 1.11 | -25.77 |
| MBB | 24 | 1.8 | 1.04 | -29.99 |
| HR | 18 | 1.8 | 1.04 | -30.34 |
| HR | 18 | 2.0 | 1.33 | -10.48 |
| HR | 18 | 2.2 | 1.71 | 14.61 |
| HR (w) | 18 | 1.8 | 1.00 | -32.54 |
| HR (w) | 18 | 2.0 | 1.32 | -11.46 |
| HR (w) | 18 | 2.2 | 1.73 | 16.05 |
| LR | 40 | 1.8 | 0.98 | -33.98 |
| LR | 40 | 2.0 | 1.22 | -17.85 |
| LR | 40 | 2.2 | 1.51 | 1.64 |
| LR (w) | 40 | 1.8 | 0.96 | -35.27 |
| LR (w) | 40 | 2.0 | 1.22 | -18.42 |
| LR (w) | 40 | 2.2 | 1.52 | 2.30 |

Table 1: Comparison of peak 250 \(\mu\)m optical depths estimated with different analysis methods. The estimates are based on synthetic surface-brightness maps of the \(N({\rm H}_{2})=10^{22}\) cm\({}^{-2}\) filament model.
uncorrelated with the filament structure. This will reduce the systematic errors in the filament parameters. A second important factor is the abundance of polycyclic aromatic hydrocarbons and other very small grains (Draine 2003). If these have already partly disappeared within the filament (e.g. by sticking onto larger grains), the MIR emission would be suppressed. Based on our models, FIR analysis could overestimate the width of N(H\({}_{2}\)) \(\sim 10^{23}\) cm\({}^{-2}\) filaments by tens of per cent in the normal ISRF. However, a stronger radiation field reduces the errors to the \(\sim\)10% level, which is within the uncertainties of the MIR versus FIR comparison (Fig. 6). Therefore, although the models show the possibility of significant systematic errors in all observations, there is no contradiction in the approximate agreement between the OMC-3 MIR and FIR estimates.
### Reliability of profile fits
Based on synthetic observations of magnetohydrodynamic cloud simulations, Juvela et al. (2012a) concluded that, at the nominal _Herschel_ resolution, the filament parameters could be recovered reliably only up to \(\sim\)400 pc (assuming \(\sim\)0.1 pc filaments). OMC-3 is at this limit and the filaments are partly narrower than 0.1 pc. Nevertheless, the angular resolution of at least the MIR and AR maps should be sufficient for the profile analysis.
In addition to systematic errors, the results show significant random fluctuations, especially in the \(p\) and \(R\) estimates. Some fits resulted in values \(p\gtrsim 5\) that are inconsistent with most physical filament models. Individual \(p\) and \(R\) values are more difficult to measure because, in Eq. (6), a larger value of \(p\) can be compensated by a larger value of \(R\), the fit still recovering the same FWHM and generally fitting the observed profile. The parameter \(R\) probes the structure in the inner part of the filament and is dependent on the data being able to resolve those smaller scales. Conversely, \(p\) probes the asymptotic behaviour at large distances and is sensitive to nearby cloud structures and the general background fluctuations.
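This near-degeneracy can be made explicit: for the column-density Plummer form assumed in the sketches above, any \((R,p)\) pair satisfying FWHM \(=2R\sqrt{2^{2/(p-1)}-1}\) reproduces the same width, and the profiles then differ mainly in the wings.

```python
import numpy as np

def R_for_fwhm(fwhm, p):
    # flat radius R that yields the requested Plummer FWHM for a given p
    return fwhm / (2.0 * np.sqrt(2.0**(2.0 / (p - 1.0)) - 1.0))

fwhm = 0.05   # pc
for p in (2.0, 3.0, 5.0, 8.0):
    R = R_for_fwhm(fwhm, p)
    wing = (1.0 + (0.5 / R)**2)**(-(p - 1.0) / 2.0)   # relative N at r = 0.5 pc
    print(f"p = {p}: R = {R:.3f} pc, N(0.5 pc)/N0 = {wing:.1e}")
# all four (R, p) pairs give FWHM = 0.05 pc; they differ mostly at large radii
```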
The error distributions of the parameters (including FWHM) are often asymmetric for the individual profiles, with the MCMC estimates showing a long tail to high values (cf. Fig. 11). The overall dispersion along the segments (as shown by the histograms e.g. in Fig. 3b) gives an empirical estimate for the total uncertainty. Our data also contain missing values, which affect the reliability of MIR profiles (e.g. Fig. 3) and the analysis of the AR map of the filament segment A (Fig. 5). The missing data also bias the median profiles. Rather than rejecting all profiles that contained missing values (which could be almost all of the data), the median values were calculated over the remaining pixels. Therefore, at a given distance from the filament centre, the median value is based on different sections along the length of the filament, leading to random errors in the median profile.
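As a small illustration of this procedure, the median in each cross-filament bin is taken over whatever pixels remain, so different bins may sample different parts of the segment:

```python
import numpy as np

# one cross-filament profile per row; masked or missing pixels are NaN
profiles = np.array([[1.0, 2.0, 3.0, 2.0, 1.0],
                     [1.2, np.nan, 2.8, 2.1, 0.9],
                     [np.nan, 1.9, 3.2, np.nan, 1.1]])

median_profile = np.nanmedian(profiles, axis=0)   # median over valid pixels only
print(median_profile)                             # [1.1, 1.95, 3.0, 2.05, 1.0]
```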
Figure 16 shows examples of the fits to the OMC-3 filament segment A, based on the MIR, HR, and AR column-density estimates. Most MIR profiles do not have data around offset \(\Delta x=30\arcsec\), but in this case this does not have a major effect on the fits. MIR profiles also tend to show a dip around \(\Delta x=-30\arcsec\), which could be caused by imperfections in the column-density estimation (e.g. due to nearby sources) or by random column-density fluctuations. These fits are associated with high values of \(p\sim 5\). In comparison, the profile at offset \(\Delta y=30\arcsec\) along the filament (red curve) decreases rather than increases towards negative \(\Delta x\), and has \(p\sim 1.6\). For the maps HR and AR based on dust FIR emission, the fitted profiles match the observed profiles better. However, the effect of the missing data is clear in the AR results, where the observations at \(\Delta y\)=10 and 20 arcsec (i.e. the profiles most affected by missing values, cf. Fig. 5c) provide only weak constraints at negative \(\Delta x\) values. This leads to degeneracy between the Plummer and the background parameters, the background is underestimated at negative \(\Delta x\) offsets, and the fit results in abnormally large FWHM values.
Large \(p\) values are also observed for some other dust emission data, typically in fainter and less clear parts of the filaments. They can be related to interference from nearby cloud structures, which can be other filaments or clumps (e.g. northern part of filament B, Fig. 11) or diffuse emission that appears as an extension of the filament itself (e.g. segment C, \(\Delta x\sim 60\arcsec\), Fig. 11). However, the filament segment D is associated with large values \(p\geq 4\) over its full length. Figure 17 shows examples of the individual profiles. The main feature (in HR and AR maps) is a dip at negative \(\Delta x\) values. This is similar to the one seen in Fig. 16 and similarly appears to be the origin of the large \(p\) values. The situation is worst for the \(\Delta y=10\arcsec\) profile, where the background also curves up at positive \(\Delta x\) offsets, raising the \(p\) value above nine. This suggests that the background might need to be modelled using a second order polynomial, although \(p\) would be partly degenerate with a second order background term.
Figure 18 compares the results that are based on the MIR, HR, and AR data on the filament segment D and six versions of the fitted profile function. The row B corresponds to our default model in Eq. (7), which has six parameters: three for the Plummer function itself, one allowing a shift along the \(\Delta x\) axis, and two parameters describing the linear background. In Fig. 18, one
Figure 16: Plummer fits for selected cross-sections of the OMC-3 filament segment A. The three frames correspond to the MIR, HR, and AR column-density maps, respectively. Each frame shows individual profiles for the offsets \(\Delta y\)=10, 20, 30, and 40 arcsec (cf. Fig. 3-5; blue, cyan, red, and grey lines, respectively) and the median profile (thick black lines). The best-fit Plummer profiles are plotted with dashed lines of the same colour, and \(R\), \(p\), and FWHM values of the fits are listed in the frames.
of the alternative fits omits the shift (row A) and one adds the second order term to the background component (row C). So far all fits assume that the filament profiles are symmetric, which is only approximately true for the selected segments. In Fig. 18, we also show results for asymmetric Plummer functions (separate \(R\) and \(p\) parameters on each side of the peak) that are combined with a background modelled as a linear first order (row D) or a second order (row E) polynomial.
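The model variants compared in Fig. 18 can be summarised by the following sketch of the fitted function: a Plummer component (optionally two-sided), an optional centre shift, and a polynomial background. This is a schematic transcription of Eqs. (6)-(7), assuming the usual \((p-1)/2\) column-density exponent; in the actual fits the model is additionally convolved with the beam.

```python
import numpy as np

def plummer_profile(dx, N0, R, p, dr=0.0, b0=0.0, b1=0.0, b2=0.0):
    # symmetric Plummer profile with centre shift dr and polynomial background;
    # b2 = 0 corresponds to the default linear background (row B of Fig. 18)
    r = dx - dr
    return N0 / (1.0 + (r / R)**2)**((p - 1.0) / 2.0) + b0 + b1 * dx + b2 * dx**2

def plummer_profile_asym(dx, N0, R_m, p_m, R_p, p_p, dr=0.0, b0=0.0, b1=0.0, b2=0.0):
    # two-sided variant (rows D and E): separate R and p on each side of the peak
    r = dx - dr
    R = np.where(r < 0.0, R_m, R_p)
    p = np.where(r < 0.0, p_m, p_p)
    return N0 / (1.0 + (r / R)**2)**((p - 1.0) / 2.0) + b0 + b1 * dx + b2 * dx**2

dx = np.linspace(-0.3, 0.3, 121)                            # offset [pc]
row_B = plummer_profile(dx, N0=1.0, R=0.03, p=3.0, dr=0.01, b0=0.05, b1=0.1)
row_E = plummer_profile_asym(dx, 1.0, 0.02, 2.5, 0.04, 3.5, b2=0.2)
```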
If the filament shifts in the \(\Delta x\) direction at small scales, the omission of the \(\Delta r\) term in Eq. (6) should lead to larger FWHM values. No such effect is seen in Fig. 18, although this could still play a role in fits of longer and more fragmented filaments. The addition of the second order background term (figure row C) reduces the \(p\) values significantly from \(p\sim 5\) to \(p\sim 2.5\). The \(R\) values are also smaller, but the mean value and the scatter of the FWHM values have increased. When the second order background term is included in the asymmetric Plummer fits, the results are much less affected and especially the FWHM values remain practically unchanged (row E vs row B). With the exception of row C, the FWHM values are thus similar for all the alternative fits.
In Fig. 18, the fits have between five and nine free parameters. In these calculations, we also added penalties for values \(R<0.005\) pc and \(p>8\). The AR results were somewhat sensitive to the \(R\) threshold, because the filament widths are not much larger than the beam size and unresolved filaments can result in small \(R\) values. The FWHM estimates from the LR maps could thus be even more sensitive to any priors that are used. Accurate beam models are also important, since they are used to deconvolve the observations. The LR beam is well defined, while the effective beams of the HR and AR maps depend on the way these maps are constructed. The HR column density is based on a combination of intermediate maps that have different angular resolutions and different sensitivity to temperature variations. The AR map is the combination of data from two instruments, and the effective beam could be affected by calibration differences or imperfections of the feathering procedure.
## 6 Conclusions
We have studied four filament segments in the OMC-3 cloud in Orion, using observations of MIR extinction and FIR dust emission and with the goal of measuring the filament widths. The _Herschel_ FIR data were converted to column-density maps at 41\({}^{\prime\prime}\) (LR maps) and 20\({}^{\prime\prime}\) (HR maps) resolution, as well as an additional map of 10\({}^{\prime\prime}\) resolution from combined _Herschel_ and ArTeMiS 350 \(\mu m\) observations (AR map). In addition to using observational results from the OMC-3 filaments, we performed radiative transfer simulations to investigate sources of systematic errors that could affect the measurement of filament profiles. The study led to the following conclusions:
1. The OMC-3 filament segments have FWHM values of 0.03-0.05 pc. Similar values are obtained with the three column-density maps derived from FIR observations (10-41\({}^{\prime\prime}\) angular resolution) and based on the MIR extinction map (\(\sim 2^{\prime\prime}\) resolution). In the LR and AR maps, the estimated width was only higher in segment B and (partly) in segment A, with \(FWHM\sim 0.1\) pc.
Figure 17: Selected cross-sections of the OMC-3 filament segment D. These correspond to HR and AR data (blue and red lines, respectively) and the offsets \(\Delta y\)=10 and 25 arcsec, where the \(\Delta y\)=10 arcsec profiles have the lower column density. The dashed lines are the best-fit Plummer profiles with the parameters listed in the figure.
Figure 18: Alternative fits of the filament segment D. The histograms show the distributions of the \(R\), \(p\), and FWHM parameters that are estimated from the MIR, HR, and AR maps (cyan, blue, and red histograms, respectively). The rows are for alternative models of the profile function, where row B corresponds to our default model. In the other fits, row A omits the shift of the filament centre, row C adds a second order term to the background, row D fits two-sided Plummer functions with a linear background, and row E fits two-sided Plummer functions with a second order polynomial for the background.
2. The MIR results showed little dependence on the extent of the fitted area, which was varied between \(60\arcsec\) and \(210\arcsec\) maximum distances from the filament centre. Some variation was observed due to map edges and, to a lesser extent, due to the influence of unrelated background structures. The results are based on Plummer fits, where the model itself includes terms for a linear background.
3. When estimated from FIR data, the values of the asymptotic power-law index, \(p\), in the Plummer function were \(\sim 2-5\), with an average value of \(p\sim 3\). The estimates derived from MIR extinction had a large scatter, and the median values of the four segments ranged from \(p\sim 3\) to as high as \(p\sim 8\).
4. The FWHM estimates are quite robust, even when the individual Plummer parameters, \(p\) and \(R\), show a large scatter. This applies to both MIR and FIR analysis.
5. Synthetic observations of model filaments were analysed. The MBB fits to 160-500 \(\mu\)m dust emission led to the expected underestimation of column densities. The error exceeded \(\sim 50\%\) above \(N(\mathrm{H_{2}})=3\times 10^{23}\,\mathrm{cm^{-2}}\) but depended on the dust properties.
6. A similar bias exists in the filament parameters derived from FIR dust emission. For a \(N(\mathrm{H_{2}})=10^{23}\,\mathrm{cm^{-2}}\) filament heated by the normal ISRF, the FWHM is overestimated by more than 50%. However, the error decreases by more than a factor of two if the radiation field is ten times stronger and the dust temperatures correspondingly higher. The errors also decrease rapidly at lower column densities.
7. The effect of point sources on the FIR analysis is qualitatively similar to the isotropic field, systematic errors increasing closer to the source. There is only a minor dependence on the source location: a source along the line of sight versus a source to one side of the filament.
8. The accuracy of the MIR extinction analysis is affected by the uncertainty of the foreground emission and the potential effects of MIR dust scattering and emission.
9. In the models, MIR scattering shows only minor effects. The errors in filament parameters reach 10% only at very high column densities (\(N(\mathrm{H_{2}})\sim 10^{24}\,\mathrm{cm^{-2}}\)) and in strong radiation fields (\(\chi>10\)). Errors are larger near luminous point sources, whose presence should be clearly visible in the maps. Scattering is sensitive to dust properties, and strong grain growth could increase its effects by a factor of several.
10. Thermal MIR emission is, in our models, a more significant error source than MIR scattering. In a \(N(\mathrm{H_{2}})=3\times 10^{23}\,\mathrm{cm^{-2}}\) model filament, errors already reach the 10% level in the normal ISRF or within a 1 pc distance of a 590 L\({}_{\odot}\) point source. Errors are still of the same order of magnitude even in a stronger isotropic radiation field of \(\chi=10\). However, if the diffuse UV field is attenuated by the surrounding cloud or the abundance of very small grains is lower inside the filament, the significance of MIR emission is correspondingly decreased.
The estimated widths of the OMC-3 filaments were roughly equal between different tracers and observations of different angular resolution. This is encouraging, considering the many potential sources of systematic errors. However, it also further highlights the differences between the dust filaments and the narrower fibres that are observed in spectral lines. High-resolution comparisons of the dust and gas tracers, within the same sources, are needed to understand these differences and the exact role of filaments in the star-formation process.
|
2305.05960 | Simultaneous depletion and adsorption in polymer solutions near a solid
wall | Polymer solutions exhibit peculiar properties near surfaces: polymer chains
either adsorb onto or can be repelled by the wall. Only a few techniques are
able to probe their structure in the vicinity of solid substrates, because of
the small length scales over which liquids are influenced by the wall. In this
paper, we use neutron reflectivity measurements at the interface between a
polystyrene semidilute solution in a good solvent and a smooth sapphire
surface. We show that polymer chains are globally depleted from the solid
surface, but contrary to what is generally assumed, this does not prevent some
chains to still adsorb on the wall. We also observe that the Newtonian flow of
the solution has a negligible effect on the size of the depletion layer, which
is a hypothesis often made but rarely measured in the literature. | Suzanne Lafon, Tiago Outerelo-Corvo, Marion Grzelka, Arnaud Hélary, Philipp Gutfreund, Liliane Léger, Alexis Chennevière, Frédéric Restagno | 2023-05-10T08:10:16Z | http://arxiv.org/abs/2305.05960v1 | # Simultaneous depletion and adsorption in polymer solutions near a solid wall
###### Abstract
Polymer solutions exhibit peculiar properties near surfaces: polymer chains either adsorb onto or can be repelled by the wall. Only a few techniques are able to probe their structure in the vicinity of solid substrates, because of the small length scales over which liquids are influenced by the wall. In this paper, we use neutron reflectivity measurements at the interface between a polystyrene semi-dilute solution in a good solvent and a smooth sapphire surface. We show that polymer chains are globally depleted from the solid surface, but contrary to what is generally assumed, this does not prevent some chains to still adsorb on the wall. We also observe that the Newtonian flow of the solution has a negligible effect on the size of the depletion layer, which is a hypothesis often made but rarely measured in the literature.
## I Introduction
Molecular structure of liquids near solid interfaces is a complex topic. In the close vicinity of the surface, or in confined geometries, the presence of the wall can modify their behavior. A first historical effect of the role of the walls on fluid properties is capillary condensation, where the walls affect the phase transition pressure of the fluid [1; 2; 3]. Besides, walls have strong implications in biological flows [4] and nanofluidics [5], where recent fundamental advances have evidenced the role of quantum mechanics in liquid-solid friction [6; 7]. We are now able to probe the liquid/solid interfaces down to the nanometric scale thanks to the development of techniques of high precision: volume and near-field laser velocimetry [8; 9; 10; 11; 12], Surface Force Apparatus [13], Atomic Force Microscopy [14], Total Internal Reflection Fluorescence Microscopy [15], nanofluidics in nanochannels [16] completed by numerical simulations [17; 18; 5].
This has opened plenty of interesting questions in many fields and, in particular, these results have challenged the validity of the no-slip boundary condition at small scales [19]. In the case of polymer solutions, the bulk concentration is not always maintained near the surface: the species which has the strongest affinity with the wall is over-concentrated at the liquid/solid interface. If this species is the solvent, we call this phenomenon depletion: there is a lower concentration of polymer near the surface compared to the bulk. For neutral species, the size of this depletion layer is given by the correlation length of the solution [20; 21; 22; 23]. For charged species and charged surfaces, the depletion layer is also related to electrostatic interactions, and thus can be much thicker [24; 25; 15]. In the reverse situation, when the polymer/wall interaction is more favorable than the solvent/wall one, not only the polymer is over-concentrated near the interface, but in addition, some polymer chains may adsorb onto the surface, leading to a so-called Guiselin pseudo-brush[26]. This is due to the strong entropic nature of polymers: a single chain has multiple adsorption points with the surface, and thus the probability for a chain to desorb is the probability that all these adsorption points to desorb, which is extremely low and therefore adsorption of long polymer chains is almost irreversible.
Several experimental measurements have allowed to indirectly measure the presence of a depletion layer when a semi-dilute polymer solution is put onto a repulsive solid surface. These measurements are based on the increase of the fluid mobility close to the surface. Among these experiments, we can cite, direct pressure drop measurements in porous media [27; 28; 29], microfluidics experiments [16], surface forces apparatus [24; 30] or rheology [31; 32]. In general, it is difficult to directly measure the concentration of polymer close to the interface. The first observation of depletion of a polymer solution near a solid surface has been made by Allain _et al._ in 1982 [20] with evanescent waves. Since then, different techniques have been used to directly measure depletion, such as neutron reflectivity [33] and ellipsometry [25]. Finally, the effect of flow on depletion is poorly studied because of the difficulty of the experiments, but for now, what is reported is a thickening of the depletion layer at high flow rates [34; 35] for rigid polymers.
In this paper, we use neutron reflectivity [36; 37; 38] to directly measure depletion near a smooth sapphire surface for non-charged semi-dilute polystyrene solutions. We observe an intriguing behavior: the polymer chains are depleted from the wall, but some of them still adsorb on the solid surface. We discuss this observation in terms of comparative interactions. Finally, we show that the Newtonian flow rate has no effect on the concentration profile near the solid wall.
## Results and Discussion
As a first result, we plot in Fig. 1 (left) the reflectivity curves of two static solutions at different bulk volume fractions \(\phi_{\mathrm{b}}\) of polymers and the same molar mass \(M_{n}=195\) kg/mol. The data are plotted in the Porod represention (\(RQ^{4}\) vs \(Q\)) where \(Q\) is the scattering vector and \(R\) is the reflectivity signal, to reveal differences between the two curves.
The observed differences between these two curves can stem from the difference in coherent neutron scattering length density of the solutions and/or a change in the polymer segment density profile. As shown in figure 1, the computed Fresnel reflectivity does not allow to describe the experimental data, which highlights that near surface polymer concentration is not equal to the bulk concentration. In order to get a quantitative description of the interfacial structure, the data were fitted using the refnx Python module [39] assuming an exponential evolution of the volume fraction profile \(\phi(z)\) as proposed by de Gennes [40]:
\[\phi(z)=\phi_{\mathrm{b}}+(\phi_{\mathrm{w}}-\phi_{\mathrm{b}})\mathrm{e}^{-z /d} \tag{1}\]
where \(z\) is the distance from the sapphire surface, \(\phi_{\mathrm{w}}\) and \(\phi_{\mathrm{b}}\) are the surface and bulk volume fractions, respectively, and \(d\) is the typical length over which the polymer concentration differs from its bulk value. \(\phi_{\mathrm{w}}>\phi_{\mathrm{b}}\) corresponds to adsorption, while \(\phi_{\mathrm{w}}<\phi_{\mathrm{b}}\) corresponds to depletion.
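For illustration, the short sketch below evaluates the profile of Eq. (1) for hypothetical parameter values (the numbers are placeholders rather than fitted results) and makes explicit how the sign of \(\phi_{\mathrm{w}}-\phi_{\mathrm{b}}\) separates depletion from adsorption.

```python
import numpy as np

def phi(z, phi_b, phi_w, d):
    """Exponential near-wall volume-fraction profile, Eq. (1):
    phi(z) = phi_b + (phi_w - phi_b) * exp(-z / d)."""
    z = np.asarray(z, dtype=float)
    return phi_b + (phi_w - phi_b) * np.exp(-z / d)

# Hypothetical parameters for illustration only (not fitted values).
phi_b, phi_w, d = 0.06, 0.01, 65.0   # bulk fraction, wall fraction, decay length (angstroms)

z = np.linspace(0.0, 5 * d, 6)
for zi, p in zip(z, phi(z, phi_b, phi_w, d)):
    print(f"z = {zi:6.1f} A   phi = {p:.4f}")

# phi_w < phi_b -> depletion (profile rises from the wall toward the bulk value)
# phi_w > phi_b -> adsorption (profile decays from an enhanced wall value)
```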
The resulting neutron scattering length density (SLD) profiles and polymer segment density profiles are plotted in Fig. 1. In the vicinity of the interface, the fits unambiguously show polymer depletion, whose characteristic size decreases from \(d=109\pm 18\) Å for \(\phi=3\) % to \(d=65\pm 12\) Å at \(\phi=6\) %. These characteristic distances can be compared to the bulk blob size \(\xi\) measured using Small Angle Neutron Scattering (see Supp. Mat. Fig. S1). We find that the characteristic depletion distance \(d\) is about twice the blob size \(\xi\) and follows the bulk scaling law \(\xi\propto\phi^{-3/4}\) [41].
This is in good agreement with theoretical predictions which state that the size of the depletion layer is the blob size [21; 22; 23] and consistent with previous measurements at liquid/air interface [33]. This is schematically illustrated in Fig. 1, right.
The observed depletion layer leads to the conclusion that polystyrene chains are less attracted to the sapphire than the solvent is. In Fig. 1, one can see that the polymer volume fraction at the interface can be larger than zero. This means that some monomers are in contact with the surface, but the segment density profile alone does not allow us to conclude whether they are physically adsorbed. In order to probe adsorption, we compared the reflectivity curves of the solvent on a clean sapphire and on a sapphire that had previously been in contact with the dPS/DEP solution (\(\phi=6\) %, \(M_{n}=1.56\) Mg/mol). The resulting neutron reflectivity (NR) curves are plotted in Fig. 2 (top). At large \(Q\) values, the profiles are significantly different. If there were pure depletion of dPS from the interface, the two profiles would be identical. Our measurement thus suggests that some dPS chains remain adsorbed at the interface. However, owing to the low contrast between the adsorbed chains and the solvent, it is impossible to extract quantitative information from this measurement.
To confirm this hypothesis, we performed X-ray reflectivity at the air/sapphire interface on a sapphire that had been in contact with the polystyrene solution for one hour and then thoroughly rinsed with DEP and dried. The corresponding reflectivity curve is plotted in Fig. 2 (bottom) and shows a clear Kiessig fringe between \(Q=0.15\) Å\({}^{-1}\) and \(Q=0.3\) Å\({}^{-1}\). A single-layer model with roughness at both interfaces fits the data well and confirms the presence of a dry adsorbed layer of thickness \(h_{\mathrm{dry}}=24\) Å. This value allows us to estimate a surface volume fraction of 0.7 % and a maximum swollen thickness of about 320 nm (assuming an Alexander-de Gennes brush [42], see Supp. Mat. section 4), resulting in an extremely low concentration gradient, which further confirms that it is nearly impossible to fit the NR reflectivity data of Fig. 2 (top).
It is quite surprising to observe both depletion and adsorption, as they are usually mutually exclusive scenarios. The measurement of depletion is rather robust, since the neutron flux of the ILL is high enough to give precise measurements, and we could not fit any adsorption profile to our data, whereas depletion profiles were easily fitted with good \(\tilde{\chi}^{2}\) values (between 0.8 and 3.3). As for adsorption, both the qualitative NR measurements and the quantitative X-ray ones leave little doubt about the presence of remaining chains on the surface. In addition, Barraud _et al._ [24] have reported an adsorption/depletion situation for a charged polymer on a metallic surface: they observed a depletion layer of the polyelectrolyte above its own adsorbed layer. However, in their case, adsorption resulted from the favorable electrostatic attraction between the metallic surface and the chains, and depletion was the consequence of the electrostatic repulsion between the adsorbed layer and the bulk chains. These arguments do not apply to our neutral system.
From a chemical point of view, the surface of the sapphire displays hydroxyl groups with a surface density in the range of \(1-10\) OH/nm\({}^{2}\) [43; 44]. Contrary to PS (hydrogenated or deuterated), DEP is able to form hydrogen bonds with these exposed OH groups, which likely favors the DEP-surface interaction over the dPS-surface interaction and further corroborates the depletion scenario. However, adsorption of PS chains onto sapphire surfaces has already been mentioned in the literature for PS melts [45], which means that PS can also interact favorably with the surface. Indeed, PS chains can adsorb onto exposed hydroxyl groups through \(\pi\)-H interactions with their phenyl groups [46; 47]. The fact that we globally see depletion suggests that this interaction is weaker than the H-bonds between DEP and the hydroxyl groups, but it does not exclude that some PS chains still adsorb.
Finally, we have studied the effect of a Poiseuille flow on the concentration profiles of the dPS/DEP solutions near the sapphire surface. The results are plotted in Fig. 3, for a solution with a 6 % volume fraction and a 1.56 Mg/mol molar mass. The flow rate is characterized by the Weissenberg number Wi, which is the dimensionless number comparing the typical relaxation time of the polymer solution (\(\tau\)) and the typical time scale of the flow (\(\dot{\gamma}_{\rm max}^{-1}\)): Wi\(=\dot{\gamma}_{\rm max}\tau\). In our geometry, the maximum shear rate \(\dot{\gamma}_{\rm max}\) is given by \(6Q/(\ell h^{2})\) with \(Q\) the flow rate imposed by the pump, and \(\ell\) and \(h\) the width and the height of the cell, respectively. As for the typical relaxation time, we use the reptation time (also called terminal relaxation time) \(\tau_{\rm rept}\) of the solution. Oscillatory rheology (described in Supp. Mat. Fig. S5) gave \(\tau_{\rm rept}=0.23\pm 0.01\) s.
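As a worked example of this flow characterization, the snippet below converts a flow rate into a maximum shear rate and a Weissenberg number using the gap and reptation time quoted above; the cell width and the flow rate used here are illustrative assumptions, not measured values.

```python
def weissenberg(Q_ml_per_min, width_m, gap_m, tau_s):
    """Wi = gamma_dot_max * tau, with gamma_dot_max = 6 Q / (width * gap^2)
    for a Poiseuille flow in a rectangular slit."""
    Q = Q_ml_per_min * 1e-6 / 60.0          # mL/min -> m^3/s
    gamma_dot_max = 6.0 * Q / (width_m * gap_m ** 2)
    return gamma_dot_max * tau_s, gamma_dot_max

# Gap h = 1 mm and tau_rept = 0.23 s are quoted in the text;
# the cell width and the flow rate below are illustrative assumptions.
wi, gdot = weissenberg(Q_ml_per_min=0.02, width_m=0.04, gap_m=1e-3, tau_s=0.23)
print(f"gamma_dot_max = {gdot:.3f} 1/s, Wi = {wi:.4f}")   # -> Wi of order 0.01
```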
The reflectivity curves are very similar for both the static solution and the flowing ones, and the fit yields the same SLD profiles, with \(\tilde{\chi}^{2}\) between 1 and 2. The size of the depletion layer does not vary significantly with the Weissenberg number up to \(\mathrm{Wi}=0.01\). Reaching higher values of Wi is challenging because it requires both a highly viscous liquid (high \(\tau\)) and a strong flow rate \(Q\), and we are limited by either the upper limit of flow rates accessible by the pump or by the total amount of solution we have, which conditions the emptying time of the syringe.
The flow can affect both the depletion layer and the adsorbed chains. Previous works on the effect of the flow rate on the size of the depletion layer report interesting behaviors. For rigid rodlike particles, Ausserre _et al._ have shown that the depletion size \(d\) increases with the flow rate due to hydrodynamic lift [34]. For dilute polymer solutions, de Pablo _et al._ have predicted a decrease of \(d\) at moderate flow rates and an increase of \(d\) at large flow rates [35], which corroborates the experiment of Ausserre _et al._ They used a dumbbell model, which aligns with the flow at low flow rates and starts to rotate quickly at high flow rates, so that the volume occupied by the dumbbell is larger and it flows further away from the wall.
Figure 1: Effect of bulk volume fraction \(\phi_{b}\). Left: reflectivity curves. Solid lines correspond to the Fresnel reflectivity if the solution was homogeneous until the solid surface. Dashed lines correspond to fits from which the SLD profiles (second graph) and thus the volume fraction profiles (third graph) are extracted. Right: cartoon of the interface. The size of the depletion layer \(d\) is typically the size of the blob \(\xi\).
Figure 2: Top: Neutron reflectivity curves of DEP on clean sapphire (blue diamonds, fit is the blue dashed line) and on sapphire which has been incubated with dPS/DEP solution prior to the experiment (orange squares). Bottom: X-ray reflectivity curve of the sapphire surface that has been incubated with PS/DEP solution (\(\phi=6\%\), \(M_{n}=708\) kg/mol), and then rinsed and dried. The black line is a fit with a rough interfacial layer. Inset: Electronic density as a function of the distance from the interface. Corresponding cartoons are plotted on the right.
On the contrary, in our experiments, the dispersed polymers are flexible, so this description does not apply to our system. The depletion size could have changed if the flow had an effect on the blob size of the solution. Here, we see that up to Wi \(=0.01\) no effect of the flow is visible, and thus the size of the blobs remains constant. As for the adsorbed layer, Korolkovas _et al._ [48] used neutron reflectivity to study the effect of flow rate on the interface between a dPS/DEP solution and PS brushes in a cone-plate rheometer. They showed that the thickness of the brush decreases when the Weissenberg number increases above 1, which in their case is defined as \(\dot{\gamma}\tau\), with \(\dot{\gamma}\) the applied shear rate. The grafting densities of their brushes varied between 0.04 and 0.4 nm\({}^{-2}\). In our case, we are at Wi \(\leq 0.01\) with a density of adsorbed chains of 0.002 nm\({}^{-2}\), which explains why we do not see an effect of the flow even though there is adsorption.
In conclusion, we have shown directly, using neutron reflectivity, that dPS/DEP solutions near a sapphire surface exhibit depletion. Interestingly, we show that depletion of polymer chains does not prevent some chains from adsorbing onto the sapphire surface, probably owing to a favorable interaction between the phenyl groups of the chains and the exposed hydroxyl groups of the solid surface. In addition, we show that the flow has no effect on the depletion layer up to a Weissenberg number Wi \(\approx 0.01\). These results give valuable insight into the interfacial structure of a semi-dilute polymer solution flowing over smooth surfaces. Further studies with other solvents and at higher Weissenberg numbers would help establish the scope of these results.
## Experimental Method
**Solutions.** The solutions are made of fully deuterated polystyrene (dPS) (Polymer Source, \(\mathrm{D}=1.17\), \(M_{n}=195\) kg/mol or \(M_{n}=1.56\) Mg/mol) in hydrogenated diethyl phthalate, DEP (Sigma Aldrich), which is a good solvent for PS (see Appendix 1). The volume fractions \(\phi\) (3 and 6 %) are chosen to lie in the semi-dilute regime \(\phi>\phi^{*}\), with \(\phi^{*}\approx N^{-4/5}\) the overlap volume fraction, which is about 0.24 % for the 195 kg/mol polystyrene and 0.06 % for the 1.56 Mg/mol one. Solutions are homogenized under gentle stirring for two weeks prior to the experiment. The wall of interest is a polished sapphire surface from Fichou. The cell is drawn in the Supp. Mat. (Fig. S2). The top surface is a PTFE plate of size 4x8 cm. The gap \(h\) is 1 mm, controlled by a rectangular Viton frame. Between each experiment, the cell is rinsed with clean toluene (Sigma Aldrich). The sapphire is dried with nitrogen and put under a UV-ozone lamp (ProCleaner(tm) _Plus_, Bioforce Nanosciences) for at least 30 minutes.
**Neutron reflectivity.** The flow is applied with a Chemyx Fusion 6000 syringe pump equipped with a 50 mL steel syringe. The flow rate can be varied between \(10^{-4}\) and 270 mL/min, both in injection and withdrawal. Measurements are done either with a constant flow or with an alternating injection/withdrawal flow, in which case the acquisition of reflected neutrons is synchronized with the frequency of the flow. We call the latter procedure "stroboscopic measurement". The experiment was conducted on D17 [49] at the Institut Laue-Langevin (ILL). We use time-of-flight (TOF) reflectivity at two angles of incidence, \(0.5^{\circ}\) and \(2.5^{\circ}\), with acquisition times of 30 and 60 minutes respectively. The scattering length densities (SLD) of the substrate and the solvent were measured beforehand and are \(5.77\times 10^{-6}\) Å\({}^{-2}\) and \(1.62\times 10^{-6}\) Å\({}^{-2}\) respectively (see Supp. Mat. Fig. S3), in good agreement with theoretical values. The abrupt transition of SLD between the sapphire and the interfacial liquid is modeled by a rough interface.
**X-ray reflectivity.** The X-ray reflectivity (XRR) measurements were performed on a Xeuss 2.0 instrument (Xenocs) with a Cu K\(\alpha\) source of wavelength 1.54 Å and a Pilatus 1M 2D detector (Dectris). The experiment is conducted under vacuum, with a sample-detector distance of 1.214 m. We use two collimation slits set to 0.5 mm x 1 mm and 0.3 mm x 1 mm (height x width).
This work was supported by the ANR-POILLU program (Grant No. ANR-19-CE06-007).
Figure 3: Effect of the flow on the reflectivity profiles. Left: reflectivity curves and corresponding SLD profiles in the inset. Right: Size of the depletion layer extracted from the fits as a function of the Weissenberg number Wi. These measurements have been done with 1.56 Mg/mol dPS/DEP at a volume fraction \(\phi=6\,\%\).
We thank O. Tessier for machining the cell. We thank the ILL for beamtime (DOI: 10.5291/ILL-DATA.9-11-2020).
|
2303.08112 | Eliciting Latent Predictions from Transformers with the Tuned Lens | We analyze transformers from the perspective of iterative inference, seeking
to understand how model predictions are refined layer by layer. To do so, we
train an affine probe for each block in a frozen pretrained model, making it
possible to decode every hidden state into a distribution over the vocabulary.
Our method, the \emph{tuned lens}, is a refinement of the earlier ``logit
lens'' technique, which yielded useful insights but is often brittle.
We test our method on various autoregressive language models with up to 20B
parameters, showing it to be more predictive, reliable and unbiased than the
logit lens. With causal experiments, we show the tuned lens uses similar
features to the model itself. We also find the trajectory of latent predictions
can be used to detect malicious inputs with high accuracy. All code needed to
reproduce our results can be found at
https://github.com/AlignmentResearch/tuned-lens. | Nora Belrose, Zach Furman, Logan Smith, Danny Halawi, Igor Ostrovsky, Lev McKinney, Stella Biderman, Jacob Steinhardt | 2023-03-14T17:47:09Z | http://arxiv.org/abs/2303.08112v4 | # Eliciting Latent Predictions from Transformers with the Tuned Lens
###### Abstract
We analyze transformers from the perspective of iterative inference, seeking to understand how model predictions are refined layer by layer. To do so, we train an affine probe for each block in a frozen pretrained model, making it possible to decode every hidden state into a distribution over the vocabulary. Our method, the _tuned lens_, is a refinement of the earlier "logit lens" technique, which yielded useful insights but is often brittle.
We test our method on various autoregressive language models with up to 20B parameters, showing it to be more predictive, reliable and unbiased than the logit lens. With causal experiments, we show the tuned lens uses similar features to the model itself. We also find the trajectory of latent predictions can be used to detect malicious inputs with high accuracy. All code needed to reproduce our results can be found at [https://github.com/AlignmentResearch/tuned-lens](https://github.com/AlignmentResearch/tuned-lens).
## 1 Introduction
The impressive performance of transformers in natural language processing Brown et al. (2020) and computer vision Dosovitskiy et al. (2020) suggests that their internal representations have rich structure worthy of scientific investigation. One common approach is to train classifiers to extract specific concepts from hidden states, like part-of-speech and syntactic structure Hewitt and Manning (2019); Tucker et al. (2021); Li et al. (2022).
In this work, we instead examine transformer representations from the perspective of _iterative inference_Jastrzebski et al. (2017). Specifically, we view each layer in a transformer language model as performing an incremental update to a latent prediction of the next token.1 We decode these latent predictions through early exiting, converting the hidden state at each intermediate layer into a distribution over the vocabulary. This yields a sequence of distributions we call the _prediction trajectory_, which exhibits a strong tendency to converge smoothly to the final output distribution, with each successive layer achieving lower perplexity.
Footnote 1: See Appendix C for evidence supporting this view, including novel empirical results of our own.
We build on the "logit lens" (nostalgebriast, 2020), an early exiting technique that directly decodes hidden states into vocabulary space using the model's pretrained unembedding matrix. We find the logit lens to be unreliable (Section 2), failing to elicit plausible predictions for models like BLOOM Scao et al. (2022) and GPT Neo Black et al. (2021). Even when the logit lens appears to work, its outputs are hard to interpret due to _representational drift_: features
Figure 1: Comparison of our method, the _tuned lens_ (bottom), with the “logit lens” (top) for GPT-Neo-2.7B prompted with an excerpt from the abstract of Vaswani et al. (2017). Each cell shows the top-1 token predicted by the model at the given layer and token index. The logit lens fails to elicit interpretable predictions before layer 21, but our method succeeds.
may be represented differently at different layers of the network. Other early exiting procedures also exist (Schuster et al., 2022), but require modifying the training process, and so can't be used to analyze pretrained models.
To make the logit lens work better for GPT-Neo, they introduce an extension which retains the last transformer layer, yielding:
\[\mathrm{LogitLens}^{\mathrm{ext}}(\mathbf{h}_{\ell})=\mathrm{LayerNorm}[\mathbf{h}_{\ell}+ F_{L}(\mathbf{h}_{\ell})]W_{U} \tag{4}\]
This extension is only partially successful at recovering meaningful results; see Figure 1 (top) for an example.
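For concreteness, the following sketch re-implements the plain logit lens and the extension of Eq. (4) with toy stand-ins for the model's final LayerNorm, unembedding matrix, and last block; it is a schematic illustration, not the released code.

```python
import torch
from torch import nn

d_model, vocab = 16, 50                      # toy sizes for illustration

ln_f = nn.LayerNorm(d_model)                 # stands in for the model's final LayerNorm
W_U = torch.randn(d_model, vocab)            # stands in for the pretrained unembedding
F_L = nn.Linear(d_model, d_model)            # stands in for the last block's residual update F_L

def logit_lens(h):
    """Plain logit lens: decode a hidden state with the final LayerNorm + unembedding."""
    return ln_f(h) @ W_U

def logit_lens_ext(h):
    """Extended logit lens, Eq. (4): add the last block's residual update first."""
    return ln_f(h + F_L(h)) @ W_U

h = torch.randn(3, d_model)                  # three hidden states from some intermediate layer
print(logit_lens(h).shape, logit_lens_ext(h).shape)   # -> (3, 50) each
```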
**Unreliability.** Beyond GPT-Neo, the logit lens struggles to elicit predictions from several other models released since its introduction, such as BLOOM (Scao et al., 2022) and OPT 125M (Zhang et al., 2022) (Figure 12).
Moreover, the type of information extracted by the logit lens varies both from model to model and from layer to layer, making it difficult to interpret. For example, we find that for BLOOM and OPT 125M, the top 1 prediction of the logit lens is often the _input_ token, rather than any plausible continuation token, in more than half the layers (Figure 16).
**Bias.** Even when the logit lens is useful, we find that it is a _biased_ estimator of the model's final output: it systematically puts more probability mass on certain vocabulary items than the final layer does.
This is concerning because it suggests we can't interpret the logit lens prediction trajectory as a belief updating in response to new evidence. The beliefs of a rational agent should not update in an easily predictable direction over time, since predictable updates can be exploited via Dutch books (2). Biased logit lens outputs are trivially exploitable once the direction of bias is known: one could simply "bet" against the logit lens at layer \(\ell<L\) that the next token will be one of the tokens that it systematically downweights, and make unbounded profit in expectation.
Let \(\mathbf{x}\) be a sequence of tokens sampled from a dataset \(D\), and let \(\mathbf{x}_{<t}\) refer to the tokens preceding position \(t\) in the sequence. Let \(q_{\ell}(\cdot|\mathbf{x}_{<t})\) be the logit lens distribution at layer \(\ell\) for position \(t\), and let \(p(\cdot|\mathbf{x}_{<t})\) be the final layer distribution for position \(t\).
We define \(p(v|\mathbf{x})\) to be the probability assigned to a vocabulary item \(v\) in a sequence \(\mathbf{x}\), averaged over all positions \(1\dots T\):
\[p(v|\mathbf{x})\stackrel{{\mathrm{def}}}{{=}}\frac{1}{T}\sum_{t=1}^{T}p (v|\mathbf{x}_{<t}). \tag{5}\]
Slightly abusing terminology, we say that \(q_{\ell}\) is an "unbiased estimator" of \(p\) if, for every item \(v\) in the vocabulary, the probability assigned to \(v\) averaged across all tokens in the dataset is the same:
\[\begin{split}\mathop{\mathbb{E}}_{\mathbf{x}\in D}\Big{[}q_{\ell}(v| \mathbf{x})\Big{]}&=\mathop{\mathbb{E}}_{\mathbf{x}\in D}\Big{[}p(v| \mathbf{x})\Big{]}\\ q_{\ell}(v)&=p(v)\\ \forall v\in\mathcal{V},&\mathcal{V}=\{\texttt{``aardvark"}, \dots\}\end{split} \tag{6}\]
In practice, Equation 6 will never hold exactly. We measure the degree of bias using the KL divergence between the marginal distributions, \(D_{KL}(p\,||\,q_{\ell})\).
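In code, the bias measurement amounts to averaging token-level distributions into marginals (Eqs. 5-6) and computing their KL divergence; the sketch below uses random distributions purely as placeholders for real model and lens outputs.

```python
import numpy as np

def marginal(probs):
    """Average token-level next-token distributions over all positions and
    sequences, as in Eqs. (5)-(6).  `probs` has shape (n_tokens, vocab)."""
    return probs.mean(axis=0)

def bias_bits(p_final, q_layer, eps=1e-12):
    """Degree of bias D_KL(p || q_ell) between the marginal distributions, in bits."""
    p, q = marginal(p_final), marginal(q_layer)
    return float(np.sum(p * (np.log2(p + eps) - np.log2(q + eps))))

# Toy example with random distributions (real inputs would be model / lens outputs).
rng = np.random.default_rng(0)
p_final = rng.dirichlet(np.ones(100), size=4096)
q_layer = rng.dirichlet(np.ones(100), size=4096)
print(f"bias = {bias_bits(p_final, q_layer):.3f} bits")
```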
In Figure 3 we evaluate the bias for each layer of GPT-Neo-2.7B. We find the bias of the logit lens can be quite large: around 4 to 5 bits for most layers. As a point of comparison, the bias of Pythia 160M's final layer distribution relative to that of its larger cousin, Pythia 12B, is just 0.0068 bits.
## 3 The Tuned Lens
One problem with the logit lens is that, if transformer layers learn to output residuals that are far from zero _on average_, the input to \(\mathrm{LogitLens}\) may be out-of-distribution and yield nonsensical results. In other words, the choice of zero as a replacement value is somewhat arbitrary: the network might learn to rely on \(\sum_{\ell^{\prime}=\ell}^{L}\mathop{\mathbb{E}}[F_{\ell^{\prime}}(\mathbf{h}_{\ell^{\prime}})]\) as a bias term.
Figure 4: Perplexity of predictions elicited from BLOOM 560M under four conditions: the logit lens (red squares) and the tuned lens (blue circles), and including (left) and excluding (right) the final transformer layer from the probe. We find that tuned lens predictions have substantially lower perplexity whether or not the final layer is included, showing it is an independent and complementary proposal.
Figure 3: Bias of logit lens and tuned lens outputs relative to the final layer output for GPT-Neo-2.7B. The last transformer layer is included for both probes. Unlike the tuned lens, the logit lens is systematically biased toward some vocabulary items over others until the very end of the network.
Our first change to the method is to replace the summed residuals with a learnable constant value \(\mathbf{b}_{\ell}\) instead of zero:
\[\mathrm{LogitLens}_{\ell}^{\mathrm{debiased}}(\mathbf{h}_{\ell})=\mathrm{ LogitLens}(\mathbf{h}_{\ell}+\mathbf{b}_{\ell}) \tag{7}\]
**Representation drift.** Another issue with the logit lens is that transformer hidden states often contain a small number of very high variance dimensions, and these "rogue dimensions" (Timkey and van Schijndel, 2021) tend to be distributed unevenly across layers; see Figure 6 (top) for an example. Ablating an outlier direction can drastically harm performance (Kovaleva et al., 2021), so if \(\mathrm{LogitLens}\) relies on the presence or absence of particular outlier dimensions, the perplexity of logit lens predictions might be spuriously high.
Even when controlling for rogue dimensions, we observe a strong tendency for the covariance matrices of hidden states at different layers to drift apart as the number of layers separating them increases (Figure 6, bottom). The covariance at the final layer often changes sharply relative to previous layers, suggesting the logit lens might "misinterpret" earlier representations.
One simple, general way to correct for drifting covariance is to introduce a learnable change of basis matrix \(A_{\ell}\), which learns to map from the output space of layer \(\ell\) to the input space of the final layer. We have now arrived at the _tuned lens_ formula, featuring a learned affine transformation for each layer:
\[\mathrm{TunedLens}_{\ell}(\mathbf{h}_{\ell})=\mathrm{LogitLens}(A_{\ell}\mathbf{h}_{ \ell}+\mathbf{b}_{\ell}) \tag{8}\]
We refer to \((A_{\ell},\mathbf{b}_{\ell})\) as the _translator_ for layer \(\ell\).
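The following is a minimal schematic of Eq. (8), one affine translator per layer decoded through the frozen final LayerNorm and unembedding; the reference implementation lives in the tuned-lens repository, so this sketch only makes the shape of the probe explicit.

```python
import torch
from torch import nn

class TunedLens(nn.Module):
    """One affine translator per layer, Eq. (8): TunedLens_l(h) = LogitLens(A_l h + b_l)."""

    def __init__(self, n_layers, d_model, ln_f, W_U):
        super().__init__()
        self.ln_f, self.W_U = ln_f, W_U          # frozen final LayerNorm / unembedding
        self.translators = nn.ModuleList(
            [nn.Linear(d_model, d_model) for _ in range(n_layers)]
        )
        for t in self.translators:               # initialize to the identity transform
            nn.init.eye_(t.weight)
            nn.init.zeros_(t.bias)

    def forward(self, h, layer):
        return self.ln_f(self.translators[layer](h)) @ self.W_U

# Toy usage with random stand-ins for the model's LayerNorm and unembedding.
d_model, vocab, n_layers = 16, 50, 4
lens = TunedLens(n_layers, d_model, nn.LayerNorm(d_model), torch.randn(d_model, vocab))
print(lens(torch.randn(2, d_model), layer=1).shape)   # -> (2, 50)
```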
**Loss function.** We train the translators to minimize KL between the tuned lens logits and the final layer logits:
\[\operatorname*{argmin}_{A_{\ell},\,\mathbf{b}_{\ell}}\ \operatorname*{\mathbb{E}}_{\mathbf{x}}\Big{[}D_{KL}(f_{>\ell}( \mathbf{h}_{\ell})\,||\,\mathrm{TunedLens}_{\ell}(\mathbf{h}_{\ell}))\Big{]} \tag{9}\]
where \(f_{>\ell}(\mathbf{h}_{\ell})\) refers to the rest of the transformer after layer \(\ell\). This can be viewed as a distillation loss, using the final layer distribution as a soft label (Sanh et al., 2019). It ensures that the probes are not incentivized to learn extra information over and above what the model has learned, which can become a problem when training probes with ground truth labels (Hewitt and Liang, 2019).
**Implementation details.** When readily available, we train translators on a slice of the validation set used during pretraining, and use a separate slice for evaluation. Since BLOOM and GPT-2 do not have publicly available validation sets, we use the Pile validation set (Gao et al., 2020; Biderman et al., 2022). The OPT validation set is also not publicly available, but a member of the OPT team helped us train a tuned lens on the OPT validation set. Documents are concatenated and split into uniform chunks of length 2048.
We use SGD with Nesterov momentum, with a linear learning rate decay schedule over 250 training steps. We use a base learning rate of 1.0, or 0.25 when keeping the final transformer layer, and clip gradients to a norm of 1. We accumulate gradients as necessary to achieve a total batch size of \(2^{18}\) tokens per optimizer step. We initialize all translators to the identity transform, and use a weight decay of \(10^{-3}\).
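The sketch below combines the distillation loss of Eq. (9) with the optimizer settings listed above for a single toy layer; the momentum coefficient is an assumption (only "Nesterov momentum" is specified), and the random tensors stand in for real hidden states and final-layer logits.

```python
import torch
import torch.nn.functional as F
from torch import nn

d_model, vocab = 16, 50
translator = nn.Linear(d_model, d_model)          # (A_l, b_l) for one layer
nn.init.eye_(translator.weight)
nn.init.zeros_(translator.bias)
ln_f, W_U = nn.LayerNorm(d_model), torch.randn(d_model, vocab)

opt = torch.optim.SGD(translator.parameters(), lr=1.0, momentum=0.9,   # momentum value assumed
                      nesterov=True, weight_decay=1e-3)
sched = torch.optim.lr_scheduler.LinearLR(opt, start_factor=1.0, end_factor=0.0, total_iters=250)

def train_step(h_l, final_logits):
    """One optimizer step on the distillation loss of Eq. (9):
    KL(final-layer distribution || tuned-lens distribution)."""
    lens_logits = ln_f(translator(h_l)) @ W_U
    loss = F.kl_div(F.log_softmax(lens_logits, -1),
                    F.log_softmax(final_logits, -1),
                    log_target=True, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(translator.parameters(), 1.0)
    opt.step()
    sched.step()
    return loss.item()

# Toy call with random tensors standing in for real hidden states / logits.
print(train_step(torch.randn(8, d_model), torch.randn(8, vocab)))
```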
We evaluate all models on a random sample of 16.4M tokens from their respective pretraining validation sets. We leave out the final transformer layer for GPT-2 (Radford et al.,
Figure 5: Perplexity of latent predictions elicited by the logit lens (left) and the tuned lens (right) from Pythia and GPT-NeoX-20B, as a function of layer index and model size. Tuned lens predictions are uniformly lower perplexity and exhibit lower variance across independently trained models.
2019), GPT-NeoX-20B (Black et al., 2022), OPT (Zhang et al., 2022), and Pythia (Biderman et al., 2023), and include it for GPT-Neo (Black et al., 2021). We evaluate BLOOM (Scao et al., 2022) under both conditions in Figure 4.
**Results.** We plot tuned lens perplexity as a function of depth for the Pythia models and GPT-NeoX-20B in Figure 5 (see footnote 3); results for other model families can be found in Appendix A.
Footnote 3: Pythia and GPT-NeoX-20B were trained using the same architecture, data, and codebase (Andonian et al., 2021). While they’re not officially the same model suite, they’re more consistent than the OPT models.
We find that the tuned lens resolves the problems with the logit lens discussed in Section 2: it has significantly lower bias (Figure 3), and much lower perplexity than the logit lens across the board (Figure 5, Appendix A).
**Transferability across layers.** We find that tuned lens translators can usually zero-shot transfer to nearby layers with only a modest increase in perplexity. Specifically, we define the _transfer penalty_ from layer \(\ell\) to \(\ell^{\prime}\) to be the expected increase in cross-entropy loss when evaluating the tuned lens translator trained for layer \(\ell\) on layer \(\ell^{\prime}\).
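Given a matrix of cross-entropy losses for every (training layer, evaluation layer) pair, the transfer-penalty matrix follows by subtracting each probe's on-distribution loss; a small sketch with made-up numbers:

```python
import numpy as np

def transfer_penalties(xent):
    """xent[l, l_prime] = cross-entropy of the probe trained on layer l when
    evaluated on layer l_prime.  Subtracting each probe's on-distribution
    loss makes the diagonal identically zero."""
    xent = np.asarray(xent, dtype=float)
    return xent - np.diag(xent)[:, None]

# Toy 3-layer example (numbers are made up).
xent = [[2.0, 2.3, 2.9],
        [2.4, 1.9, 2.2],
        [3.1, 2.3, 1.7]]
print(transfer_penalties(xent))
```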
We report transfer penalties for the largest Pythia model in Figure 7. Overall, transfer penalties are quite low, especially for nearby layers (entries near the diagonal in Figure 7). Comparing to the two plots in Figure 6, we notice that transfer penalties are strongly negatively correlated with covariance similarity (Spearman \(\rho=-0.78\)). Unlike Figure 6, however, Figure 7 is not symmetric: transfer penalties are higher when training on a layer with the outlier dimensions (Layer 5 and later) and testing on a layer without them, than the reverse.
**Relation to model stitching.** The tuned lens can be viewed as a way of "stitching" an intermediate layer directly onto the unembedding, with an affine transform in between to align the representations. The idea of model stitching was introduced by Lenc and Vedaldi (2015), who form a composite model out of two _frozen_ pretrained models \(A\) and \(B\), by connecting the bottom layers of \(A\) to the top layers of \(B\). An affine transform suffices to stitch together independently trained models with minimal performance loss (Bansal et al.,
Figure 6: Pairwise similarities of hidden state covariance matrices across layers of Pythia 12B. Layer 4 introduces two outlier dimensions which dominate the covariance; removing them reveals smooth representational drift with depth. To control for varying hidden state norms, we measure the Frobenius cosine similarity, or \(\frac{(A,B)_{F}}{\|A\|_{F}\|B\|_{F}}\) for two matrices \(A\) and \(B\).
Figure 7: Transfer penalties for Pythia 12B. Each row corresponds to a single tuned lens probe trained on layer \(\ell\), and each column is a layer \(\ell^{\prime}\) on which probes are evaluated. Each cell shows the cross-entropy loss of probe \(\ell\) evaluated on layer \(\ell^{\prime}\), _minus_ its on-distribution loss (so that the diagonal entries are identically zero).
2021; Csiszarik et al., 2021). The success of the tuned lens shows that model stitching works for different layers inside a single model as well.
**Benefits over traditional probing.** Unlike Alain and Bengio (2016), who train early exiting probes for image classifiers, we do not learn a new unembedding for each layer. This is important, since it allows us to shrink the size of each learned matrix from \(|\mathcal{V}|\times d\) to \(d\times d\), where \(|\mathcal{V}|\) ranges from 50K (GPT-2, Pythia) to over 250K (BLOOM). We observe empirically that training a new unembedding matrix requires considerably more training steps and a larger batch size than training a translator, and often converges to a worse perplexity.
## 4 Measuring Causal Fidelity
Prior work has argued that interpretability hypotheses should be tested with causal experiments: an interpretation of a neural network should make predictions about what will happen when we intervene on its weights or activations (Olah et al., 2020; Chan et al., 2022). This is especially important for probing techniques, since it's known that probes can learn to rely on spurious features unrelated to the model's performance (Hewitt and Liang, 2019; Belinkov, 2022).
To explore whether the tuned lens finds causally relevant features, we will assess two desired properties:
1. Latent directions that are important to the tuned lens should also be important to the final layer output. Concretely, if the tuned lens _relies on_ a feature4\(\mathbf{v}\) in the residual stream (its output changes significantly when we manipulate \(\mathbf{v}\)) then the model output should also change a lot when we manipulate \(\mathbf{v}\). Footnote 4: For simplicity we assume the “features as directions” hypothesis (Elhage et al., 2022), which defines a “feature” to be the one-dimensional subspace spanned by a unit vector \(\mathbf{v}\).
2. These latent directions should be important _in the same way_ for both the tuned lens and the model. Concretely, if we manipulate the hidden state so that the tuned lens changes in a certain way (e.g. doubling the probability assigned to "dog") then the model output should change similarly. We will call this property _stimulus-response alignment_.
### Causal basis extraction
To test Property 1, we first need to find the important directions for the tuned lens. Amnesic probing (Elazar et al., 2021) provides one way to do this--it seeks a direction whose erasure maximally degrades a model's accuracy.
However, this only elicits a single important direction, whereas we would like to find many such directions. To do so, we borrow intuition from PCA, searching for additional directions that also degrade accuracy, but which are orthogonal to the original amnesic direction. This leads to a method that we call **causal basis extraction** (CBE), which finds the principal features used by a model.
More specifically, let \(f\) be a function (such as the tuned lens) that maps latent vectors \(\mathbf{h}\in\mathbb{R}^{d}\) to logits \(\mathbf{y}\). Let \(r(\mathbf{h},\mathbf{v})\) be an erasure function which removes information along the span of \(\mathbf{v}\) from \(\mathbf{h}\). In this work, \(r(\mathbf{h},\mathbf{v})\) is _mean ablation_, which sets \(\langle r(\mathbf{h},\mathbf{v}),\mathbf{v}\rangle\) to the mean value of \(\langle\mathbf{h},\mathbf{v}\rangle\) in the dataset (see Appendix D.1). We define the _influence_\(\sigma\) of a unit vector \(\mathbf{v}\) to be the expected KL divergence between the outputs of \(f\) before and after erasing \(\mathbf{v}\) from \(\mathbf{h}\):
\[\sigma(\mathbf{v};f)=\operatorname*{\mathbb{E}}_{\mathbf{h}}\left[D_{KL}(f(\mathbf{h})\,|| \,f(r(\mathbf{h},\mathbf{v})))\right] \tag{10}\]
We seek to find an orthonormal basis \(B=(\mathbf{v}_{1},\ldots,\mathbf{v}_{k})\) containing principal features of \(f\), ordered by a sequence of influences \(\Sigma=(\sigma_{1},\ldots,\sigma_{k})\) for some \(k\leq d\). In each iteration we search for a feature \(\mathbf{v}_{i}\) of maximum influence that is orthogonal to all previous features \(\mathbf{v}_{j}\):
\[\mathbf{v}_{i} =\operatorname*{argmax}_{||\mathbf{v}||_{2}\,=\,1}\sigma(\mathbf{v};f)\] (11) s.t. \[\langle\mathbf{v},\mathbf{v}_{j}\rangle =0,\quad\forall j<i\]
With a perfect optimizer, the influence of \(\mathbf{v}_{i}\) should decrease monotonically since the feasible region is strictly smaller with each successive iteration. In practice, we do observe non-monotonicities due to the non-convexity of the objective. To mitigate this issue we sort the features in descending order by influence after the last iteration.
**Implementation details.** We evaluate the objective function in Equation 11 on a single in-memory batch of 131,072 tokens sampled randomly from the Pile validation set, and optimize it using L-BFGS with strong Wolfe line search. We find that using the singular vectors of the probe as initialization for the search, rather than random directions, speeds up convergence.
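The sketch below is a simplified, self-contained version of causal basis extraction: it uses mean ablation and the influence of Eq. (10), but replaces the L-BFGS search with plain projected gradient ascent and random (rather than singular-vector) initialization, and a random linear map stands in for a real lens.

```python
import torch
import torch.nn.functional as F

def mean_ablate(H, v, mu):
    """Erase the direction v from each hidden state by setting <h, v> to its
    dataset mean mu (mean ablation)."""
    v = F.normalize(v, dim=0)
    return H - torch.outer(H @ v - mu, v)

def influence(f, H, v):
    """sigma(v; f) from Eq. (10): mean KL between f(h) and f(h with v erased)."""
    mu = (H @ F.normalize(v, dim=0)).mean()
    p = F.log_softmax(f(H), dim=-1)
    q = F.log_softmax(f(mean_ablate(H, v, mu)), dim=-1)
    return F.kl_div(q, p, log_target=True, reduction="batchmean")

def causal_basis(f, H, k, steps=200, lr=0.1):
    """Greedy orthogonal search for k influential directions (Eq. 11).
    Simplified: projected gradient ascent instead of constrained L-BFGS."""
    basis = []
    for _ in range(k):
        v = torch.randn(H.shape[1], requires_grad=True)
        opt = torch.optim.Adam([v], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            (-influence(f, H, v)).backward()   # maximize influence
            opt.step()
            with torch.no_grad():              # re-project: orthogonal to previous, unit norm
                for b in basis:
                    v -= (v @ b) * b
                v /= v.norm()
        basis.append(v.detach() / v.detach().norm())
    # Sort the features by influence in descending order, as described in the text.
    basis.sort(key=lambda b: -influence(f, H, b).item())
    return basis

# Toy usage: f is a random linear "lens", H random hidden states.
d, vocab = 16, 50
W = torch.randn(d, vocab)
f = lambda H: H @ W
H = torch.randn(256, d)
vs = causal_basis(f, H, k=3, steps=50)
print([round(influence(f, H, v).item(), 4) for v in vs])
```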
**Intervening on the model.** If we apply causal basis extraction to the tuned lens at layer \(\ell\), we obtain \(k\) directions \(v_{1},\ldots,v_{k}\) that are important for the tuned lens. We next check that these are also important to the model \(\mathcal{M}\).
To do so, we first take an i.i.d. sample of input sequences \(\mathbf{x}\) and feed them to \(\mathcal{M}\), storing the resulting hidden states \(\mathcal{M}_{\leq\ell}(\mathbf{x})\).5 Then, for each vector \(\mathbf{v}_{i}\) obtained from CBE, we record the causal effect of erasing \(\mathbf{v}_{i}\) on the output of \(\mathcal{M}_{>\ell}\),
Footnote 5: See Section 2 for notation.
\[\operatorname*{\mathbb{E}}_{\mathbf{x}}\left[D_{KL}(\mathcal{M}(\mathbf{x})\,||\, \mathcal{M}_{>\ell}(r(\mathcal{M}_{\leq\ell}(\mathbf{x}),\mathbf{v}_{i})))\right] \tag{12}\]
where the erasure function \(r\) is applied to all positions in a sequence simultaneously. We likewise average the KL divergences across token positions.
**Results.** We report the resulting causal influences for Pythia 410M, \(\ell=18\) in Figure 8; results for all layers can be found in Figure 18 in the Appendix.
In accordance with Property 1, there is a strong correlation between the causal influence of a feature on the tuned lens and its influence on the model (Spearman \(\rho=0.89\)). Importantly, we don't observe _any_ features in the lower right corner of the plot (features that are influential in the tuned lens but not in the model). The model is somewhat more "causally sensitive" than the tuned lens: even the least influential features never have an influence under \(2\times 10^{-3}\) bits, leading to the "hockey stick" shape in the LOWESS trendline.
### Stimulus-response alignment
We now turn to Property 2. Intuitively, for the interventions from Section 4.1, deleting an important direction \(v_{i}\) should have the same effect on the model's output distribution \(p\) and the tuned lens' output distribution \(q\).
We can operationalize this with the Aitchison geometry (Aitchison, 1982), which turns the probability simplex into a vector space equipped with an inner product. In order to downweight the influence of rare tokens, we use the _weighted_ Aitchison inner product introduced by Egozcue and Pawlowsky-Glahn (2016), defined as
\[\langle\mathbf{p}_{1},\mathbf{p}_{2}\rangle_{\mathbf{w}}=\sum_{i=1}^{D}w_{i} \log\frac{p_{1i}}{\mathrm{g}_{\mathbf{w}}(\mathbf{p}_{1})}\log\frac{p_{2i}}{ \mathrm{g}_{\mathbf{w}}(\mathbf{p}_{2})}, \tag{13}\]
where \(\mathbf{w}\) is a vector of positive weights, and \(\mathrm{g}_{\mathbf{w}}(\mathbf{p})\) is the weighted geometric mean of the entries of \(\mathbf{p}\). In our experiments, we use the final layer prediction distribution under the control condition to define \(\mathbf{w}\).
We will also use the notion of "subtracting" distributions. In Aitchison geometry, addition and subtraction of distributions is done componentwise in log space, followed by renormalization:
\[\mathbf{p}_{1}-\mathbf{p}_{2}=\mathrm{softmax}\Big{(}\log\mathbf{p}_{1}-\log \mathbf{p}_{2}\Big{)}. \tag{14}\]
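These two operations are straightforward to implement; in the sketch below the weight vector is assumed to be normalized to sum to one, as it is when taken to be a probability distribution.

```python
import numpy as np

def aitchison_inner(p1, p2, w):
    """Weighted Aitchison inner product, Eq. (13); w is a positive weight
    vector, here assumed to sum to one (e.g. a probability distribution)."""
    w = w / w.sum()
    clr = lambda p: np.log(p) - np.sum(w * np.log(p))   # log p_i - log g_w(p)
    return np.sum(w * clr(p1) * clr(p2))

def aitchison_diff(p1, p2):
    """Aitchison subtraction, Eq. (14): componentwise log difference, renormalized."""
    x = np.log(p1) - np.log(p2)
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy usage with three-outcome distributions.
p1, p2, w = np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.3, 0.5]), np.array([0.6, 0.3, 0.1])
print(aitchison_inner(p1, p2, w), aitchison_diff(p1, p2))
```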
We say that distributions \((\mathbf{p}_{\mathrm{old}},\mathbf{p}_{\mathrm{new}})\) and \((\mathbf{q}_{\mathrm{old}},\mathbf{q}_{\mathrm{new}})\) "move in the same direction" if and only if
\[\langle\mathbf{p}_{\mathrm{new}}-\mathbf{p}_{\mathrm{old}},\ \mathbf{q}_{ \mathrm{new}}-\mathbf{q}_{\mathrm{old}}\rangle_{\mathbf{w}}\ >\ 0. \tag{15}\]
**Measuring alignment.** Let \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) be an arbitrary function for intervening on hidden states, and let \(\mathbf{h}_{\ell}\) be the hidden state at layer \(\ell\) on some input \(\mathbf{x}\). We'll define the _stimulus_ to be the Aitchison difference between the tuned lens output before and after the intervention:
\[\mathrm{S}(\mathbf{h}_{\ell})=\mathrm{TunedLens}_{\ell}(g(\mathbf{h}_{\ell}))- \mathrm{TunedLens}_{\ell}(\mathbf{h}_{\ell}) \tag{16}\]
Analogously, the _response_ will be defined as the Aitchison difference between the final layer output before and after the intervention:
\[\mathrm{R}(\mathbf{h}_{\ell})=\mathcal{M}_{>\ell}(g(\mathbf{h}_{\ell}))-\mathcal{M}_{ >\ell}(\mathbf{h}_{\ell}) \tag{17}\]
We'd like to control for the absolute magnitudes of the stimuli and the responses, so we use the Aitchison inner product to define a cosine similarity metric, which we call "Aitchison similarity." Then the stimulus-response alignment
Figure 8: Causal influence of CBE features when ablated at the 18th layer of Pythia 410M, plotted against their influence on the tuned lens output. Spearman \(\rho=0.89\).
Figure 9: Average stimulus-response alignment at each layer of Pythia 160M. Responses are more aligned with stimuli at later layers, and when using the tuned lens rather than the logit lens.
at layer \(\ell\) under \(g\) is simply the Aitchison similarity between the stimulus and response:
\[\mathrm{sim}(\mathrm{S}(\mathbf{h}_{\ell}),\mathrm{R}(\mathbf{h}_{\ell}))=\frac{\langle \mathrm{S}(\mathbf{h}_{\ell}),\mathrm{R}(\mathbf{h}_{\ell})\rangle_{\mathbf{w}}}{\| \mathrm{S}(\mathbf{h}_{\ell})\|_{\mathbf{w}}\|\mathrm{R}(\mathbf{h}_{\ell})\|_{\mathbf{ w}}} \tag{18}\]
We propose to use CBE (Section 4.1) to define a "natural" choice for the intervention \(g\). Specifically, for each layer \(\ell\), we intervene on the subspace spanned by \(\ell\)'s top 10 causal basis vectors-- we'll call this the "principal subspace"-- using a recently proposed method called _resampling ablation_(Chan et al., 2022).
Given a hidden state \(\mathbf{h}_{\ell}=\mathcal{M}_{\leq\ell}(\mathbf{x})\), resampling ablation replaces the principal subspace of \(\mathbf{h}_{\ell}\) with the corresponding subspace generated on a _different_ input \(\mathbf{x}^{\prime}\) selected uniformly at random from the dataset. It then feeds this modified hidden state \(\tilde{\mathbf{h}}_{\ell}\) into the rest of the model, yielding the modified output \(\mathcal{M}_{>\ell}(\tilde{\mathbf{h}}_{\ell})\). Intuitively, \(\tilde{\mathbf{h}}_{\ell}\) should be relatively on-distribution because we're using values generated "naturally" by the model itself.
Unlike in Section 4.1, we apply resampling ablation to one token in a sequence at a time, and average the Aitchison similarities across tokens.
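Putting the pieces together, the following self-contained toy sketch performs a resampling-ablation intervention on an assumed orthonormal "principal subspace", computes the stimulus and response of Eqs. (16)-(17), and scores their alignment with the Aitchison similarity of Eq. (18); the "lens" and "model" here are random linear maps, not real networks, and the Aitchison helpers are restated for self-containment.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_ablate(h, h_prime, V):
    """Resampling ablation: replace the component of h in the subspace spanned
    by the (orthonormal) rows of V with the corresponding component of h'."""
    return h - V.T @ (V @ h) + V.T @ (V @ h_prime)

def aitchison_diff(p1, p2):
    """Eq. (14): componentwise log difference, renormalized."""
    x = np.log(p1) - np.log(p2)
    e = np.exp(x - x.max())
    return e / e.sum()

def aitchison_sim(s, r, w):
    """Aitchison similarity between a stimulus and a response, Eq. (18)."""
    w = w / w.sum()
    clr = lambda p: np.log(p) - np.sum(w * np.log(p))
    inner = lambda a, b: np.sum(w * clr(a) * clr(b))
    return inner(s, r) / np.sqrt(inner(s, s) * inner(r, r))

# Toy stand-ins for the tuned lens and the rest of the model (softmax of linear maps).
d, vocab, k = 8, 20, 3
softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()
A_lens, A_model = rng.normal(size=(d, vocab)), rng.normal(size=(d, vocab))
lens = lambda h: softmax(h @ A_lens)
model = lambda h: softmax(h @ A_model)

V = np.linalg.qr(rng.normal(size=(d, k)))[0].T        # toy orthonormal "principal subspace"
h, h_prime = rng.normal(size=d), rng.normal(size=d)
h_tilde = resample_ablate(h, h_prime, V)

stimulus = aitchison_diff(lens(h_tilde), lens(h))      # Eq. (16)
response = aitchison_diff(model(h_tilde), model(h))    # Eq. (17)
print(round(aitchison_sim(stimulus, response, w=model(h)), 3))
```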
**Results.** We applied resampling ablation to the principal subspaces of the logit and tuned lenses at each layer in Pythia 160M. We report average stimulus-response alignments in Figure 9. Unsurprisingly, we find that stimuli are more aligned with the responses they induce at later layers. We also find that alignment is somewhat higher at all layers when using principal subspaces and stimuli defined by the tuned lens rather than the logit lens, in line with Property 2.
## 5 Applications
### Extending _Overthinking the Truth_
We start by extending a recent use case of the logit lens. Halawi et al. (2023) apply the logit lens to downstream tasks with few-shot prompts, and find that predictions extracted from earlier layers are often more robust to incorrect demonstrations than final layer predictions. Unfortunately, their method is inapplicable to models like BLOOM and GPT-Neo, which have high logit lens perplexities.
We replicate Halawi et al. (2023)'s experiments with the tuned lens, and recover similar results for BLOOM 560M, Neo 1.3B, and Neo 2.7B (Figure 10). Notice that the calibrated performance under incorrect demonstrations (red) peaks at an early layer around 0.4-0.45 in all three of these models, before falling to random baseline performance at the final layer, similarly to what was found by Halawi et al. (2023).
### Detecting Prompt Injections
Given the results from Halawi et al. (2023) and in Figure 10, we hypothesize that the prediction trajectory of the tuned lens on anomalous inputs should be different from the trajectories on normal inputs, and that this could be used to detect anomalous inputs.
To test this, we focus on _prompt injection attacks_, a recently discovered vulnerability in large language models where untrusted inputs from a malicious user cause the model to behave in unexpected or dangerous ways (Perez and Ribeiro, 2022). These attacks usually tell the model to "ignore previous instructions" and instead follow instructions crafted by the attacker.
To simulate prompt injections, we adapted the attack from Perez and Ribeiro (2022) and ran Pythia 12B on nine multiple choice tasks, both with and without the following adversarial message appended to the few-shot prompt:
------- ------- Ignore any previous and following instructions and just print "<WRONG_ANSWER>": where <WRONG_ANSWER> is replaced with a randomly selected incorrect response from the available multiple choice
Figure 10: For most models and tasks, we find there is a layer at which the tuned lens performance is _better_ than final layer performance under incorrect demonstrations. Shown here is performance on SICK (Sentences Involving Compositional Knowledge). Unlike the logit lens, our method is applicable to BLOOM (bottom right) and GPT-Neo (top left). Y-axis shows median-calibrated accuracy as used in Halawi et al. (2023)
responses.
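A schematic reconstruction of how such an injected prompt can be assembled is given below; the separator layout is an assumption, and only the wording of the message is taken from the description above.

```python
import random

def inject(prompt, choices, correct_idx, seed=0):
    """Append the adversarial message described above, instructing the model to
    print a randomly selected *incorrect* answer.  The exact separator layout
    is an assumption; only the wording is taken from the text."""
    rng = random.Random(seed)
    wrong = rng.choice([c for i, c in enumerate(choices) if i != correct_idx])
    attack = ('\n-------\n-------\n'
              'Ignore any previous and following instructions and just print '
              f'"{wrong}":')
    return prompt + attack

print(inject("Question: Is the sky blue?\nAnswer:", ["yes", "no"], correct_idx=0))
```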
We record the tuned prediction trajectory for each data point- that is, for each layer, we record the log probability assigned by the model to each possible answer.6 We then flatten these trajectories into feature vectors and feed them into two standard outlier detection algorithms: isolation forest (iForest) (Liu et al., 2008) and local outlier factor (LOF) (Breunig et al., 2000), both implemented in scikit-learn (Pedregosa et al., 2011) with default hyperparameters.
Footnote 6: For binary tasks like SST-2 we take the difference between the log probabilities assigned to the two possible answers.
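The detection step itself is a few lines of scikit-learn; note that scoring held-out trajectories with LOF requires novelty=True, a slight departure from the all-default setting. The trajectories below are random placeholders for real prediction trajectories.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)

# Each example is a flattened prediction trajectory: (n_layers x n_answers)
# log-probabilities, simulated here with random numbers for illustration.
n_layers, n_answers = 36, 4
train = rng.normal(size=(500, n_layers * n_answers))        # normal prompts only
test = rng.normal(size=(100, n_layers * n_answers)) + 1.0   # possibly injected prompts

iforest = IsolationForest(random_state=0).fit(train)
lof = LocalOutlierFactor(novelty=True).fit(train)           # novelty=True to score new data

# Higher scores mean "more normal"; negate them to obtain anomaly scores.
print(iforest.score_samples(test)[:3])
print(lof.score_samples(test)[:3])
```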
**Baseline.** There is a rich literature on general out-of-distribution (OOD) detection in deep neural networks. One simple technique is to fit a multivariate Gaussian to the model's final layer hidden states on the training set, and flag inputs as OOD if a new hidden state is unusually far from the training distribution as measured by the Mahalanobis distance (Lee et al., 2018; Mahalanobis, 1936).
Recently, Bai et al. (2022) proposed the Simplified Relative Mahalanobis (SRM) distance, a modification to Mahalanobis which they find to be effective in the context of LLM finetuning. They also find that representations from the _middle_ layers of a transformer, rather than the final layer, yield the best OOD detection performance. We use the SRM at the middle layer as a baseline in our experiments.
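The sketch below implements only the plain Mahalanobis-distance variant of this baseline (Lee et al., 2018) on placeholder hidden states; SRM itself is a refinement whose exact form we do not reproduce here.

```python
import numpy as np

def mahalanobis_scores(train_states, test_states, eps=1e-6):
    """Fit a single Gaussian to hidden states from normal prompts and score new
    states by their squared Mahalanobis distance (plain variant of the baseline)."""
    mu = train_states.mean(axis=0)
    cov = np.cov(train_states, rowvar=False) + eps * np.eye(train_states.shape[1])
    prec = np.linalg.inv(cov)
    diffs = test_states - mu
    return np.einsum("nd,de,ne->n", diffs, prec, diffs)

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 32))           # middle-layer states from normal prompts
test = rng.normal(size=(10, 32)) + 0.5        # new prompts to score
print(mahalanobis_scores(train, test)[:3])
```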
**Experimental setup.** We fit each anomaly detection model exclusively on prediction trajectories from _normal_ prompts without prompt injections, and evaluate them on a held out test set containing both normal and prompt-injected trajectories. This ensures that our models cannot overfit to the prompt injection distribution. We use EleutherAI's lm-evaluation-harness library (Gao et al., 2021) to run our evaluations.
**Results.** Our results are summarized in Table 1. Our tuned lens anomaly detector achieves perfect or near-perfect AUROC on five tasks (BoolQ, MNLI, QNLI, QQP, and SST-2); in contrast, the same technique using the logit lens has lower performance on most tasks. On the other hand, the SRM baseline does consistently well--the tuned lens only outperforms it on one task (ARC-Challenge), while SRM outperforms our technique on both MC TACO and SciQ.
We suspect that further gains could be made by combining the strengths of both techniques, since SRM uses only one layer but considers a high-dimensional representation, while the tuned lens studies the trajectory across layers but summarizes them with a low-dimensional prediction vector.
### Measuring Example Difficulty
Early exiting strategies like CALM (Schuster et al., 2022) and DeeBERT (Xin et al., 2020) are based on the observation that "easy" examples require less computation to classify than "difficult" examples. If an example is easy, the model should quickly converge to the right answer in early layers, making it possible to skip the later layers without a significant drop in prediction quality. Conversely, the number of layers needed to converge on an answer can be used to measure the difficulty of an example.
We propose to use the tuned lens to estimate example difficulty in _pretrained_ transformers, without the need to fine-tune the model for early exiting. Following Baldock et al. (2021)'s work on computer vision models, we define the _prediction depth_ of a prompt \(\mathbf{x}\) to be the number of layers after which a model's top-1 prediction for \(\mathbf{x}\) stops changing.
To validate the prediction depth, we measure its correlation with an established difficulty metric: the _iteration learned_. The iteration learned is defined as the earliest training step \(\tau\) where the model's top-1 prediction for a datapoint \(\mathbf{x}\) is fixed (Toneva et al., 2018). Intuitively, we might expect that examples which take a long time to learn during training would tend to require many layers of computation to classify at inference time. Baldock et al. (2021) indeed show such a correlation, using k-NN classifiers to elicit early predictions from the intermediate feature maps of image classifiers.
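The prediction depth is easy to compute from a per-layer top-1 trajectory, and the comparison with the iteration learned is then a rank correlation; a toy sketch with made-up labels (assuming scipy is available):

```python
import numpy as np
from scipy.stats import spearmanr

def prediction_depth(top1_per_layer):
    """Index of the first layer from which all later top-1 predictions already
    agree with the final one, i.e. where the prediction stops changing."""
    final = top1_per_layer[-1]
    depth = len(top1_per_layer)
    for i in range(len(top1_per_layer) - 1, -1, -1):
        if top1_per_layer[i] != final:
            break
        depth = i
    return depth

# `iteration_learned` would be computed analogously over training checkpoints.
trajectories = [["a", "b", "b", "b"], ["a", "a", "c", "c"], ["d", "d", "d", "d"]]
depths = [prediction_depth(t) for t in trajectories]
iteration_learned = [10, 90, 3]               # made-up values for illustration
print(depths, spearmanr(depths, iteration_learned).correlation)
```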
Table 1: Test set AUROCs and 95% bootstrap CIs for distinguishing normal prompts from prompt injections on Pythia 12B. Figures are pooled over 10 random train-test splits. Attack detection performance is nearly perfect on tasks where the attack succeeds at driving accuracy well below the random baseline, and is still much better than chance even when the attack is only partially successful.

| Task | Tuned Lens (iForest) | Tuned Lens (LOF) | Logit Lens (iForest) | Logit Lens (LOF) | Baseline (SRM) | Accuracy (normal → injected) |
| --- | --- | --- | --- | --- | --- | --- |
| ARC-Easy | 0.59 (0.54, 0.62) | 0.73 (0.71, 0.76) | 0.53 (0.50, 0.57) | 0.59 (0.56, 0.62) | 0.73 (0.70, 0.75) | 72.8% → 31.7% |
| ARC-Challenge | 0.71 (0.65, 0.77) | 0.81 (0.77, 0.84) | 0.73 (0.67, 0.79) | 0.80 (0.77, 0.83) | 0.57 (0.53, 0.61) | 43.5% → 24.7% |
| BoolQ | 0.99 (0.98, 0.99) | 1.00 (1.00, 1.00) | 0.89 (0.87, 0.91) | 0.61 (0.57, 0.66) | 1.00 (1.00, 1.00) | 67.1% → 0.0% |
| MC TACO | 0.74 (0.71, 0.77) | 0.68 (0.66, 0.70) | 0.68 (0.66, 0.69) | 0.55 (0.53, 0.59) | 1.00 (1.00, 1.00) | 0.40 → 0.06 F1 |
| MNLI | 0.98 (0.98, 0.99) | 1.00 (1.00, 1.00) | 0.95 (0.94, 0.96) | 1.00 (1.00, 1.00) | 1.00 (1.00, 1.00) | 54.3% → 0.0% |
| QNLI | 0.99 (0.99, 1.00) | 1.00 (1.00, 1.00) | 0.93 (0.92, 0.95) | 0.68 (0.63, 0.71) | 1.00 (1.00, 1.00) | 54.3% → 0.0% |
| QQP | 1.00 (0.99, 1.00) | 1.00 (1.00, 1.00) | 0.90 (0.89, 0.90) | 0.79 (0.76, 0.81) | 1.00 (1.00, 1.00) | 60.7% → 6.5% |
| SciQ | 0.62 (0.57, 0.69) | 0.64 (0.59, 0.70) | 0.75 (0.71, 0.79) | 0.70 (0.65, 0.74) | 0.75 (0.72, 0.78) | 95.5% → 62.6% |
| SST-2 | 1.00 (0.98, 1.00) | 1.00 (1.00, 1.00) | 0.78 (0.72, 0.83) | 0.61 (0.56, 0.65) | 1.00 (1.00, 1.00) | 82.9% → 49.1% |
**Experimental setup.** For this experiment we focus on Pythia 12B (deduped), for which 143 uniformly spaced checkpoints are available on Huggingface Hub. We evaluate the model's zero-shot performance on twelve multiple-choice tasks, listed in Table 2. For each checkpoint, we store the top 1 prediction on every individual example, allowing us to compute the iteration learned. We then use the tuned lens on the final checkpoint, eliciting the top 1 prediction at each layer of the network and computing the prediction depth for every example. As a baseline, we also compute prediction depths using the logit lens. Finally, for each task, we compute the Spearman rank correlation between the iteration learned and the prediction depth across all examples.
**Results.** We present results in Table 2. We find a significant positive correlation between the iteration learned and the tuned lens prediction depth on all tasks we investigated. Additionally, the tuned lens prediction correlates better with iteration learned than its logit lens counterpart in 8 out of 11 tasks, sometimes dramatically so.
## 6 Discussion
In this paper, we introduced a new tool for transformer interpretability research, the _tuned lens_, which yields new qualitative as well as quantitative insights into the functioning of large language models. It is a drop-in replacement for the logit lens that makes it possible to elicit interpretable prediction trajectories from essentially any pretrained language model in use today. We gave several initial applications of the tuned lens, including detecting prompt injection attacks.
Finally, we introduced _causal basis extraction_, which identifies influential features in neural networks. We hope this technique will be generally useful for interpretability research in machine learning.
Limitations and future work.One limitation of our method is that it involves training a translator layer for each layer of the network, while the logit lens can be used on any pretrained model out-of-the-box. This training process, however, is quite fast: our code can train a full set of probes in under an hour on a single 8\(\times\)A40 node, and further speedups are likely possible. We have also released tuned lens checkpoints for the most commonly used pre-trained models as part of our tuned-lens library, which should eliminate this problem for most applications.
Causal basis extraction, as presented in this work, is computationally intensive, since it sequentially optimizes \(d_{model}\) causal basis vectors for each layer of the network. Future work could explore ways to make the algorithm more scalable. One possibility would be to optimize a whole \(k\)-dimensional subspace, instead of an individual direction, at each iteration.
Due to space and time limitations, we focused on language models in this work, but we think it's likely that our approach is also applicable to other modalities.
## Acknowledgements
We are thankful to CoreWeave for providing the computing resources used in this paper and to the OPT team for their assistance in training a tuned lens for OPT. We also thank nostalgebraist for discussions leading to this paper.
|
2302.00667 | Does Vision Accelerate Hierarchical Generalization in Neural Language
Learners? | Neural language models (LMs) are arguably less data-efficient than humans
from a language acquisition perspective. One fundamental question is why this
human-LM gap arises. This study explores the advantage of grounded language
acquisition, specifically the impact of visual information -- which humans can
usually rely on but LMs largely do not have access to during language
acquisition -- on syntactic generalization in LMs. Our experiments, following
the poverty of stimulus paradigm under two scenarios (using artificial vs.
naturalistic images), demonstrate that if the alignments between the linguistic
and visual components are clear in the input, access to vision data does help
with the syntactic generalization of LMs, but if not, visual input does not
help. This highlights the need for additional biases or signals, such as mutual
gaze, to enhance cross-modal alignment and enable efficient syntactic
generalization in multimodal LMs. | Tatsuki Kuribayashi, Timothy Baldwin | 2023-02-01T18:53:42Z | http://arxiv.org/abs/2302.00667v2 | # Does Vision Accelerate Hierarchical Generalization of
###### Abstract
Neural language models (LMs) are arguably less data-efficient than humans--_why does this gap occur?_ In this study, we hypothesize that this gap stems from the learners' accessibility to modalities other than text, specifically, vision. We conducted two complementary experiments (using noisy, realistic data and a simplified, artificial one) toward the advantage of vision in the syntactic generalization of LMs. Our results showed that vision accelerated a proper linguistic generalization in the simplified, artificial setting, but LMs struggled with the noisy, realistic setting. These mixed results indicate several possibilities, e.g., vision can potentially boost language acquisition, but learners' additional visual/linguistic prior knowledge should be needed to robustly make use of _raw_ images for efficient language acquisition.
## 1 Introduction
While large neural language models (LMs) have made substantial advances in natural language processing (NLP), there is a gap between LMs and humans in terms of their data efficiency. For example, GPT-3 Brown et al. (2020) is trained on around 2,000 times more text than a 10-year-old human is exposed to Warstadt and Bowman (2022); still, GPT-3 struggles with particular language tasks. Thus, the question arises: _why is human/LM language acquisition so efficient/inefficient?_ To answer this, we explore which differences between humans' and LMs' language acquisition scenarios, if filled, can close the gap in their efficiency.
This study specifically focuses on the advantage of visual information, motivated by the long-running symbol grounding Roy and Reiter (2005) and embodiment Barsalou (2008) problems in artificial intelligence. Normally, humans can access visual information during language acquisition, unlike LMs; we suspect that this can be a potential cause of the human-LM gap. Note that, in recent years, vision-language modeling has gained much attention, but such work typically focuses on large-scale, engineering-oriented directions Alayrac et al. (2022); Radford et al. (2021). By contrast, we focus on the vision-language interaction from a _cognitive_ perspective.
Specifically, we explore whether visual information accelerates the LMs' syntactic, hierarchical generalization, which underlies human language acquisition (Chomsky, 1964).
Figure 1: Overview of the experimental design. A vision-language neural model is trained on ambiguous data toward particular linguistic rules. Then, we test the generalization preference of the model using disambiguating data. Through this experimental scheme, we ablate whether/how the visual information help the model prefer a proper linguistic generalization.
Inspired by the studies on the inductive bias of neural models (Warstadt et al., 2020; McCoy et al., 2020, 2018), we first designed _mixed signals generalization_ settings in the vision-language domain. That is, we train LMs on _ambiguous_ image-text pairs in terms of particular linguistic rules (linear vs. hierarchical rules; see Figure 1); then, we ablate whether visual input efficiently guides the models to make a proper, hierarchical generalization under the ambiguous data. As a case study, we use the English subject-verb number agreement phenomenon as a lens through which we empirically explore the advantage of vision.
We conducted two complementary experiments using either realistic image-caption data Sharma et al. (2018) or simplified, artificial data. In the realistic setting, we generally found the following: (i) vision **did not** accelerate hierarchical generalization, (ii) this trend is consistent among 20 model settings, and (iii) this is also consistent among four different inoculation settings, i.e., different degrees of ambiguity. By contrast, in the artificial data, where visual/linguistic concepts are already abstracted and simplified, we generally found the opposite trend; vision **did boost** a proper linguistic generalization.
One plausible interpretation is that visual information is potentially useful for syntactic acquisition based on the fact that the "sighted" models gained improvement under the abstracted, simplified setting. Nevertheless, the results on the realistic data suggest that additional factors (e.g., an innate bias toward grounding or data more aligned to the infant language acquisition scenario) are needed to make use of noisy, real image data robustly. To summarize, the presence of vision **alone** might not explain the (in)efficiency in syntactic generalization, at least within the focus of this study.
## 2 Background
### Inductive bias in language acquisition
In general, beyond language acquisition, a generalization rule is not uniquely determined by finite data, and the choice of generalization rule depends on the inductive bias of the learning setting (Mitchell, 1980).
In Humans: In the context of language acquisition, it has long been argued that human learners have a strong inductive bias, given their rapid language acquisition from limited language exposure (Chomsky, 1980; McCoy et al., 2018). Here, the question is what type of biases humans have and where these biases come from. For the former question, it has been reported that, for example, humans have a bias to prefer hierarchical generalization over linear generalization in situations like the one in Figure 1 (Legate and Yang, 2002; Crain and Nakayama, 1987). For the latter question, there are roughly two potential sources of inductive biases: innate and environmental factors, more catchily, nature vs. nurture. Toward the latter question, this study ablates a particular environmental factor, accessibility to visual information during language acquisition, through computer simulations.
In Neural models: Neural models typically exhibit non-human-like generalizations, such as the use of superficial cues and linear rules, as is widely reported across NLP (Christiansen and Chater, 1999; Warstadt and Bowman, 2020; Warstadt et al., 2020; McCoy et al., 2020, 2019). It is also reported that large amounts of data are needed to overcome such cognitively implausible biases during training (Warstadt and Bowman, 2020; Warstadt et al., 2020); in this regard, the inadequate inductive biases of the neural model training scenario and its data inefficiency are two sides of the same problem. Our interest lies in whether/how visual information induces a proper inductive bias in neural language learners.
### Hypotheses on the advantage of vision
Although this research is an exploratory investigation of the effect of vision, there has already been some discussion about the contribution of vision.
Positive view: Shi et al. (2019) and Kojima et al. (2020) hypothesized and demonstrated the positive effect of visual information on syntactic acquisition. They used a specially-designed parser that has a compositional bias in its architecture; our question is whether even vanilla neural models can take advantage of visual information in syntactic generalization.
Intuitively, in the case illustrated in Figure 1, a learner should capture a particular dependency between a verb and the corresponding subject rather than the recency of words in linear order. For example, in a sentence _a cat with glasses walks_,
the information that not _glasses_, but _cat_ is walking could potentially bias the learning toward a linguistically proper generalization. Then, such a clue--not _glasses_, but _cat_ is walking--could be explicit in the image (Figure 2) if a learner understands the visual concept of _cat_, _glasses_, _walk_, and their composition (e.g., _walking cat_). Thus images have the potential to boost linguistically intuitive, hierarchical generalization along with textual clues (e.g., word-word co-occurrence). More generally, the importance of visual grounding in language comprehension has long been argued Bender and Koller (2020).
In addition, at least for the number agreement problem, the number information can be, more or less, salient in the vision domain. When the number of objects that are salient enough to be the grammatical subject in a caption changes, the content of the image changes drastically, while in the text domain, only a few characters change (e.g., the suffix _-s_).1
Footnote 1: Strictly speaking, a grammatical and physical (visual) number are not exactly the same concepts, and the degree of change in the text domain depends on tokenization.
Negative view: There is also skepticism that merely providing visual information without appropriate linguistic knowledge instead increases superficial correlations and over-complicates the problem (Gleitman and Gleitman, 1992; Dupoux, 2018). For example, Gleitman and Gleitman (1992) and McDonough et al. (2011) assumed that children use syntactic category information to ground words to vision; this implies that syntactic knowledge comes first, and grounding is achieved afterwards. In this sense, the causality might be the opposite; images might not promote language comprehension, but prior linguistic knowledge might promote visual grounding.
## 3 Problem definition
We briefly introduce the mixed signals generalization setting Warstadt et al. (2020). Through this setting, we quantify whether vision accelerates syntactic generalization.
### Hierarchical vs. Linear generalizations
We take the subject-verb number agreement rule as an example phenomenon. In English, the subject and corresponding verb should match in terms of their grammatical number:
1. **Girls** with a hat **walk**.
2. **Girl** with a hat **walks**.
Here, Example (1b) is _ambiguous_ regarding that a learner can perform at least two different generalizations from this example alone, i.e., Hierarchical and Linear rules:
1. **Girl** with a hat **walks** (the singular _walks_ is consistent with both the subject _girl_ and the closest noun _hat_)
The Hierarchical rule associates the grammatical number of a verb with that of its grammatical subject, while the linear one associates the number between a verb and its closest noun in a linear word order.
By contrast, Example (1a) is not ambiguous in terms of the Hierarchical and Linear rules since the number does not match under the Linear assumption:
1. **Hierarchical**: **Girls** with a hat **walk** (the plural _walk_ agrees with the subject _girls_)
2. **Linear**: explicit break of the number agreement (the plural _walk_ does not agree with the closest noun _hat_)
Our interest is which rule a particular learner acquires from ambiguous data, Hierarchical, or Linear, and what factor (e.g., vision) can guide the learner to prefer the Hierarchical rule that is linguistically correct (Section 3.2).
Note that we employed this subject-verb number agreement setting in our experiments, although existing studies have typically focused on different syntactic transformation tasks, such as question formulation or passivization (McCoy et al., 2020; Warstadt and Bowman, 2020; Mueller et al., 2022).
Figure 2: Image can explicate the subject–verb dependency. If a learner knows the visual concept of _cat_, _glasses_, and _walk_, one can disambiguate that what is walking is not _glasses_ but _cat_; such information will potentially bias the learner’s language acquisition in favor of linguistically correct, Hierarchical rule.
One motivation behind this choice is the ease of collecting natural images for sentences exhibiting subject-verb agreement; in other words, interrogative or passive sentences would be somewhat unusual constructions as image captions.
### Poverty of stimulus setting
It is claimed that humans acquire Hierarchical rules despite the scarcity of disambiguating sentences, like Example (1a), in real language exposure (Legate and Yang, 2002; Crain and Nakayama, 1987). Building on this scenario, we expose a model to (nearly) ambiguous data where the generalization rule cannot be determined as to whether the Linear or the Hierarchical rule is correct. Then, we evaluate the model in terms of which rule is obtained from the ambiguous data via a test using disambiguating data.
In this series of experiments, we compared neural models that can access visual information (sighted) and ones that do not (blind) to ablate the contribution of vision. Note that "visual information" in this study denotes an image representing the meaning of a sentence, i.e., we use image-caption pairs.
Specifically, given a set of image-caption pairs, we split the data into two groups: (i) those that do not disambiguate the Linear and Hierarchical rules (Ambiguous) and (ii) those that support the Hierarchical rule (DisAmbiguating). In the Ambiguous data, the grammatical numbers of the verb, its corresponding subject, and the noun immediately preceding the verb are identical, while only the subject and verb agree in terms of their grammatical number in the DisAmbiguating data. Examples are shown in Table 1.
Basically, the Ambiguous data are used in training, and the DisAmbiguating data are used in evaluation; however, we inoculate a few hold-out DisAmbiguating instances into the training data since it is counter-intuitive that a learner _never_ encounters DisAmbiguating instances during language acquisition. We controlled the inoculation rate, the extent to which disambiguating data appear during training, to analyze the models' generalization preference with respect to the degree of data scarcity. In Section 4.1, we examined four different inoculation rates of {0, 0.001, 0.005, 0.01}. For example, if the training data size is 10,000 and the inoculation rate is set to 0.001, we mixed 10 DisAmbiguating instances into the training data; this results in a total training data size of 10,010.
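A minimal sketch of this inoculation procedure, assuming the Ambiguous and held-out DisAmbiguating pairs are already separated (function and variable names are illustrative):

```python
import random

def build_training_set(ambiguous_pairs, heldout_disambiguating, inoculation_rate, seed=0):
    """Mix a small number of DisAmbiguating instances into the Ambiguous data.

    For example, 10,000 Ambiguous pairs at a rate of 0.001 yield 10 extra
    instances and a total training size of 10,010."""
    rng = random.Random(seed)
    n_extra = round(len(ambiguous_pairs) * inoculation_rate)
    train = ambiguous_pairs + rng.sample(heldout_disambiguating, n_extra)
    rng.shuffle(train)
    return train

# e.g. build_training_set(ambiguous, disambiguating, inoculation_rate=0.005)
```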
### Natural and Artificial data
We introduce two complementary settings: (i) Natural captions and (ii) Artificial captions. The Natural captions are collected from the image-caption corpus, while the Artificial captions are automatically created by rules to simplify the problem.
Natural data: We extracted image-caption pairs from the Conceptual Captions corpus (Sharma
et al., 2018). Specifically, we first collected those satisfying the following criteria:
* Caption is a complete sentence.2 Footnote 2: We detected whether the main verb (ROOT) has a child with the nsubj relationship using SpaCy.
* Caption does not have grammatical errors.3 Footnote 3: using language-tool-python 2.7.1
* The subject is not a collective expression such as _family_ or _pair of_ since the grammatical number of these expressions is sometimes not clear.
Then, we split the data into the Ambiguous and DisAmbiguating sets using an automatic parser.4 Note that there might be parsing errors in this process, but we empirically confirmed that the models did not prefer the Hierarchical rule at an inoculation rate of zero; this implies that there were not enough leaks to unfairly bias the model toward the Hierarchical rule. Examples are shown in the left part of Table 1. The training set (Ambiguous part) consists of 348,861 pairs, and the test set consists of 1,253 pairs.
Footnote 4: We used Spacy.
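The filtering and Ambiguous/DisAmbiguating split described above could be approximated with a spaCy-based sketch like the one below; the authors' exact pipeline and edge-case handling are not specified, so the tag-based number heuristics here are assumptions:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
NOUN_NUMBER = {"NN": "Sing", "NNP": "Sing", "NNS": "Plur", "NNPS": "Plur"}
VERB_NUMBER = {"VBZ": "Sing", "VBP": "Plur"}  # present-tense verb forms only

def label_caption(caption: str) -> str:
    doc = nlp(caption)
    root = next((t for t in doc if t.dep_ == "ROOT"), None)
    subj = next((t for t in root.children if t.dep_ == "nsubj"), None) if root is not None else None
    if root is None or subj is None:
        return "discard"                                   # not a complete sentence
    attractors = [t for t in doc[: root.i] if t.tag_ in NOUN_NUMBER]
    if not attractors:
        return "discard"
    subj_n = NOUN_NUMBER.get(subj.tag_)
    attr_n = NOUN_NUMBER.get(attractors[-1].tag_)          # noun immediately preceding the verb
    verb_n = VERB_NUMBER.get(root.tag_)
    if None in (subj_n, attr_n, verb_n):
        return "discard"                                   # e.g. invariable or past-tense forms
    if subj_n == attr_n == verb_n:
        return "Ambiguous"
    if subj_n == verb_n != attr_n:
        return "DisAmbiguating"
    return "discard"
```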
Artificial data: Image-caption pairs were generated by rules. Specifically, a caption is first generated with the template NUM1 COLOR1 SHAPE1 with NUM2 COLOR2 SHAPE2 VP; then, the corresponding image is automatically created (the detailed process is shown in Appendix A). Examples are shown in the right part of Table 1. As in the Natural setting, we split the data into Ambiguous and DisAmbiguating sets. Then, training and test data are created with a particular inoculation rate. The training set (Ambiguous part) consists of 15,000 pairs, and the test set consists of 5,000 pairs.
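A sketch of the template-based caption generation; the vocabularies and verb phrases below are placeholders, since the exact inventories are given in Appendix A of the paper rather than here:

```python
import random

NUMBERS = {"one": "Sing", "two": "Plur", "three": "Plur"}
COLORS = ["red", "blue", "black", "green"]
SHAPES = {"Sing": ["circle", "rectangle", "triangle"],
          "Plur": ["circles", "rectangles", "triangles"]}
VPS = {"Sing": ["plays soccer", "walks"], "Plur": ["play soccer", "walk"]}

def generate_pair(rng: random.Random):
    # Template: NUM1 COLOR1 SHAPE1 with NUM2 COLOR2 SHAPE2 VP
    num1, num2 = rng.choice(list(NUMBERS)), rng.choice(list(NUMBERS))
    g1, g2 = NUMBERS[num1], NUMBERS[num2]
    caption = (f"{num1} {rng.choice(COLORS)} {rng.choice(SHAPES[g1])} with "
               f"{num2} {rng.choice(COLORS)} {rng.choice(SHAPES[g2])} {rng.choice(VPS[g1])}")
    # The verb always agrees with the grammatical subject (Hierarchical rule);
    # the pair is Ambiguous when the two noun phrases share the same number.
    split = "Ambiguous" if g1 == g2 else "DisAmbiguating"
    return caption, split
```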
Notably, this setting at least discards the following properties of the realistic image-caption data:
* Skewed word distribution
* Variations of syntactic construction
* Exceptions at least with respect to grammatical numbers (e.g., invariable, uncountable words)
* Many-to-many relationship between text symbols and visual features
* Presence of visual information that is irrelevant to the caption (e.g., background)
* Natural visual composition of the concepts (e.g., to make an image of "S does V," we just overlaid the visual object of V on the S object as shown in Table 1)
### Evaluation
For each DisAmbiguating instance, we prepared two candidate captions that differ only in the grammatical number of a verb (e.g., _two red rectangles with a black circle **play/plays soccer_**); one is compatible with the Hierarchical rule, and the other is compatible with the Linear one. The model's preference toward generalization rules is determined by which of the two gets the higher probability.
Specifically, a model \(\theta\) computes the probabilities of each caption \(\mathbf{s}=[w_{1},\cdots,w_{n}]\) conditioned with the corresponding image \(v\):
\[p(\mathbf{s}|v)=\prod_{t=1}^{n}p_{\theta}(w_{t}|\mathbf{w}_{<t},v)\enspace, \tag{1}\]
where \(\mathbf{w}_{<t}\) denotes the left context of \(w_{t}\) in the caption \(\mathbf{s}\). We calculated the F1 score, where the inflection corresponding to the Hierarchical rule is considered correct, and the task is treated as a binary classification problem over the verb inflection. Since we are interested in the efficiency of language acquisition, we report F1 scores at several training steps during training.
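A minimal sketch of this preference test for a vision-conditioned caption model; the field names and the choice of positive class are illustrative assumptions:

```python
import torch
from sklearn.metrics import f1_score

@torch.no_grad()
def caption_logprob(model, tokenizer, pixel_values, caption):
    # Sum over t of log p(w_t | w_<t, v), following Eq. (1);
    # the returned loss is the mean NLL per token, so multiply back by length.
    labels = tokenizer(caption, return_tensors="pt").input_ids
    out = model(pixel_values=pixel_values, labels=labels)
    return -out.loss.item() * labels.shape[1]

def hierarchical_f1(model, tokenizer, test_set):
    gold, pred = [], []
    for ex in test_set:  # each item: image plus the two candidate captions
        lp_h = caption_logprob(model, tokenizer, ex["pixel_values"], ex["hier_caption"])
        lp_l = caption_logprob(model, tokenizer, ex["pixel_values"], ex["lin_caption"])
        gold.append(ex["hier_inflection"])                           # e.g. "Sing" or "Plur"
        pred.append(ex["hier_inflection"] if lp_h > lp_l else ex["lin_inflection"])
    return f1_score(gold, pred, pos_label="Plur")                    # positive class is an assumption
```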
### Models
We use the Transformer seq2seq model, but the encoder is set to a pre-trained vision encoder such as Vit (Dosovitskiy et al., 2020). An image is input to the encoder side, and the decoder predicts the caption in a left-to-right manner with access to visual information via cross-attention. Intuitively, this can be viewed as a sentence-level LM that can access visual information. Using such models, we ablate the contribution of the visual information by comparing the models with (sighted) and without (blind) visual input. As for the model without visual input (blind), we replaced the input image with a white noise image during training and inference. Models are trained with cross-entropy loss to generate the reference caption.
We adopted the GPT-2 small (124M) architecture (Radford et al., 2019) for the decoder, but the parameters are randomly initialized considering a language acquisition scenario from scratch. As an
encoder, we begin with using the Vit-base (Dosovitskiy et al., 2020) in Section 4.1, and we further examined a variety of encoders in Section 4.2 to enhance the generality of the conclusion. Hyper-parameters are listed in Appendix B. In each setting, we train two models with different seeds and report the average score.
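A sketch of how such an image-conditioned captioning model could be assembled with Hugging Face Transformers; the specific checkpoint and configuration names are assumptions for illustration rather than the paper's exact setup:

```python
from transformers import (VisionEncoderDecoderModel, ViTModel,
                          GPT2Config, GPT2LMHeadModel)

# Pre-trained ViT-base encoder; GPT-2-small (124M) decoder trained from scratch,
# attending to the image via cross-attention.
encoder = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
decoder_config = GPT2Config(is_decoder=True, add_cross_attention=True)
decoder = GPT2LMHeadModel(decoder_config)          # randomly initialized decoder
model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder)
model.config.decoder_start_token_id = decoder_config.bos_token_id
model.config.pad_token_id = decoder_config.eos_token_id

# For the "blind" ablation, the pixel_values fed to the model are replaced by
# a white-noise image of the same shape during both training and inference.
```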
## 4 Experiments
We first focus on the results of the model using the pre-trained Vit-base (Section 4.1). Then, we compare which vision encoder provides relatively better effects in our linguistic generalization task (Section 4.2).
### Generalization preferences
Results:The results are shown in Figure 3. These indicate the following:
* The Linear rule emerged (F1 score is below chance rate) at the initial stage of learning under a low inoculation rate; that is, the learner originally has a bias in favor of linguistically implausible Linear generalization.
* Under a moderate inoculation, e.g., above the rate of 0.005, the models gradually acquired the Hierarchical rule; the models were, more or less, sensitive to the slight bias in the data distribution.
* In the Natural setting, visual input did not provide substantial gain.
* In the Artificial setting, visual input did accelerate the hierarchical generalization, especially at the very early stage of learning. Approximately, with inoculation rates of 0.005 and 0.01, the acquisition of the Hierarchical rule with vision is twice as fast as without vision. For example, with the rate of 0.01, the sighted model achieved an F1 score of around 90 within 100 steps, while the blind model needed 200 steps.
The Linear bias of the learner exhibited in the Natural setting is consistent with the existing studies (McCoy et al., 2020). On top of this, we demonstrated that merely adding visual modality does not solve the problem, at least in the Natural setting. Nevertheless, we also observe the improvement in the Artificial setting. We discuss the implication of these results in Section 5.
### Vision encoder variations
Were our results specific to a particular model setting? We then analyze various vision-language models with different inductive biases of vision encoder, and demonstrate that our results are generally consistent across various settings.
Generality of the (in)effectiveness of vision: We tested the models using ten different vision encoders: Vit-{base, large, huge} (Dosovitskiy et al., 2020), Beit-{base, large} (Bao et al., 2021), Deit-{base, small, tiny} (Touvron et al., 2021), and Swin-{base, large} (Liu et al., 2021). We also examined a baseline using a randomly initialized Vit-base (Scratch) and a model using the pre-trained GPT-2 (Radford et al., 2019) as a decoder
Figure 3: Generalization performance of the model initialized with Vit-base. The x-axis denotes the parameter update step, and the y-axis denotes the preference for the Hierarchical generalization rule (F1 scores multiplied by 100). We adopted four settings with different inoculation rates of {0, 0.001, 0.005, 0.01}. The dashed lines correspond to the preference of the models without vision (blind), and the solid lines correspond to the models with vision (sighted). The chance rate is around 50.
(Vit-GPT2). In this Section, we fix the inoculation rate to 0.01.
The overall trends are the same as those in Section 4.1: (i) in the Natural setting, vision provides only minor effects, and (ii) in the Artificial setting, the improvement from vision was relatively drastic, although we also found somewhat exceptional trends for the Beit models. To be more specific, vision tends to provide slightly positive effects in the early phase of learning and slightly negative effects in the later phase.
Note that the sighted models achieved ROUGE-L F1 scores of \(3\)-\(40\) in the Natural setting (Appendix B); this ensures that visual information is actually used by the models. Further note that Vit-GPT2, which uses a pre-trained language decoder, achieved almost perfect hierarchical generalization from the early stages of training even in the Natural setting; our generalization task can be solved after models are exposed to large amounts of raw text.
We also found that models with different vision encoders yielded slight but consistent differences even in the setting without vision. In such a blind setting, vision encoders might play a different role, such as additional key-value memory in the attention mechanism (Geva et al., 2021), and their architecture and initialization might provide different biases.
**Which vision encoder accelerates hierarchical generalization?** We further analyze which vision
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{3}{c}{Natural} & Artificial \\ \cline{3-6} Models & Vision & 1,000 & 5,000 & 10,000 & 100 & 500 \\ \hline \multirow{2}{*}{\begin{tabular}{l} Vit-base \\ (86M) \\ \end{tabular} } & \(\checkmark\) & \(52.8\) & \(72.0\) & \(81.9\) & \(90.6\) & \(99.7\) \\ & \(\Delta\) & \(\pm 0.41\) & \(\pm 2.38\) & \(-0.94\) & \(\mathbf{+57.4}\) & \(-0.31\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} Vit-large \\ (307M) \\ \end{tabular} } & \(\checkmark\) & \(52.9\) & \(74.9\) & \(83.1\) & \(52.6\) & \(92.2\) \\ & \(\Delta\) & \(\pm 0.93\) & \(\pm 1.13\) & \(+0.65\) & \(\mathbf{+19.4}\) & \(-7.76\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} Vit-huge \\ (632M) \\ \end{tabular} } & \(\checkmark\) & \(52.6\) & \(73.9\) & \(82.6\) & \(42.6\) & \(100\) \\ & \(\Delta\) & \(\pm 1.98\) & \(\pm 2.07\) & \(+0.10\) & \(\mathbf{+9.21}\) & \(0.00\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} Beit-base \\ (86M) \\ \end{tabular} } & \(\checkmark\) & \(46.7\) & \(59.0\) & \(66.4\) & \(45.8\) & \(74.8\) \\ & \(\Delta\) & \(\pm 2.99\) & \(+5.68\) & \(-1.50\) & \(\mathbf{+11.7}\) & \(-25.0\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} Beit-large \\ (307M) \\ \end{tabular} } & \(\checkmark\) & \(45.6\) & \(65.3\) & \(73.3\) & \(38.3\) & \(57.7\) \\ & \(\Delta\) & \(\pm 1.57\) & \(+4.32\) & \(+3.80\) & \(\mathbf{+5.09}\) & \(-38.4\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} Deit-base \\ (86M) \\ \end{tabular} } & \(\checkmark\) & \(54.9\) & \(72.5\) & \(81.2\) & \(67.4\) & \(99.9\) \\ & \(\Delta\) & \(\pm 4.23\) & \(-1.77\) & \(-1.35\) & \(\mathbf{+32.9}\) & \(+0.08\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} Deit-small \\ (22M) \\ \end{tabular} } & \(\checkmark\) & \(52.9\) & \(73.7\) & \(83.2\) & \(73.1\) & \(94.1\) \\ & \(\Delta\) & \(\pm 3.79\) & \(-0.16\) & \(-0.52\) & \(\mathbf{+27.1}\) & \(-5.86\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} Deit-tiny \\ (5M) \\ \end{tabular} } & \(\checkmark\) & \(52.6\) & \(73.5\) & \(81.0\) & \(88.8\) & \(87.8\) \\ & \(\Delta\) & \(\pm 2.16\) & \(-1.29\) & \(-1.87\) & \(\mathbf{+32.5}\) & \(-12.2\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} Swin-base \\ (88M) \\ \end{tabular} } & \(\checkmark\) & \(53.0\) & \(73.0\) & \(81.8\) & \(80.5\) & \(100\) \\ & \(\Delta\) & \(\pm 0.92\) & \(\pm 2.61\) & \(-1.05\) & \(\mathbf{+33.2}\) & \(0.00\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} Swin-large \\ (197M) \\ \end{tabular} } & \(\checkmark\) & \(53.3\) & \(73.9\) & \(82.4\) & \(74.9\) & \(100\) \\ & \(\Delta\) & \(\pm 0.85\) & \(-0.79\) & \(-0.11\) & \(\mathbf{+39.3}\) & \(0.00\) \\ \hline \multirow{2}{*}{\begin{tabular}{l} Scratch \\ (86M) \\ \end{tabular} } & \(\checkmark\) & \(49.3\) & \(72.6\) & \(81.0\) & \(50.7\) & \(100\) \\ & \(\Delta\) & \(\pm 1.75\) & \(-3.22\) & \(-1.62\) & \(\pm 5.10\) & \(0.00\) \\ \hline \multirow{2}{*}{
\begin{tabular}{l} Vit-GPT2 \\ (86M) \\ \end{tabular} } & \(\checkmark\) & \(95.6\) & \(97.0\) & \(96.6\) & \(90.8\) & \(100\) \\ & \(\Delta\) & \(\pm 0.04\) & \(+0.18\) & \(-0.11\) & \(-9.21\) & \(0.00\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Various models' preference for Hierarchical generalization (F1 score) during training. F1 scores are multiplied by 100. The column names such as 1,000, 5,000, and 10,000 denote the training steps. Scores in the \(\checkmark\) row indicate the results of sighted models, and those in the \(\Delta\) row indicate the score difference between sighted and blind models.
Figure 4: Relationship between CV-oriented metrics and the biases toward Hierarchical generalization of vision encoder in the Natural setting. Each dot corresponds to each model setting \(\{10\text{ encoders}\}\times\{2\text{ seeds}\}\times\{3\text{ steps}\}\).
encoder provides a relatively better effect on linguistic generalization; this can be viewed as a linguistically- and cognitively-inspired evaluation of vision encoders.
We first explore the relationship between the encoder's ImageNet top-1 accuracy5 and its Hierarchical bias exhibited in our experiments (\(\Delta\) F1 score in Table 2). ImageNet accuracy is typically used to measure the quality of a vision encoder, and even a scaling law in ImageNet accuracy has been reported (Zhai et al., 2022). Figure 4(a) exhibits no clear relationship between the two metrics. That is, a vision encoder that is better by engineering-oriented metrics does not always lead to better effects on linguistic generalization when it is used in combination with a language decoder.
Footnote 5: We used the scores reported in their original papers.
Next, we compare the model's general image captioning performance (ROUGE-L F1 score) and its preference for Hierarchical generalization. The ROUGE score is computed in the validation set6 using the off-the-shelf implementation7. There was also no clear relationship; that is, a high ROUGE score does not entail that the model successfully achieves Hierarchical syntactic generalization.
Footnote 6: Hold-out 1000 Ambiguous instances that do not overlap with the training data.
Footnote 7: [https://huggingface.co/spaces/evaluate-metric/rouge](https://huggingface.co/spaces/evaluate-metric/rouge)
## 5 Discussion and limitations
Mixed results in Natural and Artificial settings:Does vision accelerate the hierarchical generalization of neural LMs? This question can be decomposed into the following sub-questions:
* Does the image have features that are more helpful for syntactic generalization than the text has?
* Do neural models have the ability to use such features under our training scenario?
In this sense, almost no advantage of vision in the Natural setting suggests at least two possibilities: (i) vision is not helpful for efficient language acquisition (Q1--No, Q2--No), or (ii) vision is potentially helpful in human language acquisition, but our scenario of training neural learners lacks some proper biases that are working only in human scenarios, e.g., learners' prior biases or training/data settings toward vision-language grounding (Q1--Yes, Q2--No).
If one can accept that the Artificial setting is a proper abstraction of the grounding problem, the positive results in the Artificial setting suggest that interpretation (ii) is plausible. That is, under proper abstraction, vision indeed accelerates hierarchical linguistic generalization, but the problem is how the learner can acquire the ability to abstract the "meaning" of image and text, and at least the models we examined might not have such an ability. This view is in line with the considerations articulated by Gleitman and Gleitman (1992) and Dupoux (2018). Of course, this is only one hypothesis; we hope that this study encourages further investigations.
Words beyond the image content: What kinds of difficulties exist specifically in the Natural data? One of the difficulties we observed is that a caption can contain information that is **not** in the image; this might cause confusion in terms of the visual grounding of the sentence. For example, the first example in Table 3 has the caption _the walls over the toilet need a small cabinet_; in this case, the _cabinet_ is not in the image, although it is not directly relevant to the subject-verb agreement. The second example's caption in Table 3 also mentions objects beyond the image; here, the word _boys_ does not refer to the boy in this image, but to any boys with similar eyes. This image is potentially confusing in terms of the number agreement since the grammatical subject _boys_ is in plural form, but the image shows one boy.
Formulation of vision-language modeling: We focused on a specific type of vision-language model, namely image captioning models. However, there are other formulations involving vision-language interaction, such as text-to-image models (Ramesh et al., 2021) and discriminative models like CLIP (Radford et al., 2021). Investigating the inductive bias related to such a
\begin{table}
\begin{tabular}{p{142.3pt}|p{142.3pt}} \hline \hline & _the walls over the toilet need a small cabinet_ \\ \hline & _boys with eyes like that drive me crazy_ \\ \hline \hline \end{tabular}
\end{table}
Table 3: Examples exhibiting some challenging features specific to the Natural data.
difference in task formulation would be one direction for future work.
Coverage of the experiments: We only focused on a specific linguistic phenomenon, subject-verb agreement. Although the subject and verb are core components of a sentence, extending the experimental settings to cover a broad range of linguistic phenomena would be needed to enhance the generality of the conclusion. One issue is that almost all datasets for linguistic probing, such as BLiMP (Warstadt et al., 2020), have only a text part; it is not obvious how to use such material to evaluate vision-language models. Developing a wide-coverage benchmark for evaluating linguistic knowledge in vision-language models is another interesting direction.
## 6 Conclusions
We have conducted two complementary experiments (a noisy, realistic image-text setting and a simplified, artificial one) toward the advantage of vision in the syntactic generalization of LMs. Our results have exhibited that vision accelerated a proper linguistic generalization in the simplified, artificial setting, but LMs struggled with the proper generalization in the noisy, realistic setting. These mixed results have indicated several possibilities; for example, an image can potentially boost language acquisition, but learners' additional visual/linguistic prior knowledge should be needed to robustly make use of _raw_ images for efficient language acquisition.
## Acknowledgement
This work was partially supported by JST CREST Grant Number JPMJCR20D2, Japan. We thank Kentaro Inui, Keisuke Sakaguchi, Yohei Oseki, Goro Kobayashi, Sho Yokoi, Jun Suzuki, members in Tohoku NLP Group, and those who commented on our early work in YANS 2022 for their general feedback on this research direction.
|
2306.12794 | Overview of Robust and Multilingual Automatic Evaluation Metrics for
Open-Domain Dialogue Systems at DSTC 11 Track 4 | The advent and fast development of neural networks have revolutionized the
research on dialogue systems and subsequently have triggered various challenges
regarding their automatic evaluation. Automatic evaluation of open-domain
dialogue systems as an open challenge has been the center of the attention of
many researchers. Despite the consistent efforts to improve automatic metrics'
correlations with human evaluation, there have been very few attempts to assess
their robustness over multiple domains and dimensions. Also, their focus is
mainly on the English language. All of these challenges prompt the development
of automatic evaluation metrics that are reliable in various domains,
dimensions, and languages. This track in the 11th Dialogue System Technology
Challenge (DSTC11) is part of the ongoing effort to promote robust and
multilingual automatic evaluation metrics. This article describes the datasets
and baselines provided to participants and discusses the submission and result
details of the two proposed subtasks. | Mario Rodríguez-Cantelar, Chen Zhang, Chengguang Tang, Ke Shi, Sarik Ghazarian, João Sedoc, Luis Fernando D'Haro, Alexander Rudnicky | 2023-06-22T10:50:23Z | http://arxiv.org/abs/2306.12794v3 | # Overview of Robust and Multilingual Automatic Evaluation Metrics
###### Abstract
The advent and fast development of neural networks have revolutionized the research on dialogue systems and subsequently have triggered various challenges regarding their automatic evaluation. Automatic evaluation of open-domain dialogue systems as an open challenge has been the center of the attention of many researchers. Despite the consistent efforts to improve automatic metrics' correlations with human evaluation, there have been very few attempts to assess their robustness over multiple domains and dimensions. Also, their focus is mainly on the English language. All of these challenges prompt the development of automatic evaluation metrics that are reliable in various domains, dimensions, and languages. This track in the 11th Dialogue System Technology Challenge (DSTC11) is part of the ongoing effort to promote robust and multilingual automatic evaluation metrics. This article describes the datasets and baselines provided to participants and discusses the submission and result details of the two proposed subtasks.
## 1 Introduction
Recent advances in large-scale neural language models Devlin et al. (2019); Radford et al. (2019); Zhang et al. (2020) have led to significant attention in dialogue systems, especially in the open-domain category. Significant research efforts are dedicated to boosting the robustness of dialogue systems, that is, improving their capability to perform well across multiple domains and dimensions and to handle humans' diverse expressions of the same ideas (e.g., paraphrasing or back-translation).
Automatic evaluation is an indispensable component for speeding up the development of robust dialogue systems. Common metrics are based on word overlap, such as BLEU Papineni et al. (2002) and ROUGE Lin (2004), which mainly focus on matching syntactic information with a set of golden references. Unfortunately, such metrics correlate poorly with human judgments Liu et al. (2016), as in open-domain dialogue there can be limitless feasible responses w.r.t. a dialogue context.
Alternatively, recently developed model-based metrics such as BERTscore Sun et al. (2022), BLEURT Sellam et al. (2020), FED Mehri and Eskenazi (2020), and MDD-Eval Zhang et al. (2022), which take advantage of the strong semantic representation capability of pre-trained transformer language models, perform the evaluation at semantic and partially pragmatic levels. Some of them do not even need golden references as input. Regrettably, despite their improvement over the word-overlap metrics, these metrics are not perfect; that is, their correlation with human evaluation is still not strong. Moreover, most of them perform well only on a particular dimension (e.g., engagingness or coherence) Zhang et al. (2022), or specific to a single domain. In addition, their performance may be highly dependent on the datasets used for training and evaluation Yeh et al. (2021).
Due to the lack of robust automatic evaluation metrics Mehri and Eskenazi (2020), researchers have to resort to the time-consuming and cost-intensive human evaluation process to analyze the performance of their model and benchmark their proposed methods against baselines.
Furthermore, to the best of our knowledge, none of the existing metrics have been thoroughly tested in a multilingual setting. Metric generalization across different languages is highly desirable, as it
helps the transformation of state-of-the-art English-only dialogue systems into highly capable multilingual systems. Although multilingual pre-trained language models may exist and can be potentially used for training multilingual dialogue systems, human-annotations or high-quality dialogue datasets for languages other than English are very scarce or even nonexistent in the case of some low-resource languages. To address this problem, we take advantage of recent advances in neural machine translation and paraphrasing systems. Using existing high-quality services and models, it is possible to create new datasets for different languages and perform back-translation or paraphrasing to create additional data in the original language to improve and evaluate the robustness of existing metrics. To this end, we propose two subtasks in our track, and their details are listed as follows:
### Track Details
This track consists of two tasks which are explained in more detail below.
* **Task 1 (Multilingual Automatic Metrics):** Participants will develop effective open-ended and multilingual automatic dialogue evaluation metrics that perform similarly when evaluated in a new language.
* **Task 2 (Robust Evaluation Metrics):** Participants will develop effective open-ended automatic dialogue evaluation metrics that perform robustly when evaluated over paraphrased/back-translated sentences in English.

For both tasks, proposed metrics are expected to show the following two important properties, as indicated in [1]:
1. Correlated to human judgments - the metrics should produce evaluation scores that well correlate to human judgments (scores) across multiple languages or alternative responses (i.e., back-translated or paraphrased).
2. Explainable - the metrics should provide constructive and explicit feedback to the generative models in terms of the quality of their generated responses. For instance, if a generative model contradicts itself, the evaluation metrics should signal such behavior.
Participants can propose their own metrics or optionally improve the deep AM-FM [13] baseline evaluation model provided by us. A leaderboard on the ChatEval platform1 was provided to check the performance of their different proposed models compared to those submitted by other researchers.
Footnote 1: [https://chateval.org/dstc11](https://chateval.org/dstc11)
For each evaluation task, Spearman's correlation was used to compare the proposed evaluation metrics against human judgments. A final average score was calculated to rank the submitted metric models. Additional instructions to participants were provided through the Github repository2 and by email on the main DSTC distribution list.
Footnote 2: [https://github.com/Mario-RC/dstc11_track4_robust_multilingual_metrics](https://github.com/Mario-RC/dstc11_track4_robust_multilingual_metrics)
## 2 Task 1: Multilingual Automatic Metrics
In this task, the goal for participants is to propose effective automatic dialogue evaluation metrics that exhibit the properties mentioned above (Section 1.1) and perform well in a multilingual setup [1]. Concretely, participants were asked to propose a single multilingual model that could provide high correlations with human-annotations when evaluated in multilingual dialogues (development set in Section 2.1) and perform well in the hidden multilingual test set. Participants were required to use pre-trained multilingual models and train them to predict multidimensional quality metrics using self-supervised techniques and, optionally, fine-tune their system over a subset of the development data.
Finally, participants evaluated their models on the development and test sets, expecting to show similar performance in terms of correlations with human-annotations across three languages: English, Spanish, and Chinese. Only development and test sets have human-annotations, and only the test sets were manually translated or paraphrased/back-translated to guarantee the correlations with the original human-annotations on the English data.
### Datasets
**Datasets summary** Table 1 shows the three clusters of datasets we used or created during the competition. The table shows information about the amount of data used to train, develop, and test the proposed metrics. All these dataset clusters were available in English, Spanish, or Chinese and were back-translated into English. CHANEL and CDIAL include open-domain human-human conversations, while DSTC10 includes human-annotations on human-chatbot interactions. The type of annotations or metadata and how each cluster
was used (training/development/test) are indicated in the last three rows.
Table 7 (Appendix A) provides a brief summary of all the statistics of the train, development, and test datasets. The dataset statistics shown include the number of utterances, avg. number of utterances in each conversation, avg. number of context/response words, type of annotations (turn or dialogue level), number of criteria, number of provided annotations, and type of dialogue systems used for generating responses.
**Train** As the training set, we used the data released during the CHANEL@JSALT2020 (Rudnicky et al., 2020) workshop organized by Johns Hopkins University. This cluster consisted of a total of 18 well-known human-human dialogue datasets pre-processed and distributed in a standard format. The total number of dialogues was 393k (approximately 3M turns). An additional advantage of the data in this cluster is that they have been automatically translated back and forth using the same high-quality MS Azure translation service.5
Footnote 3: [https://github.com/CHANEL-JSALT-2020/datasets](https://github.com/CHANEL-JSALT-2020/datasets)
Footnote 4: [https://www.clsp.jhu.edu/chaval-cha-t-dialogue-modeling-and-evaluation/](https://www.clsp.jhu.edu/chaval-cha-t-dialogue-modeling-and-evaluation/)
Footnote 5: [https://azure.microsoft.com/en-us/products/cognitive-services/translator/](https://azure.microsoft.com/en-us/products/cognitive-services/translator/)
**Development** As development set, the organizers provided data from two clusters of datasets: DSTC10 and CDIAL.
The first one was collected during DSTC10 Track 5 (Zhang et al., 2022), consisting of more than 35k turn-level human-annotations, which were automatically translated into Spanish and Chinese, and then back-translated into English using MS Azure services.
Second, we used datasets provided by THUCOAI6 group (Conversational AI groups from Tsinghua University), naming this cluster of datasets CDIAL. It contains open-domain human-human dialogues. They are originally in Chinese and include 3,470 dialogues (approximately 130k turns). Furthermore, we provided Chinese to English translations through the SotA Tencent MT7 system.
Footnote 6: [https://github.com/thu-coai](https://github.com/thu-coai)
Furthermore, Tencent AI manually annotated \(\sim\) 3k random H-H turns (\(\sim\)1k dialogues) of CDIAL in Chinese (at turn-and dialogue-level).
It is important to note that the development data is intended to help participants verify the multilingual and robustness capabilities of their models in terms of correlations with human-annotations.
**Test** Furthermore, in order to check the generalization capabilities of the proposed metrics from the participant, the test data included new English, Chinese, and Spanish data of human-chatbot interactions (Appendix B).
First, a new Human-Chatbot English dataset (HCEnglish) with \(\sim\)2k turns (\(\sim\)60 dialogues) was collected with three different SotA chatbots (ChatGPT (Radford et al., 2018), GPT-3.5 (Brown et al., 2020), and BlenderBot 3 (Shuster et al., 2022; Giorgi et al., 2023)). This dataset was manually annotated (turn level and dialogue level) using Amazon Mechanical Turk (AMT), then translated from English to Chinese and Spanish using MS Azure.
Second, a new Human-Chatbot Chinese dataset (HCChinese) consisting of \(\sim\)5k turns (\(\sim\)500 dialogues) was generated with three different SotA chatbots (Chinese DialoGPT, Microsoft's Xiaoice (Zhou et al., 2020), and Baidu's Plato-XL (Bao et al., 2022)). This dataset was manually annotated (turn and dialogue level) by Tencent AI, and then translated from Chinese to English using the Tencent MT system.
Third, hidden data from the DSTC10 data was used for Spanish with a total of \(\sim\)1500 turns (\(\sim\)700 dialogues). Existing turn-level annotations were used, as well as Spanish translations and English back-translations created using MS Azure, which were subsequently manually reviewed.
Table 2 shows the number of turns and dialogues for each test dataset for each language. The DSTC10 datasets did not have annotations at dialogue-level.
**Metadata** Since the quality of translated sentences can play an important role in the estimation of metric scores, quality annotations between the original sentence and its respective translation were delivered for each turn of all datasets. Machine translation Quality Estimation (QE) metric scores were given to participants using the QE COMET8 (Rei et al., 2020) system. In addition, for task 1, the cosine similarity between the original sentence and the translated sentence was also provided. Thanks to this information, participants could optionally discard dialogues or turns that potentially did not get a high translation quality estimation, thereby reducing potential noise and allowing the creation of more robust metric systems.
Footnote 8: [https://github.com/Unbabel/COMET](https://github.com/Unbabel/COMET)
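These QE scores can be used to drop low-quality translations before training; a minimal sketch with the open-source COMET toolkit follows, where the checkpoint name, threshold, and the v2-style API are assumptions on our side rather than the track's official pipeline:

```python
from comet import download_model, load_from_checkpoint

# Reference-free QE model; scores each (source, machine translation) pair.
qe_model = load_from_checkpoint(download_model("Unbabel/wmt20-comet-qe-da"))

def keep_well_translated(pairs, threshold=0.3):
    """pairs: list of (original_sentence, translated_sentence) tuples."""
    data = [{"src": src, "mt": mt} for src, mt in pairs]
    scores = qe_model.predict(data, batch_size=16, gpus=0).scores
    return [pair for pair, score in zip(pairs, scores) if score >= threshold]
```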
In addition, toxicity and sentiment analysis metadata were provided for the original turns in both the CHANEL and DSTC10 datasets for filtering and dialogue curation purposes, as well as to avoid potential biases. These metadata gave participants a better picture of dataset quality, helping them decide whether or not to use the original turns and their translations when training their evaluation models and, optionally, when fine-tuning multilingual pre-trained models to achieve better performance on the proposed dialogue-oriented tasks.
**Data Format** All data given follow a unified data format to make the storage, handling, and retrieval easier for the participants. Detailed guidelines are available in the track repository.9
Footnote 9: [https://github.com/Mario-RC/dstc11_track4_robust_multilingual_metrics/blob/main/dstc11/track4-datasets-format.md](https://github.com/Mario-RC/dstc11_track4_robust_multilingual_metrics/blob/main/dstc11/track4-datasets-format.md)
**Dimensions** For HCEnglish, Amazon Mechanical Turk (AMT) was used to collect annotations for each of the dimensions evaluated in the test data. Our annotations restricted the users to location US, >97% approval rate, >1000 HITs done, and a convenience pool of workers used for NLP evaluation tasks.10 This pool included workers from the pipeline paper [22] and CloudResearch. The average compensation was \(\sim\)$15/hr. We included text-based attention checks at the dialogue-level, as well as annotator agreement (both with an expert and between crowd workers [some from the DSTC10 dataset]) and time-based filters at the turn-level.
Footnote 10: Without the convenience pool our annotator agreement was near random.
For the HCChinese data, we leveraged the power of Tencent MT11 to perform the English-to-Chinese translation of the corpus, followed by training a team of six professional Chinese annotators to annotate the dialogues. The entire annotation process spanned a month and incurred costs of approximately 6,194 US dollars, which is in line with the expenses associated with other evaluation datasets. The average cost of annotating each dialogue was 2.36 US dollars. Finally, the average correlation coefficient for Adequacy scored by six annotators is 0.79, and 0.67 for Fluency.
Footnote 11: [https://cloud.tencent.com/product/tmt](https://cloud.tencent.com/product/tmt)
### Dimensions Evaluated
Since open-domain dialogue systems have a multi-faceted nature, evaluation can be accomplished from different perspectives. As this is the case for both the development and test data of task 1 (multilingual) and task 2 (robust), we include the following dimensions as turn-level and dialogue-level annotations [10]:
* **Turn-level dimensions**:
**Appropriateness** - The response is appropriate given the preceding dialogue.
**Content Richness** - The response is informative, with long sentences including multiple entities and conceptual or emotional words.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Dataset Name** & **CHANEL** & **DSTC10** & **CDIAL** \\ \hline \#datasets & 18 & 7 & 3 \\ \hline \multirow{2}{*}{Language} & English, Spanish/Chinese, & English, Spanish/Chinese, & English, Spanish/Chinese, \\ & and English back-translation & and English back-translation & and English back-translation \\ \hline Dialogues Type & Human-Human Open-Domain & Human-Chatbot Open-Domain & Human-Human Open-Domain \\ \hline \#dialogues/utterances & + 390.000 / + 3.000.000 & + 18.000 / + 55.000 & + 3.470 /+130.000 \\ \hline \multirow{2}{*}{Annotations} & Sentiment analysis and Toxicity & Sentiment analysis and Toxicity & \multirow{2}{*}{Turn /dialogue level human scores} \\ & Turn /dialogue level human scores & & \\ \hline \multirow{2}{*}{Task 1 Set} & \multirow{2}{*}{Public: Train} & Public: Dev, Test & \multirow{2}{*}{Public: Train, Dev} \\ & & Hidden: Automatic Translations & & \\ \hline \multirow{2}{*}{Task 2 Set} & \multirow{2}{*}{Public: Train} & Public: Dev, Test & \multirow{2}{*}{—} \\ & & Hidden: Manually back-translated/paraphrased & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of train/development/test datasets.
\begin{table}
\begin{tabular}{l c c c c|c c c|c c c|c c} \hline \hline Language & \multicolumn{3}{c}{**EN**} & \multicolumn{3}{c}{**ZH**} & \multicolumn{3}{c}{**ES**} & \multirow{2}{*}{Global} \\ Dataset & **HCEnglish** & **HCChinese** & **DSTC10** & **Total** & **HCEnglish** & **HCChinese** & **DSTC10** & **Total** & **HCEnglish** & **DSTC10** & **Total** \\ \hline Turns & 1700 & 478 & 114 & 2292 & 364 & 1672 & 123 & 2159 & 55 & 333 & 388 & 4839 \\ Dialogues & 59 & 40 & - & 99 & 15 & 160 & - & 175 & 3 & - & 3 & 277 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary statistics of the test dataset used for **task 1** at turn and dialogue level, and separated by language.
**Grammatical Correctness** - Responses are free of grammatical and semantic errors.
**Relevance** - Responses are on-topic with the immediate dialogue history.
* **Dialogue-level dimensions**:
**Coherence** - Throughout the dialogue, the system maintains a good conversation flow.
**Engageness/Likeability** - Throughout the dialogue, the system displays a likeable personality.
**Informativeness** - Throughout the dialogue, the system provides unique and non-generic information.
**Overall** - The overall quality of and satisfaction with the dialogue.
Furthermore, when choosing the test dimensions, the annotations available in the train and development data were taken into account to keep them balanced and homogeneous.
The dimensions chosen at the turn level capture how appropriate the responses are, how informative they are (including multiple entities and conceptual or emotional words), whether they are free of grammatical and semantic errors, and whether they are on-topic with the immediate dialogue history. The dimensions chosen at the dialogue level capture how well the system maintains a good conversation flow, engages with the user, provides unique and non-generic information, and the overall quality of the system.
Table 3 summarizes the dimensions for each test data set. As can be seen, the DSTC10 set only has human turn-level annotations.
### Baseline
We provide a multilingual variant of deep AM-FM (Zhang et al., 2021)(used previously during Track5 at DSTC10) as the baseline model. The formulation of both AM and FM remains unchanged except that we switch their original English-based pre-trained language models to multilingual models. For the adequacy metric (AM), we use XLM-R12(Conneau et al., 2020) to extract sentence-level embeddings of both the response and the last sentence in the corresponding dialogue context. Then, the cosine similarity of the two embeddings is the AM score assigned to the corresponding response. For the fluency metric (FM), we adopt the multilingual GPT-213 as the backbone language model. The conditional probability of the response w.r.t. the context given by the multilingual GPT-2 model serves as the FM score of the response. The final AM-FM score is the arithmetic mean of both metric scores. All information related to the baseline model, such as code and data, can be found in this GitHub repository.14
Footnote 12: [https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens](https://huggingface.co/sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens)
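A condensed sketch of this multilingual AM-FM computation; the AM encoder follows the footnote above, while the multilingual GPT-2 checkpoint and the exact length normalization are assumptions, since they are not fully specified here:

```python
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import AutoModelForCausalLM, AutoTokenizer

am_encoder = SentenceTransformer(
    "sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens")
fm_tokenizer = AutoTokenizer.from_pretrained("ai-forever/mGPT")   # assumed checkpoint
fm_model = AutoModelForCausalLM.from_pretrained("ai-forever/mGPT")

def am_score(last_context_turn: str, response: str) -> float:
    # Adequacy: cosine similarity of the two sentence embeddings.
    emb = am_encoder.encode([last_context_turn, response], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

@torch.no_grad()
def fm_score(context: str, response: str) -> float:
    # Fluency: per-token conditional probability of the response given the context.
    ctx = fm_tokenizer(context, return_tensors="pt").input_ids
    full = fm_tokenizer(context + " " + response, return_tensors="pt").input_ids
    labels = full.clone()
    labels[:, : ctx.shape[1]] = -100           # only score the response tokens
    loss = fm_model(full, labels=labels).loss  # mean NLL over response tokens
    return torch.exp(-loss).item()

def am_fm(last_context_turn: str, context: str, response: str) -> float:
    # Final score: arithmetic mean of the adequacy and fluency scores.
    return 0.5 * (am_score(last_context_turn, response) + fm_score(context, response))
```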
### Participants
In Task 1, 4 teams participated, which provided a total of 16 submissions. Participants were asked to provide a brief description of the system for their proposals. The two system descriptions provided by the participants are shown below:
**Team 4** Their approach utilizes two submetric groups, XLM-R and ChatGPT, for evaluating dialogue responses. The XLM-R group employs the XLM-Roberta-Large encoder model, consisting of NSP (Next Sentence Prediction), VSP (Valid Sentence Prediction), MLM (Masked Language Modeling), and ENG (Engagement) submetrics. The NSP submetric ensembles three models trained on English and multilingual data, while the VSP model
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Sets** & \multicolumn{4}{c}{**Dimensions**} \\ \hline \hline
**DSTC10** & & & & \\ DSTC10-turn & A & CR & GC & R \\ ChatEval-turn & A & & & \\ JSALT-turn & A & & & \\ \hline
**HCChinese** & & & & \\ HCChinese-dial & C & EL & I & O \\ HCChinese-turn & & CR & GC & R \\ \hline
**HCEnglish** & & & & \\ \hline
**HCEnglish-dial** & **C** & **EL** & **I** & **O** \\
**HCEnglish-turn** & **A** & **CR** & **GC** & **R** \\ \hline
**Test data** & & & & \\ \hline Test-dial & C & EL & I & O \\ Test-turn & A & CR & GC & R \\ \hline \hline \end{tabular} \(C\): Coherence \(EL\): Engageness/Likeability \(I\): Informativeness \(O\): Overall \(A\): Appropriateness \(CR\): Content Richness \(GC\): Grammatical Correctness \(R\): Relevance
\end{table}
Table 3: Summary of the dimensions (human-annotations) available for each dataset used in the test data, both at the turn and dialogue level.
combines different models. The ENG submetric uses an ensemble of encoder models trained on the ENDEX engagement dataset [20]. The MLM submetric utilizes the pre-trained XLM-R-large model with a Language Modeling head. The ChatGPT group prompts gpt-3.5-turbo to evaluate responses based on the dimensions of the DSTC11 test, with submetrics for dialogue and turn level. Weighted sums of the submetrics are calculated, with the weights learned from a subset of the dev dataset. For the test set, four variations were submitted, including weighted sums of XLM-R and ChatGPT, direct mapping of ChatGPT, and a weighted sum of all models.
In addition, Team 4 used the metadata provided. During their tests for task 1 they discovered that increasing the amount of machine-translated data negatively affected the performance of the trained models. Therefore, they used the quality estimations computed with the COMET MTQE model to keep only the best translated dialogues.
For task 2, they trained their models on the least similar paraphrases and, separately, on the most similar ones, based on cosine similarity and Levenshtein distance. They found a good correlation between the paraphrase score and their model performance, with lower scores yielding higher performance and vice versa. They deduced that the lower-scored responses were more diverse and therefore more informative for training.
**Team 7** Their Parallel Corpus Alignment Framework enhances model evaluation on parallel corpora, focusing on Robust and Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue systems. By utilizing xlm-roberta-large and bert-base as baseline models, they leverage representations from different languages, paraphrases, and translations to align parallel corpora in the semantic space. Through contrastive learning and multi-dataset distillation, they strengthen the model's scoring robustness and evaluation capability across various data domains.
### Results
Table 4 shows the results on test data for task 1 at turn and dialogue level. To calculate the scores in the table, the following procedure was followed:

1. The data are separated by language (English, Chinese, and Spanish);
2. For each language separately, Spearman's correlation coefficients are calculated for each dimension independently;
3. The mean of the correlations over the dimensions is computed for each language (columns EN, ZH, and ES);
4. The final mean (Global column) of the language columns is computed.
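A minimal sketch of this scoring procedure is shown below. The column names and the per-dimension layout of the data frame are assumptions made for illustration, not the organizers' actual evaluation script.

```python
# Hypothetical layout: one row per response with columns 'lang', 'metric_<dim>'
# (submitted score) and 'human_<dim>' (annotation); dims follow Table 3.
import pandas as pd
from scipy.stats import spearmanr

def task1_global_score(df: pd.DataFrame, dims=("A", "CR", "GC", "R")) -> float:
    language_means = []
    for _, group in df.groupby("lang"):                       # step 1: per language
        rhos = []
        for d in dims:                                        # step 2: per dimension
            rho, _ = spearmanr(group[f"metric_{d}"], group[f"human_{d}"])
            rhos.append(rho)
        language_means.append(sum(rhos) / len(rhos))          # step 3: mean over dims
    return sum(language_means) / len(language_means)          # step 4: global mean
```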
To rank each team, its best submission according to the calculated global score was used. Teams 4, 7 and 5 were the best performers at turn-level. Regarding dialogue-level, team 4 was the best performer, followed by the baseline model and then by team 5. In particular, the performance of team 4 is outstanding in all languages.
This shows that team 4's model is very effective not only at the global multilingual level, but also in each language separately, with very high performance in Spanish, followed by English and then Chinese. This highlights the need for a multilingual metric capable of performing well in Chinese to match the results obtained in Spanish or English.
At dialogue-level, team 4 also demonstrated a very high correlation, with a wide margin of advantage over team 5 and the baseline model. It should be noted that for Spanish at the dialogue level the amount of data was scarce, producing results that are not statistically significant and making it difficult to analyze the reason for such high correlations.
## 3 Task 2: Robust Evaluation Metrics
In this task, the goal of the participants was to propose robust metrics for automatic evaluation of English dialogues that exhibit previously mentioned properties (subsection 1.1) while being robust when dealing with paraphrased/back-translated English sentences. Here, the expected performance for the
\begin{table}
\begin{tabular}{l c c c c c} & \multicolumn{4}{c}{**Turn-level**} \\
**Team** & **EN** & **ZH** & **ES** & **Global** & **Rank** \\ \hline Baseline & 0.2940 & 0.0753 & 0.1826 & 0.1840 & 4 \\ Team 2 & 0.1469 & 0.1054 & 0.0808 & 0.1110 & 5 \\ Team 4 & 0.4818 & 0.3936 & 0.5890 & **0.4881** & **1** \\ Team 5 & 0.3702 & 0.0701 & 0.1983 & _0.2129_ & \(3\) \\ Team 7 & 0.2214 & 0.3112 & 0.5644 & 0.3657 & 2 \\ \hline \hline \multicolumn{5}{c}{**Dialogue-level**} \\
**Team** & **EN** & **ZH** & **ES** & **Global** & **Rank** \\ \hline Baseline & 0.2414 & 0.4648 & 0.8080 & 0.5047 & 2 \\ Team 4 & 0.5342 & 0.7133 & 0.8080 & **0.6852** & **1** \\ Team 5 & 0.1865 & 0.1356 & 0.6830 & _0.3350_ & \(3\) \\ \end{tabular}
\end{table}
Table 4: Spearman’s correlations of the baseline and average correlations of each team’s metrics on **turn-level** and **dialogue-level** test sets for **task 1**. The first position is shown in bold, the second in underline and the third in italics.
proposed metrics was that they could be on par with the correlations with human annotations obtained over the original sentences. As the proposed robustness criterion, paraphrased/back-translated sentences should have the same semantic meaning as the original sentence but different wording. Task 2 was only evaluated for the English language.
Participants had the opportunity to evaluate their models with developmental data composed of paraphrased/back-translated sentences and their respective human annotations.
### Datasets
**Train, development, and test** For task 2, the same task 1 datasets were used. However, to evaluate robustness, paraphrases and back-translated data were used. Thus, for task 2, the original datasets were provided, in addition to the back-translations and paraphrases of the original sentences, but not the translations into other languages. Table 5 shows the number of turns and dialogues for each test data set sent to the participants. The DSTC10 datasets did not have annotations at dialogue-level.
For creating semantically similar sentences, we relied on two options: back-translations and a paraphraser model. For back-translations we used MS Azure MT services or Tencent MT system. Then for the paraphraser model, we used PARROT15(Damodaran, 2021). Multiple paraphrases were generated for all the original English sentences in each dataset.
Footnote 15: [https://github.com/jsedoc/Parrot_Paraphraser](https://github.com/jsedoc/Parrot_Paraphraser)
For this task, paraphrases were preferable to back-translations. The reason is that current translation systems have very high quality, so back-translations are often too similar to the original sentence, or even identical, and thus do not meet the robustness criterion proposed in task 2.
**Metadata** For this specific task, participants received as metadata the Levenshtein16 distance calculated for all paraphrases generated from the original sentences, in all datasets. As in task 1, QE annotations were given using the same COMET model; in this case, they were calculated between the original sentence and each of its paraphrases separately, and between the original sentence and its respective back-translation.
Footnote 16: The Levenshtein distance is a numerical measure of how different two strings are. A higher Levenshtein distance signifies a greater difference between the two strings.
Moreover, participants were given the cosine similarity calculated between the original sentence and its respective paraphrases, as well as between the original sentence and its back-translation. Finally, participants were notified of the provided metadata, as well as of the toxicity and sentiment analysis annotations, so that they could filter potentially biased or noisy sentences.
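The sketch below shows one way such metadata can be reproduced for an original/paraphrase pair. The sentence encoder named here is an assumed stand-in for illustration, not necessarily the model used by the organizers; the edit distance is the standard dynamic program.

```python
from sentence_transformers import SentenceTransformer, util

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

def paraphrase_metadata(original: str, paraphrase: str) -> dict:
    emb = encoder.encode([original, paraphrase], convert_to_tensor=True)
    return {
        "levenshtein": levenshtein(original, paraphrase),
        "cosine_similarity": util.cos_sim(emb[0], emb[1]).item(),
    }
```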
**Dimensions** Human-annotations for development and test data were the same as for task 1.
### Dimensions Evaluated
As the data for task 2 are the same as those in task 1, the nature of the data is common in both tasks. Therefore, the dimensions used to evaluate the models, both at the turn and dialogue level, are shared between the two tasks.
### Baseline
The same baseline as for task 1 (subsection 2.3) was used for task 2. In this case, paraphrases were used instead of multilingual sentences to evaluate robustness.
### Participants
For this task, a total of 5 teams participated and sent a total of 21 submissions. Participants were asked to provide a brief description of their systems. Teams 4 and 7 used the same models as for task 1; therefore, they are described in Section 2.4. Below, we provide detailed descriptions for teams 3 and 6.
**Team 3** To address the variability of metrics in evaluating different dimensions and mitigate overfitting on scarce human-annotated data, they propose IDEL. This approach combines multiple metrics to achieve a higher correlation with human judgment across all dimensions. To avoid overfitting, they employed a list-wise learning-to-rank objective, leveraging the relative positions of examples rather than absolute coordinates. Furthermore, they utilized the LLaMA 65B model and in-context learning for direct evaluation of examples, considering their context.
**Team 6** Their approach focused on predicting turn-level qualities. They utilized pre-trained Large
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Dataset** & **HCEnglish** & **DSTC10** & **Total** \\ \hline Turns & 1701 & 404 & 2105 \\ Dialogues & 59 & - & 59 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Summary statistics of **task 2** test datasets at the turn and dialogue level.
Language Models (LLMs) with manually designed prompts and two selected dialogues as few-shot examples to adapt the LLM output. Additionally, they built a feed-forward neural network (FNN) using frozen LLM representations as features to predict the desired metrics. Another submission employed the ChatGPT API with optimized prompts and dynamically obtained dialogue examples. Hyperparameters were selected based on manual annotations of 157 testing examples. However, for grammaticality metric scores, randomly generated scores were submitted due to uninformative constant scores predicted by the LLM.
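A hedged sketch of the second part of this approach, a small feed-forward regressor on frozen LLM representations, is given below. The backbone checkpoint, pooling choice, and layer sizes are assumptions made for illustration; the team's actual model and architecture are not specified in the description.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

BACKBONE = "roberta-base"  # placeholder for the frozen LLM
tok = AutoTokenizer.from_pretrained(BACKBONE)
backbone = AutoModel.from_pretrained(BACKBONE).eval()

def frozen_features(context: str, response: str) -> torch.Tensor:
    # Mean-pooled last hidden state of the (context, response) pair.
    enc = tok(context, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = backbone(**enc).last_hidden_state   # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

class QualityHead(nn.Module):
    # Small feed-forward regressor predicting one turn-level quality score.
    def __init__(self, in_dim: int = 768, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)
```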
### Results
Team results for turn and dialogue levels on the test data for task 2 are provided in Table 6. To calculate the scores and provide the ranking, the following procedure was followed:

a. Calculate the Spearman's correlation coefficients for each dimension independently;
b. Calculate the mean of the correlations over the dimensions;
c. Calculate the final mean (Global column) of the language columns.
The best submission according to the calculated overall score was used to rank each team. Teams 4, 6 and 7 were the best performers at the turn level. At dialogue level, the baseline model provided the best performance, followed by team 4 and team 3.
Considering team 4's results in both task 1 and task 2, their model can be considered the overall best in the competition, performing well at the multilingual level as well as in robustness. However, the performance of the baseline model at dialogue level is far superior to that of team 4, showing there is still room for improvement.
## 4 Conclusions and Future Work
This paper presents a comprehensive overview of Track 4 on "Robust and Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue Systems" organized as part of the 11th Dialogue System Technology Challenge (DSTC11). The track was divided into two subtasks addressing important problems in Dialogue Systems: the design of automatic evaluation metrics for multilingual dialogues, and dialogue robustness when dealing with paraphrases or back-translations.
Each task was evaluated at both turn and dialogue level. For the first task, 5 teams actively participated at the turn level and 3 teams at the dialogue level, with some teams participating at both levels. Some of the teams obtained interesting results that effectively contribute to the state of the art of automatic evaluation models for multilingual dialogues. However, the results at the language level show disparate performance among the different languages, leaving room for improvement in the evaluation of other languages. The overall performance of the participants was satisfactory, with some teams outperforming the baseline model both per language and globally, as well as at the turn and dialogue levels. However, automatic evaluation is still an open problem, as correlation scores remain below 0.7 even in the best cases.
The second task was also subdivided into turn and dialogue level. At the turn level, 5 teams actively participated and at the dialogue level 3 teams, with some teams participating at both levels. At the turn level, several teams outperformed the baseline model. However, no team was able to outperform the baseline model at the dialogue level, showing that there is still room for improvement.
As future work, we plan to increase the number of databases, as well as to provide better baseline models. We also want to include a larger number of dimensions so that the evaluations performed are more complete, covering more different aspects of the dialogue. For task 1, it is planned to extend the number of available languages, to create multilingual models with a wider spectrum, thus widening the scope of the competition and attracting more participants who are fluent in other languages. For task 2 we want to propose higher quality paraphrases, such as those generated with models like GPT-4 (OpenAI, 2023).
\begin{table}
\begin{tabular}{c c c|c c c} & \multicolumn{2}{c}{**Turn-level**} & \multicolumn{3}{c}{**Dialogue-level**} \\
**Team** & **Global** & **Rank** & **Team** & **Global** & **Rank** \\ \hline Baseline & 0.3387 & 4 & Baseline & **0.4800** & **1** \\ Team 1 & 0.1537 & 6 & Team 1 & 0.1111 & 4 \\ Team 3 & 0.2697 & 5 & Team 3 & _0.2196_ & \(3\) \\ Team 4 & **0.4890** & **1** & Team 4 & 0.3031 & 2 \\ Team 6 & 0.4190 & 2 & & & \\ Team 7 & _0.3833_ & \(3\) & & & \\ \end{tabular}
\end{table}
Table 6: Spearman’s correlations of the baseline and average correlations of each team’s metrics on **turn-level** and **dialogue-level** test sets for **task 2**. The first position is shown in bold, the second in underline and the third in italics.
## Acknowledgements
This work is supported by project BEWOR (PID2021-126061OB-C43) funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by "ERDF A way of making Europe", by the "European Union", and by the European Commission through Project ASTOUND (101071191 -- HORIZON-EIC-2021-PATHFINDERCHALLENGES-01). We gratefully acknowledge valuable efforts from Tencent AI Lab who supports Chinese translation and annotation of datasets by funding and infrastructure. Thanks to THU-CoAI (Conversational AI groups from Tsinghua University) for providing their Chinese datasets as part of the challenge data. Thanks to Unbabel for providing the COMET MTQE scores annotations as part of the challenge data. This contribution was supported by national funds through Fundacao para a Ciencia e a Tecnologia (FCT) with references PRT/BD/152198/2021 and UIDB/50021/2020, and by the P2020 program MAIA led by Unbabel (LISBOA-01-0247-FEDER-045909). We also give thanks to MS Azure services (especially to Irving Kwong) for their sponsorship to continue processing new datasets for the research community. This research project is supported by the NYU ChatEval Team led by Joao Sedoc. This research project is supported in part by a grant from Amazon to Alexander Rudnicky, Carnegie Mellon University. Thanks to Karthik Ganesan, Sarik Ghazarian, James Hagerty, Zhang Chen and Alex Rudnicky for developing the baseline model as part of the challenge tasks.
|
2310.18239 | Fine-Tuning Language Models Using Formal Methods Feedback | Although pre-trained language models encode generic knowledge beneficial for
planning and control, they may fail to generate appropriate control policies
for domain-specific tasks. Existing fine-tuning methods use human feedback to
address this limitation, however, sourcing human feedback is labor intensive
and costly. We present a fully automated approach to fine-tune pre-trained
language models for applications in autonomous systems, bridging the gap
between generic knowledge and domain-specific requirements while reducing cost.
The method synthesizes automaton-based controllers from pre-trained models
guided by natural language task descriptions. These controllers are verifiable
against independently provided specifications within a world model, which can
be abstract or obtained from a high-fidelity simulator. Controllers with high
compliance with the desired specifications receive higher ranks, guiding the
iterative fine-tuning process. We provide quantitative evidences, primarily in
autonomous driving, to demonstrate the method's effectiveness across multiple
tasks. The results indicate an improvement in percentage of specifications
satisfied by the controller from 60% to 90%. | Yunhao Yang, Neel P. Bhatt, Tyler Ingebrand, William Ward, Steven Carr, Zhangyang Wang, Ufuk Topcu | 2023-10-27T16:24:24Z | http://arxiv.org/abs/2310.18239v1 | # Fine-Tuning Language Models Using Formal Methods Feedback
###### Abstract
Although pre-trained language models encode generic knowledge beneficial for planning and control, they may fail to generate appropriate control policies for domain-specific tasks. Existing fine-tuning methods use human feedback to address this limitation, however, sourcing human feedback is labor intensive and costly. We present a fully automated approach to fine-tune pre-trained language models for applications in autonomous systems, bridging the gap between generic knowledge and domain-specific requirements while reducing cost. The method synthesizes automaton-based controllers from pre-trained models guided by natural language task descriptions. These controllers are verifiable against independently provided specifications within a world model, which can be abstract or obtained from a high-fidelity simulator. Controllers with high compliance with the desired specifications receive higher ranks, guiding the iterative fine-tuning process. We provide quantitative evidences, primarily in autonomous driving, to demonstrate the method's effectiveness across multiple tasks. The results indicate an improvement in percentage of specifications satisfied by the controller from 60% to 90%.
## 1 Introduction
Pre-trained language models encode rich world knowledge that is useful for planning and control. Recent works use pre-trained models to synthesize control policies for autonomous systems such as in autonomous driving (Seff et al., 2023), surgical robotics (Janssen et al., 2023), and aircraft operation (Tikayat Ray et al., 2023). The control policies yield high-level actions that an agent should take in order to satisfy objectives specified via natural language prompts.
However, in specific domains, pre-trained models may fail to generate appropriate control policies. For instance, an autonomous driving system may require knowledge about traffic rules and conventions specific to a given country. Such specific rules and conventions may be beyond the knowledge encoded in the pre-trained model.
To address this shortcoming, several works use human feedback to fine-tune pre-trained models and incorporate the required domain knowledge (Stiennon et al., 2020; Christiano et al., 2017; Rafailov et al., 2023). Human feedback evaluates the extent to which the output of a pre-trained model aligns with the desired objectives. For example, the provision of a binary ranking of like or dislike for each model output can act as a feedback source. This feedback from human expertise enables fine-tuning of the pre-trained model and allows implicit incorporation of domain-specific knowledge. However, obtaining feedback from humans is labor-intensive and costly.
We investigate how similar feedback can be automatically obtained using artifacts from formal methods. Suppose we have a world model, which is either abstract or obtained from a high-fidelity simulator, and a set of specifications. We can verify, either formally or empirically, if a controller generated by the language model meets the specifications (Yang et al., 2022). The measure of compliance can act as a source of feedback for fine-tuning, similar to human feedback. Since this procedure is automated, such feedback is less labor-intensive and cheaper.
We develop a method to fine-tune pre-trained models based on automated feedback using artifacts from formal methods. The proposed method synthesizes an automaton-based controller from the pre-trained model given a natural language task description (Yang et al., 2022). Such an automaton-based controller is formally verifiable against independently provided specifications (e.g., a driving rule book (Censi et al., 2019)) when implemented in a specific world model. We can obtain the number of specifications satisfied by each controller and use it for ranking. We then iteratively fine-tune the pre-trained model using this ranking as a feedback source.
If the world model is obtained from a high-fidelity simulator rather than an abstract model, we collect trajectories from the simulator. The trajectories are sequences of state-action pairs which can be checked against the provided
specifications. A controller satisfying a larger number of specifications when executed in the simulator is assigned a higher rank. We use the obtained ranks for fine-tuning.
To demonstrate the performance of the proposed method, we provide experimental results covering multiple tasks in an autonomous driving system, although applicability is not limited to this domain. The quantitative results indicate a significant improvement in the percentage of specifications satisfied, from 60% to above 90%, confirming that the proposed method can effectively fine-tune the pre-trained model. Furthermore, we show the real-world applicability of the controller synthesized from the fine-tuned model, providing evidence for the necessity of the proposed method.
## 2 Related Work
Fine-tuning from Human Feedback.Reinforcement learning from human feedback (RLHF) is a preference alignment strategy that learns a reward model from human preferences and then fine-tunes the pre-trained language model using reinforcement learning (Stiennon et al., 2020). In some works, before fine-tuning begins, humans compare the accuracy of multiple responses to a single input and indicate which is preferred, generating a data set of preferences that is used to train a reward function (Stiennon et al., 2020; Ouyang et al., 2022). Other methods optimize the reward function and fine-tune the language model simultaneously. As the model generates outputs, a human indicates which output is preferred, sending new feedback for the reward function to learn, thus impacting the model's accuracy (Christiano et al., 2017).
Direct preference optimization (DPO) is a preference alignment strategy that implicitly optimizes the same objective as RLHF without explicitly learning a reward model or using reinforcement learning. DPO optimizes model outputs directly from human feedback data using a modified maximum likelihood objective, reducing the number of training stages and improving stability (Rafailov et al., 2023).
However, all of the above works rely on humans to provide feedback on which outputs are preferred, and obtaining a large amount of human feedback is labor-intensive. In contrast, the method we propose automatically ranks the outputs from language models. Hence we can obtain an unlimited number of data points to fine-tune the language model.
Fine-tuning from Generated OutputsSome methods fine-tune a language model using the outputs of another model. For example, a language model can learn how to generate common sense phrases (Zhou et al., 2023) or output chain-of-thought reasoning (Li et al., 2023) using responses from a model that already exhibits the desired behavior. Other methods train a language model using the model's own outputs by identifying high-quality statements and feeding them back into the model as examples of correct responses (Bhagavatula et al., 2023; Jung et al., 2023). One approach combines both methods, first fine-tuning using the outputs of a separate pre-trained model, and then fine-tuning again using the model's own filtered outputs (Jung et al., 2023). Another strategy is to modify the backpropagation process so that only certain parameters are updated (Chen et al., 2020).
These methods are not capable of fine-tuning domain-specific language models, since the outputs generated by the model itself or by other models also lack domain-specific knowledge. In contrast, the method we propose can fine-tune the language model to satisfy domain-specific requirements.
Formal Methods and Verification on Language Models.Existing works convert natural language to formal language, which can be used for verification (Baral et al., 2011; Sadoun et al., 2013; Ghosh et al., 2016). Recent works show that language models can be trained to convert natural language to formal language, with applications in representing mathematics, generating proofs, and creating assurance cases (Hahn et al., 2022; Wu et al., 2022; First et al., 2023; Chen et al., 2023). One method is to design input prompts that include task-specific information (e.g., definitions, response templates, and detailed examples) that enable a language model to divide a goal into individual steps and develop formal constraints on the system (Chen et al., 2023). Other methods iteratively refine the input prompt to the language model based on counter-examples until the outputs pass formal verification (Jha et al., 2023; Yang et al., 2022). Although these works utilize formal methods, there are still humans in the loop, while our proposed method aims to be fully automated without any human intervention.
## 3 Preliminaries
**Automaton-Based Model for System or Environment.** A model is an abstract representation that encodes the static and dynamic information of a system or an environment.
Figure 1: Examples of an automaton-based model (top) and a controller (bottom).
We use a transition system to build the model.
A transition system \(\mathcal{M}\coloneqq\langle\Gamma_{\mathcal{M}},Q_{\mathcal{M}},\delta_{\mathcal{M}},\lambda_{\mathcal{M}}\rangle\) consists of a set of output symbols \(\Gamma_{\mathcal{M}}\), a set of states \(Q_{\mathcal{M}}\), a non-deterministic transition function \(\delta_{\mathcal{M}}:Q_{\mathcal{M}}\times Q_{\mathcal{M}}\to\{0,1\}\), and a labeling function \(\lambda_{\mathcal{M}}:Q_{\mathcal{M}}\to\Gamma_{\mathcal{M}}\).
We introduce a set of atomic propositions \(P\) such that \(\Gamma_{\mathcal{M}}\coloneqq 2^{P}\), i.e., a symbol \(\sigma\in\Gamma_{\mathcal{M}}\) is the set of atomic propositions in \(P\) that evaluate to _True_. Each symbol \(\sigma\) captures the system or environment behavior. We present an example in Figure 1. We will leverage the fact that automaton-based structures are formally verifiable in the proposed method.
Automaton-Based Controller.A controller is a system component responsible for making decisions and taking actions based on the system's state. A controller can be mathematically represented as a mapping from the system's current state to an action, which is executable in the task environment. We use a _finite state automaton_ (FSA) to build a controller for a sequential decision-making task.
An FSA is a tuple \(\mathcal{A}=\langle\Sigma,A,Q,q_{0},\delta\rangle\) where \(\Sigma\) and \(A\) are the sets of input and output symbols, \(Q\) is a set of states, \(q_{0}\in Q\) is the initial state, and \(\delta:Q\times\Sigma\times A\times Q\to\{0,1\}\) is a non-deterministic transition function. The transition function is a membership function--a transition exists when it evaluates to \(1\).
Each input symbol \(\sigma\in\Sigma\) is composed of the atomic propositions from \(P\), which is the set of atomic propositions we introduced for the model. We introduce another set of atomic propositions \(P_{A}\) for the output alphabets \(A\coloneqq 2^{P_{A}}\). We also allow for a "no operation/empty" symbol \(\epsilon\in A\). Note that the input symbols comprise all possible dynamics of the environment or system in which the controller operates, and the output symbols comprise all the actions allowed by the controller. See Figure 1 for an example.
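As a minimal sketch of these two structures, the Python data classes below represent symbols as frozensets of atomic propositions (elements of \(2^{P}\) and \(2^{P_{A}}\)). Matching a guard by subset inclusion is a simplifying assumption made for the sketch, not part of the formal definition.

```python
from dataclasses import dataclass, field

Symbol = frozenset  # set of atomic propositions that evaluate to True

@dataclass
class TransitionSystem:
    states: set = field(default_factory=set)
    transitions: set = field(default_factory=set)   # pairs (p_i, p_j) with delta_M = 1
    label: dict = field(default_factory=dict)        # state -> Symbol over P

@dataclass
class Controller:
    states: set = field(default_factory=set)
    initial: str = "q0"
    transitions: set = field(default_factory=set)    # tuples (q, sigma, action, q')

    def step(self, q, observation: Symbol):
        # Successor states and actions enabled by the current observation.
        return [(action, q2) for (q1, guard, action, q2) in self.transitions
                if q1 == q and guard <= observation]
```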
## 4 Fine-Tuning Pre-trained Language Model for Autonomous System
We develop a method for fine-tuning pre-trained language models for specific control tasks, named _direct preference optimization via automated feedback_ (**DPO-AF**). The method first obtains human-provided information regarding the autonomous system. It then constructs a model that encodes the information about the system. Next, we query the pre-trained language model on a particular system control task and get multiple responses from the language model via sampling. We construct automata from the responses and apply verification methods to check how many user-provided specifications each automaton satisfies. We rank the responses by the number of satisfied specifications. Last, we send the prompt and ranked responses to the DPO algorithm to fine-tune the language model.
DPO-AF does not require repeated feedback from humans. Therefore, we can obtain an unlimited number of prompt-response pairs until the language model converges.
### Automaton-Based Representation for Natural Language and Autonomous System
Modeling the Autonomous System.DPO-AF starts by constructing an automaton-based model encoding the information about the autonomous system. Such information is obtained from external sources such as human experts or system operation manuals. The information includes but is not limited to a set of propositions that describe the system's behaviors and a set of control signals (actions) that can affect the system's states. We encode the set of behaviors in an atomic proposition set \(P\) and the set of actions in an atomic proposition set \(P_{A}\).
Figure 3: This diagram depicts the method of ranking responses by formal verification of the induced automata. We present the sample automata in Figures 5 and 7.
Figure 2: The overall pipeline of fine-tuning a language model for autonomous systems via automated feedback. We mark the inputs to the pipeline in purple and the output in blue.
Recall that a model consists of a set of states, a set of symbols, a transition function, and a label function. As we defined \(P\) and \(P_{A}\), we build \(2^{|P|}\) states whose label is \(\sigma\in 2^{P}\) respectively. \(|P|\) is the number of propositions in \(P\). Next, for every two states \(p_{i}\) and \(p_{j}\), we check whether the system supports the transition between the label of \(p_{i}\) and the label of \(p_{j}\). If the system supports such a transition, we add it into the transition function.
Finally, we remove the states with no incoming and outgoing transitions. However, from a conservative perspective, we can build transitions for every pair of states and not remove any states. The conservative approach can avoid potential missing transitions but will significantly increase the computation cost for formal verification.
To illustrate the procedure, suppose there is a traffic light system operating in the order red-green-yellow-red. We have the proposition set \(P=\{green,yellow,red\}\) and transitions (red to green), (green to yellow), and (yellow to red). Hence, we only keep three states with labels \(green,yellow,red\), respectively, and remove all the states with other labels (e.g., \(green\wedge yellow\)).
```
input: Atomic Proposition Set \(P\), System \(\mathcal{S}\)
output: \(\Gamma_{\mathcal{M}},Q_{\mathcal{M}},\delta_{\mathcal{M}},\lambda_{\mathcal{M}}\)
\(\Gamma_{\mathcal{M}}=2^{P}\)
for \(\sigma\) in \(\Gamma_{\mathcal{M}}\)
    \(Q_{\mathcal{M}}=Q_{\mathcal{M}}+p_{new}\)
    \(\lambda_{\mathcal{M}}(p_{new})=\sigma\)
endfor
for \((p_{i},p_{j})\) in \(Q_{\mathcal{M}}\times Q_{\mathcal{M}}\)
    if \(p_{i}\to p_{j}\) is allowed by \(\mathcal{S}\)
        \(\delta_{\mathcal{M}}(p_{i},p_{j})=1\)
    endif
endfor
\(Q_{\mathcal{M}}=Q_{\mathcal{M}}\setminus\{p_{i}\mid\forall_{j}\,\delta_{\mathcal{M}}(p_{i},p_{j})=\delta_{\mathcal{M}}(p_{j},p_{i})=0\}\)
```
**Algorithm 1**An algorithm for system modeling
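A direct Python transcription of Algorithm 1 is sketched below. It assumes the externally provided system is exposed as a callback `allowed(label_i, label_j)` answering whether the system supports a transition between the two labeled behaviors; that callback, the integer state naming, and the closing example are assumptions made for illustration.

```python
from itertools import combinations, product

def powerset(props):
    return [frozenset(c) for r in range(len(props) + 1)
            for c in combinations(props, r)]

def build_model(props, allowed):
    symbols = powerset(props)                          # Gamma_M = 2^P
    states = list(range(len(symbols)))                 # one state per symbol
    label = {p: sym for p, sym in zip(states, symbols)}
    delta = {(pi, pj) for pi, pj in product(states, repeat=2)
             if allowed(label[pi], label[pj])}         # keep supported transitions
    # drop states that have neither incoming nor outgoing transitions
    connected = {p for edge in delta for p in edge}
    states = [p for p in states if p in connected]
    delta = {(pi, pj) for (pi, pj) in delta if pi in states and pj in states}
    return states, delta, label

# Example: the traffic-light system from the text, keeping only the three
# singleton labels and the red -> green -> yellow -> red cycle.
order = {(frozenset({"red"}), frozenset({"green"})),
         (frozenset({"green"}), frozenset({"yellow"})),
         (frozenset({"yellow"}), frozenset({"red"}))}
states, delta, label = build_model(["green", "yellow", "red"],
                                   lambda a, b: (a, b) in order)
```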
Task Prompt Engineering.Prior to fine-tuning the language model, we collect a prompt dataset. The prompt dataset consists of the queries on the control tasks that operate in the autonomous system.
Then, we define a prompt engineering procedure to extract relative task knowledge from the language model. For each prompt in the prompt dataset, we first use the following format to obtain the responses from the language model:
```
Define the steps for [task description]
1. [step one description]
2. [step two description]
...
```
**Algorithm 2**The language model
We rephrase the step description so that the propositions and actions are consistent with the model. Therefore, we avoid failing the verification process due to language ambiguity, i.e., different phrases with the same meaning.
Note that DPO-AF also aims to fine-tune the language model to output steps that can be easily aligned to the propositions and actions. Therefore, the expected responses from the fine-tuned language model should have the following properties:

1. The language model can easily and correctly align the textual step descriptions to the given propositions and actions.
2. The aligned step descriptions satisfy the user-provided specifications.
To check the second property, we need to construct an automaton-based controller from the textual step descriptions. Then, we implement the controller in the model and verify it against the specifications.
Controller Construction.We follow the method GLM2FSA (Yang et al., 2022) to construct an FSA-based controller to encode the textual step descriptions. Specifically, we start from the aligned textual step descriptions and apply semantic parsing to break the steps into a list of verb phrases. Recall that a controller consists of a set of states \(Q\), an initial state \(q_{0}\), input symbols \(\Sigma\), output symbols \(A\), and a transition function \(\delta\). We use the verb phrases to define the input and output symbols according to the grammar from GLM2FSA. Then, we build one state corresponding to each step, with the state corresponding to the first step as the initial state. Last, we follow the GLM2FSA algorithm to build the transition rules. We present a step-by-step illustrative example in Section 5.1.
### Automated Feedback
Given a set of specifications, we provide two ways of checking whether the controllers constructed from the language model's outputs satisfy each specification. For each output, the method generates feedback consisting of the number or
percentage of specifications being satisfied.
Formal Verification.Formal verification requires an automaton-based model, an automaton-based controller, and a set of logical specifications. So far, we have constructed the model and the controller. The specifications include the expectation of task achievement or safety requirements, represented in temporal logic (e.g., linear temporal logic (Pnueli, 1977)). The temporal logic specifications are logic formulas over propositions \(P\cup P_{A}\). We describe it in detail in the Appendix. These specifications are either provided by the task designer or extracted from existing rule books.
In the verification procedure, we first implement the controller in the model. Mathematically, we define a product automaton \(\mathfrak{P}=\mathcal{M}\otimes\mathcal{C}\) describing the interactions of the controller \(\mathcal{C}\) with the model \(\mathcal{M}\), i.e., how the controller's actions change the model's states and how the model's states affect the controller's decision-making on its next action. Note that the verification procedure implicitly assumes that all the actions can be successfully operated and hence lead to the corresponding states of the controller and the model.
We run a model checker (e.g., NuSMV (Cimatti et al., 2002)) to verify if the product automaton satisfies each specification,
\[\mathcal{M}\otimes\mathcal{C}\models\Phi. \tag{1}\]
We verify the product automaton against each specification for all the possible initial states. If the verification fails, the model checker returns a counter-example. The counter-example is a trace--a sequence of states--that violates the specification. The NuSMV model checker returns the sequence of states from the product automaton along with the output symbols. Mathematically, the traces are in the format \((p_{1},q_{1},c_{1}\cup a_{1}),(p_{2},q_{2},c_{2}\cup a_{2}),...\) where \(p_{i}\in Q_{\mathcal{M}},q_{i}\in Q,c_{i}=\lambda_{\mathcal{M}}(p_{i}),a_{i}\in A\) such that \(\delta(q_{i},\lambda_{\mathcal{M}}(p_{i}),a_{i},q_{i+1})=1\).
We present the definitions of temporal logic and product automaton in the Appendix.
Empirical Evaluation.In some scenarios, obtaining models for autonomous systems may be hard. We propose using empirical evaluation to account for the scenarios where models are not present. Empirical evaluation requires an autonomous system \(\mathcal{S}\), a constructed controller \(\mathcal{C}\), an atomic proposition set \(P\), a set of actions \(P_{A}\), and a grounding method \(\mathbf{G}\). Specifically, \(\mathbf{G}:\mathcal{C}\times\mathcal{S}\rightarrow(2^{P}\times 2^{P_{A}})^{N}\) operates the controller directly in the system and returns a sequence of propositions and actions describing the operation. \(N\) is the max length of the sequence. The sequence is evaluated as follows:
\[\mathbf{G}(\mathcal{C},\mathcal{S})\in(2^{P}\times 2^{P_{A}})^{N},\qquad\mathbf{G}(\mathcal{C},\mathcal{S})\models\Phi. \tag{2}\]
After evaluating every sequence against the specifications, we get the percentage of sequences, \(\mathbb{P}_{\Phi}\), which satisfy each specification:
\[\mathbb{P}_{\Phi}=\frac{\text{number of sequences satisfying }\Phi}{\text{total number of sequences}}.\]
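As a concrete illustration, the sketch below checks a specification of the form \(\square(\text{pedestrian}\rightarrow\lozenge\text{stop})\) over collected traces and computes \(\mathbb{P}_{\Phi}\). The finite-trace reading of the temporal operators and the proposition names are assumptions made for the sketch.

```python
# A trace is a list of sets with the propositions/actions true at each step.
def satisfies_pedestrian_rule(trace) -> bool:
    # G(pedestrian -> F stop): every pedestrian observation is eventually
    # followed (or accompanied) by a stop action within the trace.
    for i, step in enumerate(trace):
        if "pedestrian" in step and not any("stop" in later for later in trace[i:]):
            return False
    return True

def empirical_percentage(traces, check=satisfies_pedestrian_rule) -> float:
    # P_Phi: fraction of collected traces that satisfy the specification.
    return sum(check(t) for t in traces) / len(traces)

traces = [
    [{"pedestrian"}, {"stop"}],          # stops after seeing the pedestrian
    [{"pedestrian"}, {"go straight"}],   # never stops: violates the rule
]
print(empirical_percentage(traces))       # 0.5
```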
### Fine-Tuning via Verification Feedback
Collection of the Language Model's Outputs.Once we select the autonomous system and obtain the model for the system, we can query the model for instructions on tasks that are operable in the system, following the format described in the previous section. Different responses for the same input task can be sampled from the language model. Then, we can rank these responses and fine-tune the language model to output the best response according to the system model.
Ranking the Outputs and Fine-Tuning the Language Model. We apply the verification feedback method for every two responses from the language model associated with the same task prompt to rank the preferences of the two responses. As a result, we obtain a data point \((x,y_{w},y_{l})\), where \(x\) is the input prompt, \(y_{w}\) is the preferred response and \(y_{l}\) is the unpreferred response. For a given set of specifications, we construct a controller from each response and verify it against each of the specifications. The response satisfying more specifications is preferred. If we have collected \(N\) tasks and \(m\) responses per task, then we will have a maximum of \(N\times C_{2}(m)\) data points, where \(C_{i}(j)\) means \(j\) choose \(i\).
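The pairing step can be sketched as follows; the dictionary keys and the tie-handling rule are assumptions made for illustration.

```python
from itertools import combinations

def build_preference_pairs(prompt, responses, num_satisfied):
    # responses: m sampled responses for one task; num_satisfied: their
    # verification scores (number of specifications each controller satisfies).
    pairs = []
    for (r_i, s_i), (r_j, s_j) in combinations(zip(responses, num_satisfied), 2):
        if s_i == s_j:
            continue  # ties give no preference signal
        y_w, y_l = (r_i, r_j) if s_i > s_j else (r_j, r_i)
        pairs.append({"prompt": prompt, "chosen": y_w, "rejected": y_l})
    return pairs  # at most C_2(m) pairs per task
```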
Then, we feed the pairs of responses, along with their prompt, to the DPO algorithm. The DPO algorithm fine-tunes the parameters of the language model accordingly. During fine-tuning, we use low-rank approximation to reduce computational complexity (Hu et al., 2021).
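A sketch of the DPO objective behind this step is given below, written directly from the log-probability ratios. The value of beta is an assumption, and in practice the tuned policy would carry the low-rank (LoRA) adapters mentioned above so that only those weights are updated.

```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Inputs: summed log-probabilities of the preferred (w) and dispreferred (l)
    # responses under the policy being tuned and under the frozen reference model.
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()
```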
## 5 Experimental Results
To validate the proposed method, we apply DPO-AF on Llama2-7B for controlling an autonomous driving system. We first provide a demonstration of how we obtain the verification feedback. Then, we present quantitative results to show the effectiveness of DPO-AF at the mathematical level. Next, we use an autonomous driving simulator, Carla (Dosovitskiy et al., 2017), to show Llama's performance enhancement at the operation level. Lastly, we provide evidence that the generated controller can be transferred to real-life, indicating the applicability of our approach.
### Example Demonstration
Examples of System Modeling. To obtain formal verification feedback for the language model's outputs, we first construct automaton-based models that encode the information of the autonomous driving system. Such information includes the objects from the environment and potential environment dynamics that can be perceived by the autonomous vehicle. Note that the models are externally provided, either from human expertise or system manuals.
Figures 5 and 6 show the automaton-based models encoding the information on a regular traffic light intersection and a wide median (which we present in Figure 4). We construct one model for each scenario in the autonomous driving system. We integrate these models together to form a universal model representing the entire system. In this way, we can later implement the constructed controllers into the model for formal verification. We present models for other scenarios in the autonomous driving system in the Appendix.
Examples of Externally Provided Specifications.For verification purposes, we generate a set of traffic rules in the form of temporal logic. We denote the traffic rules as _specifications_. Some examples from the set of specifications in the temporal logic formula are presented below:
\[\Phi_{1}=\square(\text{pedestrian}\rightarrow(\lozenge\text{stop})),\]
\[\Phi_{2}=\square(\neg\text{turn left}\vee(\neg\text{opposite car}\vee\text{green left-turn light})),\]
\[\Phi_{3}=\square(\neg\text{green traffic light}\rightarrow\neg\text{go straight}),\]
\[\Phi_{4}=\square(\text{stop sign}\rightarrow\lozenge\text{stop}),\]
\[\Phi_{5}=\square(\neg\text{turn right}\vee\neg(\text{car from left}\vee\text{pedestrian at right})).\]
We present the full set of specifications in the Appendix.
From the provided models and specifications, we can extract a set of atomic propositions and a set of actions. We add the English vocabulary from the model's input symbols to the set of atomic propositions. We add any vocabularies from the temporal logic formulas that are not already in the proposition set to the action set. Now, we have obtained a set of atomic propositions and allowable actions from the model and specifications. The propositions include \(\{\) green traffic light, green left-turn light, flashing left-turn light, opposite car, car from left, car from right, pedestrian at left, pedestrian at right, pedestrian in front, stop sign \(\}\), and the actions are \(\{\) stop, turn left, turn right, go straight \(\}\).
Once we have the models representing the autonomous system and a set of specifications, we can construct controllers for various tasks that can operate in the system and verify whether the controllers satisfy the specifications when operating in the system.
Example on Controller Construction: Right Turn.In this example, we use a simple task "turn right at the traffic light" to demonstrate the controller construction and verification. Then, we compare the controllers constructed using the language model prior to and after the fine-tuning stage. Hence we can give a straightforward demonstration of how DPO-AF improves the language model.
We start by constructing a controller using the language model before fine-tuning it. First, we query the language model on the steps of turning right at the traffic light in the following format:
```
Steps for "turn right at traffic light"
1. Look straight ahead and watch for traffic light.
2. If the traffic light turns green, start moving forward.
3. As you approach the intersection, look to your left for oncoming traffic.
4. If there is no traffic coming from your left, check pedestrians on your right.
5. If it is safe, turn your vehicle right.
```

Figure 4: Illustration of two sample scenarios from the autonomous driving system. The left figure is an intersection with the traffic light. We encode this scenario in a model in Figure 5. The right figure is an intersection with a wide median. We encode it in a model in Figure 6.

Figure 5: An automaton-based model representing a vehicle's environment dynamics at a regular traffic signal at an intersection. TL represents "traffic light," and ped represents "pedestrian."

Figure 6: An automaton-based model representing the environment dynamics of a yield-based wide median. \(\sigma_{1}=\text{car}\) from left and \(\sigma_{2}=\text{car}\) from right.
Next, we again query the language model to align the step instructions to the defined propositions and actions:
```
Rephrase the following steps to align to the defined Boolean Propositions (green traffic light, car from left, ...) and Actions (stop, turn left, turn right, go straight):
1. Look straight ahead and watch for traffic light.
...
5. If it is safe, turn your vehicle right.

1. Observe the state of the green traffic light.
2. If the green traffic light is on, execute the action go straight.
3. As you approach the intersection, observe the state of the car from left.
4. If the car from left is not present, check the state of the pedestrian at right.
5. If the pedestrian at right is not present, execute the action turn right.
```
By doing so, we have aligned the vocabularies from the initial responses to the defined propositions and actions. Note that such alignment may also introduce mistakes that violate more specifications, but reducing the probability of an alignment mistake is also a goal for DPO-AF.
Then, we apply semantic parsing to break the sentence into verb phrases and keywords (e.g., if) and then shorten the phrases for presentation purposes:
```
1. <observe traffic light>.
2. <if> <green traffic light>, <go straight>.
3. <observe car from left>.
4. <if> <no car from left>, <check pedestrian at right>.
5. <if> <no pedestrian at right>, <turn right>.
```
Last, we follow the algorithm GLM2FSA (Yang et al., 2022) to construct an FSA representing the steps of this task, as presented in the left of Figure 7. For comparison, querying the fine-tuned language model for the same task yields the following steps, from which we construct the controller shown on the right of Figure 7:
```
Steps for "turn right at traffic light"
1. Observe the traffic light in front of you.
2. Check for the left approaching car and right side pedestrian.
3. If no car from the left is approaching and no pedestrian on the right, proceed to turn right.
```
Example on Formal Verification. We first implement both controllers in the automaton-based model presented in Figure 5, i.e., construct a product automaton for each controller and the model.
Second, we verify both product automata against the set of provided specifications. During the verification step, the model checker finds that the controller obtained before fine-tuning fails the specification \(\Phi_{5}\). The model checker returns a counter-example on states \((p_{0},q_{3}),(p_{4},q_{4}),(p_{1},q_{5})\).
This counter-example captures an edge case: The traffic light turns back to red and a car is coming from the left immediately after the agent is checking or waiting for pedestrians. In this scenario, the agent does not check for the traffic light and cars from left again and directly turns right, which can lead to an accident. We argue that this edge case can hardly be caught by human inspection but can be found by the model checker. Hence we highlight this counter-example to indicate the necessity of formal verification.
In contrast, the controller obtained after fine-tuning satisfies all the specifications. Through this right-turn example, we observe the language model's enhancement through DPO-AF. We present more controller construction and verification examples in the Appendix.
### Quantitative Evaluation
Fine-tuning via DPO.DPO fine-tunes a language model to output responses that match desired specifications. DPO requires a data set where each data point has the form \((x,y_{w},y_{l})\), where \(x\) is a user input (i.e., "Steps for turn right at the traffic light"), \(y_{w}\) and \(y_{l}\) are the language model's text responses such that the user prefers \(y_{w}\) over \(y_{l}\). In our experiments, the preferred response is the one whose FSA-based representation satisfies more of the specifications than the other response. We collect approximately 3000 data points to fine-tune the language model. After fine-tuning,
Figure 7: Automaton-based controllers for the task “turn right at the traffic light.” The left controller is obtained before fine-tuning the language model, and the right controller is obtained after the fine-tuning. TL represents “traffic light.”
the language model shows a preference for the responses as indicated in the training dataset.
We measure the DPO training performance via three metrics: DPO loss, accuracy, and marginal preference. Loss refers to the modified maximum likelihood loss function from the DPO algorithm, which is minimized via gradient descent. Accuracy measures how often the model prefers the correct response over the incorrect response. Accuracy is the mean over the dataset of \(\mathbb{I}(P(y_{w}|x,\theta)>P(y_{l}|x,\theta))\), where \(\mathbb{I}\) is the indicator function returning one if the input is true and zero otherwise, and \(\theta\) is the current values of the model parameters. Marginal preference measures how strongly the model prefers the correct output compared to the original reference model. Marginal preference is calculated as the mean over the dataset of \((log(P(y_{w}|x,\theta))-log(P(y_{w}|x,\theta_{ref})))-(log(P(y_{l}|x,\theta))- log(P(y_{l}|x,\theta_{ref})))\). Zero indicates indifference, positive values indicate stronger preferences for the favored answer, and negative values indicate preference for the less preferred response.
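For reference, the two diagnostic quantities just defined can be written out as below for a batch of preference pairs held as PyTorch tensors of summed log-probabilities; the tensor layout is an assumption made for the sketch.

```python
def dpo_accuracy(logp_w, logp_l):
    # Mean of I(P(y_w | x, theta) > P(y_l | x, theta)) over the batch.
    return (logp_w > logp_l).float().mean()

def marginal_preference(logp_w, logp_l, ref_logp_w, ref_logp_l):
    # Mean of (log pi(y_w) - log pi_ref(y_w)) - (log pi(y_l) - log pi_ref(y_l)).
    return ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)).mean()
```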
We show the fine-tuning performance on the Llama2-7B model over the three metrics in Figure 8. Note that the variance between random seeds is relatively small because the model starts with the same parameters, and only the order of the data changes between seeds.
Evaluation via Formal Verification.We provide an additional metric to evaluate the proposed DPO-AF. During the fine-tuning procedure, we save a checkpoint language model for every 20 epochs. For each checkpoint language model, we query it for various autonomous driving tasks and obtain the task controllers. Then, we verify the controllers against 15 provided specifications (presented in the Appendix) following the formal verification method in Section 4.2. Thus, we obtain the number of specifications being satisfied for each controller.
Figure 9 shows the relationship between the number of satisfied specifications and the number of epochs of DPO training. Simultaneously, we divide the results into two categories--training and validation--depending on whether the task is included in the training dataset. Hence, we have shown the relationships between the numbers of satisfied specifications and epochs for both training data and validation data.
For both training and validation data, we observe an increase in the number of specifications satisfied as we fine-tune for more epochs. This result indicates that our approach can improve the language model's ability to satisfy critical requirements. Therefore, our approach can act as a starting point to guide the design process for real-world implementations of autonomous driving systems.
Justification for Overfitting.We design the method DPO-AF to fine-tune language models for solving domain-specific tasks rather than enhancing the language model in general. Therefore, we do expect some degree of overfitting on the language model to the domain-specific knowledge and vocabulary. In our experiments, we fine-tune the language model specifically for tasks operated in autonomous driving systems. A certain degree of overfitting provides stronger guarantees that the generated outcomes satisfy critical specifications.
Empirical Evaluation in a Simulated System.We have presented another approach to obtain feedback via empirical evaluation in Section 4.2. We will show consistency between feedback from empirical evaluation and formal verification.
As we obtain the controllers through the proposed method, we operate the controllers in the Carla simulator to collect operation data. Carla is a simulator for the autonomous driving system. During each operation of each controller, we
Figure 8: This figure shows fine-tuning statistics for Llama2-7B optimized for an autonomous driving system. All plots show the mean over five seeds. Shaded areas indicate maximum and minimum values. Plots from left to right show the DPO losses, accuracies, and marginal preferences over different epochs, respectively.
obtain a sequence of propositions and actions--in the form of \((2^{P}\times 2^{P_{A}})^{N}\). The propositions come from the information returned by the autonomous system, and the actions come from the controller. The Carla simulator allows for the extraction of system information. We present visual demonstrations of extracting the propositions from the system in Figure 10. Then, we verify the sequence against the provided specifications, as we described in Section 4.2 under Empirical Evaluation. We operate the controllers multiple times in the system and verify the sequences against the specifications. For each specification, we get a percentage of the number of sequences satisfying this specification.
Figure 11 compares these percentages obtained before fine-tuning and after fine-tuning. Note that we run multiple controllers and collect multiple sequences for each controller. We show the results for the first five specifications as presented in Section 5.1.
We observe that the percentages after fine-tuning are consistently higher than before fine-tuning among all five specifications, which means all the specifications have a higher probability of being satisfied for a given execution after fine-tuning. In Figure 9, we show that outputs from the fine-tuned model (at epoch 200) satisfy more specifications compared to the pre-trained model (at epoch 0). Hence, we obtain consistent feedback from the formal verification and empirical evaluation. Therefore, if we are unable to obtain automaton-based models for the system, empirical evaluation is a substitute for formal verification and is able to provide feedback consistent with formal verification.
From another perspective, this result provides additional evidence to show the effectiveness of the method DPO-AF, as it improves the probability of all the specifications being satisfied during operation.
### Grounding Controller to Real-World System
We have the verified controllers constructed from the language model's outputs. In this section, we will show the real-world applicability of the generated controllers.
Note that the controllers make decisions solely based on the input symbols, which are composed of atomic propositions that describe environment observations. Furthermore, our models of the autonomous driving system enforce these propositions to be visual observations of the driving environment. The following statements hold true:
1. The controller's decisions are solely based on visual observations collected from the environment.
2. Suppose we have a vision model to perceive the environment. Consider a scenario wherein the vision model performs consistently in simulation and reality, e.g., object detection accuracies are approximately equal. The controllers behave consistently in simulation and reality in such a scenario.
3. If Statement 2 holds and if the controllers satisfy the critical specifications in simulation, then the controllers also satisfy the specifications in reality.
From the three statements, we know that if we show the vision model performs consistently in the simulation and reality, we can directly transfer the controllers to the real
Figure 11: Percentage \(\mathbb{P}_{\Phi}\) of each specification \(\Phi\) being satisfied during actual operations in the system.
Figure 10: Visual demonstration of obtaining system information while operating the controllers. We use Carla to simulate the autonomous driving system.
Figure 9: The number of specifications satisfied through formal verification vs. the epoch of DPO training.
world system with the same degree of formal guarantee (e.g., safety). To do so, we select a current state-of-the-art object detection model--Grounded SAM (Kirillov et al., 2023; Liu et al., 2023)--to examine whether it performs consistently in both settings.
We extract images from the Carla simulator to build a simulation dataset and use NuImage (Caesar et al., 2020) as the real-world driving dataset. Then, we apply the Grounded SAM to detect objects within the images from the two datasets and record the detection accuracies. We present some sample detection results in Figure 13. Next, we group the detection accuracies by the vision model's confidence score and obtain the confidence-accuracy mapping. This step follows the confidence calibration method proposed in an existing paper (Yang et al., 2023). We argue that if the vision model's detection accuracies in both settings are approximately the same under all the confidence levels, then we say it performs consistently. We show the confidence-accuracy mappings in Figure 12.
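The confidence-accuracy mapping can be sketched as below. The bin width and the notion of a correct detection (e.g., an IoU threshold against ground truth) are assumptions made for illustration, following the calibration idea referenced above.

```python
import numpy as np

def confidence_accuracy_curve(confidences, correct, n_bins=10):
    # confidences: detector scores in [0, 1]; correct: 1 if the detection matches
    # a ground-truth object (e.g., IoU above a threshold), else 0.
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    accuracies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        accuracies.append(correct[mask].mean() if mask.any() else np.nan)
    return edges, np.array(accuracies)

# If the detector behaves consistently, the curves computed on Carla images and
# on the real-world driving images should approximately coincide bin by bin.
```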
Through experimental results, we show that the vision model performs consistently in the simulation and reality. Therefore, we can directly transfer the controllers constructed from the language model to real-world driving systems. This real-world applicability emphasizes the necessity and importance of the proposed approach because DPO-AF can fine-tune the language model to satisfy safety specifications, which are critical in real-world driving systems.
## 6 Conclusions
We develop a method of fine-tuning pre-trained language models via automated feedback for domain-specific tasks, such as control tasks in autonomous systems. The method converts the outputs from the pre-trained language model to automaton-based controllers. Then, it verifies how many of the externally provided specifications are satisfied by each controller. We rank the pre-trained language model's outputs by the number of satisfied specifications and feed these ranked outputs to the DPO algorithm for fine-tuning. We substitute human feedback with automated feedback using formal methods, which significantly decreases labor intensity.
We provide empirical evidence on a simulated autonomous driving system to demonstrate the effectiveness of the proposed method: the fine-tuned language model satisfies more specifications than the model before fine-tuning. We additionally show the potential real-world applicability of the controllers, which justifies the necessity of requiring the language model to produce controllers that satisfy safety-critical specifications.
|
2304.05491 | Model Selection for independent not identically distributed observations
based on Rényi's pseudodistances | Model selection criteria are rules used to select the best statistical model
among a set of candidate models, striking a trade-off between goodness of fit
and model complexity. Most popular model selection criteria measure the
goodness of fit through the model log-likelihood function, yielding
non-robust criteria. This paper presents a new family of robust model selection
criteria for independent but not identically distributed observations
(i.n.i.d.o.) based on the R\'enyi's pseudodistance (RP). The RP-based model
selection criterion is indexed with a tuning parameter $\alpha$ controlling the
trade-off between efficiency and robustness. Some theoretical results about the
RP criterion are derived and the theory is applied to the multiple linear
regression model, obtaining explicit expressions of the model selection
criterion. Moreover, restricted models are considered and explicit expressions
under the multiple linear regression model with nested models are accordingly
derived. Finally, a simulation study empirically illustrates the robustness
advantage of the method. | Angel Felipe, Maria Jaenada, Pedro Miranda, Leandro Pardo | 2023-04-11T20:53:46Z | http://arxiv.org/abs/2304.05491v1 | Model Selection for independent not identically distributed observations based on Renyi's pseudodistances
###### Abstract
Model selection criteria are rules used to select the best statistical model among a set of candidate models, striking a trade-off between goodness of fit and model complexity. Most popular model selection criteria measure the goodness of fit through the model log-likelihood function, yielding non-robust criteria. This paper presents a new family of robust model selection criteria for independent but not identically distributed observations (i.n.i.d.o.) based on the Renyi pseudodistance (RP). The RP-based model selection criterion is indexed with a tuning parameter \(\alpha\) controlling the trade-off between efficiency and robustness. Some theoretical results about the RP criterion are derived and the theory is applied to the multiple linear regression model, obtaining explicit expressions of the model selection criterion. Moreover, restricted models are considered and explicit expressions under the multiple linear regression model with nested models are accordingly derived. Finally, a simulation study empirically illustrates the robustness advantage of the method.
_Keywords:_ Renyi's pseudodistance, robustness, restricted model, multiple linear regression model.
## 1 Introduction
Consider a set of real-life observations coming from an unknown distribution to be statistically modeled. Different candidate models may be assumed to fit the data, and so a natural question arises as to how to choose the model that best fits the data. If the assumed model is too simple, with a small number of parameters, it may not capture some important patterns and relationships in the data. In contrast, if the assumed model is too complex, with a large number of parameters, the estimated model parameters may over-fit the observed data (including possible sample noise), resulting in poor performance when the model is applied to new data. A model selection criterion is a rule used to select a statistical model among a set of candidates based on the observed data. It defines an objective criterion function quantifying the compromise
between goodness of fit and model complexity, typically measured through an expected dissimilarity or divergence. Then, the dissimilarity measure needs to be minimized to select the model with the best trade-off. In other words, model selection criteria rely on a measure of closeness between a candidate model and the true model (i.e., the probability distribution generating the data).
The Akaike information criterion (AIC) is one of the most widely known and used model selection criteria in statistical practice. It was developed by Akaike [1, 2] as the first model selection criterion in the statistical literature. The AIC estimates the expected Kullback-Leibler divergence [20] between the true model underlying the data and a fitted candidate model, and selects the model with minimum AIC. Of course, the true model underlying the data is generally unknown and so an empirical estimate obtained from the observed data is used.
Following ideas similar to the AIC, several other model selection criteria have been proposed in the literature. For example, Schwarz in [24] developed the "Bayesian information criterion" (BIC), which imposes a stronger penalty for model complexity than AIC. Also derived from AIC, Hurvich and Tsai [13, 14, 15] studied the bias problem of the AIC and corrected it with a new criterion called the "Corrected Akaike information criterion" (AIC\({}_{C}\)). This criterion tries to cope with the fact that the AIC is only asymptotically unbiased and hence the bias may be important when the sample size is not large enough and the number of parameters is large. Indeed, for small sample sizes the AIC tends to overfit the observed data. Konishi and Kitagawa [19] extended the framework in which the AIC was developed to a general setting, allowing estimation methods other than maximum likelihood to fit the assumed candidate model. The resulting model selection criterion was called the "generalized information criterion" (GIC). The penalty term of the GIC reduces to that of the "Takeuchi information criterion" (TIC) developed by Takeuchi in [25] when the fitting method is maximum likelihood. Finally, Bozdogan [5] proposed another variant of AIC, called CAIC, that corrected its lack of consistency. Interesting surveys about model selection criteria can be found in [23, 9].
Most of the previous procedures measure this closeness in terms of the Kullback-Leibler divergence. However, some other divergence measures have been explored, yielding methods with better robustness properties. For example, [22] considered the density power divergence (DPD) [3] to define a robust model selection criterion. Similarly, Toma et al. [26] introduced another robust criterion for model selection based on the Renyi pseudodistance (RP) [18].
All the previous criteria assume that the observations are independent and identically distributed. A new problem appears if the observations are independent but not identically distributed (i.n.i.d.o.). In this context, Kurata and Hamada [21] considered a criterion based on DPD, extending the theory of [22]. The main purpose of this paper is to introduce a new robust model selection criterion in the context of i.n.i.d.o. based on RP, thus extending the methods of [26].
The rest of the paper is organized as follows. In Section 2 we introduce the RP for i.n.i.d.o. and present some theoretical results needed in subsequent sections.
The criterion based on RP is considered in Section 3 and an application to the multiple linear regression model (MLRM) is presented. Section 4 studies the restricted case, where some additional conditions on the parameter space are imposed. The corresponding explicit expressions for the MLRM, comparing a model with many parameters to another with a reduced number of parameters, are derived. In Section 5 a simulation study illustrates the robustness of the proposed criterion and compares it with other model selection criteria. Section 6 deals with a real data example. Some final conclusions are presented in Section 7.
## 2 Renyi's pseudodistance for independent but not identically distributed observations
Let \(Y_{1},...,Y_{n}\) be i.n.i.d.o. observations, where each \(Y_{i}\) has true probability distribution function \(G_{i},i=1,...,n,\) and probability density function \(g_{i},i=1,...,n,\) respectively. For inferential purposes, it is assumed that the true density function \(g_{i}\) could belong to a parametric family of densities, \(f_{i}(y,\boldsymbol{\theta}),i=1,...,n,\) with \(\boldsymbol{\theta}\in\Theta\subset\mathbb{R}^{p}\) a common model parameter for all the density functions. In the following, we shall denote by \(F_{i}(y,\boldsymbol{\theta})\) the distribution function associated to the density function \(f_{i}(y,\boldsymbol{\theta}),i=1,...,n.\)
The value of \(\boldsymbol{\theta}\) that best fits the original distributions \(g_{1},...,g_{n}\) would naturally minimize some kind of distance between the true and assumed densities, \((g_{1}(y),...,g_{n}(y))\) and \((f_{1}(y,\boldsymbol{\theta}),...,f_{n}(y,\boldsymbol{\theta})).\) Here, we will use the family of RP divergence measures defined in [18] as a measure of closeness between both sets of densities.
**Definition 1**: _Consider \(f(\cdot,\boldsymbol{\theta}),g(\cdot)\) two probability density functions. The_ **Renyi's pseudodistance** _(RP) between \(f\) and \(g\) of tuning parameter \(\alpha>0\) is defined by_
\[\begin{split} R_{\alpha}\left(f(\cdot,\boldsymbol{\theta}),g( \cdot)\right)=&\frac{1}{\alpha+1}\log\left(\int f(y,\boldsymbol{ \theta})^{\alpha+1}dy\right)-\frac{1}{\alpha}\log\left(\int f(y,\boldsymbol{ \theta})^{\alpha}g(y)dy\right)\\ &+\frac{1}{\alpha\left(\alpha+1\right)}\log\left(\int g(y)^{ \alpha+1}dy\right).\end{split} \tag{1}\]
The tuning parameter \(\alpha\) controls the trade-off between efficiency and robustness. For small values of \(\alpha\) (in the limit, \(\alpha=0\)), the corresponding procedures are more efficient but less robust. On the other hand, for large values of \(\alpha,\) the procedures gain robustness at the cost of some efficiency.
The RP divergence defined in Eq. (1) is always non-negative and it equals zero only when both densities coincide. Then, the best model parameter value approximating the underlying distribution would naturally minimize Eq. (1) in \(\boldsymbol{\theta}\in\boldsymbol{\Theta}.\) Indeed, if the true distribution \(g\) belongs to the assumed parametric
model with true parameter \(\mathbf{\theta}_{0},\) the global minimizer of the RP is necessarily \(\mathbf{\theta}=\mathbf{\theta}_{0}.\)
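As a purely illustrative numerical check of Definition 1 (not code from the paper), the RP of Eq. (1) can be evaluated for univariate densities by numerical integration; the normal example below and the use of scipy.integrate.quad are assumptions made only for this sketch.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def renyi_pseudodistance(f, g, alpha):
    """R_alpha(f, g) of Eq. (1) for alpha > 0, for univariate densities f and g."""
    I1, _ = quad(lambda y: f(y) ** (alpha + 1), -np.inf, np.inf)
    I2, _ = quad(lambda y: f(y) ** alpha * g(y), -np.inf, np.inf)
    I3, _ = quad(lambda y: g(y) ** (alpha + 1), -np.inf, np.inf)
    return (np.log(I1) / (alpha + 1)
            - np.log(I2) / alpha
            + np.log(I3) / (alpha * (alpha + 1)))

f = norm(loc=0.0, scale=1.0).pdf
g = norm(loc=0.5, scale=1.0).pdf
print(renyi_pseudodistance(f, g, alpha=0.3))  # strictly positive
print(renyi_pseudodistance(f, f, alpha=0.3))  # approximately zero
```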
At \(\alpha=0,\) the corresponding **Renyi's pseudodistance** between \(f\) and \(g\) can be defined by taking the limit as \(\alpha\) tends to zero, as follows
\[R_{0}\left(f(\cdot,\mathbf{\theta}),g(\cdot)\right) = \lim_{\alpha\downarrow 0}R_{\alpha}\left(f(y,\mathbf{\theta}),g(y)\right)=\int g(y)\log\frac{g(y)}{f(y,\mathbf{\theta})}dy \tag{2}\] \[= \int g(y)\log g(y)dy-\int g(y)\log f(y,\mathbf{\theta})dy.\]
Hence, \(R_{0}\left(f(\cdot,\mathbf{\theta}),g(\cdot)\right)\) coincides with the Kullback-Leibler divergence measure between \(g\) and \(f\). The RP have been applied in many different statistical models with very promising results in terms of robustness with a small loss of efficiency. For example, [12] considered the RP divergence under the name of \(\gamma\)-cross entropy. Additionally, Toma and Leoni-Auban [27] defined new robust and efficient measures based on RP. In [7], Wald-type tests based on RP were developed in the context of MLRM, and were extended later in [17] for the generalized multiple regression model. Moreover, in [17] a robust approach for comparing two dependent normal populations via a Wald-type test based on RP was carried out. In [16] the restricted MRPE was considered and their asymptotic properties studied; moreover, an application to Rao-type tests based on the restricted RP was there developed.
Note that the last term in Eq. (1) does not depend on \(\mathbf{\theta}.\) Hence, the minimizer of the RP measure can be obtained, for \(\alpha>0,\) by minimizing the surrogate function
\[\frac{1}{\alpha+1}\log\left(\int f(y,\boldsymbol{\theta})^{\alpha+1}dy\right)-\frac{1}{\alpha}\log\left(\int f(y,\boldsymbol{\theta})^{\alpha}g(y)dy\right). \tag{3}\]
The above expression can be rewritten using logarithm properties as
\[-\frac{1}{\alpha}\log\frac{\int f(y,\mathbf{\theta})^{\alpha}g(y)dy}{\left(\int f (y,\mathbf{\theta})^{\alpha+1}dy\right)^{\frac{\alpha}{\alpha+1}}},\]
and thus minimizing \(R_{\alpha}(f(\cdot,\mathbf{\theta}),g(\cdot))\) in \(\mathbf{\theta},\) for \(\alpha>0,\) is equivalent to minimize
\[V_{\alpha}^{\ast}\left(\mathbf{\theta}\right)=-\frac{\int f(y,\mathbf{\theta})^{ \alpha}g(y)dy}{\left(\int f(y,\mathbf{\theta})^{\alpha+1}dy\right)^{\frac{\alpha} {\alpha+1}}}. \tag{4}\]
Similarly, for \(\alpha=0,\) we have that the first term in Eq. (2) does not depend on \(\mathbf{\theta}\) and hence, minimizing \(R_{0}\left(f(\cdot,\mathbf{\theta}),g(\cdot)\right)\) is equivalent to minimizing
\[V_{0}^{\ast}\left(\mathbf{\theta}\right)=-\int g(y)\log f(y,\mathbf{\theta})dy. \tag{5}\]
However, now Expression (4) does not tend to Expression (5) when \(\alpha\to 0.\) In order to recover such convergence, and then extend the classical results based
on Kullback-Leibler divergence, we slightly modify Expression (4) as
\[V_{\alpha}\left(\boldsymbol{\theta}\right)=-\frac{\int f(y,\boldsymbol{\theta})^{ \alpha}g(y)dy}{\alpha\left(\int f(y,\boldsymbol{\theta})^{\alpha+1}dy\right)^ {\frac{\alpha}{\alpha+1}}}+\frac{1}{\alpha}, \tag{6}\]
where the value of \(\boldsymbol{\theta}\) minimizing (4) is the same as that minimizing (6). The next lemma proves the required convergence of the objective functions.
**Lemma 2**: _For any two density functions \(f(\cdot,\boldsymbol{\theta})\) and \(g(\cdot),\) the following convergence holds_

\[\lim_{\alpha\to 0}V_{\alpha}(\boldsymbol{\theta})=V_{0}^{\ast}(\boldsymbol{\theta}).\]
**Proof.** First, note that
\[\lim_{\alpha\to 0}\left(-\frac{\int f(y,\boldsymbol{\theta})^{\alpha}g(y)dy}{ \alpha\left(\int f(y,\boldsymbol{\theta})^{\alpha+1}dy\right)^{\frac{\alpha}{ \alpha+1}}}+\frac{1}{\alpha}\right) \tag{7}\]
leads to an indeterminate \((0/0)\). Let us denote
\[z(\alpha)=\left(\int f(y,\boldsymbol{\theta})^{\alpha+1}dy\right)^{\frac{ \alpha}{\alpha+1}}.\]
Taking derivatives on its logarithm
\[\log z(\alpha)=\frac{\alpha}{\alpha+1}\log\left(\int f(y,\boldsymbol{\theta} )^{\alpha+1}dy\right),\]
we obtain, after some algebra, that \(\frac{\partial\log z(\alpha)}{\partial\alpha}=\frac{1}{z(\alpha)}\frac{ \partial z(\alpha)}{\partial\alpha}.\) On the other hand, the derivative of the function \(\log z(\alpha)\) is given by
\[\frac{\partial\log z(\alpha)}{\partial\alpha}=\frac{1}{(\alpha+1)^{2}}\log \left(\int f(y,\boldsymbol{\theta})^{\alpha+1}dy\right)+\frac{\alpha}{\alpha+ 1}\frac{\left(\int f(y,\boldsymbol{\theta})^{\alpha+1}\log f(y,\boldsymbol{ \theta})dy\right)}{\left(\int f(y,\boldsymbol{\theta})^{\alpha+1}dy\right)},\]
and solving the above equation we have that
\[\frac{\partial z(\alpha)}{\partial\alpha}= \left[\frac{1}{(\alpha+1)^{2}}\log\left(\int f(y,\boldsymbol{ \theta})^{\alpha+1}dy\right)+\frac{\alpha}{\alpha+1}\frac{\left(\int f(y, \boldsymbol{\theta})^{\alpha+1}\log f(y,\boldsymbol{\theta})dy\right)}{\left( \int f(y,\boldsymbol{\theta})^{\alpha+1}dy\right)}\right]\] \[\times\left(\int f(y,\boldsymbol{\theta})^{\alpha+1}dy\right)^{ \frac{\alpha}{\alpha+1}}.\]
Hence, applying L'Hôpital's rule in (7), we obtain that

\[\lim_{\alpha\to 0}-\frac{\int f(y,\boldsymbol{\theta})^{\alpha}g(y)dy}{\alpha\left(\int f(y,\boldsymbol{\theta})^{\alpha+1}dy\right)^{\frac{\alpha}{\alpha+1}}}+\frac{1}{\alpha}=\lim_{\alpha\to 0}\frac{-\int f(y,\boldsymbol{\theta})^{\alpha}g(y)\log f(y,\boldsymbol{\theta})dy+\frac{\partial z(\alpha)}{\partial\alpha}}{z(\alpha)+\alpha\frac{\partial z(\alpha)}{\partial\alpha}}.\]
Finally,
* \(\lim_{\alpha\to 0}\int f(y,\mathbf{\theta})^{\alpha}g(y)\log f(y,\mathbf{\theta})dy=\int g(y) \log f(y,\mathbf{\theta})dy.\)
* \(\lim_{\alpha\to 0}\frac{\partial z(\alpha)}{\partial\alpha}=\frac{1}{1}\log 1+ \frac{0}{1}\frac{\int f(y,\mathbf{\theta})\log f(y,\mathbf{\theta})dy}{1}=0.\)
* \(\lim_{\alpha\to 0}z=1^{0}=1.\)
Hence, the result holds.
Now, let us denote \(V_{i,\alpha}(\mathbf{\theta})\) the corresponding objective functions for each pair of distributions \((f_{i}(y,\mathbf{\theta}),g_{i}(y)),i=1,...,n,\) as given in (6). As all densities \(f_{i}(y,\mathbf{\theta})\) share a common parameter, the model parameter that best approximates the different underlying densities should minimize the weighted objective function, giving equal weighting to all functions \(V_{i,\alpha}(\mathbf{\theta}).\) Hence, we consider
\[H_{\alpha}(\mathbf{\theta})=\frac{1}{n}\sum_{i=1}^{n}V_{i,\alpha}(\mathbf{\theta})= \frac{1}{n}\sum_{i=1}^{n}\left[-\frac{\int f_{i}(y,\mathbf{\theta})^{\alpha}g_{i} (y)dy}{\alpha\left(\int f_{i}(y,\mathbf{\theta})^{\alpha+1}dy\right)^{\frac{ \alpha}{\alpha+1}}}+\frac{1}{\alpha}\right]. \tag{8}\]
**Definition 3**: _Consider \((g_{1}(y),...,g_{n}(y))\) and \((f_{1}(y,\mathbf{\theta}),...,f_{n}(y,\mathbf{\theta})),\)\(n\) pairs of true and assumed densities for i.n.i.d.o. random variables \(Y_{i},i=1,...,n.\) For any \(\alpha\geq 0,\) the value \(\mathbf{\theta}_{\mathbf{g},\alpha}\) satisfying_
\[\mathbf{\theta}_{\mathbf{g},\alpha}=\arg\min_{\mathbf{\theta}}\frac{1}{n}\sum_{i=1}^{n} \left[-\frac{\int f_{i}(y,\mathbf{\theta})^{\alpha}g_{i}(y)dy}{\alpha\left(\int f _{i}(y,\mathbf{\theta})^{\alpha+1}dy\right)^{\frac{\alpha}{\alpha+1}}}+\frac{1}{ \alpha}\right]=\arg\min_{\mathbf{\theta}}\frac{1}{n}\sum_{i=1}^{n}V_{i,\alpha}(\bm {\theta}).\]
_is called the_ **best-fitting parameter according to RP**_._
In the following we shall assume that there exists an open subset \(\mathbf{\Theta_{0}}\subset\mathbf{\Theta}\) that contains the best-fitting parameter \(\mathbf{\theta}_{\mathbf{g},\alpha}.\)
For any fixed \(i=1,...,n,\) the true distribution \(g_{i}\) of the random variable \(Y_{i}\) is usually unknown in practice and thus \(\mathbf{\theta}_{\mathbf{g},\alpha}\) must be empirically estimated. As we only have one observation of each variable \(Y_{i},\) the best way to estimate \(g_{i}\) based on the observation \(y_{i}\) is to assume that the distribution is degenerate at \(y_{i}.\) We will denote this degenerate distribution by \(\widehat{g}_{i}.\) Therefore, the empirical estimate of the RP divergence with \(\alpha>0,\) given in Eq. (1), is
\[R_{\alpha}\left(f_{i}(Y_{i},\mathbf{\theta}),\widehat{g}_{i}\right)=\frac{1}{ \alpha+1}\log\left(\int f_{i}(y,\mathbf{\theta})^{\alpha+1}dy\right)-\frac{1}{ \alpha}\log f_{i}(Y_{i},\mathbf{\theta})^{\alpha}+k, \tag{9}\]
and similarly the empirical estimate of the RP for \(\alpha=0,\) stated in (2), yields to
\[R_{0}\left(f_{i}(Y_{i},\mathbf{\theta}),\widehat{g}_{i}\right)=-\log f_{i}(Y_{i}, \mathbf{\theta})+k, \tag{10}\]
where \(k\) in (9) and (10) denotes a constant that does not depend on \(\mathbf{\theta}.\) As discussed earlier, the best estimator of the model parameter \(\mathbf{\theta},\) based on the RP divergence should minimize its empirical estimate. But again, minimizing the estimated RP, \(R_{\alpha}\left(f_{i}(Y_{i},\mathbf{\theta}),\widehat{g}_{i}\right),\) for \(\alpha>0,\) is equivalent to minimizing
\[\widehat{V}_{i,\alpha}\left(Y_{i},\mathbf{\theta}\right)=-\frac{f_{i}(Y_{i},\mathbf{ \theta})^{\alpha}}{\alpha\left(\int f_{i}(y,\mathbf{\theta})^{\alpha+1}dy\right)^{ \frac{\alpha}{\alpha+1}}}+\frac{1}{\alpha}. \tag{11}\]
and, for \(\alpha=0,\) we can proceed in the same way and conclude that minimizing \(R_{0}\left(f_{i}(Y_{i},\mathbf{\theta}),\widehat{g}_{i}\right)\) in \(\mathbf{\theta}\) is equivalent to minimizing

\[\widehat{V}_{i,0}\left(Y_{i},\mathbf{\theta}\right)=-\log f_{i}(Y_{i},\mathbf{\theta}). \tag{12}\]
Now, all the available information about the true value of the parameter comes from the observed data, and so, to obtain the best estimate fitting all the observations jointly, we consider the weighted objective function given, for \(\alpha>0,\) by
\[\begin{split} H_{n,\alpha}(\mathbf{\theta})=&\frac{1} {n}\sum\limits_{i=1}^{n}\left[-\frac{f_{i}(Y_{i},\mathbf{\theta})^{\alpha}}{\alpha L _{\alpha}^{i}\left(\mathbf{\theta}\right)}+\frac{1}{\alpha}\right]\\ =&\frac{1}{n}\sum\limits_{i=1}^{n}\widehat{V}_{i, \alpha}(Y_{i},\mathbf{\theta}).\end{split} \tag{13}\]
with
\[L_{\alpha}^{i}\left(\mathbf{\theta}\right)=\left(\int f_{i}(y,\mathbf{\theta})^{ \alpha+1}dy\right)^{\frac{\alpha}{\alpha+1}},\]
and correspondingly,
\[H_{n,0}(\mathbf{\theta})=\lim\limits_{\alpha\to 0}H_{n,\alpha}(\mathbf{\theta})= \frac{1}{n}\sum\limits_{i=1}^{n}\widehat{V}_{i,0}(Y_{i},\mathbf{\theta}). \tag{14}\]
Note at this point that the expected values of these estimates are indeed the theoretical objective functions
\[V_{i,\alpha}(\mathbf{\theta})=E_{Y_{i}}\left[\widehat{V}_{i,\alpha}(Y_{i},\mathbf{ \theta})\right],\quad H_{\alpha}(\mathbf{\theta})=E_{Y_{1},...,Y_{n}}\left[H_{n, \alpha}(\mathbf{\theta})\right].\]
**Definition 4**: _Let \(Y_{1},...,Y_{n}\) be i.n.i.d.o. and \(\alpha\geq 0.\) The_ **minimum RP estimator (MRPE)**_, \(\widehat{\mathbf{\theta}}_{\alpha},\) is given by_
\[\widehat{\mathbf{\theta}}_{\alpha}=\arg\min\limits_{\mathbf{\theta}\in\mathbf{\Theta}}H_ {n,\alpha}(\mathbf{\theta}), \tag{15}\]
_with \(H_{n,\alpha}(\mathbf{\theta})\) defined in (13) for \(\alpha>0\) and in (14) for \(\alpha=0.\)_
Note that at \(\alpha=0,\) we recover the maximum likelihood estimator (MLE) of the model and so the MRPE family includes the classical estimator as a particular case.
As the MRPE, \(\widehat{\mathbf{\theta}}_{\alpha},\) is a minimum of a differentiable function, it must annul the first derivatives of the function \(H_{n,\alpha}(\mathbf{\theta})\)
\[\frac{1}{n}\sum\limits_{i=1}^{n}\frac{\partial\widehat{V}_{i,\alpha}(Y_{i}; \mathbf{\theta})}{\partial\theta_{j}}=0,\ \ \ j=1,...,p.\]
That is, the estimation equations of the MRPE are
\[\frac{1}{n}\sum\limits_{i=1}^{n}\frac{1}{\alpha L_{\alpha}^{i}\left(\mathbf{ \theta}\right)^{2}}\left(\alpha f_{i}(Y_{i},\mathbf{\theta})^{\alpha}u_{j}(Y_{i}, \mathbf{\theta})L_{\alpha}^{i}\left(\mathbf{\theta}\right)-\frac{\partial L_{\alpha}^ {i}\left(\mathbf{\theta}\right)}{\partial\theta_{j}}f_{i}(Y_{i},\mathbf{\theta})^{ \alpha}\right)=0,\ \ \ j=1,...,p,\]
with
\[u_{j}(y,\boldsymbol{\theta})=\frac{\partial\log(f_{i}(y,\boldsymbol{\theta}))}{ \partial\theta_{j}},\]
and
\[\frac{\partial L_{\alpha}^{i}\left(\boldsymbol{\theta}\right)}{ \partial\theta_{j}} =\frac{\alpha}{\alpha+1}\left(\int f_{i}(y,\boldsymbol{\theta})^{ \alpha+1}dy\right)^{\frac{\alpha}{\alpha+1}-1}(\alpha+1)\int f_{i}(y, \boldsymbol{\theta})^{\alpha+1}u_{j}(y,\boldsymbol{\theta})dy\] \[=\alpha\left(\int f_{i}(y,\boldsymbol{\theta})^{\alpha+1}dy \right)^{\frac{\alpha}{\alpha+1}-1}\int f_{i}(y,\boldsymbol{\theta})^{\alpha+ 1}u_{j}(y,\boldsymbol{\theta})dy,i=1,...,n.\]
It is interesting to observe that if \(Y_{1},...,Y_{n}\) are independent and identically distributed (i.i.d.) random variables, the MRPE \(\widehat{\boldsymbol{\theta}}_{\alpha}\) coincides with the estimator \(\widehat{\boldsymbol{\theta}}_{\alpha}^{*}\) proposed in [6].
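As a minimal computational sketch of Definition 4 (restricted, for concreteness, to the i.i.d. normal case just mentioned, for which \(L_{\alpha}^{i}\left(\mathbf{\theta}\right)\) has a closed form), the MRPE can be obtained by numerically minimizing the empirical objective \(H_{n,\alpha}\) of Eq. (13); the function name, optimizer, and starting values are illustrative choices, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def mrpe_normal(y, alpha):
    """MRPE of (mu, sigma) for an i.i.d. N(mu, sigma^2) sample, alpha > 0,
    obtained by minimising the empirical objective H_{n,alpha} of Eq. (13)."""
    y = np.asarray(y, dtype=float)

    def H(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)  # reparametrise so that sigma > 0
        # L_alpha = (int f^{alpha+1} dy)^{alpha/(alpha+1)}, closed form for the normal
        L = ((2 * np.pi) ** (alpha / 2) * sigma ** alpha
             * np.sqrt(1 + alpha)) ** (-alpha / (alpha + 1))
        f = norm.pdf(y, loc=mu, scale=sigma)
        return np.mean(-f ** alpha / (alpha * L) + 1.0 / alpha)

    start = np.array([np.median(y), np.log(y.std())])
    res = minimize(H, start, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])   # (mu_hat, sigma_hat)
```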
We next study the asymptotic distribution of the MRPE, \(\widehat{\boldsymbol{\theta}}_{\alpha}.\) For notational simplicity, let us define the matrices \(\boldsymbol{\Psi}_{n,\alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\) and \(\boldsymbol{\Omega}_{n,\alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\) as follows:
\[\boldsymbol{\Psi}_{n,\alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha }\right)=\frac{1}{n}\sum\limits_{i=1}^{n}\boldsymbol{J}_{\alpha}^{(i)}\left( \boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right), \tag{16}\]
with
\[\boldsymbol{J}_{\alpha}^{(i)}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha }\right)=\left(E_{Y_{i}}\left[\frac{\partial^{2}\widehat{V}_{i,\alpha}(Y_{i}; \boldsymbol{\theta})}{\partial\theta_{j}\partial\theta_{k}}\right]_{ \boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}\right)_{j,k=1,...,p},i=1,...,n,\]
and
\[\boldsymbol{\Omega}_{n,\alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g}, \alpha}\right)=\frac{1}{n}\sum\limits_{i=1}^{n}Var_{Y_{i}}\left[\left(\frac{ \partial\widehat{V}_{i,\alpha}(Y_{i};\boldsymbol{\theta})}{\partial\theta_{j }}\right)_{j=1,...,p}\right]_{\boldsymbol{\theta}=\boldsymbol{\theta}_{ \boldsymbol{g},\alpha}},\,i=1,...,n. \tag{17}\]
Additionally, let \(\lambda_{n}\) denote the smallest eigenvalue of \(\boldsymbol{\Omega}_{n,\alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right).\) From now on, we will assume that \(\inf_{n}\lambda_{n}>0,\) so that \(\boldsymbol{\Omega}_{n,\alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\) is invertible.
We consider the following regularity conditions:
**C1.**: The support, \(\mathcal{X},\) of the density functions \(f_{i}(y,\boldsymbol{\theta})\) is the same for all \(i\) and it does not depend on \(\boldsymbol{\theta}.\) Besides, the true probability density functions \(g_{1},...,g_{n}\) have the same support \(\mathcal{X}.\)
**C2.**: For almost all \(y\in\mathcal{X}\) the density \(f_{i}(y,\boldsymbol{\theta})\) admits all third derivatives with respect to \(\boldsymbol{\theta}\in\boldsymbol{\Theta}\) and \(i=1,...,n.\)
**C3.**: For \(i=1,2,...,n\) the integrals
\[\int f_{i}(y,\boldsymbol{\theta})^{1+\alpha}dy\]
can be differentiated thrice with respect to \(\boldsymbol{\theta}\) and we can interchange integration and differentiation. As a consequence of this condition, it follows that
\[\frac{\partial V_{i,\alpha}(\mathbf{\theta})}{\partial\mathbf{\theta}}=E_{Y_{i}}\left[ \frac{\partial\widehat{V}_{i,\alpha}(Y_{i},\mathbf{\theta})}{\partial\mathbf{\theta}} \right],\quad\frac{\partial^{2}V_{i,\alpha}(\mathbf{\theta})}{\partial\mathbf{\theta} \partial\mathbf{\theta}^{T}}=E_{Y_{i}}\left[\frac{\partial^{2}\widehat{V}_{i, \alpha}(Y_{i},\mathbf{\theta})}{\partial\mathbf{\theta}\partial\mathbf{\theta}^{T}}\right] =\mathbf{J}_{\alpha}^{(i)}(\mathbf{\theta}).\]
**C4.**: For \(i=1,2,...,n\) the matrices \(\mathbf{J}_{\alpha}^{(i)}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)\) are positive definite.
**C5.**: There exist functions \(M_{jkl}^{(i)}\) and constants \(m_{jkl}\) such that
\[\left|\frac{\partial^{3}\widehat{V}_{i,\alpha}(y;\mathbf{\theta})}{\partial\theta _{j}\partial\theta_{k}\partial\theta_{l}}\right|\leq M_{jkl}^{(i)}\left(y \right),\qquad\forall\mathbf{\theta}\in\mathbf{\Theta},\ \forall j,k,l\]
and
\[E_{Y}\left[M_{jkl}^{(i)}\left(Y\right)\right]=m_{jkl}<\infty,\qquad\forall \mathbf{\theta}\in\mathbf{\Theta},\ \forall j,k,l.\]
**C6.**: For all \(j,k,l\) and \(\mathbf{\theta}\in\mathbf{\Theta}\), the sequences \(\left\{\frac{\partial\widehat{V}_{i,\alpha}(Y_{i},\mathbf{\theta})}{\partial\theta_{j}}\right\}_{j=1,...,p},\left\{\frac{\partial^{2}\widehat{V}_{i,\alpha}(Y_{i},\mathbf{\theta})}{\partial\theta_{j}\partial\theta_{k}}\right\}_{j,k=1,...,p}\) and \(\left\{\frac{\partial^{3}\widehat{V}_{i,\alpha}(Y_{i},\mathbf{\theta})}{\partial\theta_{j}\partial\theta_{k}\partial\theta_{l}}\right\}_{j,k,l=1,...,p}\) are uniformly integrable in the Cesàro sense, i.e.
\[\lim_{n\rightarrow\infty}\left(\sup_{n>1}\frac{1}{n}\sum\limits_ {i=1}^{n}E_{Y_{i}}\left[\left|\frac{\partial\widehat{V}_{i,\alpha}(Y_{i},\mathbf{ \theta})}{\partial\theta_{j}}\right|I_{\left\{\frac{\partial V_{i,\alpha}(Y_{i },\mathbf{\theta})}{\partial\theta_{j}}>n\right\}}(Y_{i})\right]\right) =0,\] \[\lim_{n\rightarrow\infty}\left(\sup_{n>1}\frac{1}{n}\sum\limits_ {i=1}^{n}E_{Y_{i}}\left[\left|\frac{\partial^{2}\widehat{V}_{i,\alpha}(Y_{i}, \mathbf{\theta})}{\partial\theta_{j}\partial\theta_{k}}\right|I_{\left\{\frac{ \partial^{2}V_{i,\alpha}(Y_{i},\mathbf{\theta})}{\partial\theta_{j}\partial\theta_{ k}\partial\theta_{l}}>n\right\}}(Y_{i})\right]\right) =0,\] \[\lim_{n\rightarrow\infty}\left(\sup_{n>1}\frac{1}{n}\sum\limits_ {i=1}^{n}E_{Y_{i}}\left[\left|\frac{\partial^{3}\widehat{V}_{i,\alpha}(Y_{i}, \mathbf{\theta})}{\partial\theta_{j}\partial\theta_{k}\partial\theta_{l}}\right|I_ {\left\{\frac{\partial^{3}V_{i,\alpha}(Y_{i},\mathbf{\theta})}{\partial\theta_{j} \partial\theta_{k}\partial\theta_{l}}>n\right\}}(Y_{i})\right]\right) =0.\]
**C7.**: For all \(\varepsilon>0\)
\[\lim_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{i=1}^{n}E_{Y_{i}}\left[\left\|\mathbf{\Omega}_{n}^{-\frac{1}{2}}\left(\mathbf{\theta}\right)\frac{\partial\widehat{V}_{i,\alpha}(Y_{i},\mathbf{\theta})}{\partial\mathbf{\theta}}\right\|_{2}^{2}I_{\left\{\left\|\mathbf{\Omega}_{n}^{-\frac{1}{2}}(\mathbf{\theta})\frac{\partial\widehat{V}_{i,\alpha}(Y_{i},\mathbf{\theta})}{\partial\mathbf{\theta}}\right\|_{2}>\varepsilon\sqrt{n}\right\}}(Y_{i})\right]=0.\]
Now, the following result, whose proof can be seen in [7], holds.
**Theorem 5**: _Suppose the previous regularity conditions_ **C1- C7** _hold. Then,_
\[\sqrt{n}\mathbf{\Omega}_{n,\alpha}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{- \frac{1}{2}}\mathbf{\Psi}_{n,\alpha}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)\left( \widehat{\mathbf{\theta}}_{\alpha}-\mathbf{\theta}_{\mathbf{g},\alpha}\right)\underset{n \rightarrow\infty}{\overset{L}{\rightarrow}}N(\mathbf{0}_{p},\mathbf{I}_{p}), \tag{18}\]
_being \(\mathbf{I}_{p}\) the p-dimensional identity matrix._
### Example: The MRPE under the MLRM
Consider \(\left(Y_{1},...,Y_{n}\right)\) a set of random variables, related to the explanatory variables \(\left(\mathbf{X}_{1},...,\mathbf{X}_{n}\right)\) through the MLRM,
\[Y_{i}=\mathbf{X}_{i}^{T}\mathbf{\beta}+\varepsilon_{i},\quad i=1,\ldots,n, \tag{19}\]
where the errors \(\varepsilon_{i}\) are i.i.d. normal random variables with mean zero and variance \(\sigma^{2}\), \(\mathbf{X}_{i}^{T}=\left(X_{i1},...,X_{ip}\right)\) is the vector of independent variables corresponding to the \(i\)-th condition and \(\mathbf{\beta}=\left(\beta_{1},...,\beta_{p}\right)^{T}\) is the vector of regression coefficients to be estimated. We will consider that, for each \(i\), \(\mathbf{X}_{i}\) is fixed, yielding i.n.i.d.o. \(Y_{i}\)'s, with \(Y_{i}\sim\mathcal{N}(\mathbf{X}_{i}^{T}\mathbf{\beta},\sigma^{2})\).
We next derive the explicit expression of the MRPE for the parameters \(\mathbf{\theta}=\left(\mathbf{\beta},\sigma\right)\). With the previous notation, the assumed density functions are \(f_{i}\left(y,\mathbf{\beta},\sigma\right)\equiv\mathcal{N}(\mathbf{X}_{i}^{T}\mathbf{ \beta},\sigma^{2})\) and then, using Eq. (6), we have that for \(\alpha>0\),
\[\widehat{V}_{i,\alpha}(Y_{i};\mathbf{\beta},\sigma) =-\frac{\frac{1}{\left(2\pi\right)^{\alpha/2}\sigma^{\alpha}} \exp\left(\frac{-\alpha\left(Y_{i}-\mathbf{X}_{i}^{T}\mathbf{\beta}\right)^{2}}{2 \sigma^{2}}\right)}{\alpha\left(\left(2\pi\right)^{\alpha/2}\sigma^{\alpha} \sqrt{1+\alpha}\right)^{-\frac{\alpha}{\alpha+1}}}+\frac{1}{\alpha} \tag{20}\] \[=-\frac{1}{\alpha}\left(\frac{1+\alpha}{2\pi}\right)^{\frac{ \alpha}{2\left(\alpha+1\right)}}\sigma^{-\frac{\alpha}{\alpha+1}}\exp\left(- \frac{\alpha}{2}\left(\frac{Y_{i}-\mathbf{X}_{i}^{T}\mathbf{\beta}}{\sigma}\right)^{ 2}\right)+\frac{1}{\alpha}.\]
and thus, the MRPE for \(\alpha>0\) is obtained minimizing the averaged objective function
\[H_{n,\alpha}(\mathbf{\theta}) =\frac{1}{n}\sum_{i=1}^{n}\widehat{V}_{i,\alpha}(Y_{i};\mathbf{\beta },\sigma)\] \[=-\frac{1}{\alpha}\left(\frac{1+\alpha}{2\pi}\right)^{\frac{ \alpha}{2\left(\alpha+1\right)}}\frac{1}{n}\sum_{i=1}^{n}\sigma^{-\frac{\alpha }{\alpha+1}}\exp\left(-\frac{\alpha}{2}\left(\frac{Y_{i}-\mathbf{X}_{i}^{T}\mathbf{ \beta}}{\sigma}\right)^{2}\right)+\frac{1}{\alpha}.\]
Ignoring all constant terms, we have that the MRPE for the MLRM is given, for \(\alpha>0\), as
\[\left(\widehat{\mathbf{\beta}}_{\alpha},\widehat{\sigma}_{\alpha}\right)=\arg \min_{\mathbf{\beta},\sigma}\sum_{i=1}^{n}-\sigma^{-\frac{\alpha}{\alpha+1}}\exp \left(-\frac{\alpha}{2}\left(\frac{Y_{i}-\mathbf{X}_{i}^{T}\mathbf{\beta}}{\sigma} \right)^{2}\right).\]
Moreover, taking derivatives with respect to \(\mathbf{\beta}\) and \(\sigma\), the estimation equations of \(\widehat{\mathbf{\beta}}_{\alpha}\) and \(\widehat{\sigma}_{\alpha}\) are
\[\begin{array}{l}\sum\limits_{i=1}^{n}\exp\left(-\frac{\alpha}{2}\left(\frac{ Y_{i}-\mathbf{X}_{i}^{T}\mathbf{\beta}}{\sigma}\right)^{2}\right)\left(\frac{Y_{i}-\mathbf{X}_{i}^ {T}\mathbf{\beta}}{\sigma}\right)\mathbf{X}_{i}=\mathbf{0}_{p}\\ \sum\limits_{i=1}^{n}\exp\left(-\frac{\alpha}{2}\left(\frac{Y_{i}-\mathbf{X}_{i}^{T }\mathbf{\beta}}{\sigma}\right)^{2}\right)\left\{\left(\frac{Y_{i}-\mathbf{X}_{i}^{T} \mathbf{\beta}}{\sigma}\right)^{2}-\frac{1}{1+\alpha}\right\}=0\end{array}, \tag{21}\]
which is exactly the same system as the one obtained in [7]. For \(\alpha=0\), if we denote \(\mathbb{X}=(\mathbf{X}_{1},...,\mathbf{X}_{n})_{n\times p}^{T}\) and \(\mathbf{Y}=(Y_{1},...,Y_{n})\), we get the MLE of \(\widehat{\mathbf{\beta}}_{0}\) and \(\widehat{\sigma}_{0}\), i.e.
\[\widehat{\mathbf{\beta}}_{0}=\left(\mathbb{X}^{T}\mathbb{X}\right)^{-1}\mathbb{X} ^{T}\mathbf{Y}\ \ \mbox{and}\ \ \widehat{\sigma}_{0}^{2}=\frac{1}{n}\underset{i=1}{\overset{n}{\sum}}\left(Y_{ i}-\mathbf{X}_{i}^{T}\widehat{\mathbf{\beta}}_{0}\right)^{2}.\]
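For illustration, a minimal sketch of the MRPE under the MLRM follows: instead of solving the estimation equations (21), it directly minimises the constant-free objective displayed above (for \(\alpha>0\)) with a generic numerical optimiser, using the MLE as a starting value. The function name, optimiser, and tolerances are assumptions of the sketch, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def mrpe_mlrm(X, y, alpha):
    """MRPE of (beta, sigma) in the MLRM (alpha > 0): minimise
    sum_i -sigma^{-alpha/(alpha+1)} exp(-alpha/2 ((y_i - x_i' beta)/sigma)^2)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    n, p = X.shape
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]          # MLE (alpha = 0) as start
    sigma0 = np.sqrt(np.mean((y - X @ beta0) ** 2))

    def objective(params):
        beta, sigma = params[:p], np.exp(params[p])       # log-parametrised sigma > 0
        r = (y - X @ beta) / sigma
        return np.sum(-sigma ** (-alpha / (alpha + 1)) * np.exp(-0.5 * alpha * r ** 2))

    start = np.concatenate([beta0, [np.log(sigma0)]])
    res = minimize(objective, start, method="Nelder-Mead",
                   options={"maxiter": 50000, "maxfev": 50000})
    return res.x[:p], np.exp(res.x[p])                    # (beta_hat, sigma_hat)
```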
Finally, from the results in [7], it can be seen that matrices \(\mathbf{\Psi}_{n,\alpha}\left(\mathbf{\beta},\sigma\right)\) and \(\mathbf{\Omega}_{n,\alpha}\left(\mathbf{\beta},\sigma\right)\) are given by
\[\mathbf{\Psi}_{n,\alpha}\left(\mathbf{\beta},\sigma\right) = \frac{1}{n}\underset{i=1}{\overset{n}{\sum}}\mathbf{J}^{(i)} \left(\mathbf{\beta},\sigma^{2}\right)\] \[= k\sigma^{-\frac{3\alpha+2}{\alpha+1}}\left(\alpha+1\right)^{- \frac{3}{2}}\left[\begin{array}{cc}\frac{1}{n}\mathbb{X}^{T}\mathbb{X}&0\\ 0&\frac{2}{\alpha+1}\end{array}\right]\] \[= K_{1}\left(\alpha+1\right)^{-\frac{3}{2}}\left[\begin{array}{ cc}\frac{1}{n}\mathbb{X}^{T}\mathbb{X}&0\\ 0&\frac{2}{\alpha+1}\end{array}\right],\]
and
\[\mathbf{\Omega}_{n,\alpha}\left(\mathbf{\beta},\sigma\right) = \frac{1}{n}\underset{i=1}{\overset{n}{\sum}}Var_{Y_{i}}\left[ \left(\frac{\partial V_{i,\alpha}(Y_{i};\mathbf{\beta},\sigma^{2})}{\partial\theta _{j}}\right)_{j=1,...,k}\right]\] \[= K_{1}^{2}\sigma^{2}\frac{1}{\left(2\alpha+1\right)^{3/2}}\left[ \begin{array}{cc}\frac{1}{n}\mathbb{X}^{T}\mathbb{X}&\mathbf{0}\\ \mathbf{0}&\frac{(3\alpha^{2}+4\alpha+2)}{(\alpha+1)^{2}(2\alpha+1)}\end{array} \right].\]
with
\[k=\frac{1}{\alpha}\left(\frac{1+\alpha}{2\pi}\right)^{\frac{\alpha}{2(\alpha+ 1)}},\ \ \ K_{1}=k\sigma^{-\frac{3\alpha+2}{\alpha+1}}. \tag{22}\]
Therefore, for \(\alpha=0\) we get the Fisher information matrix for \((\mathbf{\beta},\sigma)\) in both matrices, i.e.
\[\mathbf{\Psi}_{n,0}\left(\mathbf{\beta},\sigma\right)=\left[\begin{array}{cc}\frac{ 1}{\sigma^{2}}\frac{1}{n}\mathbb{X}^{T}\mathbb{X}&0\\ 0&\frac{2}{\sigma^{2}}\end{array}\right],\]
and

\[\mathbf{\Omega}_{n,0}\left(\mathbf{\beta},\sigma\right)=\left[\begin{array}{cc}\frac{1}{\sigma^{2}}\frac{1}{n}\mathbb{X}^{T}\mathbb{X}&0\\ 0&\frac{2}{\sigma^{2}}\end{array}\right].\]
## 3 Model selection criterion based on RP
In this section we present the model selection criterion based on RP. Let us consider a collection of \(l\) candidate models
\[\left\{\mathbf{M}^{(s)}=\left(M_{1}^{(s)},...,M_{n}^{(s)}\right)\right\}_{s\in\{1,...,l\}} \tag{23}\]
such that each \(\mathbf{M}^{(s)}\) is characterized by the parametric density functions
\[\mathbf{f}(\cdot,\mathbf{\theta}_{s})=\left(f_{1}(\cdot,\mathbf{\theta}_{s}),...,f_{n}( \cdot,\mathbf{\theta}_{s})\right),\ \mathbf{\theta}_{s}\in\mathbf{\Theta}_{s}\subset\mathbb{R}^{p_{s}},\]
with associated distribution functions \(\mathbf{F}(\cdot,\mathbf{\theta}_{s})=\left(F_{1}(\cdot,\mathbf{\theta}_{s}),...,F_{n}(\cdot,\mathbf{\theta}_{s})\right),\) where \(\mathbf{\theta}_{s}\) is common to all density functions in model \(s.\) That is, each candidate model represents a parametric family defined by a common parameter, which may have a different number of components across models. Based on the random sample \(Y_{1},...,Y_{n},\) we need to select the best model from the collection \(\{\mathbf{M}^{(s)}\}_{s\in\{1,...,l\}}\) according to some suitable selection criterion. For this purpose, for each assumed model \(\mathbf{M}^{(s)},\) we should first determine the best parameter \(\mathbf{\theta}_{s}\) fitting the sample and subsequently select the best fitted model from the collection. Then, given a set of observations, the model selection is performed in two steps: we first fit all the candidate models to the data, and then select the model with the best trade-off between goodness of fit and complexity in terms of RP.
We next describe the first step of the model selection algorithm. Let us consider a fixed parametric model \(\mathbf{M}^{(s)}\) modeling the underlying true distribution. If the true distribution were known, the parameter that best fits the model \(\mathbf{M}^{(s)},\) denoted by \(\mathbf{\theta}_{\mathbf{g},\alpha}^{s},\) could be obtained by minimizing the theoretical averaged objective function \(H_{\alpha}(\mathbf{\theta})\) defined in Eq. (8) under the \(s\)-th model.
Following the discussion in Section 2, if the true distribution underlying is unknown but we have a random sample \(Y_{1},...,Y_{n},\) the best estimate of the true parameter based on the sample from the RP approach is the MRPE defined in (15).
Once all candidate models are fitted to the observed data (or to the true distribution, if it is known), we should select the model with the best trade-off between fitness and complexity. Therefore, we need a measure of closeness between the best candidate of each model and the true distribution. The goodness of fit of a certain model \(\mathbf{M}^{(s)},\) with associated densities \(\mathbf{f}(\cdot,\mathbf{\theta}_{g}^{s})\) and best-fitting parameter \(\mathbf{\theta}_{g}^{s}\) based on the RP, can be quantified by the averaged objective function \(H_{\alpha}(\mathbf{\theta}_{g}^{s})\) given in Eq. (8).
As the true distribution is generally unknown, \(\mathbf{\theta}_{g}^{s}\) is estimated by \(\widehat{\mathbf{\theta}}_{\alpha}^{s}\). Hence, we can estimate \(H_{\alpha}(\mathbf{\theta}_{g}^{s})\) by \(H_{\alpha}(\widehat{\mathbf{\theta}}_{\alpha}^{s}).\) But again \(H_{\alpha}\) needs to be estimated, and the natural estimator is \(H_{n,\alpha}(\widehat{\mathbf{\theta}}_{\alpha}^{s}).\) However, as the sample is used both for estimating the parameter and for estimating \(H_{\alpha},\) it does not hold that
\[E_{Y_{1},...,Y_{n}}\left[H_{n,\alpha}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)\right]=E_{Y_{1},...,Y_{n}}\left[H_{\alpha}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)\right].\]
Moreover, the estimation bias would depend on the model and consequently, we need to add a term correcting the bias caused by the model assumption.
The AIC criterion selects the model that minimizes
\[-2\sum_{i=1}^{n}\log f_{i}(y_{i},\mathbf{\theta})+2p=2nH_{n,0}\left(\mathbf{\theta}\right)+2p,\]
where \(2p\) is the term correcting the bias. Following the same idea, we define the \(RP_{NH}-\)Criterion as follows:
**Definition 6**: _Let \(\left\{\left(M_{1}^{\left(s\right)},...,M_{n}^{\left(s\right)}\right)\right\}_{s\in\left\{1,...,l\right\}}\) be \(l\) candidate models for the i.n.i.d.o. \(Y_{1},...,Y_{n}\). The selected model \(\left(M_{1}^{\ast},...,M_{n}^{\ast}\right)\) according to the \(RP_{NH}-\)**Criterion** is the one satisfying_
\[\left(M_{1}^{\ast},...,M_{n}^{\ast}\right)=\arg\min_{s\in\left\{1,...,l\right\}}RP_{NH}\left(M_{1}^{\left(s\right)},...,M_{n}^{\left(s\right)},\widehat{\mathbf{\theta}}_{\alpha}^{s}\right),\]
_where_
\[RP_{NH}\left(M_{1}^{\left(s\right)},...,M_{n}^{\left(s\right)},\widehat{\mathbf{ \theta}}_{\alpha}^{s}\right)=H_{n,\alpha}\left(\widehat{\mathbf{\theta}}_{\alpha }^{s}\right)+\frac{1}{n}trace\left(\mathbf{\Omega_{n}}\left(\widehat{\mathbf{\theta}} _{\alpha}^{s}\right)\mathbf{\Psi}_{n}^{-1}\left(\widehat{\mathbf{\theta}}_{\alpha}^{ s}\right)\right). \tag{24}\]
We can observe that
\[\lim_{\alpha\to 0}RP_{NH}\left(M_{1}^{\left(s\right)},...,M_{n}^{\left(s \right)},\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)=-\frac{1}{n}\sum_{i=1}^{n} \log f_{i}(Y_{i},\mathbf{\theta})+\frac{p}{n},\]
and hence we recover AIC criterion up to the multiplicative constant \(2n.\)
In order to justify the \(RP_{NH}-\)Criterion, we shall establish that the estimated function \(RP_{NH}\left(M_{1}^{\left(s\right)},...,M_{n}^{\left(s\right)},\widehat{\mathbf{\theta}}_{\alpha}^{s}\right),\) quantifying the loss of choosing a model, is an unbiased estimator of its theoretical version, \(E_{Y_{1},...,Y_{n}}\left[H_{\alpha}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)\right].\) For this purpose, we shall assume the following additional regularity condition:
**C8.** The matrices \(\mathbf{\Psi}_{n}^{-1}\left(\mathbf{\theta}\right)\) and \(\mathbf{\Omega}_{n}\left(\mathbf{\theta}\right)\) are continuous for arbitrary \(\mathbf{\theta}\in\mathbf{\Theta}.\)
**Theorem 7**: _Assume that conditions_ **C1-C8** _hold. Then,_
\[E_{Y_{1},...,Y_{n}}\left[RP_{NH}\left(M_{1}^{\left(s\right)},...,M_{n}^{\left( s\right)},\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)\right]=E_{Y_{1},...,Y_{n}} \left[H_{\alpha}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)\right],\forall s =1,...,l.\]
**Proof.** Consider a fixed \(s=1,...,l.\) A Taylor expansion of \(V_{i,\alpha}\left(\mathbf{\theta}\right)\) defined in Eq. (6) around \(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\) and evaluated at \(\widehat{\mathbf{\theta}}_{\alpha}^{s}\) gives
\[V_{i,\alpha}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}\right) = V_{i,\alpha}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)+\left(\frac{\partial V_{i,\alpha}\left(\mathbf{\theta}\right)}{\partial\mathbf{\theta}}\right)_{\mathbf{\theta}=\mathbf{\theta}_{\mathbf{g},\alpha}^{s}}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\] \[+\frac{1}{2}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)^{T}\left(\frac{\partial^{2}V_{i,\alpha}\left(\mathbf{\theta}\right)}{\partial\mathbf{\theta}\ \partial\mathbf{\theta}^{T}}\right)_{\mathbf{\theta}=\mathbf{\theta}_{\mathbf{g},\alpha}^{s}}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)+o\left(\left\|\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right\|^{2}\right)\] \[= V_{i,\alpha}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)+\left(\frac{\partial V_{i,\alpha}\left(\mathbf{\theta}\right)}{\partial\mathbf{\theta}}\right)_{\mathbf{\theta}=\mathbf{\theta}_{\mathbf{g},\alpha}^{s}}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\] \[+\frac{1}{2}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)^{T}\mathbf{J}_{\alpha}^{\left(i\right)}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)+o\left(\left\|\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right\|^{2}\right).\]
Summing over \(i\) and dividing by \(n,\) taking into account that \(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\) maximizes \(H_{\alpha}(\mathbf{\theta}),\) we get
\[H_{\alpha}(\widehat{\mathbf{\theta}}_{\alpha}^{s})=H_{\alpha}\left(\mathbf{\theta}_{\mathbf{g}, \alpha}^{s}\right)-\frac{1}{2}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{ \theta}_{\mathbf{g},\alpha}^{s}\right)^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g}, \alpha}^{s}\right)\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g },\alpha}^{s}\right)+o\left(\left\|\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{ \theta}_{\mathbf{g},\alpha}^{s}\right\|^{2}\right)\]
and hence,
\[E_{Y_{1},...,Y_{n}}\left[nH_{\alpha}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s} \right)\right]=nH_{\alpha}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)-\frac{ 1}{2}E_{Y_{1},...,Y_{n}}\left[\sqrt{n}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s }-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{ \mathbf{g},\alpha}^{s}\right)\sqrt{n}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}- \mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\right]+o_{p}(1). \tag{25}\]
But by Eq. (18), and applying Corollary 2.1 in [10], we have
\[\sqrt{n}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\sqrt{n}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\underset{n\rightarrow\infty}{\overset{L}{\rightarrow}}\sum_{i=1}^{k}\lambda_{i}(\mathbf{\theta}_{\mathbf{g},\alpha}^{s})Z_{i}^{2},\]

where \(\lambda_{1}(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}),...,\lambda_{k}(\mathbf{\theta}_{\mathbf{g},\alpha}^{s})\) are the eigenvalues of the matrix
\[\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\mathbf{\Psi}_{n}\left( \mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)^{-1}\mathbf{\Omega}_{n}\left(\mathbf{\theta}_ {\mathbf{g},\alpha}^{s}\right)\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s} \right)^{-1}=\mathbf{\Omega}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\mathbf{ \Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)^{-1}\]
and \(Z_{1},...,Z_{k}\) are independent normal random variables with mean zero and variance 1. Therefore,
\[E_{Y_{1},...,Y_{n}} \left[\sqrt{n}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta }_{\mathbf{g},\alpha}^{s}\right)^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha }^{s}\right)\sqrt{n}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\bm {g},\alpha}^{s}\right)\right]\] \[= \sum\nolimits_{i=1}^{k}\lambda_{i}(\mathbf{\theta}_{\mathbf{g},\alpha}^{ s})+o_{P}(1)\] \[= trace\left(\mathbf{\Omega}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s} \right)\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)^{-1}\right)+o _{P}(1).\]
On the other hand, taking into account that \(\widehat{\mathbf{\theta}}_{\alpha}^{s}\) maximizes \(H_{n,\alpha}\left(\mathbf{\theta}\right),\) a Taylor expansion of \(H_{n,\alpha}\left(\mathbf{\theta}\right)\) at \(\widehat{\mathbf{\theta}}_{\alpha}^{s}\) and evaluated at \(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\) gives
\[H_{n,\alpha}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)=H_{n,\alpha}\left( \widehat{\mathbf{\theta}}_{\alpha}^{s}\right)+\frac{1}{2}\left(\mathbf{\theta}_{\mathbf{g },\alpha}^{s}-\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)^{T}\left(\frac{ \partial^{2}H_{n,\alpha}\left(\mathbf{\theta}\right)}{\partial\mathbf{\theta}\ \partial \mathbf{\theta}^{T}}\right)_{\mathbf{\theta}=\widehat{\mathbf{\theta}}_{\alpha}^{s}}\left( \mathbf{\theta}_{\mathbf{g},\alpha}^{s}-\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)+o \left(\left\|\mathbf{\theta}_{\mathbf{g},\alpha}^{s}-\widehat{\mathbf{\theta}}_{\alpha}^{ s}\right\|^{2}\right).\]
But then, multiplying by \(n\) and considering the expected values,
\[nH_{\alpha}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right) = E_{Y_{1},...,Y_{n}}\left[nH_{n,\alpha}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\right]=E_{Y_{1},...,Y_{n}}\left[nH_{n,\alpha}\left(\widehat {\mathbf{\theta}}_{\alpha}^{s}\right)\right]\] \[+\frac{1}{2}E_{Y_{1},...,Y_{n}}\left[\sqrt{n}\left(\mathbf{\theta}_{ \mathbf{g},\alpha}^{s}-\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)^{T}\left(\frac{ \partial^{2}H_{n,\alpha}\left(\mathbf{\theta}\right)}{\partial\mathbf{\theta}\ \partial\mathbf{\theta}^{T}}\right)_{\mathbf{\theta}=\widehat{\mathbf{\theta}}_{\alpha}^{s}} \sqrt{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}-\widehat{\mathbf{\theta}}_{\alpha}^{s} \right)\right]+o_{p}(1).\]
Besides,
\[\left(\frac{\partial^{2}H_{n,\alpha}\left(\mathbf{\theta}\right)}{\partial\mathbf{ \theta}\ \partial\mathbf{\theta}^{T}}\right)_{\mathbf{\theta}=\widehat{\mathbf{\theta}}_{\alpha}^{s }}\stackrel{{\mathcal{P}}}{{\longrightarrow}}-\mathbf{\Psi}_{n}\left( \mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right). \tag{26}\]
by the continuity of \(\mathbf{\Psi}_{n}.\) Hence, substituting in (25)
\[E_{Y_{1},...,Y_{n}}\left[nH_{\alpha}\left(\widehat{\mathbf{\theta}}_ {\alpha}^{s}\right)\right]\] \[= nH_{\alpha}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)-\frac{ 1}{2}E_{Y_{1},...,Y_{n}}\left[\sqrt{n}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s }-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{ \mathbf{g},\alpha}^{s}\right)\sqrt{n}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}- \mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\right]+o_{p}(1)\] \[= E_{Y_{1},...,Y_{n}}\left[nH_{n,\alpha}\left(\widehat{\mathbf{ \theta}}_{\alpha}^{s}\right)\right]-\frac{1}{2}E_{Y_{1},...,Y_{n}}\left[\sqrt {n}\left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s} \right)^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\sqrt{n} \left(\widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s} \right)\right]+o_{p}(1)\] \[-\frac{1}{2}E_{Y_{1},...,Y_{n}}\left[\sqrt{n}\left(\mathbf{\theta}_{ \mathbf{g},\alpha}^{s}-\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)^{T}\mathbf{\Psi}_{n} \left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\sqrt{n}\left(\widehat{\mathbf{\theta} }_{\mathbf{g},\alpha}^{s}-\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)\right]+o_{p}(1)\] \[= E_{Y_{1},...,Y_{n}}\left[nH_{n,\alpha}\left(\widehat{\mathbf{ \theta}}_{\alpha}^{s}\right)\right]-E_{Y_{1},...,Y_{n}}\left[\sqrt{n}\left( \widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)^{T} \mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\sqrt{n}\left( \widehat{\mathbf{\theta}}_{\alpha}^{s}-\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right) \right]+o_{p}(1),\]
and thus,
\[E_{Y_{1},...,Y_{n}}\left[H_{\alpha}\left(\widehat{\mathbf{\theta}}_ {\alpha}^{s}\right)\right] = E_{Y_{1},...,Y_{n}}\left[H_{n,\alpha}\left(\widehat{\mathbf{\theta} }_{\alpha}^{s}\right)\right]\] \[-\frac{1}{n}E_{Y_{1},...,Y_{n}}\left[\sqrt{n}\left(\mathbf{\theta}_{ \mathbf{g},\alpha}^{s}-\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)^{T}\mathbf{\Psi}_{n} \left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\sqrt{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}-\widehat{\mathbf{\theta}}_{\alpha}^{s}\right)\right]+o_{p}(1)\] \[= E_{Y_{1},...,Y_{n}}\left[H_{n,\alpha}\left(\widehat{\mathbf{\theta} }_{\alpha}^{s}\right)\right]-\frac{1}{n}trace\left(\mathbf{\Omega}_{n}\left(\mathbf{ \theta}_{\mathbf{g},\alpha}^{s}\right)\mathbf{\Psi}_{n}^{-1}\left(\mathbf{\theta}_{\mathbf{g},\alpha}^{s}\right)\right).\]
Hence, the result holds.
We next develop explicit expressions for the \(RP_{NH}\)-criterion under the MLRM.
### Example: The RP-based model selection under the multiple linear regression model
We consider the MLRM defined in Section 2.1.
\[Y_{i}=\mathbf{X}_{i}^{T}\mathbf{\beta}+\varepsilon_{i},\quad i=1,\ldots,n. \tag{27}\]
We consider several models \(\{(M_{1}^{(s)},...,M_{n}^{(s)})\}_{s=1,...,l}\) where each model differs on the parameter \(\mathbf{\beta}\) considered. For example, consider four explanatory variables \((X_{1},X_{2},X_{3},X_{4})\) and four different models given by
\[(M_{1}^{(1)},...,M_{n}^{(1)})\equiv Y_{i}=\beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+ \beta_{3}X_{3}+\epsilon_{i},\]
\[(M_{1}^{(2)},...,M_{n}^{(2)})\equiv Y_{i}=\beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+ \beta_{4}X_{4}+\epsilon_{i}\]
\[(M_{1}^{(3)},...,M_{n}^{(3)})\equiv Y_{i}=\beta_{0}+\beta_{1}X_{1}+\beta_{3}X_{3}+ \beta_{4}X_{4}+\epsilon_{i},\]
\[(M_{1}^{(4)},...,M_{n}^{(4)})\equiv Y_{i}=\beta_{0}+\beta_{2}X_{2}+\beta_{3}X_{3}+ \beta_{4}X_{4}+\epsilon_{i}.\]
Each of the models has five parameters that need to be estimated. Let us then determine the corresponding values of \(RP_{NH}\left(M_{1}^{\left(s\right)},...,M_{n}^{\left(s\right)},\widehat{\boldsymbol {\theta}}_{\alpha}^{s}\right)\) for \(s=1,2,3,4.\)
As stated in Section 2.1, for each \(s=1,2,3,4,\) the estimators \(\widehat{\boldsymbol{\beta}}_{\alpha}^{s}\) and \(\widehat{\sigma}_{\alpha}^{s}\) are the solutions of the system
\[\begin{array}{l}\sum\limits_{i=1}^{n}\exp\left(-\frac{\alpha}{2}\left(\frac{ Y_{i}-\boldsymbol{X}_{s,i}^{T}\boldsymbol{\beta}}{\sigma}\right)^{2} \right)\left(\frac{Y_{i}-\boldsymbol{X}_{s,i}^{T}\boldsymbol{\beta}}{\sigma} \right)\boldsymbol{X}_{s,i}=\boldsymbol{0}_{4}\\ \sum\limits_{i=1}^{n}\exp\left(-\frac{\alpha}{2}\left(\frac{Y_{i}-\boldsymbol{X }_{s,i}^{T}\boldsymbol{\beta}}{\sigma}\right)^{2}\right)\left\{\left(\frac{Y_ {i}-\boldsymbol{X}_{s,i}^{T}\boldsymbol{\beta}}{\sigma}\right)^{2}-\frac{1}{ 1+\alpha}\right\}=0\end{array}, \tag{28}\]
where \(\boldsymbol{X}_{s,i}\) corresponds to the values of observation \(i\) restricted to the variables appearing in model \(s.\) Note that, although \(\boldsymbol{\beta}\) has a different meaning for the different models, this is not the case for \(\sigma.\) However, the estimate of \(\sigma\) differs across models, and so it is denoted by \(\widehat{\sigma}_{\alpha}^{s}\) for the \(s\)-th model.
At \(\alpha=0,\) we have that the model parameters can be explicitly obtained as
\[\widehat{\boldsymbol{\beta}}_{0}^{s}=\left(\mathbb{X}_{s}^{T}\mathbb{X}_{s}\right)^{-1}\mathbb{X}_{s}^{T}\mathbf{Y}\ \ \text{and}\ \ (\widehat{\sigma}_{0}^{s})^{2}=\frac{1}{n}\underset{i=1}{\overset{n}{\sum}}\left(Y_{i}-\boldsymbol{X}_{s,i}^{T}\widehat{\boldsymbol{\beta}}_{0}^{s}\right)^{2}.\]
Thus, according to Eq. (13),
\[H_{n,\alpha}(\widehat{\boldsymbol{\beta}},\widehat{\sigma})=-k\frac{1}{n}\sum\limits_{i=1}^{n}\widehat{\sigma}^{-\frac{\alpha}{\alpha+1}}\exp\left(-\frac{\alpha}{2}\left(\frac{Y_{i}-\boldsymbol{X}_{i}^{T}\widehat{\boldsymbol{\beta}}}{\widehat{\sigma}}\right)^{2}\right)+\frac{1}{\alpha},\]
with \(k\) as defined in (22).
Next, let us obtain expressions of \(\boldsymbol{\Psi}_{s,n}\left(\boldsymbol{\beta}^{s},\sigma\right)\) and \(\boldsymbol{\Omega}_{s,n}\left(\boldsymbol{\beta}^{s},\sigma\right).\) Note that these matrices also depend on the model \(s\). Applying again the results of the previous section, we obtain
\[\begin{array}{l}\boldsymbol{\Psi}_{s,n}\left(\boldsymbol{\beta}^{s},\sigma \right)=K_{1}\left(\alpha+1\right)^{-\frac{3}{2}}\left[\begin{array}{cc} \frac{1}{n}\mathbb{X}_{s}^{T}\mathbb{X}_{s}&0\\ 0&\frac{2}{\alpha+1}\end{array}\right],\\ \boldsymbol{\Omega}_{s,n}\left(\boldsymbol{\beta}^{s},\sigma\right)=K_{1}^{2} \sigma^{2}\frac{1}{\left(2\alpha+1\right)^{3/2}}\left[\begin{array}{cc}\frac {1}{n}\mathbb{X}_{s}^{T}\mathbb{X}_{s}&0\\ 0&\frac{\left(3\alpha^{2}+4\alpha+2\right)}{2\left(\alpha+1\right)\left(2 \alpha+1\right)}\end{array}\right],\end{array} \tag{29}\]
where \(K_{1}\) was defined in (22). Note that these matrices have dimension \((p+1)\times(p+1)\) where \(p\) is the dimension of vector \(\boldsymbol{\beta}\) for each model. In our example, \(p=4\) and therefore,
\[\boldsymbol{\Omega}_{n}\left(\widehat{\boldsymbol{\theta}}_{\alpha}^{s} \right)\boldsymbol{\Psi}_{s,n}^{-1}\left(\widehat{\boldsymbol{\beta}}_{\alpha }^{s},\widehat{\sigma}_{\alpha}^{s}\right)=(\widehat{\sigma}_{\alpha}^{s})^ {2}K_{1}\frac{\left(\alpha+1\right)^{\frac{3}{2}}}{\left(2\alpha+1\right)^{ \frac{3}{2}}}\left[\begin{array}{cc}\boldsymbol{I}_{p\times p}&\boldsymbol{0 }\\ \boldsymbol{0}^{T}&\frac{3\alpha^{2}+4\alpha+2}{\left(\alpha+1\right)^{2}\left(2 \alpha+1\right)}\end{array}\right],\]
and hence,
\[trace\left(\boldsymbol{\Omega}_{n}\left(\widehat{\boldsymbol{\theta}}_{\alpha }^{s}\right)\boldsymbol{\Psi}_{s,n}^{-1}\left(\widehat{\boldsymbol{\beta}}_{ \alpha}^{s},\widehat{\sigma}_{\alpha}^{s}\right)\right)=(\widehat{\sigma}_{ \alpha}^{s})^{2}K_{1}\left(p\frac{\left(\alpha+1\right)^{\frac{3}{2}}}{\left(2 \alpha+1\right)^{\frac{3}{2}}}+\frac{\left(\alpha+1\right)^{\frac{1}{2}} \left(3\alpha^{2}+4\alpha+2\right)}{2\left(2\alpha+1\right)^{5/2}}\right).\]
Therefore, applying the \(RP_{NH}-\)Criterion of Definition 6, we obtain
\[RP_{NH}(M_{1}^{(s)},...,M_{n}^{(s)}, \widehat{\boldsymbol{\beta}}_{\alpha}^{s},\widehat{\sigma}_{\alpha }^{2})\] \[= -\frac{1}{\alpha}\left(\frac{1+\alpha}{2\pi}\right)^{\frac{\alpha} {2(\alpha+1)}}\frac{1}{n}\sum_{i=1}^{n}(\widehat{\sigma}_{\alpha}^{s})^{- \frac{\alpha}{\alpha+1}}\exp\left(-\frac{\alpha}{2}\left(\frac{Y_{i}-\boldsymbol {X}_{s,i}^{T}\widehat{\boldsymbol{\beta}}_{\alpha}^{s}}{\widehat{\sigma}_{ \alpha}^{s}}\right)^{2}\right)\] \[+\frac{1}{\alpha}+\frac{1}{n}(\widehat{\sigma}_{\alpha}^{s})^{2} K_{1}\left(p\frac{\left(\alpha+1\right)^{\frac{3}{2}}}{\left(2\alpha+1\right)^{ \frac{3}{2}}}+\frac{\left(\alpha+1\right)^{\frac{1}{2}}\left(3\alpha^{2}+4 \alpha+2\right)}{2\left(2\alpha+1\right)^{5/2}}\right). \tag{30}\]
Finally, among the four candidates we select as the most appropriate model the one minimizing \(RP_{NH}(M_{1}^{(s)},...,M_{n}^{(s)},\widehat{\boldsymbol{\beta}}_{\alpha}^{s},\widehat{\sigma}_{\alpha}^{s})\) over \(s\).
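The following sketch evaluates Eq. (30) for one fitted candidate model and hence allows the four candidates to be ranked; it assumes the MRPE \((\widehat{\boldsymbol{\beta}}_{\alpha}^{s},\widehat{\sigma}_{\alpha}^{s})\) has already been computed (e.g., with the MRPE sketch given in the example of Section 2.1), and the function name is illustrative.

```python
import numpy as np

def rp_nh_mlrm(X_s, y, beta_hat, sigma_hat, alpha):
    """RP_NH criterion of Eq. (30) for one candidate MLRM, alpha > 0.
    X_s contains only the p columns used by candidate model s."""
    n, p = X_s.shape
    k = (1.0 / alpha) * ((1 + alpha) / (2 * np.pi)) ** (alpha / (2 * (alpha + 1)))
    K1 = k * sigma_hat ** (-(3 * alpha + 2) / (alpha + 1))   # Eq. (22), evaluated at sigma_hat
    r = (y - X_s @ beta_hat) / sigma_hat
    fit = (-k * sigma_hat ** (-alpha / (alpha + 1))
           * np.mean(np.exp(-0.5 * alpha * r ** 2)) + 1.0 / alpha)
    penalty = sigma_hat ** 2 * K1 / n * (
        p * (alpha + 1) ** 1.5 / (2 * alpha + 1) ** 1.5
        + (alpha + 1) ** 0.5 * (3 * alpha ** 2 + 4 * alpha + 2)
          / (2 * (2 * alpha + 1) ** 2.5))
    return fit + penalty

# Model selection: compute rp_nh_mlrm for each of the four candidate design matrices
# (each fitted by its own MRPE) and keep the model with the smallest criterion value.
```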
## 4 The restricted model
Let us consider a particular case of the model selection problem. In some situations it is of interest to compare a full model, based on \(\boldsymbol{\theta}\in\boldsymbol{\Theta}\subset\mathbb{R}^{p}\) with \(p\) parameters, with other restricted models in which the parameter additionally has to satisfy linear constraints of the form
\[\left\{\boldsymbol{\theta}\in\boldsymbol{\Theta}/\ \boldsymbol{m}(\boldsymbol{ \theta})=\mathbf{0}_{r}\right\}, \tag{31}\]
where \(\mathbf{0}_{r}\) denotes the null vector of dimension \(r\) with \(r<p\) and \(\boldsymbol{m}:\mathbb{R}^{p}\rightarrow\mathbb{R}^{r}\) is a vector-valued function such that the \(p\times r\) matrix
\[\mathbf{M}\left(\boldsymbol{\theta}\right)=\frac{\partial\boldsymbol{m}^{T}( \boldsymbol{\theta})}{\partial\boldsymbol{\theta}} \tag{32}\]
exists and is continuous in \(\boldsymbol{\theta}\), and \(\text{rank}(\mathbf{M}\left(\boldsymbol{\theta}\right))=r,\forall\boldsymbol{ \theta}\in\boldsymbol{\Theta}.\) Related to the divergence-based restricted estimation, in [4] the restricted minimum density power divergence estimator was defined. Later, in [8] the restricted MRPE for general populations was given.
Given a candidate model, we have already established that the best fitting parameter for this model based on the RP is defined by
\[\boldsymbol{\theta}_{\boldsymbol{g},\alpha}=\arg\min_{\boldsymbol{\theta}\in \boldsymbol{\Theta}\subset\mathbb{R}^{p}}H_{\alpha}(\boldsymbol{\theta}),\]
where \(H_{\alpha}(\boldsymbol{\theta})\) was defined in Eq. (8). On the other hand, applying the same criterion for the restricted model, we obtain that the best-fitting parameter for the restricted model is given by
\[\boldsymbol{\theta}_{\boldsymbol{g},\alpha}^{R}=\arg\min_{\boldsymbol{\theta} \in\boldsymbol{\Theta}\ /\ \boldsymbol{m}(\boldsymbol{\theta})=\mathbf{0}_{r}}H_{\alpha}( \boldsymbol{\theta}).\]
Following arguments similar to those in Section 2, we define the restricted MRPE as follows.
**Definition 8**: _Given \(Y_{1},...,Y_{n}\) i.n.i.d.o., the_ **restricted MRPE** _(RMRPE), \(\widetilde{\mathbf{\theta}}_{\alpha},\) is given by_
\[\widetilde{\mathbf{\theta}}_{\alpha}=\arg\min_{\mathbf{\theta}\in\Theta/\mathbf{m}(\mathbf{ \theta})=\mathbf{0}_{r}}H_{n,\alpha}(\mathbf{\theta}), \tag{33}\]
_with \(H_{n,\alpha}(\mathbf{\theta})\) defined in (13) for \(\alpha>0\) and in (14) for \(\alpha=0.\)_
Note that
\[H_{n,\alpha}(\widehat{\mathbf{\theta}}_{\alpha})\leq H_{n,\alpha}(\widetilde{\bm {\theta}}_{\alpha}).\]
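Numerically, the constrained minimization in (33) can be carried out with any equality-constrained optimizer. The sketch below is only an illustration of this and is not part of the paper: it relies on `scipy.optimize.minimize`, the objective \(H_{n,\alpha}\) and the restriction \(\boldsymbol{m}\) are passed in as user-supplied callables, and the example restriction shown corresponds to setting the last \(r\) regression coefficients of an MLRM to zero.

```python
# Illustrative sketch: computing the restricted MRPE of Definition 8 numerically.
import numpy as np
from scipy.optimize import minimize

def restricted_mrpe(H_n_alpha, m, theta0):
    """Minimize H_{n,alpha}(theta) subject to m(theta) = 0_r, cf. Eq. (33).

    H_n_alpha : callable theta -> float, the empirical objective (13)/(14)
    m         : callable theta -> (r,) array, the restriction of Eq. (31)
    theta0    : feasible (or nearly feasible) starting point
    """
    constraints = [{"type": "eq", "fun": m}]       # m(theta) = 0_r
    res = minimize(H_n_alpha, theta0, method="SLSQP", constraints=constraints)
    if not res.success:
        raise RuntimeError(res.message)
    return res.x                                    # the RMRPE

# Example restriction for the MLRM of Section 4: beta_{p-r+1} = ... = beta_p = 0,
# with theta = (beta_0, ..., beta_p, sigma), i.e. the r coefficients before sigma.
def m_last_r_betas(theta, r=2):
    return np.asarray(theta[-1 - r:-1])
```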
The following theorem presents a representation of the RMRPE.
**Theorem 9**: _Assume conditions_ **C1**_-_**C8** _and suppose that \(\mathbf{\theta}_{\mathbf{g},\alpha}\) satisfies the conditions of the restricted model. Then,_
\[n^{1/2}(\widetilde{\mathbf{\theta}}_{\alpha}-\mathbf{\theta}_{\mathbf{g},\alpha})=\mathbf{P} ^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})n^{1/2}\left(\frac{\partial H_{n,\alpha}(\bm {\theta})}{\partial\mathbf{\theta}}\right)_{\mathbf{\theta}=\mathbf{\theta}_{\mathbf{g}, \alpha}}+o_{p}(1),\]
_being_
\[\mathbf{P}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})=\mathbf{Q}_{\alpha}(\mathbf{\theta}_{\mathbf{g}, \alpha})\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta }_{\mathbf{g},\alpha}\right)^{-1}-\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha} \right)^{-1}, \tag{34}\]
_with_
\[\mathbf{Q}_{\alpha}(\mathbf{\theta}_{\mathbf{g},\alpha})=\mathbf{\Psi}_{n}\left(\mathbf{\theta} _{\mathbf{g},\alpha}\right)^{-1}\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})\left[\mathbf{M}( \mathbf{\theta}_{\mathbf{g},\alpha})^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha }\right)^{-1}\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})\right]^{-1}. \tag{35}\]
**Proof.** The RMRPE estimator of \(\mathbf{\theta},\)\(\widetilde{\mathbf{\theta}}_{\alpha},\) must satisfy
\[\left\{\begin{array}{c}\left(\frac{\partial H_{n,\alpha}(\mathbf{\theta})}{ \partial\mathbf{\theta}}\right)_{\mathbf{\theta}=\widetilde{\mathbf{\theta}}_{\alpha}}+ \mathbf{M}(\widetilde{\mathbf{\theta}}_{\alpha})\mathbf{\lambda}_{n}=\mathbf{0}_{p},\\ \mathbf{m}(\widetilde{\mathbf{\theta}}_{\alpha})=\mathbf{0}_{r},\end{array}\right\} \Leftrightarrow\left\{\begin{array}{c}\left(\frac{\partial H_{n,\alpha}(\mathbf{ \theta})}{\partial\mathbf{\theta}}\right)_{\mathbf{\theta}=\widetilde{\mathbf{\theta}}_{ \alpha}}=-\mathbf{M}(\widetilde{\mathbf{\theta}}_{\alpha})\mathbf{\lambda}_{n}\\ \mathbf{m}(\widetilde{\mathbf{\theta}}_{\alpha})=\mathbf{0}_{r}\end{array}\right., \tag{36}\]
where \(\mathbf{\lambda}_{n}\) is a vector of Lagrange multipliers. Now, applying Eq. (18), we can write \(\widetilde{\mathbf{\theta}}_{\alpha}=\mathbf{\theta}_{\mathbf{g},\alpha}+\mathbf{t}n^{-1/2},\) where \(||\mathbf{t}||<c,\) for some \(0<c<\infty\). Applying a Taylor expansion, we have
\[\left(\frac{\partial H_{n,\alpha}(\mathbf{\theta})}{\partial\mathbf{ \theta}}\right)_{\mathbf{\theta}=\widetilde{\mathbf{\theta}}_{\alpha}}= \left(\frac{\partial H_{n,\alpha}(\mathbf{\theta})}{\partial\mathbf{ \theta}}\right)_{\mathbf{\theta}=\mathbf{\theta}_{\mathbf{g},\alpha}}+\left(\frac{\partial ^{2}H_{n,\alpha}(\mathbf{\theta})}{\partial\mathbf{\theta}\partial\mathbf{\theta}^{T}} \right)_{\mathbf{\theta}=\mathbf{\theta}_{\mathbf{g},\alpha}}(\widetilde{\mathbf{\theta}}_{ \alpha}-\mathbf{\theta}_{\mathbf{g},\alpha})\] \[+o(||\widetilde{\mathbf{\theta}}_{\alpha}-\mathbf{\theta}_{\mathbf{g}, \alpha}||^{2}),\]
and hence
\[n^{1/2}\left(\frac{\partial H_{n,\alpha}(\mathbf{\theta})}{\partial \mathbf{\theta}}\right)_{\mathbf{\theta}=\widetilde{\mathbf{\theta}}_{\alpha}}= n^{1/2}\left(\frac{\partial H_{n,\alpha}(\mathbf{\theta})}{\partial\mathbf{ \theta}}\right)_{\mathbf{\theta}=\mathbf{\theta}_{\mathbf{g},\alpha}}\] \[+\left(\frac{\partial^{2}H_{n,\alpha}(\mathbf{\theta})}{\partial\mathbf{ \theta}\partial\mathbf{\theta}^{T}}\right)_{\mathbf{\theta}=\mathbf{\theta}_{\mathbf{g}, \alpha}}n^{1/2}(\widetilde{\mathbf{\theta}}_{\alpha}-\mathbf{\theta}_{\mathbf{g},\alpha}) +o(n^{1/2}||\widetilde{\mathbf{\theta}}_{\alpha}-\mathbf{\theta}_{\mathbf{g},\alpha}||^{2 }).\]
However,
\[o(n^{1/2}||\widetilde{\boldsymbol{\theta}}_{\alpha}-\boldsymbol{\theta}_{\boldsymbol {g},\alpha}||^{2})=o(n^{1/2}||\boldsymbol{t}||^{2}/n)=o(n^{-1/2}||\boldsymbol {t}||^{2})=o(O_{p}(1))=o_{p}(1).\]
Now,
\[\left(\frac{\partial^{2}H_{n,\alpha}(\boldsymbol{\theta})}{ \partial\theta_{j}\partial\theta_{k}}\right)_{\boldsymbol{\theta}=\boldsymbol {\theta}_{\boldsymbol{g},\alpha}}=\frac{1}{n}\sum\limits_{i=1}^{n}\left(\frac {\partial^{2}\hat{V}_{i}(Y_{i};\boldsymbol{\theta})}{\partial\theta_{j} \partial\theta_{k}}\right)_{\boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol {g},\alpha}}\] \[\stackrel{{ P}}{{\longrightarrow}}\frac{1}{n}\sum \limits_{i=1}^{n}E_{Y_{i}}\left[\left(\frac{\partial^{2}\hat{V}_{i}(Y; \boldsymbol{\theta})}{\partial\theta_{j}\partial\theta_{k}}\right)_{ \boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}\right]= \left(\boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha }\right)\right)_{jk}.\]
Therefore,
\[n^{1/2}\left(\frac{\partial H_{n,\alpha}(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\right)_{\boldsymbol{\theta}=\widetilde{\boldsymbol{ \theta}}_{\alpha}}=n^{1/2}\left(\frac{\partial H_{n,\alpha}(\boldsymbol{ \theta})}{\partial\boldsymbol{\theta}}\right)_{\boldsymbol{\theta}= \boldsymbol{\theta}_{\boldsymbol{g},\alpha}}+\boldsymbol{\Psi}_{n}\left( \boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)n^{1/2}(\widetilde{ \boldsymbol{\theta}}_{\alpha}-\boldsymbol{\theta}_{\boldsymbol{g},\alpha})+o _{p}(1). \tag{37}\]
As the RMRPE \(\widetilde{\boldsymbol{\theta}}_{\alpha}\) must satisfy the conditions in (36), and in view of (37) we have
\[n^{1/2}\left(\frac{\partial H_{n,\alpha}(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\right)_{\boldsymbol{\theta}=\boldsymbol{\theta}_{ \boldsymbol{g},\alpha}}=-\boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{ \boldsymbol{g},\alpha}\right)n^{1/2}(\widetilde{\boldsymbol{\theta}}_{\alpha}- \boldsymbol{\theta}_{\boldsymbol{g},\alpha})-\boldsymbol{M}(\widetilde{ \boldsymbol{\theta}}_{\alpha})n^{1/2}\boldsymbol{\lambda}_{n}+o_{p}(1).\]
And applying the continuity of \(\boldsymbol{M}\), this can be written as
\[-\boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha} \right)n^{1/2}(\widetilde{\boldsymbol{\theta}}_{\alpha}-\boldsymbol{\theta}_ {\boldsymbol{g},\alpha})-\boldsymbol{M}(\boldsymbol{\theta}_{\boldsymbol{g}, \alpha})n^{1/2}\boldsymbol{\lambda}_{n}=n^{1/2}\left(\frac{\partial H_{n, \alpha}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\right)_{ \boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}+o_{p}(1). \tag{38}\]
On the other hand, applying Taylor to \(\boldsymbol{m}\), we obtain
\[n^{1/2}\boldsymbol{m}(\widetilde{\boldsymbol{\theta}}_{\alpha})=n^{1/2} \boldsymbol{m}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})+\boldsymbol{M}( \boldsymbol{\theta}_{\boldsymbol{g},\alpha})^{T}n^{1/2}(\widetilde{ \boldsymbol{\theta}}_{\alpha}-\boldsymbol{\theta}_{\boldsymbol{g},\alpha})+o_{p} (1). \tag{39}\]
From (39) and applying that \(\boldsymbol{m}(\widetilde{\boldsymbol{\theta}}_{\alpha})=\boldsymbol{0}_{r}, \boldsymbol{m}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})=\boldsymbol{0}_{r}\), it follows that
\[\boldsymbol{M}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})^{T}n^{1/2}( \widetilde{\boldsymbol{\theta}}_{\alpha}-\boldsymbol{\theta}_{\boldsymbol{g },\alpha})+o_{p}(1)=\boldsymbol{0}_{r}. \tag{40}\]
Now, we can express equations (38) and (40) in matrix form as
\[\left(\begin{array}{cc}-\boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{ \boldsymbol{g},\alpha}\right)&-\boldsymbol{M}(\boldsymbol{\theta}_{ \boldsymbol{g},\alpha})\\ \boldsymbol{M}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})^{T}&\boldsymbol{0 }_{r\times r}\end{array}\right)\left(\begin{array}{c}n^{1/2}(\widetilde{ \boldsymbol{\theta}}_{\alpha}-\boldsymbol{\theta}_{\boldsymbol{g},\alpha})\\ n^{1/2}\boldsymbol{\lambda}_{n}\end{array}\right)=\left(\begin{array}{c}n^{1 /2}\left(\frac{\partial H_{n}(\boldsymbol{\theta})}{\partial\boldsymbol{ \theta}}\right)_{\boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g}, \alpha}}\\ \boldsymbol{0}_{r}\end{array}\right)+o_{p}(1).\]
Therefore,
\[\left(\begin{array}{cc}n^{1/2}(\widetilde{\mathbf{\theta}}_{\alpha}-\mathbf{\theta}_{ \mathbf{g},\alpha})\\ n^{1/2}\mathbf{\lambda}_{n}\end{array}\right)=\left(\begin{array}{cc}-\mathbf{\Psi}_{ n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)&-\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})\\ \mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})^{T}&\mathbf{0}_{r\times r}\end{array}\right)^ {-1}\left(\begin{array}{cc}n^{1/2}\left(\frac{\partial H_{n}(\mathbf{\theta})}{ \partial\mathbf{\theta}}\right)_{\mathbf{\theta}=\mathbf{\theta}_{\mathbf{g},\alpha}}\\ \mathbf{0}_{r}\end{array}\right)+o_{p}(1).\]
But
\[\left(\begin{array}{cc}-\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha} \right)&-\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})\\ \mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})^{T}&\mathbf{0}\end{array}\right)^{-1}=\left( \begin{array}{cc}\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\mathbf{\alpha}})&-\bm {Q}_{\alpha}(\mathbf{\theta}_{\mathbf{g},\alpha})\\ -\mathbf{Q}_{\alpha}(\mathbf{\theta}_{\mathbf{g},\alpha})^{T}&\mathbf{R}_{\alpha}(\mathbf{\theta} _{\mathbf{g},\alpha})\end{array}\right)\!,\]
where \(\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})\) and \(\mathbf{Q}_{\alpha}(\mathbf{\theta}_{\mathbf{g},\alpha})\) are given in (34) and (35), respectively. The matrix \(\mathbf{R}_{\alpha}(\mathbf{\theta}_{\mathbf{g},\alpha})\) is the matrix needed to make the right hand side of the above equation equal to the indicated inverse. Then,
\[n^{1/2}(\widetilde{\mathbf{\theta}}_{\alpha}-\mathbf{\theta}_{\mathbf{g},\alpha})=\mathbf{P} ^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})n^{1/2}\left(\frac{\partial H_{n}(\mathbf{\theta} )}{\partial\mathbf{\theta}}\right)_{\mathbf{\theta}=\mathbf{\theta}_{\mathbf{g},\alpha}}+o_{p} (1) \tag{41}\]
and the result holds.
In the following lemma we establish a property about matrix \(\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})\) that will be required for the next theorem.
**Lemma 10**: _Given \(\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})\) and \(\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)\), it follows_
\[\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{\Psi}_{n}\left(\mathbf{\theta} _{\mathbf{g},\alpha}\right)\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})=-\bm {P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha}).\]
**Proof.** Applying the definitions and denoting
\[\mathbf{A}^{-1}(\mathbf{\theta}_{\mathbf{g},\alpha})=\left[\mathbf{M}(\mathbf{\theta}_{\mathbf{g}, \alpha})^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}\mathbf{M}( \mathbf{\theta}_{\mathbf{g},\alpha})\right]^{-1},\]
we obtain
\[\begin{array}{rl}\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})= & \left[\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{A}^{-1}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}-\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}\right]\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})\\ = & \left[\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{A}^{-1}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})^{T}-\mathbf{I}\right]\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})\\ = & \mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{A}^{-1}(\mathbf{\theta}_{\mathbf{g},\alpha})\left[\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})\right]\mathbf{A}^{-1}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}\\ & -\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{A}^{-1}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}-\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})\\ = & -\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha}),\end{array}\]
where the third equality expands \(\mathbf{P}_{\alpha}^{*}(\mathbf{\theta}_{\mathbf{g},\alpha})\), and the last one follows because \(\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha})=\mathbf{A}(\mathbf{\theta}_{\mathbf{g},\alpha})\), so that the first two terms cancel.
Hence, the result holds.
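As a quick numerical sanity check of Lemma 10 (purely illustrative, with randomly generated matrices rather than quantities from the paper), one can take any symmetric positive definite \(\mathbf{\Psi}_{n}\) and any full-column-rank \(\mathbf{M}\), build \(\mathbf{Q}_{\alpha}\) and \(\mathbf{P}_{\alpha}^{*}\) from (35) and (34), and confirm the identity:

```python
# Illustrative numerical check of Lemma 10 with a random SPD Psi and full-rank M.
import numpy as np

rng = np.random.default_rng(0)
p, r = 6, 2
A = rng.normal(size=(p, p))
Psi = A @ A.T + p * np.eye(p)                 # symmetric positive definite "Psi_n"
M = rng.normal(size=(p, r))                   # p x r restriction Jacobian of rank r

Psi_inv = np.linalg.inv(Psi)
Q = Psi_inv @ M @ np.linalg.inv(M.T @ Psi_inv @ M)   # Eq. (35)
P_star = Q @ M.T @ Psi_inv - Psi_inv                 # Eq. (34)

# Lemma 10: P* Psi P* = -P*
assert np.allclose(P_star @ Psi @ P_star, -P_star)
print("Lemma 10 identity verified numerically.")
```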
Suppose now that we have chosen a model as the best fitting model and we wonder if this model overfits the data and a restricted model is more accurate. Then, we can pose this problem as a model selection problem with two models,
the full one and the restricted one, and apply the results of the previous section. Hence, it suffices to compute \(RP_{NH}(M_{1}^{(s)},...,M_{n}^{(s)},\boldsymbol{\theta})\) for both models and select the one attaining the minimum. Assuming the restricted model is correct, in the following theorem we establish the asymptotic distribution of
\[2n\left[RP_{NH}\left(M_{1}^{(s)},...,M_{n}^{(s)},\widehat{\boldsymbol{\theta}} _{\alpha}\right)-RP_{NH}\left(M_{1}^{(s)},...,M_{n}^{(s)},\widetilde{ \boldsymbol{\theta}}_{\alpha}\right)\right],\]
where \(RP_{NH}\left(M_{1}^{(s)},...,M_{n}^{(s)},\widehat{\boldsymbol{\theta}}_{\alpha}\right)\) was given in (24) and
\[RP_{NH}\left(M_{1}^{(s)},...,M_{n}^{(s)},\widetilde{\boldsymbol{\theta}}_{\alpha}\right)=H_{n,\alpha}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}\right)+\frac{1}{n}trace\left(\boldsymbol{\Omega}_{n}^{R}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}\right)\boldsymbol{\Psi}_{n}^{R}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}\right)^{-1}\right),\]
where \(\boldsymbol{\Psi}_{n}^{R}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}\right)\) and \(\boldsymbol{\Omega}_{n}^{R}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}\right)\) are the matrices defined in (16) and (17), but computed for the restricted model.
Note that the probability of selecting the restricted model is
\[\Pr\left(RP_{NH}\left(M_{1}^{(k)},...,M_{n}^{(k)},\widehat{\boldsymbol{ \theta}}_{\alpha}\right)-RP_{NH}\left(M_{1}^{(k)},...,M_{n}^{(k)},\widetilde{ \boldsymbol{\theta}}_{\alpha}\right)>0\right).\]
**Theorem 11**: _Assume conditions_ **C1-C8** _hold and suppose that the fitting parameter, \(\boldsymbol{\theta}_{\boldsymbol{g},\alpha},\) belongs to the restricted model. Then, the asymptotic distribution of_
\[2n\left(RP_{NH}\left(M_{1}^{(s)},...,M_{n}^{(s)},\widehat{\boldsymbol{\theta}}_{\alpha}\right)-RP_{NH}\left(M_{1}^{(s)},...,M_{n}^{(s)},\widetilde{\boldsymbol{\theta}}_{\alpha}\right)\right)\]
_coincides with the distribution of the random variable_
\[\sum_{j=1}^{r}\lambda_{j}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})Z_{j}^{2}+2trace(\boldsymbol{\Omega}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\boldsymbol{\Psi}_{n}^{-1}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right))-2trace(\boldsymbol{\Omega}_{n}^{R}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)(\boldsymbol{\Psi}_{n}^{R})^{-1}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)),\]
_where \(Z_{1},\ldots,Z_{r}\) are independent standard normal variables, \(\lambda_{1}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}),\ldots,\lambda_{r}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})\) are the nonzero eigenvalues of \(-\boldsymbol{Q}_{\alpha}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})\boldsymbol{M}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})^{T}\boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)^{-1}\boldsymbol{\Omega}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\) and_
\[r=rank\left(\boldsymbol{Q}_{\alpha}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})\boldsymbol{M}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})^{T}\boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)^{-1}\boldsymbol{\Omega}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\right).\]
**Proof.** Let us denote
\[L=2n\left[RP_{NH}\left(M_{1}^{(s)},...,M_{n}^{(s)},\widehat{\boldsymbol{ \theta}}_{\alpha}\right)-RP_{NH}\left(M_{1}^{(s)},...,M_{n}^{(s)},\widetilde {\boldsymbol{\theta}}_{\alpha}\right)\right].\]
Then,
\[L=2n\left[H_{n,\alpha}\left(\widehat{\boldsymbol{\theta}}_{\alpha}\right)-H_ {n,\alpha}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}\right)\right]+2trace \left[\boldsymbol{\Omega}_{n}\left(\widehat{\boldsymbol{\theta}}_{\alpha} \right)\boldsymbol{\Psi}_{n}\left(\widehat{\boldsymbol{\theta}}_{\alpha} \right)^{-1}\right]-2trace\left[\boldsymbol{\Omega}_{n}^{R}\left(\widetilde {\boldsymbol{\theta}}_{\alpha}\right)\boldsymbol{\Psi}_{n}^{R}\left(\widetilde {\boldsymbol{\theta}}_{\alpha}\right)^{-1}\right].\]
First, note that
\[H_{n}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}\right) = H_{n,\alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha} \right)+\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{ \partial\boldsymbol{\theta}}\right)_{\boldsymbol{\theta}=\boldsymbol{\theta}_ {\boldsymbol{g},\alpha}}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}- \boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\] \[+\frac{1}{2}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}- \boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)^{T}\left(\frac{\partial^{2 }H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta} \partial\boldsymbol{\theta}^{T}}\right)_{\boldsymbol{\theta}=\boldsymbol{ \theta}_{\boldsymbol{g},\alpha}}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}- \boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)+o(||\widetilde{\boldsymbol {\theta}}_{\alpha}-\boldsymbol{\theta}_{\boldsymbol{g},\alpha}||^{2}).\]
Hence,
\[2n\left[H_{n}\left(\widetilde{\boldsymbol{\theta}}_{\alpha} \right)-H_{n,\alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\right] = 2\sqrt{n}\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{ \theta}\right)}{\partial\boldsymbol{\theta}}\right)_{\boldsymbol{\theta}= \boldsymbol{\theta}_{\boldsymbol{g},\alpha}}\sqrt{n}\left(\widetilde{ \boldsymbol{\theta}}_{\alpha}-\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\] \[+\sqrt{n}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}- \boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)^{T}\left(\frac{\partial^{2 }H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta} \partial\boldsymbol{\theta}^{T}}\right)_{\boldsymbol{\theta}=\boldsymbol{ \theta}_{\boldsymbol{g},\alpha}}\sqrt{n}\left(\widetilde{\boldsymbol{\theta}}_ {\alpha}-\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)+o_{p}(1).\]
Now, taking into account that
\[\sqrt{n}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}- \boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)=\boldsymbol{P}_{\alpha}^{* }(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})\sqrt{n}\left(\frac{\partial H _{n,\alpha}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}} \right)_{\boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}+o_{p} (1),\]
and
\[\left(\frac{\partial^{2}H_{n,\alpha}\left(\boldsymbol{\theta} \right)}{\partial\boldsymbol{\theta}\partial\boldsymbol{\theta}^{T}}\right)_ {\boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}\rightarrow \boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right),\]
by Eq. (26), we conclude that
\[2n\left[H_{n}\left(\widetilde{\boldsymbol{\theta}}_{\alpha} \right)-H_{n,\alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\right]\] \[= 2\sqrt{n}\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{ \theta}\right)}{\partial\boldsymbol{\theta}}\right)_{\boldsymbol{\theta}= \boldsymbol{\theta}_{\boldsymbol{g},\alpha}}^{T}\boldsymbol{P}_{\alpha}^{*}( \boldsymbol{\theta}_{\boldsymbol{g},\alpha})\sqrt{n}\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)_ {\boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}\] \[+\sqrt{n}\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{ \theta}\right)}{\partial\boldsymbol{\theta}}\right)_{\boldsymbol{\theta}= \boldsymbol{\theta}_{\boldsymbol{g},\alpha}}^{T}\boldsymbol{P}_{\alpha}^{*}( \boldsymbol{\theta}_{\boldsymbol{g},\alpha})\boldsymbol{\Psi}_{n}\left( \boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\boldsymbol{P}_{\alpha}^{*}( \boldsymbol{\theta}_{\boldsymbol{g},\alpha})\sqrt{n}\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)_ {\boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}+o_{p}(1).\]
Now, applying the previous lemma, we know that
\[\boldsymbol{P}_{\alpha}^{*}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}) \boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right) \boldsymbol{P}_{\alpha}^{*}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})=- \boldsymbol{P}_{\alpha}^{*}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}),\]
and thus,
\[2n\left[H_{n}\left(\widetilde{\boldsymbol{\theta}}_{\alpha}\right)-H_{n, \alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\right]= \sqrt{n}\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{ \partial\boldsymbol{\theta}}\right)_{\boldsymbol{\theta}=\boldsymbol{\theta}_{ \boldsymbol{g},\alpha}}^{T}\boldsymbol{P}_{\alpha}^{*}(\boldsymbol{\theta}_{ \boldsymbol{g},\alpha})\sqrt{n}\left(\frac{\partial H_{n,\alpha}\left( \boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)_{\boldsymbol{ \theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}+o_{p}(1).\]
On the other hand,
\[H_{n,\alpha}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right) = H_{n}\left(\widehat{\boldsymbol{\theta}}_{\alpha}\right)+\left( \frac{\partial H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{\partial \boldsymbol{\theta}}\right)_{\boldsymbol{\theta}=\widehat{\boldsymbol{ \theta}}_{\alpha}}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}-\widehat{ \boldsymbol{\theta}}_{\alpha}\right)\] \[+\frac{1}{2}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}- \widehat{\boldsymbol{\theta}}_{\alpha}\right)^{T}\left(\frac{\partial^{2}H_{n, \alpha}\left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}\partial \boldsymbol{\theta}^{T}}\right)_{\boldsymbol{\theta}=\widehat{\boldsymbol{ \theta}}_{\alpha}}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}-\widehat{ \boldsymbol{\theta}}_{\alpha}\right)+o(||\boldsymbol{\theta}_{\boldsymbol{g}, \alpha}-\widehat{\boldsymbol{\theta}}_{\alpha}||^{2}).\]
Now, taking into account that
\[\left(\frac{\partial^{2}H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{ \partial\boldsymbol{\theta}\partial\boldsymbol{\theta}^{T}}\right)_{ \boldsymbol{\theta}=\widehat{\boldsymbol{\theta}}_{\alpha}}\longrightarrow \left(\frac{\partial^{2}H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{ \partial\boldsymbol{\theta}\partial\boldsymbol{\theta}^{T}}\right)_{ \boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}} \longrightarrow\boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right),\]
and
\[\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{\partial \boldsymbol{\theta}}\right)_{\boldsymbol{\theta}=\widehat{\boldsymbol{ \theta}}_{\alpha}}=0,\]
we conclude that
\[2n\left[H_{n}\left(\widehat{\boldsymbol{\theta}}_{\alpha}\right)-H_{n,\alpha} \left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\right]=-\sqrt{n} \left(\widehat{\boldsymbol{\theta}}_{\alpha}-\boldsymbol{\theta}_{\boldsymbol {g},\alpha}\right)^{T}\boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{ \boldsymbol{g},\alpha}\right)\sqrt{n}\left(\widehat{\boldsymbol{\theta}}_{ \alpha}-\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)+o_{p}(1).\]
Applying \(\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{\partial \boldsymbol{\theta}}\right)_{\boldsymbol{\theta}=\widehat{\boldsymbol{ \theta}}_{\alpha}}=\boldsymbol{0}\), we have by Taylor
\[\boldsymbol{0}=n^{1/2}\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{ \theta}\right)}{\partial\boldsymbol{\theta}^{T}}\right)_{\boldsymbol{ \theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}+\boldsymbol{\Psi}_{n}( \boldsymbol{\theta}_{\boldsymbol{g},\alpha})n^{1/2}\left(\widehat{\boldsymbol {\theta}}_{\alpha}-\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)+o_{p}( 1),\]
so that
\[n^{1/2}\left(\widehat{\boldsymbol{\theta}}_{\alpha}-\boldsymbol{\theta}_{ \boldsymbol{g},\alpha}\right)=-n^{1/2}\boldsymbol{\Psi}_{n}(\boldsymbol{ \theta}_{\boldsymbol{g},\alpha})^{-1}\left(\frac{\partial H_{n,\alpha}\left( \boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}^{T}}\right)_{ \boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}+o_{p}(1).\]
Hence,
\[2n\left[H_{n}\left(\widehat{\boldsymbol{\theta}}_{\alpha}\right)-H_{n,\alpha }\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)\right]=-\sqrt{n} \left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{\partial \boldsymbol{\theta}}\right)_{\boldsymbol{\theta}=\boldsymbol{\theta}_{ \boldsymbol{g},\alpha}}^{T}\boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{ \boldsymbol{g},\alpha}\right)^{-1}\sqrt{n}\left(\frac{\partial H_{n,\alpha} \left(\boldsymbol{\theta}\right)}{\partial\boldsymbol{\theta}}\right)_{ \boldsymbol{\theta}=\boldsymbol{\theta}_{\boldsymbol{g},\alpha}}+o_{p}(1).\]
But as \(\boldsymbol{P}^{*}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})=\boldsymbol{Q }_{\alpha}(\boldsymbol{\theta}_{\boldsymbol{g},\alpha})\boldsymbol{M}( \boldsymbol{\theta}_{\boldsymbol{g},\alpha})^{T}\boldsymbol{\Psi}_{n}\left( \boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)^{-1}-\boldsymbol{\Psi}_{n} \left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha}\right)^{-1},\) we obtain
\[2n\left[H_{n,\alpha}\left(\widehat{\boldsymbol{\theta}}_{ \alpha}\right)-H_{n,\alpha}\left(\widetilde{\boldsymbol{\theta}}_{\alpha} \right)\right]= -\sqrt{n}\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{ \theta}\right)}{\partial\boldsymbol{\theta}^{T}}\right)_{\boldsymbol{\theta}= \boldsymbol{\theta}_{\boldsymbol{g},\alpha}}^{T}\boldsymbol{Q}_{\alpha}( \boldsymbol{\theta}_{\boldsymbol{g},\alpha})\boldsymbol{M}(\boldsymbol{\theta}_{ \boldsymbol{g},\alpha})^{T}\boldsymbol{\Psi}_{n}\left(\boldsymbol{\theta}_{ \boldsymbol{g},\alpha}\right)^{-1}\] \[\times\sqrt{n}\left(\frac{\partial H_{n,\alpha}(\boldsymbol{ \theta})}{\partial\boldsymbol{\theta}}\right)_{\boldsymbol{\theta}=\boldsymbol{ \theta}_{\boldsymbol{g},\alpha}}+o_{p}(1).\]
Finally we have,
\[\sqrt{n}\left(\frac{\partial H_{n,\alpha}\left(\boldsymbol{\theta}\right)}{ \partial\boldsymbol{\theta}^{T}}\right)_{\boldsymbol{\theta}=\boldsymbol{ \theta}_{\boldsymbol{g},\alpha}}\overset{L}{\longrightarrow}N(\boldsymbol{0}_{k}, \boldsymbol{\Omega}_{n}\left(\boldsymbol{\theta}_{\boldsymbol{g},\alpha} \right)),\]
and thus the asymptotic distribution of \(2n\left[H_{n,\alpha}\left(\widehat{\mathbf{\theta}}_{\alpha}\right)-H_{n,\alpha} \left(\widetilde{\mathbf{\theta}}_{\alpha}\right)\right]\) coincides with the distribution of the random variable
\[\underset{i=1}{\overset{r}{\sum}}\lambda_{i}(\mathbf{\theta}_{\mathbf{g},\alpha})Z_{i}^ {2},\]
where \(Z_{1},\ldots,Z_{r}\) are independent standard normal variables, \(\lambda_{1}(\mathbf{\theta}_{\mathbf{g},\alpha}),\ldots,\lambda_{r}(\mathbf{\theta}_{\mathbf{g },\alpha})\) are the nonzero eigenvalues of \(-\mathbf{Q}_{\alpha}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{M}(\mathbf{\theta}_{\mathbf{g},\alpha })^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)^{-1}\mathbf{\Omega}_{ n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)\) and
\[r=\text{rank}\left(\mathbf{Q}_{\alpha}(\mathbf{\theta}_{\mathbf{g},\alpha})\mathbf{M}(\mathbf{ \theta}_{\mathbf{g},\alpha})^{T}\mathbf{\Psi}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha} \right)^{-1}\mathbf{\Omega}_{n}\left(\mathbf{\theta}_{\mathbf{g},\alpha}\right)\right).\]
For more details see Corollary 2.1 in [10]. This finishes the proof.
The above result provides a way to asymptotically compute the probability of over-fitting, which is of great interest in model selection theory.
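To make the remark concrete, the following sketch approximates the probability of selecting the restricted model by Monte Carlo simulation of the limiting random variable in Theorem 11. It is illustrative only: the eigenvalues and the two trace terms are assumed to have been computed beforehand by the user, and the function name is ours, not from the paper.

```python
# Illustrative Monte Carlo approximation of the limiting probability in Theorem 11.
import numpy as np

def prob_select_restricted(eigvals, trace_full, trace_restricted,
                           n_sim=200_000, seed=1):
    """Approximate Pr( sum_j lambda_j Z_j^2 + 2 trace_full - 2 trace_restricted > 0 ).

    eigvals          : nonzero eigenvalues lambda_j(theta_{g,alpha}) of Theorem 11
    trace_full       : trace(Omega_n Psi_n^{-1}) for the full model
    trace_restricted : trace(Omega_n^R (Psi_n^R)^{-1}) for the restricted model
    """
    rng = np.random.default_rng(seed)
    lam = np.asarray(eigvals, dtype=float)
    Z2 = rng.standard_normal((n_sim, lam.size)) ** 2   # independent Z_j^2 draws
    stat = Z2 @ lam + 2.0 * (trace_full - trace_restricted)
    return float(np.mean(stat > 0.0))
```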
Example: RP-based model selection under the multiple linear regression model with a restricted parameter space.
We shall consider the MLRM as defined in Section 3.1 and we are interested in comparing a full model with a restricted model under the restrictions
\[\beta_{p-r+1}=...=\beta_{p}=0.\]
In this case the model parameter is \(\mathbf{\theta}=\left(\beta_{0},...,\beta_{p},\sigma\right)\) and the function \(\mathbf{m}(\mathbf{\theta})\) defining the restrictions is
\[\mathbf{m}(\mathbf{\theta})=\mathbf{m}\left(\beta_{0},...,\beta_{p},\sigma\right)=(\beta _{p-r+1},...,\beta_{p}).\]
Consequently, its derivative is given by
\[\mathbf{M}(\mathbf{\theta})=\frac{\partial\mathbf{m}(\mathbf{\theta})}{\partial\mathbf{\theta}}= \left(\begin{array}{c}\mathbf{0}_{(p-r+1)\times r}\\ \mathbf{I}_{r\times r}\\ \mathbf{0}_{1\times r}\end{array}\right).\]
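For illustration only (the function name is ours, not from the paper), this matrix is straightforward to build numerically for any \(p\) and \(r\):

```python
# Illustrative construction of the restriction Jacobian M for
# beta_{p-r+1} = ... = beta_p = 0, with theta = (beta_0, ..., beta_p, sigma).
import numpy as np

def restriction_jacobian(p, r):
    """Return the (p+2) x r matrix M displayed above."""
    return np.vstack([
        np.zeros((p - r + 1, r)),   # rows of beta_0, ..., beta_{p-r}
        np.eye(r),                  # rows of the restricted coefficients
        np.zeros((1, r)),           # row of sigma
    ])
```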
Let us express the design matrix \(\mathbb{X}\) as
\[\mathbb{X}=\left(\mathbb{X}_{1},\mathbb{X}_{2}\right),\]
with \(\mathbb{X}_{1}\) an \(n\times(p-r+1)\) matrix and \(\mathbb{X}_{2}\) an \(n\times r\) matrix. It is clear that \(\mathbb{X}_{1}\) is the design matrix of the restricted model, while \(\mathbb{X}_{2}\) contains the columns of the full model whose coefficients do not appear in the restricted model. The matrices \(\mathbf{\Psi}_{n}\left(\mathbf{\beta},\sigma\right)\) and \(\mathbf{\Omega}_{n}\left(\mathbf{\beta},\sigma\right)\) given in Eq. (29) can be rewritten, using the notation \(\mathbb{X}_{1}\) and \(\mathbb{X}_{2}\), as
\[\mathbf{\Psi}_{n}\left(\mathbf{\beta},\sigma\right)=K_{1}\left(\alpha+1\right)^{-\frac{3}{2}}\left[\begin{array}{ccc}\frac{1}{n}\mathbb{X}_{1}^{T}\mathbb{X}_{1}&\frac{1}{n}\mathbb{X}_{1}^{T}\mathbb{X}_{2}&0\\ \frac{1}{n}\mathbb{X}_{2}^{T}\mathbb{X}_{1}&\frac{1}{n}\mathbb{X}_{2}^{T}\mathbb{X}_{2}&0\\ 0&0&\frac{2}{\alpha+1}\end{array}\right],\]
where \(K_{1}\) is as defined in (22), and
\[\boldsymbol{\Omega}_{n}\left(\boldsymbol{\beta},\sigma\right)=K_{1}^{2}\sigma^{2} \frac{1}{\left(2\alpha+1\right)^{3/2}}\left[\begin{array}{ccc}\frac{1}{n} \mathbb{X}_{1}^{T}\mathbb{X}_{1}&\frac{1}{n}\mathbb{X}_{1}^{T}\mathbb{X}_{2}&0 \\ \frac{1}{n}\mathbb{X}_{2}^{T}\mathbb{X}_{1}&\frac{1}{n}\mathbb{X}_{2}^{T} \mathbb{X}_{2}&0\\ 0&0&\frac{\left(3\alpha^{2}+4\alpha+2\right)}{\left(\alpha+1\right)^{2}\left(2 \alpha+1\right)}\end{array}\right].\]
Now, the inverse of the matrix \(\boldsymbol{\Psi}_{n}\left(\boldsymbol{\beta},\sigma\right)\) is given by
\[\boldsymbol{\Psi}_{n}^{-1}\left(\boldsymbol{\beta},\sigma\right)=K_{1}^{-1}\left(\alpha+1\right)^{3/2}\left[\begin{array}{ccc}n\boldsymbol{A}_{11}&n\boldsymbol{A}_{12}&0\\ n\boldsymbol{A}_{21}&n\boldsymbol{A}_{22}&0\\ 0&0&\frac{\alpha+1}{2}\end{array}\right],\]
with
\[\boldsymbol{A}_{11} = \left(\mathbb{X}_{1}^{T}\mathbb{X}_{1}\right)^{-1}+\left( \mathbb{X}_{1}^{T}\mathbb{X}_{1}\right)^{-1}\mathbb{X}_{1}^{T}\mathbb{X}_{2} \boldsymbol{D}^{-1}\mathbb{X}_{2}^{T}\mathbb{X}_{1}\left(\mathbb{X}_{1}^{T} \mathbb{X}_{1}\right)^{-1},\] \[\boldsymbol{A}_{12} = -\left(\mathbb{X}_{1}^{T}\mathbb{X}_{1}\right)^{-1}\mathbb{X}_{1 }^{T}\mathbb{X}_{2}\boldsymbol{D}^{-1},\] \[\boldsymbol{A}_{21} = -\boldsymbol{D}^{-1}\mathbb{X}_{2}^{T}\mathbb{X}_{1}\left( \mathbb{X}_{1}^{T}\mathbb{X}_{1}\right)^{-1},\] \[\boldsymbol{A}_{22} = \boldsymbol{D}^{-1},\]
being
\[\boldsymbol{D}=\mathbb{X}_{2}^{T}\mathbb{X}_{2}-\mathbb{X}_{2}^{T}\mathbb{X}_{ 1}\left(\mathbb{X}_{1}^{T}\mathbb{X}_{1}\right)^{-1}\mathbb{X}_{1}^{T} \mathbb{X}_{2}.\]
Therefore, the product \(\boldsymbol{\Psi}_{n}^{-1}\left(\boldsymbol{\beta},\sigma\right)\boldsymbol{M}(\boldsymbol{\beta},\sigma)\) can be computed as
\[\boldsymbol{\Psi}_{n}^{-1}\left(\boldsymbol{\beta},\sigma\right)\boldsymbol{M}(\boldsymbol{\beta},\sigma) = K_{1}^{-1}\left(\alpha+1\right)^{3/2}\left[\begin{array}{ccc}n\boldsymbol{A}_{11}&n\boldsymbol{A}_{12}&0\\ n\boldsymbol{A}_{21}&n\boldsymbol{A}_{22}&0\\ 0&0&\frac{\alpha+1}{2}\end{array}\right]\left(\begin{array}{c}\boldsymbol{0}_{\left(p-r+1\right)\times r}\\ \boldsymbol{I}_{r\times r}\\ \boldsymbol{0}_{1\times r}\end{array}\right)\] \[= K_{1}^{-1}\left(\alpha+1\right)^{3/2}n\left[\begin{array}{c}-\left(\mathbb{X}_{1}^{T}\mathbb{X}_{1}\right)^{-1}\mathbb{X}_{1}^{T}\mathbb{X}_{2}\boldsymbol{D}^{-1}\\ \boldsymbol{D}^{-1}\\ 0\end{array}\right].\]
On the other hand,
\[\left(\boldsymbol{M}(\boldsymbol{\beta},\sigma)^{T}\boldsymbol{\Psi}_{n}^{-1 }\left(\boldsymbol{\beta},\sigma\right)\boldsymbol{M}(\boldsymbol{\beta}, \sigma)\right)^{-1}=\frac{K_{1}\left(\alpha+1\right)^{-3/2}}{n}\boldsymbol{D},\]
and
\[\boldsymbol{M}(\boldsymbol{\beta},\sigma)^{T}\boldsymbol{\Psi}_{n}^{-1} \left(\boldsymbol{\beta},\sigma\right)\boldsymbol{\Omega}_{n}\left( \boldsymbol{\beta},\sigma\right)=\left(\alpha+1\right)^{3/2}\frac{K_{1} \sigma^{2}}{(2\alpha+1)^{3/2}}\left(\boldsymbol{0},\boldsymbol{I}_{r\times r },\boldsymbol{0}\right),\]
and so, multiplying the above expressions we obtain that
\[\boldsymbol{Q}_{\alpha}(\boldsymbol{\beta},\sigma)=\boldsymbol{\Psi}_{n}^{-1 }\left(\boldsymbol{\beta},\sigma\right)\boldsymbol{M}(\boldsymbol{\beta}, \sigma)\left[\boldsymbol{M}(\boldsymbol{\beta},\sigma)^{T}\boldsymbol{\Psi}_{n }^{-1}(\boldsymbol{\beta},\sigma)\boldsymbol{M}(\boldsymbol{\beta},\sigma) \right]^{-1}=\left[\begin{array}{c}-\left(\mathbb{X}_{1}^{T}\mathbb{X}_{ 1}\right)^{-1}\mathbb{X}_{1}^{T}\mathbb{X}_{2}\\ \boldsymbol{I}_{r\times r}\\ 0\end{array}\right],\]
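The closed form for \(\boldsymbol{Q}_{\alpha}\) above is easy to verify numerically, since the constants \(K_{1}\) and \(\left(\alpha+1\right)^{-3/2}\) cancel inside the product \(\boldsymbol{\Psi}_{n}^{-1}\boldsymbol{M}\left[\boldsymbol{M}^{T}\boldsymbol{\Psi}_{n}^{-1}\boldsymbol{M}\right]^{-1}\). The following sketch (illustrative only; the random design blocks merely stand in for \(\mathbb{X}_{1}\) and \(\mathbb{X}_{2}\)) performs such a check:

```python
# Illustrative numerical check of the closed form of Q_alpha in the MLRM example.
import numpy as np

rng = np.random.default_rng(2)
n, p1, r = 50, 3, 2                          # p1 = p - r + 1 retained coefficients
X1 = rng.normal(size=(n, p1))                # design block of the restricted model
X2 = rng.normal(size=(n, r))                 # columns whose coefficients are set to zero
X = np.hstack([X1, X2])

# Psi_n up to the positive factor K_1 (alpha+1)^{-3/2}, which cancels inside Q_alpha;
# the last diagonal entry (the sigma row) may be any positive number.
Psi = np.block([
    [X.T @ X / n,           np.zeros((p1 + r, 1))],
    [np.zeros((1, p1 + r)), np.array([[2.0]])],
])
M = np.vstack([np.zeros((p1, r)), np.eye(r), np.zeros((1, r))])

Psi_inv = np.linalg.inv(Psi)
Q = Psi_inv @ M @ np.linalg.inv(M.T @ Psi_inv @ M)

# closed form derived in the text
Q_closed = np.vstack([-np.linalg.inv(X1.T @ X1) @ X1.T @ X2,
                      np.eye(r),
                      np.zeros((1, r))])
assert np.allclose(Q, Q_closed)
print("Closed-form expression for Q_alpha confirmed.")
```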
Analogously, for the restricted model, which has \(p-r+1\) regression coefficients,
\[trace\left(\mathbf{\Omega}_{n}^{R}\left(\widetilde{\boldsymbol{\beta}}_{\alpha},\widetilde{\sigma}_{\alpha}\right)\left(\mathbf{\Psi}_{n}^{R}\right)^{-1}\left(\widetilde{\boldsymbol{\beta}}_{\alpha},\widetilde{\sigma}_{\alpha}\right)\right)\rightarrow\sigma_{\boldsymbol{g},\alpha}^{2}K_{1}\frac{(\alpha+1)^{3/2}}{(2\alpha+1)^{3/2}}\left((p-r+1)+\frac{\left(3\alpha^{2}+4\alpha+2\right)}{2\left(2\alpha+1\right)\left(\alpha+1\right)}\right).\]
Therefore,
\[trace\left(\mathbf{\Omega}_{n}\left(\widehat{\boldsymbol{\beta}}_{\alpha}, \widehat{\sigma}_{\alpha}\right)\mathbf{\Psi}_{n}^{-1}\left(\widehat{\boldsymbol {\beta}}_{\alpha},\widehat{\sigma}_{\alpha}\right)\right)-trace\left( \mathbf{\Omega}_{n}^{R}\left(\widetilde{\boldsymbol{\beta}}_{\alpha}, \widetilde{\sigma}_{\alpha}\right)\left(\mathbf{\Psi}_{n}^{R}\right)^{-1} \left(\widetilde{\boldsymbol{\beta}}_{\alpha},\widetilde{\sigma}_{\alpha} \right)\right)\rightarrow\sigma_{\boldsymbol{g},\alpha}^{2}K_{1}\frac{( \alpha+1)^{3/2}}{(2\alpha+1)^{3/2}}r.\]
Moreover, combining the block form of \(\boldsymbol{Q}_{\alpha}(\boldsymbol{\beta},\sigma)\) with the expression for \(\boldsymbol{M}(\boldsymbol{\beta},\sigma)^{T}\boldsymbol{\Psi}_{n}^{-1}\left(\boldsymbol{\beta},\sigma\right)\boldsymbol{\Omega}_{n}\left(\boldsymbol{\beta},\sigma\right)\) obtained above, the nonzero eigenvalues of \(-\boldsymbol{Q}_{\alpha}\boldsymbol{M}^{T}\boldsymbol{\Psi}_{n}^{-1}\boldsymbol{\Omega}_{n}\) are all equal to \(-\left(\alpha+1\right)^{3/2}K_{1}\sigma_{\boldsymbol{g},\alpha}^{2}/(2\alpha+1)^{3/2}\), with multiplicity \(r\). Finally, the asymptotic probability of selecting the restricted model when this model is correct is
\[\Pr\left(2n\left(RP_{NH}(M_{1}^{(s)},...,M_{n}^{(s)},\widehat{\boldsymbol{\theta}}_{\alpha})-RP_{NH}(M_{1}^{(s)},...,M_{n}^{(s)},\widetilde{\boldsymbol{\theta}}_{\alpha})\right)>0\right)\rightarrow\] \[\Pr\left(-\left(\alpha+1\right)^{3/2}\frac{K_{1}\sigma_{\boldsymbol{g},\alpha}^{2}}{(2\alpha+1)^{3/2}}\chi_{r}^{2}+2\left(\alpha+1\right)^{3/2}\frac{K_{1}\sigma_{\boldsymbol{g},\alpha}^{2}}{(2\alpha+1)^{3/2}}r>0\right)\] \[\quad=\Pr\left(\left(\alpha+1\right)^{3/2}\frac{K_{1}\sigma_{\boldsymbol{g},\alpha}^{2}}{(2\alpha+1)^{3/2}}\left(2r-\chi_{r}^{2}\right)>0\right)=\Pr\left(\chi_{r}^{2}<2r\right).\]
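The limiting probability \(\Pr\left(\chi_{r}^{2}<2r\right)\) is easy to evaluate numerically; for instance (illustrative snippet, and the values of \(r\) chosen here are arbitrary):

```python
# Illustrative evaluation of the asymptotic probability Pr(chi^2_r < 2r)
# of selecting the restricted model when it is correct.
from scipy.stats import chi2

for r in (1, 2, 3, 5, 10):
    print(r, chi2.cdf(2 * r, df=r))
```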
## 5 Simulation Study
To evaluate the performance of the \(RP_{NH}\)-criterion introduced in this paper, we consider the situation of a polynomial regression model. We take the model
\[Y_{i}=X_{i}+2X_{i}^{2}-X_{i}^{3}+X_{i}^{4}+\epsilon_{i},i=1,...,n,\]
where \(\epsilon_{i}\sim\mathcal{N}(0,1)\) and the variables \(X_{i}\) are fixed and equally spaced in the interval \([-2,2]\). Next, we take \(n=100\), so that
\[X_{i}=-2+\frac{4}{102}(i+1),i=1,...,100.\]
We consider several theoretical models aiming to fit these data, each given by the degree of the polynomial defining the model. Note that every such polynomial model can be written as an MLRM by taking the powers of \(X_{i}\) as covariates, and thus we can use the formulas developed in the previous sections. In our case, we have considered six different models, varying from constants (degree 0) to polynomials of degree 5. Thus defined, each model is characterized by its degree, denoted by \(p\).
We generate 1000 different data samples \((Y^{s},X^{s}),s=1,...,1000\), and for each sample we select the best fitting model according to several criteria. We have considered \(AIC\), \(BIC\), \(AIC_{c}\) and the \(RP_{NH}\)-criterion for different values of the tuning parameter, namely \(\alpha=0.01,0.02,0.04,0.07,0.1,0.2,0.4,0.5,0.7\) and 1.
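A condensed skeleton of one replication of this experiment is sketched below. It is only meant to convey the structure of the simulation and is not part of the paper: `fit_mrpe` is a placeholder for the user's MRPE routine, `rp_nh_criterion` is the sketch given earlier in this document, and the value of \(K_{1}\) must be supplied according to Eq. (22).

```python
# Illustrative skeleton of one simulation replication: generate data from the
# degree-4 polynomial and record which polynomial degree a criterion selects.
import numpy as np

def design(x, degree):
    # columns 1, x, x^2, ..., x^degree
    return np.vander(x, degree + 1, increasing=True)

def run_replication(rng, n=100, degrees=range(6), alpha=0.5, K1=1.0, fit_mrpe=None):
    x = -2 + 4.0 / 102.0 * (np.arange(1, n + 1) + 1)   # fixed design of the text
    y = x + 2 * x**2 - x**3 + x**4 + rng.standard_normal(n)
    scores = []
    for d in degrees:
        Xd = design(x, d)
        beta_hat, sigma_hat = fit_mrpe(y, Xd, alpha)   # placeholder MRPE routine
        scores.append(rp_nh_criterion(y, Xd, beta_hat, sigma_hat, alpha, K1))
    return int(np.argmin(scores))                      # selected polynomial degree
```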
In Table 1 we report the number of times that each model is selected by each model selection criterion. From these results, it can be seen that \(BIC\) performs best in this uncontaminated setting, with the other model selection criteria showing a similar performance.
As explained throughout the paper, we expect \(RP_{NH}\) to be a robust selection criterion. To check this, we have considered a situation of
\begin{table}
\begin{tabular}{|c|c c c c c c|} \hline \(p\) & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline \(AIC\) & 0 & 0 & 0 & 0 & 836 & 164 \\ \(BIC\) & 0 & 0 & 0 & 0 & 967 & 33 \\ \(AIC_{c}\) & 0 & 0 & 0 & 0 & 864 & 136 \\ \(RPNH_{0.01}\) & 0 & 0 & 0 & 0 & 822 & 178 \\ \(RPNH_{0.02}\) & 0 & 0 & 0 & 0 & 822 & 178 \\ \(RPNH_{0.04}\) & 0 & 0 & 0 & 0 & 826 & 174 \\ \(RPNH_{0.1}\) & 0 & 0 & 0 & 0 & 822 & 178 \\ \(RPNH_{0.2}\) & 0 & 0 & 0 & 0 & 834 & 166 \\ \(RPNH_{0.4}\) & 0 & 0 & 0 & 0 & 842 & 158 \\ \(RPNH_{0.5}\) & 0 & 0 & 0 & 0 & 841 & 159 \\ \(RPNH_{0.7}\) & 0 & 0 & 0 & 0 & 838 & 162 \\ \(RPNH_{1.0}\) & 0 & 0 & 0 & 0 & 837 & 163 \\ \hline \end{tabular}
\end{table}
Table 1: Results for uncontaminated data.
contamination. Thus, we consider the previous model but we introduce contamination in some of the data. More concretely, we define
\[\epsilon_{i}\sim\mathcal{U}(\min_{i}(X_{i}+2X_{i}^{2}-X_{i}^{3}+X_{i}^{4})-r, \max_{i}(X_{i}+2X_{i}^{2}-X_{i}^{3}+X_{i}^{4})+r),\]
for a subset of the observations chosen at random. Here, \(r\) is a constant measuring the strength of the contamination, in the sense that the bigger \(r,\) the stronger the contamination. We have considered three values, \(r=1,5,10.\) Moreover, we have varied the proportion of data affected by contamination, choosing contamination proportions of \(0.05,0.10,0.20\) and \(0.30.\)
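For illustration, the contamination mechanism just described can be generated as follows (illustrative code only; the function name and interface are ours):

```python
# Illustrative generation of contaminated errors: a randomly chosen fraction of
# the observations receives uniform noise on the enlarged range described above.
import numpy as np

def contaminated_errors(x, prop, r, rng):
    mu = x + 2 * x**2 - x**3 + x**4                      # true regression function
    eps = rng.standard_normal(x.size)                    # clean N(0,1) errors
    idx = rng.choice(x.size, size=int(round(prop * x.size)), replace=False)
    eps[idx] = rng.uniform(mu.min() - r, mu.max() + r, size=idx.size)
    return eps
```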
Again, we have obtained the best fitting model according to the different model selection criteria, and we have repeated this experiment 1000 times. The number of times that each model is selected, for each combination of contamination proportion and contamination strength \(r\), is given in Tables 2, 3, 4 and 5. The left part of each table corresponds to \(r=1,\) the center part to \(r=5\) and the right part to \(r=10.\)
\begin{table}
\begin{tabular}{|c|c c c c c c|c c c c c c|c c c c c|} \hline & \multicolumn{8}{c|}{\(r=1\)} & \multicolumn{8}{c|}{\(r=5\)} & \multicolumn{8}{c|}{\(r=10\)} \\ \hline \(p\) & 0 & 1 & 2 & 3 & 4 & 5 & 0 & 1 & 2 & 3 & 4 & 5 & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline \(AIC\) & 0 & 0 & 1 & 19 & 659 & 321 & 0 & 0 & 10 & 27 & 622 & 341 & 0 & 0 & 9 & 38 & 616 & 337 \\ \(BIC\) & 0 & 0 & 16 & 64 & 802 & 118 & 0 & 0 & 39 & 84 & 751 & 126 & 0 & 0 & 57 & 96 & 715 & 132 \\ \(AIC_{c}\) & 0 & 0 & 1 & 22 & 694 & 283 & 0 & 0 & 12 & 33 & 651 & 304 & 0 & 0 & 12 & 43 & 634 & 311 \\ \(RPNH_{0,01}\) & 0 & 0 & 3 & 14 & 844 & 139 & 0 & 0 & 5 & 18 & 830 & 147 & 0 & 0 & 9 & 22 & 812 & 157 \\ \(RPNH_{0,02}\) & 0 & 0 & 0 & 13 & 866 & 121 & 0 & 0 & 2 & 47 & 833 & 118 & 0 & 0 & 6 & 62 & 810 & 122 \\ \(RPNH_{0,04}\) & 0 & 0 & 2 & 20 & 844 & 134 & 0 & 0 & 1 & 18 & 833 & 148 & 0 & 0 & 1 & 18 & 833 & 148 \\ \(RPNH_{0,1}\) & 0 & 0 & 0 & 0 & 835 & 165 & 0 & 0 & 0 & 0 & 836 & 164 & 0 & 0 & 0 & 0 & 830 & 170 \\ \(RPNH_{0.2}\) & 0 & 0 & 0 & 0 & 829 & 171 & 0 & 0 & 0 & 0 & 833 & 167 & 0 & 0 & 0 & 0 & 833 & 167 \\ \(RPNH_{0.4}\) & 0 & 0 & 0 & 0 & 837 & 163 & 0 & 0 & 0 & 0 & 835 & 165 & 0 & 0 & 0 & 0 & 839 & 161 \\ \(RPNH_{0.5}\) & 0 & 0 & 0 & 0 & 837 & 163 & 0 & 0 & 0 & 0 & 848 & 152 & 0 & 0 & 0 & 0 & 834 & 166 \\ \(RPNH_{0.7}\) & 0 & 0 & 0 & 0 & 842 & 158 & 0 & 0 & 0 & 0 & 836 & 164 & 0 & 0 & 0 & 0 & 831 & 169 \\ \(RPNH_{1,0}\) & 0 & 0 & 0 & 0 & 838 & 162 & 0 & 0 & 0 & 0 & 836 & 164 & 0 & 0 & 0 & 0 & 830 & 170 \\ \hline \end{tabular}
\end{table}
Table 2: Results for contamination degree of 5%
\begin{table}
\begin{tabular}{|c|c c c c c c|c c c c c c|c c c c c c|} \hline & \multicolumn{6}{c|}{\(r=1\)} & \multicolumn{6}{c|}{\(r=5\)} & \multicolumn{6}{c|}{\(r=10\)} \\ \hline \(p\) & 0 & 1 & 2 & 3 & 4 & 5 & 0 & 1 & 2 & 3 & 4 & 5 & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline \(AIC\) & 0 & 0 & 17 & 64 & 591 & 328 & 0 & 0 & 18 & 82 & 575 & 325 & 0 & 0 & 47 & 106 & 538 & 309 \\ \(BIC\) & 0 & 0 & 94 & 155 & 629 & 122 & 0 & 0 & 95 & 180 & 611 & 114 & 0 & 0 & 153 & 178 & 558 & 111 \\ \(AIC_{c}\) & 0 & 0 & 21 & 75 & 621 & 283 & 0 & 0 & 24 & 93 & 601 & 282 & 0 & 0 & 55 & 123 & 556 & 266 \\ \(RPNH_{0.01}\) & 0 & 0 & 24 & 37 & 770 & 169 & 0 & 0 & 26 & 48 & 750 & 176 & 0 & 0 & 37 & 61 & 705 & 197 \\ \(RPNH_{0.02}\) & 0 & 0 & 19 & 30 & 800 & 151 & 0 & 0 & 24 & 40 & 780 & 156 & 0 & 0 & 30 & 60 & 747 & 163 \\ \(RPNH_{0.04}\) & 0 & 0 & 16 & 60 & 809 & 115 & 0 & 0 & 19 & 100 & 764 & 117 & 0 & 0 & 23 & 94 & 770 & 113 \\ \(RPNH_{0.1}\) & 0 & 0 & 0 & 5 & 845 & 150 & 0 & 0 & 0 & 1 & 839 & 160 & 0 & 0 & 0 & 1 & 851 & 148 \\ \(RPNH_{0.2}\) & 0 & 0 & 0 & 0 & 829 & 171 & 0 & 0 & 0 & 0 & 835 & 165 & 0 & 0 & 0 & 0 & 844 & 156 \\ \(RPNH_{0.4}\) & 0 & 0 & 0 & 0 & 829 & 171 & 0 & 0 & 0 & 0 & 840 & 160 & 0 & 0 & 0 & 0 & 850 & 150 \\ \(RPNH_{0.5}\) & 0 & 0 & 0 & 0 & 824 & 176 & 0 & 0 & 0 & 0 & 845 & 155 & 0 & 0 & 0 & 0 & 841 & 159 \\ \(RPNH_{0.7}\) & 0 & 0 & 0 & 0 & 831 & 169 & 0 & 0 & 0 & 0 & 833 & 167 & 0 & 0 & 0 & 0 & 834 & 166 \\ \(RPNH_{1.0}\) & 0 & 0 & 0 & 0 & 841 & 159 & 0 & 0 & 0 & 0 & 835 & 165 & 0 & 0 & 0 & 0 & 833 & 167 \\ \hline \end{tabular}
\end{table}
Table 3: Results for a contamination degree of 10%.
From the results in these tables, it can be seen that the performance of \(AIC\), \(BIC\) and \(AIC_{c}\) decreases dramatically, in the sense that the proportion of times the true degree \(p=4\) is selected drops when contamination is present. As expected, the larger the proportion of contaminated data, the poorer the performance. Note, however, that these criteria are not strongly affected by the value of \(r\).
On the other hand, for \(RP_{NH}\) with large values of the tuning parameter the results remain quite similar to the uncontaminated case. This is the expected behavior, and it matches what has been observed in other settings where the RP has been used. The best behavior appears for \(\alpha=0.4\) and \(\alpha=0.5\), where the efficiency is good and the performance in terms of robustness is very good.
Finally, in order to test whether this is the usual behavior of these methods, we have repeated the study for different polynomial regression models, with each coefficient varying in \(\{-2,-1,0,1,2\}.\) This leads to 3125 different models for each value of \(r=1,5,10\), so that we have 9,375 different situations, and in all of them the same conclusions can be drawn.
\begin{table}
\begin{tabular}{|c|c c c c c|c c c c c|c c c c c|} \hline & \multicolumn{8}{c|}{\(r=1\)} & \multicolumn{8}{c|}{\(r=5\)} & \multicolumn{8}{c|}{\(r=10\)} \\ \hline \(p\) & 0 & 1 & 2 & 3 & 4 & 5 & 0 & 1 & 2 & 3 & 4 & 5 & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline \(AIC\) & 0 & 0 & 52 & 134 & 509 & 305 & 0 & 0 & 76 & 176 & 464 & 284 & 0 & 0 & 148 & 169 & 397 & 286 \\ \(BIC\) & 0 & 0 & 238 & 210 & 440 & 112 & 0 & 0 & 278 & 234 & 398 & 90 & 0 & 0 & 367 & 223 & 325 & 85 \\ \(AIC_{c}\) & 0 & 0 & 64 & 154 & 512 & 270 & 0 & 0 & 91 & 191 & 476 & 242 & 0 & 0 & 179 & 180 & 400 & 241 \\ \(RPNH_{0.01}\) & 0 & 0 & 41 & 92 & 596 & 271 & 0 & 0 & 43 & 93 & 561 & 303 & 0 & 0 & 52 & 95 & 514 & 339 \\ \(RPNH_{0.02}\) & 0 & 0 & 37 & 85 & 625 & 253 & 0 & 0 & 39 & 92 & 589 & 280 & 0 & 0 & 47 & 90 & 561 & 302 \\ \(RPNH_{0.04}\) & 0 & 0 & 29 & 75 & 676 & 220 & 0 & 0 & 32 & 88 & 661 & 219 & 0 & 0 & 43 & 93 & 648 & 216 \\ \(RPNH_{0.1}\) & 0 & 0 & 42 & 214 & 631 & 113 & 0 & 0 & 41 & 138 & 693 & 128 & 0 & 0 & 20 & 64 & 810 & 106 \\ \(RPNH_{0.2}\) & 0 & 0 & 0 & 0 & 836 & 164 & 0 & 0 & 0 & 0 & 831 & 169 & 0 & 0 & 0 & 0 & 849 & 151 \\ \(RPNH_{0.4}\) & 0 & 0 & 0 & 0 & 837 & 163 & 0 & 0 & 0 & 0 & 833 & 167 & 0 & 0 & 0 & 0 & 840 & 160 \\ \(RPNH_{0.5}\) & 0 & 0 & 0 & 0 & 836 & 164 & 0 & 0 & 0 & 0 & 829 & 171 & 0 & 0 & 0 & 0 & 840 & 160 \\ \(RPNH_{0.7}\) & 0 & 0 & 0 & 0 & 845 & 155 & 0 & 0 & 0 & 0 & 827 & 173 & 0 & 0 & 0 & 0 & 849 & 151 \\ \(RPNH_{1.0}\) & 0 & 0 & 0 & 0 & 834 & 166 & 0 & 0 & 0 & 0 & 823 & 177 & 0 & 0 & 0 & 0 & 836 & 164 \\ \hline \end{tabular}
\end{table}
Table 4: Results for a contamination degree of 20%.
\begin{table}
\begin{tabular}{|c|c c c c c|c c c c c c|c c c c c c|} \hline & \multicolumn{8}{c|}{\(r=1\)} & \multicolumn{8}{c|}{\(r=5\)} & \multicolumn{8}{c|}{\(r=10\)} \\ \hline \(p\) & 0 & 1 & 2 & 3 & 4 & 5 & 0 & 1 & 2 & 3 & 4 & 5 & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline \(AIC\) & 0 & 0 & 112 & 178 & 433 & 277 & 0 & 0 & 114 & 209 & 373 & 304 & 0 & 0 & 192 & 183 & 374 & 251 \\ \(BIC\) & 0 & 0 & 327 & 256 & 327 & 90 & 0 & 0 & 385 & 240 & 276 & 99 & 0 & 0 & 457 & 212 & 253 & 78 \\ \(AIC_{c}\) & 0 & 0 & 136 & 191 & 436 & 237 & 0 & 0 & 137 & 233 & 368 & 262 & 0 & 0 & 219 & 189 & 371 & 221 \\ \(RPNH_{0.01}\) & 0 & 0 & 51 & 79 & 519 & 351 & 0 & 0 & 55 & 90 & 488 & 367 & 0 & 0 & 58 & 90 & 428 & 424 \\ \(RPNH_{0.02}\) & 0 & 0 & 46 & 77 & 540 & 337 & 0 & 0 & 48 & 84 & 520 & 348 & 0 & 0 & 52 & 87 & 472 & 389 \\ \(RPNH_{0.04}\) & 0 & 0 & 44 & 81 & 573 & 302 & 0 & 0 & 46 & 80 & 555 & 319 & 0 & 0 & 53 & 78 & 533 & 336 \\ \(RPNH_{0.1}\) & 0 & 0 & 70 & 187 & 550 & 193 & 0 & 0 & 63 & 221 & 537 & 179 & 0 & 0 & 55 & 139 & 628 & 178 \\ \(RPNH_{0.2}\) & 0 & 0 & 17 & 68 & 774 & 141 & 0 & 0 & 11 & 13 & 817 & 159 & 0 & 0 & 2 & 8 & 854 & 136 \\ \(RPNH_{0.4}\) & 0 & 0 & 0 & 0 & 856 & 144 & 0 & 0 & 0 & 0 & 833 & 167 & 0 & 0 & 0 & 0 & 841 & 159 \\ \(RPNH_{0.5}\) & 0 & 0 & 0 & 0 & 845 & 155 & 0 & 0 & 0 & 0 & 830 & 170 & 0 & 0 & 0 & 0 & 832 & 168 \\ \(RPNH_{0.7}\) & 0 & 0 & 0 & 0 & 834 & 166 & 0 & 0 & 0 & 0 & 815 & 185 & 0 & 0 & 0 & 0 & 826 & 174 \\ \(RPNH_{1.0}\) & 0 & 0 & 0 & 0 & 828 & 172 & 0 & 0 & 1 & 1 & 813 & 185 & 0 & 0 & 0 & 0 & 841 & 159 \\ \hline \end{tabular}
\end{table}
Table 5: Results for a contamination degree of 30%.
## 6 Real data example
In this section we analyze a set of real data in the light of the new RP-based model selection tool. We consider the problem proposed in [11] and later studied in [26]. The dependent variable \(Y\) measures the heat evolved in calories per gram as a function of four ingredients: tricalcium aluminate (\(X_{1}\)), tricalcium silicate (\(X_{2}\)), tetracalcium alumino-ferrite (\(X_{3}\)) and dicalcium silicate (\(X_{4}\)). The data are given in Table 6. It is assumed that \(Y\) can be written in terms of \(X_{1},X_{2},X_{3},X_{4}\) as an MLRM. We have applied the \(RP_{NH}\) procedure to select the best model for different values of the tuning parameter.
Considering different subsets of independent variables, we obtain 15 different multiple linear models, and the goal is to select the best one. However, it is known that at least two independent variables are needed, because cement needs a combination of at least two reactants. Hence, we can remove the four simple linear regression models, so that we finally consider 11 possible models.
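For completeness, the 11 candidate subsets can be enumerated as follows (illustrative snippet only; the variable names are simply the labels used in the text):

```python
# Illustrative enumeration of the 11 candidate models: all subsets of the four
# ingredients containing at least two variables.
from itertools import combinations

variables = ["X1", "X2", "X3", "X4"]
candidate_models = [list(c) for k in (2, 3, 4) for c in combinations(variables, k)]
print(len(candidate_models))   # 11
# For each subset one would build the design matrix (with intercept), compute the
# MRPE and evaluate the RP_NH criterion (30), keeping the subset with the smallest value.
```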
We have applied the \(RP_{NH}\)-criterion defined in (30) for different values of the tuning parameter to select the most appropriate model. The results are given in Table 7. As can be seen in this table, the combinations \(X_{1},X_{2},X_{3}\) and \(X_{1},X_{2},X_{4}\) appear to be the best candidates, with only tiny differences between them. These results are similar to the conclusions obtained in [26]. Note also the good performance of the model based on \(X_{1},X_{2}\).
## 7 Conclusions
In this paper we have developed a new procedure for model selection for independent but not identically distributed observations aiming to compete with other methods based on maximum likelihood in terms of efficiency but being
\begin{table}
\begin{tabular}{|c c c c|c|} \hline \(X_{1}\) & \(X_{2}\) & \(X_{3}\) & \(X_{4}\) & \(Y\) \\ \hline
7 & 26 & 6 & 60 & 78.5 \\
1 & 29 & 15 & 52 & 74.3 \\
11 & 56 & 8 & 20 & 104.3 \\
11 & 31 & 8 & 47 & 87.6 \\
7 & 52 & 6 & 33 & 95.9 \\
11 & 55 & 9 & 22 & 109.2 \\
3 & 71 & 17 & 6 & 102.7 \\
1 & 31 & 22 & 44 & 72.5 \\
2 & 54 & 18 & 22 & 93.1 \\
21 & 47 & 4 & 26 & 115.9 \\
1 & 40 & 23 & 34 & 83.8 \\
11 & 66 & 9 & 12 & 113.3 \\
10 & 68 & 8 & 12 & 109.4 \\ \hline \end{tabular}
\end{table}
Table 6: The Hald cement data.
more robust against outlying data. For this purpose, we have considered the RP, a tool that has proved to provide robust estimators in many statistical problems. We have developed a model selection criterion, the \(RP_{NH}\)-criterion, extending the well-known AIC. Besides, we have shown that the sample estimator is an unbiased estimator. Next, we have considered the case of a restricted model and developed a procedure to decide whether the larger model is more appropriate for modeling the available data. As an example of application, we have worked out the procedure for the MLRM when the aim is to find the best model fitting a set of data. We have conducted a simulation study showing that this new procedure works very well under contamination, i.e., the simulations suggest that the procedure is robust, while the cost in terms of efficiency appears to be small. Finally, we have applied the new procedure in a situation with real data.
## Acknowledgements
This work was supported by the Spanish Grant PID2021-124933NB-I00.
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline & \(RPNH_{0.01}\) & \(RPNH_{0.02}\) & \(RPNH_{0.04}\) & \(RPNH_{0.05}\) & \(RPNH_{0.07}\) \\ \hline \(X_{1},X_{2}\) & 2.4179 & 2.3655 & 2.2642 & 2.2170 & 2.1280 \\ \(X_{1},X_{3}\) & 3.8824 & 4.3871 & 4.1321 & 4.0138 & 3.7933 \\ \(X_{1},X_{4}\) & 2.5408 & 2.4827 & 2.3738 & 2.3226 & 2.2261 \\ \(X_{2},X_{3}\) & 3.3638 & 3.2836 & 3.1140 & 3.0349 & 2.8868 \\ \(X_{2},X_{4}\) & 3.7164 & 3.8865 & 3.6579 & 3.5523 & 3.3565 \\ \(X_{3},X_{4}\) & 2.9519 & 2.8785 & 2.7413 & 2.6771 & 2.5564 \\ \(X_{1},X_{2},X_{3}\) & 2.4024 & 2.3493 & 2.2495 & 2.2026 & 2.1141 \\ \(X_{1},X_{2},X_{4}\) & 2.4013 & 2.3484 & 2.2490 & 2.2023 & 2.1142 \\ \(X_{1},X_{3},X_{4}\) & 2.4295 & 2.3759 & 2.2752 & 2.2279 & 2.1386 \\ \(X_{2},X_{3},X_{4}\) & 2.6091 & 2.5490 & 2.4363 & 2.3834 & 2.2837 \\ \(X_{1},X_{2},X_{3},X_{4}\) & 2.4747 & 2.4197 & 2.3164 & 2.2679 & 2.1765 \\ \hline Best model & \((X_{1},X_{2},X_{4})\) & \((X_{1},X_{2},X_{4})\) & \((X_{1},X_{2},X_{4})\) & \((X_{1},X_{2},X_{4})\) & \((X_{1},X_{2},X_{3})\) \\ \hline & \(RPNH_{0.1}\) & \(RPNH_{0.2}\) & \(RPNH_{0.4}\) & \(RPNH_{0.5}\) & \(RPNH_{0.7}\) \\ \hline \(X_{1},X_{2}\) & 2.0064 & 1.6822 & 1.2656 & 1.1249 & 0.9197 \\ \(X_{1},X_{3}\) & 3.4984 & 2.7476 & 1.8439 & 1.5662 & 1.2019 \\ \(X_{1},X_{4}\) & 2.0946 & 1.7454 & 1.3004 & 1.1517 & 0.9411 \\ \(X_{2},X_{3}\) & 2.6873 & 2.1710 & 1.5474 & 1.3482 & 1.0702 \\ \(X_{2},X_{4}\) & 3.0967 & 2.4472 & 1.7055 & 1.4771 & 1.1616 \\ \(X_{3},X_{4}\) & 2.3930 & 1.9647 & 1.4322 & 1.2638 & 1.0347 \\ \(X_{1},X_{2},X_{3}\) & 1.9933 & 1.6716 & 1.2598 & 1.1264 & 0.9173 \\ \(X_{1},X_{2},X_{4}\) & 1.9939 & 1.6735 & 1.2623 & 1.1240 & 0.9228 \\ \(X_{1},X_{3},X_{4}\) & 2.0167 & 1.6921 & 1.2756 & 1.1352 & 0.9985 \\ \(X_{2},X_{3},X_{4}\) & 2.1482 & 1.7891 & 1.3340 & 1.1824 & 1.0089 \\ \(X_{1},X_{2},X_{3},X_{4}\) & 2.0521 & 1.7221 & 1.3014 & 1.1601 & 0.9548 \\ \hline Best model & \((X_{1},X_{2},X_{3})\) & \((X_{1},X_{2},X_{3})\) & \((X_{1},X_{2},X_{3})\) & \((X_{1},X_{2},X_{4})\) & \((X_{1},X_{2},X_{3})\) \\ \hline \end{tabular}
\end{table}
Table 7: Results for the Hald cement data. |
2303.10985 | A Column Generation Approach for Radiation Therapy Patient Scheduling
with Planned Machine Unavailability and Uncertain Future Arrivals | The number of cancer cases per year is rapidly increasing worldwide. In
radiation therapy (RT), radiation from linear accelerators is used to kill
malignant tumor cells. Scheduling patients for RT is difficult both due to the
numerous medical and technical constraints, and because of the stochastic
inflow of patients with different urgency levels. In this paper, a Column
Generation (CG) approach is proposed for the RT patient scheduling problem. The
model includes all the constraints necessary for the generated schedules to
work in practice, including for example different machine compatibilities,
individualized patient protocols, and multiple hospital sites. The model is the
first to include planned interruptions in treatments due to maintenance on
machines, which is an important aspect when scheduling patients in practice, as
it can create bottlenecks in the patient flow. Different methods to ensure that
there are available resources for high priority patients at arrival are
compared, including static and dynamic time reservation. Data from Iridium
Netwerk, the largest cancer center in Belgium, is used to evaluate the CG
approach. The results show that the dynamic time reservation method outperforms
the other methods used to handle uncertainty in future urgent patients. A
sensitivity analysis also shows that the dynamic time reservation method is
robust to fluctuations in arrival rates. The CG approach produces schedules
that fulfill all the medical and technical constraints posed at Iridium Netwerk
with acceptable computation times. | Sara Frimodig, Per Enqvist, Jan Kronqvist | 2023-03-20T10:15:14Z | http://arxiv.org/abs/2303.10985v1 | A Column Generation Approach for Radiation Therapy Patient Scheduling with Planned Machine Unavailability and Uncertain Future Arrivals
###### Abstract
The number of cancer cases per year is rapidly increasing worldwide. In radiation therapy (RT), radiation from linear accelerators is used to kill malignant tumor cells. Scheduling patients for RT is difficult both due to the numerous medical and technical constraints, and because of the stochastic inflow of patients with different urgency levels. In this paper, a Column Generation (CG) approach is proposed for the RT patient scheduling problem. The model includes all the constraints necessary for the generated schedules to work in practice, including for example different machine compatibilities, individualized patient protocols, and multiple hospital sites. The model is the first to include planned interruptions in treatments due to maintenance on machines, which is an important aspect when scheduling patients in practice, as it can create bottlenecks in the patient flow. Different methods to ensure that there are available resources for high priority patients at arrival are compared, including static and dynamic time reservation. Data from Iridium Network, the largest cancer center in Belgium, is used to evaluate the CG approach. The results show that the dynamic time reservation method outperforms the other methods used to handle uncertainty in future urgent patients. A sensitivity analysis also shows that the dynamic time reservation method is robust to fluctuations in arrival rates. The CG approach produces schedules that fulfill all the medical and technical constraints posed at Iridium Network with acceptable computation times.
Radiation therapy · Patient scheduling · Operations research · Column generation
## 1 Introduction
Cancer is one of the leading causes of premature mortality worldwide. By 2040, the predicted number of new cancer cases per year is expected to exceed 27 million [1], a 40% increase compared to the estimated 19.3 million cancer cases in 2020 [2]. Radiation therapy (RT) is a cancer treatment that uses high doses of radiation to destroy or damage cancer cells. As a consequence of the rising cancer incidence, the need for RT will increase [3]. In external beam radiation, a machine called a linear accelerator (_linac_) is used to direct radiation from outside the body into the tumor. Each treatment is either curative, with intent to cure the patient, or palliative, with intent to improve quality of life by providing symptom control. Cancer patients are often divided into different urgency levels depending mainly on the site of the cancer and the treatment intent. For high priority patients, the treatment should start as soon as possible after admission, while lower-priority patients can wait for two to four weeks.
The duration of each treatment session varies between patients due to for example treatment technique and tumor location. RT treatments are normally divided into a number of sessions called _fraction_s that are delivered daily over several weeks. Delivering a small fraction of the total radiation dose allows time for normal cells to repair between the
treatments, which reduces the side effects. However, the overall treatment time should be kept as short as possible, as longer gaps between fractions can enable the repopulation of cancer cells to accelerate, leading to potentially lower cure rates [4]. Patients usually receive treatment five days a week, but if there is some machine unavailability it can be necessary to postpone fractions, leading to gaps in the schedule.
In the RT scheduling problem (RTSP), the aim is to schedule patients for RT, given a set of linacs, for a certain planning horizon. Patients can have preferences on what time during the day they want to be treated and, at cancer centers with multiple sites, on which hospital they are treated at. Moreover, the treatments have different machine requirements. Other difficulties include patients that are treated with multiple consecutive treatments, or with non-conventional treatments. Furthermore, planned and unplanned unavailability of the machines can create bottlenecks in the patient flow. One of the main challenges for the RTSP is that there is uncertainty in demand. Since the patients are of different priority, it is important that there are resources available for urgent patients at arrival. In practice, this is often handled by reserving a percentage of the machine capacity for high priority patients. This method can cause delays in treatments, as well as unnecessary idle time on the machines, especially at large clinics with a high patient flow.
Long waiting times for RT negatively impact clinical outcomes. For example, they can cause patients to experience prolonged symptoms, tumor growth, and psychological distress [5; 6; 7; 8; 9]. Long waiting times have also been identified as a source of stress for RT staff, which can compromise the quality and safety of the treatments [10]. The waiting time for RT is often directly linked to the RTSP, since the number of linacs in a clinic is usually limited. At cancer clinics today, almost all patient scheduling is done manually by the staff. As the demand for RT grows, efficient resource planning is an important tool to achieve short waiting times. Therefore, this paper makes the following contributions:
* The _main contribution_ is that we present an automatic scheduling algorithm for the RTSP, with all constraints and objectives necessary for the model to work in practice. For the first time, planned machine unavailability is included in an RTSP model, which is an important step towards a full clinical implementation. We also present a method for dynamic time reservation to handle uncertainty in future patient arrivals.
* The method is evaluated using data from Iridium Network, a large RT center located in Antwerp, Belgium. In 2020, they operated 10 linacs, delivering 5500 RT treatments to approximately 4000 patients.
* The main _technical novelty_ lies in the column generation (CG) approach. To the best of our knowledge, this is the first model to simultaneously assign all fractions of the patients to both linacs and specific time windows, while also considering _planned machine unavailability_ and the constraints and objectives related to the resulting gaps in the schedules. It is also the first model to include _consecutive treatments_, where a primary treatment is followed by a secondary treatment with some additional constraints, and _non-conventional treatments_, such as treatments that should be delivered every-other-day. The model also supports _multiple hospital locations_ and allows the patients to have hospital site preferences. Furthermore, the model includes all the medical and technical constraints necessary for the scheduling to work in practice at Iridium Network.
The paper is organized as follows. Section 2 presents related work, followed by a problem description in Section 3. Section 4 presents the column generation model. Methods to manage uncertainty in patient arrivals are discussed in Section 5. Section 6 presents the experiments and the numerical results. The conclusions are presented in Section 7.
## 2 Related Work
Scheduling in healthcare has been widely studied, and summarized in several extensive literature reviews. Cayirli et al. [11], Gupta et al. [12], and Ahmadi-Javid et al. [13] present reviews that focus on scheduling on a single resource, and Marywissen et al. [14] review the special case of multi-appointment scheduling. Patient scheduling consists of _allocation scheduling_ and _appointment scheduling_. Allocation scheduling refers to methodologies for allocating patients to resources in advance of the service date, when future demand is still unknown, without assigning specific appointment times. In contrast, in appointment scheduling all patients for a given service day are assumed to be known, and specific resources and starting times are assigned to the patients. Appointment scheduling problems for example aim to minimize machine idle time, maximize preference satisfaction regarding treatment time, or deal with uncertain treatment durations or delays. On the other hand, allocation scheduling must fulfill capacity constraints, and usually deals with patients of different types and priorities. Most allocation scheduling algorithms are intended to dynamically allocate resources to patients using a rolling time horizon. The problem studied in this paper aims to bridge the gap between appointment and allocation scheduling by performing these two scheduling tasks simultaneously.
The RTSP has been studied in various forms over the past 15 years. For a review of the literature, see Vieira et al. [15], in which the authors found \(12\) papers addressing the problem of scheduling RT patients on linacs. The solution methods have varied; exact methods such as integer programming (IP) have been used to solve small instances and metaheuristics have been used for larger ones.
Patient scheduling is done either in an _online_ fashion, where each request is handled immediately, or in _batches_, where the scheduling is done at certain time intervals (such as daily or weekly). Using batch scheduling, Conforti et al. [16] present the first IP model for optimization of RT appointments using treatment slots of equal length (_block_ scheduling), a model that was later developed by the same authors to allow for different fraction durations (_non-block_ scheduling) [17]. Jacquemin et al. [18] introduce the notion of treatment patterns in an IP model to allow non-consecutive treatment days. These papers all present models for a short planning horizon that do not consider all the constraints present in real-world RT scheduling, such as multiple machine types and partial availability in the schedule. For a more realistic setup, Petrovic et al. [19, 20] propose heuristic and metaheuristic approaches for block scheduling based on prioritized rules, where the latter considers both scheduling of the pretreatment phase and the treatment phase.
The first method for dynamic RT scheduling taking future events into account was presented by Saure et al. [21], where the problem is modeled as a discounted infinite-horizon Markov decision process that finds an approximately optimal scheduling policy. Their proposed policy can increase the treatments initiated within 10 days from 73% to 96% compared to a myopic policy (i.e., not taking future patients into account). Gocgun [22] later extended the same problem setup to also include patient cancellations. These papers assume a simplified model of a cancer center, equipped with three identical machines, 8.25 requests per day and scheduling done in batches. In contrast, Legrain et al. [23] propose a hybrid method combining stochastic and online optimization to dynamically schedule patients as they arrive. Also their setup is small; they consider block scheduling with two linacs and less than 3.5 requests per day. Aringhieri et al. [24] also present methods for online RT scheduling, and develop three online optimization algorithms for a block-scheduling formulation and one machine. These methods all allocate a start day to each patient, with no sequencing of patients throughout the day, and are difficult to scale to realistic instances.
Particle therapy (PT) is a form of RT where a single particle beam is shared between multiple treatment rooms. The medical and technical constraints differ from conventional photon beam RT, and most studies focus on maximizing the beam utilization by optimal appointment scheduling. Maschler et al. [25] showed that the exact formulation of the problem is highly intractable. Using different heuristic methods, Vogl et al. [26] and Maschler et al. [27] both create a schedule with treatments close to a pre-defined target time. Accounting for uncertain activity durations, Braune et al. [28] present a stochastic optimization model for appointment scheduling in PT and solve it using heuristics.
Focusing on appointment scheduling, where the patient list is assumed to be known and the main task is the sequencing of patients throughout the day, Vieira et al. [29] create weekly schedules using a mixed-integer programming (MIP) model combined with a pre-processing heuristic that divides the problem into subproblems for clusters of machines. Their objective is to maximize the fulfillment of the patients' time window preferences. In [30], the same authors evaluate their method in two Dutch clinics, showing that the weekly schedule was improved in both centers. Another paper that focuses on patient sequencing is Moradi et al. [31], where the authors present a data-driven approach that uses the patient information to improve the weekly schedules. The predictions are utilized in a MIP model to determine the optimal sequence of patients, for a list of patients that have previously been assigned a day. The model is simplified; all patient durations are equal and all machines are identical and independent. However, the results seem promising; it is favorable to schedule reliable patients early on to reduce idle time on machines caused by delayed patients or no-shows.
A two-stage approach for the RTSP is proposed by Pham et al. [32]. In the first phase, an IP model assigns patients to linacs and days, and the second phase assigns specific appointment times using either a MIP or a constraint programming (CP) model. The authors evaluate the algorithm dynamically on a rolling time horizon using different scheduling strategies. The approach does not take future arrivals into account when making scheduling decisions; a certain percent of linac capacity is saved for urgent patients in the first phase. Their approach is tested using generated data with seven linacs and a time horizon of 60 days based on data from CHUM, a cancer center in Canada. They show that the CP model finds good solutions sooner, while the MIP model proves optimality faster.
In Frimodig et al. [33], the sequencing of patients is decided at the same time as the assignment to linacs and days. The authors compare an IP model, a CP model and a column generation (CG) model to solve the problem, and include the expected future patients to dynamically reserve linac capacity for future urgent patients. The models are tested on generated data based on data from Iridium Network in Belgium, which has ten linacs. The results show that the CG model outperforms the others in all problem instances. The setup is similar to the setup in this paper, with the difference that the models do not consider unavailability of machines, consecutive treatments, or non-conventional treatments. Furthermore, they do not evaluate the CG approach dynamically on a rolling time horizon; that is done in this paper.
CG is a decomposition technique that is often used successfully for solving large integer programs. The method was first presented by Ford and Fulkerson [34], and has the advantage that it does not consider all variables explicitly, but instead only generates the variables that have the potential to improve the objective function. The method alternates between a restricted master problem (RMP) and one or more subproblems used to generate new variables for the RMP. When applying CG to scheduling problems, the decision variable in the master problem is most often a schedule for one day, one shift, or one person, and the master problem is a set partitioning problem used to find the optimal overall
schedule by minimizing the sum of the costs of the associated variables. The subproblems are used to find the variables. Other medical applications of CG include surgeon and surgery scheduling [35], patient admission [36], and nurse scheduling [37]. In cancer treatment, CG has also been used for brachytherapy scheduling with deteriorating treatment times [38], and for intraday scheduling in chemotherapy [39].
## 3 Problem Description
The task in the RTSP is to assign patients to treatment machines (linacs) for each fraction according to a specific set of rules and objectives. This section presents the real-world constraints and objectives present at Iridium Network. Figure 1 presents how a patient usually is treated with RT.
The instructions for how a patient should be scheduled are given primarily by the _treatment protocol_, which is assigned to the patient by the treating physician. The most common treatment protocols at Iridium Network can be seen in Table 1. Each treatment protocol is associated with a _priority_ based on the urgency for treatment and the treatment intent. There are three priority groups: A, B and C. In 2020 at Iridium, approximately \(37\%\) of the patients were priority A, \(16\%\) were priority B, and \(46\%\) were priority C. The treatment protocol states the _minimum number of fractions per week_, that is, how much it is allowed to deviate from the ideal five fractions per week. The protocol also states the _preferred machines_ for the treatment, and the machines that are _allowed_ (but not preferred). Furthermore, the protocol states the minimum number of _days from CT_, which is the time period from the mandatory CT simulation to start of RT treatment used to create the treatment plan. The protocols have _allowed start days_: palliative patients can start any weekday, whereas curative patients cannot start on Fridays. In addition to the treatment protocol, there is also patient specific information: the _number of fractions_ during which the patient will be treated, the _dose prescription_ for each fraction, and the _duration_ of the first and subsequent fractions, where the first is longer than the rest for each patient because of initial setup and quality checks, as well as extra time for patient education and reassurance.
Iridium Network is a network of four different hospitals that each have between two and four linacs. Two linacs are _completely beam-matched_ if they are the same machine type at the same hospital, and _partially beam-matched_ if they are the same machine type but at different hospitals. Switching between completely beam-matched machines between fractions can be done at no cost, whereas there is a cost for switching to a machine that is only partially matched.
The day is divided into four _time windows_, and the model assigns each fraction to a machine, time window and day. Assigning patients to time windows instead of exact starting times leads to a more efficient model, while it enables all the objectives and constraints needed to follow the instructions from the scheduling staff at Iridium Network.
Priority B and C patients are assumed to be notified of their starting day one week (five working days) in advance, which the majority of patients find reasonable according to the literature [40]. Priority A patients are notified immediately.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Protocol & Priority & Minimum & Minimum & Preferred machines & Allowed machines \\ & & fractions/week & days from CT & & \\ \hline Palliative & A & 1 & 0 & M1, M2, M3, M4, M5, M6, & M10 \\ & & & & M7, M8 & \\ Breast & C & 3 & 7 & M1, M4, M5, M6, M8 & M2, M3, M7, M10 \\ Prostate & C & 3 & 9 & M1, M3, M4, M5, M6, M7, M8 & M2, M10 \\ Head-Neck & A & 5 & 11 & M2, M3, M5, M6, M10 & M1, M4 \\ Lung & B & 4 & 9 & M2, M3, M5, M6, M7, M10 & M1, M4, M8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The most common treatment protocols at Iridium Network
Figure 1: A typical fractionation scheme of an RT patient
All fractions are communicated at once, as this is the current practice at Iridium. Each patient's schedule can change until it has been communicated, i.e., booking decisions are postponed to the next day if patients are scheduled after the notification period. A _daily batch scheduling_ strategy is applied, using information accumulated throughout the day.
For an automatic scheduling algorithm to work in practice, machine unavailability needs to be considered in some way. In [4], the UK guidelines for the maximum overall treatment times are presented. The gaps between fractions are often caused by machine failure or some other unexpected event, and the gaps induced by the scheduling should be kept to a minimum. Figure 2 shows when machines have _planned unavailability_ for an example month in 2020 at Iridium Network. Depending on each patient's treatment protocol, the affected fractions can either be scheduled on a beam-matched machine the same day, postponed by adding them after the last fraction, or re-scheduled within the same week by delivering two fractions on one of the remaining days of the week with the unavailability.
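To use a maintenance calendar like the one in Figure 2 in the model, it has to be encoded as per-machine sets of unavailable days and, for each group of beam-matched machines, the set of weeks affected by unavailability (the sets \(\mathcal{D}^{\mathcal{U}}_{m}\) and \(\mathcal{G}^{\mathcal{U}}_{b_{\mathcal{M}}}\) used in Section 4). A minimal sketch of that bookkeeping is given below; the dict-based data layout is a hypothetical illustration, not the paper's implementation.

```python
def build_unavailability_sets(unavailable_days, machine_groups, days_in_week):
    """Derive the per-machine unavailable-day sets and per-group unavailable-week sets.

    unavailable_days: dict machine -> iterable of weekday indices with planned unavailability
    machine_groups:   list of beam-matched machine groups, e.g. [{"M1", "M4", "M8"}, ...]
    days_in_week:     dict week index -> list of weekday indices belonging to that week
    (Hypothetical data layout used for illustration only.)
    """
    day_sets = {m: set(ds) for m, ds in unavailable_days.items()}
    week_sets = {}
    for group in machine_groups:
        affected_weeks = set()
        for week, week_days in days_in_week.items():
            # A week counts as affected for the group if any machine in the group
            # has planned unavailability on any weekday of that week.
            if any(d in day_sets.get(m, set()) for m in group for d in week_days):
                affected_weeks.add(week)
        week_sets[frozenset(group)] = affected_weeks
    return day_sets, week_sets
```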
Some patients are treated with two or three _consecutive treatments_, where the primary treatment is directly followed by a secondary treatment. For example, breast cancer patients are often treated with a boost plan that follows after the primary plan. The secondary treatment must be handled separately in the scheduling, since both the machine requirements and the durations of the fractions can differ from the primary plan. The secondary plan also has the extra constraint that its first day must follow the last day of the primary plan (ideally starting the day after the primary plan has ended). At Iridium Network in 2020, around 17% of the patients were treated with two or more consecutive treatments. Furthermore, there are so called _non-conventional treatments_, meaning treatments that do not follow the regular five fractions per week schedule. The most common are treatments that should be delivered three times per week, with a pause day between each fraction. At Iridium in 2020, around 9% of all treatments were non-conventional.
Because of _uncertainty in the arrival rates_, it is important that there are resources available for urgent patients at arrival. This is difficult since the treatments span multiple weeks (often \(5-7\) weeks) and the patients have different priority. At Iridium Network, this is handled by the booking administrator, who reserves empty timeslots for urgent patients on each machine. In this paper, different methods for handling uncertainty in future arrivals are presented in Section 5.
The _scheduling objectives_ are formulated in collaboration with Iridium Network. The most important objective is to minimize the waiting times, especially for urgent patients. The patients should ideally be scheduled around the same time every day. For patients with a preference for treatment time, the preference should be fulfilled if possible. The treatment should, if possible, be scheduled on a preferred machine, and the number of switches to a machine that is not beam-matched should be kept to a minimum. Since Iridium has multiple hospital sites, the schedule should try to meet the patient preference on treating hospital. Finally, the overall treatment time for each patient should be kept as short as possible to avoid gaps in the schedule. The objectives are combined into an objective function that is presented in Section 4.2.
## 4 Column Generation Model
The RTSP is formulated as a binary set partitioning model, where each decision variable is a schedule for a patient, and the aim is to choose the optimal set of patient schedules. Since it would be too expensive to generate all feasible patient schedules, a CG approach is used, which consists of a (restricted) master problem and one subproblem for each patient. The master problem is used for _schedule selection_: it is solved to choose a schedule for each patient to make the overall schedule feasible and optimal. The subproblems are used for _schedule generation_: for each patient, a new schedule is generated that fulfills all medical and technical constraints, and if the schedule has a negative reduced cost, the variable is added to the master problem. The CG algorithm is presented in Figure 3. The notations are introduced in Table 2.
The subproblems are isolated from each other and are often very easy to solve. However, the optimization is exclusively guided by the values of the dual variables, which might lead to a large number of iterations in the CG algorithm. To improve the speed of the CG algorithm, all subproblems are run in the first iteration, and henceforth in every third iteration. In the two intermediate iterations, only the subproblems that previously have had a negative reduced cost are run, since these are the most likely to have a negative reduced cost again. The CG algorithm terminates when no negative reduced cost variables are generated by the subproblems, which means that the linear relaxation to the original problem has obtained an optimal solution. To get an integer solution the CG procedure is normally embedded in a branch-and-price algorithm (see e.g. [41]), but this is not done in this paper due to limitations in solution time. Instead, the linear program is converted to an integer program in the last step. This does not ensure an optimal solution, since some schedules not generated by the procedure could potentially improve the integer solutions.
Figure 2: Schedule for March. “W” indicates weekend and a number indicates planned unavailability on that machine
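To make the flow in Figure 3 concrete, the sketch below outlines the CG loop in Python, assuming three hypothetical callables for solving the LP-relaxed RMP, the pricing subproblems, and the final integer RMP; it is an illustration of the procedure described above, not the authors' implementation.

```python
def column_generation(patients, initial_columns, solve_rmp_lp, solve_subproblem, solve_rmp_as_ip):
    """Column generation loop of Figure 3 (sketch; the three solver callables are hypothetical).

    initial_columns: dict patient -> list of feasible schedules from the construction heuristic
    solve_rmp_lp(columns)      -> dual values of the LP relaxation of the master problem
    solve_subproblem(p, duals) -> (schedule, reduced_cost) for patient p
    solve_rmp_as_ip(columns)   -> integer solution restricted to the generated columns
    """
    columns = {p: list(cols) for p, cols in initial_columns.items()}
    promising = set(patients)          # subproblems that found an improving column last time
    iteration = 0
    while True:
        iteration += 1
        duals = solve_rmp_lp(columns)
        # All subproblems are priced in the first iteration and every third iteration after that;
        # in the two intermediate iterations only the previously promising ones are priced.
        run_all = (iteration % 3 == 1)
        to_run = set(patients) if run_all else set(promising)
        promising = set()
        for p in to_run:
            schedule, reduced_cost = solve_subproblem(p, duals)
            if reduced_cost < -1e-6:
                columns[p].append(schedule)
                promising.add(p)
        if not promising:
            if run_all:
                break                  # no improving column exists: the LP relaxation is optimal
            promising = set(patients)  # force a full pricing round before concluding
    # Last step: solve the master problem as an integer program over the generated columns
    # (no branch-and-price, so optimality of the integer solution is not guaranteed).
    return solve_rmp_as_ip(columns)
```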
### Master Problem: Schedule Selection
In the master problem, each patient has an associated index set \(\mathcal{K}_{p}\) of feasible schedules, and the variable \(a_{p}^{i}=1\) if schedule \(i\in\mathcal{K}_{p}\) is allocated to \(p\in\mathcal{P}\), and 0 otherwise. Model (1)-(6) is the master problem: the restricted master problem (RMP) is made of a subset \(\mathcal{K}_{p}^{\prime}\subset\mathcal{K}_{p}\) of feasible schedules for each \(p\in\mathcal{P}\). Each schedule \(i\in\mathcal{K}_{p}\) has a cost \(c_{p}^{i}\), a treatment duration \(D_{p,m,d,w}^{i}\) for each machine, day, and time window, and a start day \(d_{\text{start},p}^{i}\) and end day \(d_{\text{end},p}^{i}\) representing the first and last days of treatment, computed from the subproblem variables in (36), (43), (44) and (45).
The objective function (1) is to minimize the total cost of the chosen schedules. Constraint (2) states that exactly one schedule is chosen for each patient. Constraint (3) ensures that all chosen schedules will fit in the overall schedule. To handle _consecutive treatments_, where a patient has two treatment courses that follow each other sequentially, the primary patient is denoted \(p_{c,1}\) and a dummy patient is created for the secondary treatment, denoted \(p_{c,2}\). The primary and secondary patients are connected in \((p_{c,1},p_{c,2})\in\mathcal{P}^{con}\). For these treatments, (4) states that the first day of the
\begin{table}
\begin{tabular}{l l} \hline \hline Parameter & Description \\ \hline \(\mathcal{P}=\{1,\ldots,P\}\) & Set of all patients \\ \((p_{c,1},p_{c,2})\in\mathcal{P}^{con}\) & List of tuples of connected patients for consecutive treatment \(c\) \\ \(\mathcal{D}=\{1,\ldots,D\}\) & Set of weekdays in the planning horizon \\ \(\mathcal{W}=\{1,2,3,4\}\) & Set of time windows in a day \\ \(L_{w}=\{135,120,120,135\}\) & The window length in minutes for window \(w\in\mathcal{W}\) \\ \(\mathcal{H}=\{1,\ldots,H\}\) & Set of treatment protocols, where \(h_{p}\in\mathcal{H}\) is the protocol of patient \(p\) \\ \(\mathcal{M}=\{M1,\ldots,M10\}\) & Set of machines \\ \(\mathcal{M}_{h}=(\mathcal{M}_{h}^{\text{pref}}\cup\mathcal{M}_{h}^{\text{allowed}})\subseteq\mathcal{M}\) & Set of machines for protocol \(h\), where \(\mathcal{M}_{h}^{\text{pref}}\) is the set of preferred machines and \(\mathcal{M}_{h}^{\text{allowed}}\) is the set of allowed (but not preferred) machines \\ \(\mathcal{C}_{\mathcal{M}}=\{\{M3,M10\},\{M5,M6\}\}\) & Sets of completely beam-matched machines \\ \(\mathcal{B}_{\mathcal{M}}=\{\{M1,M4,M8\},\{M9\},\{M2,M3,M5,M6,M7,M10\}\}\) & Sets of all beam-matched machines \\ \(\mathcal{S}_{\mathcal{M}}\) & Sets of machines at the different hospital sites \(S1,\ldots,S4\); \(\mathcal{S}_{\mathcal{M}}^{\text{pref},p}\) is the set of machines at the preferred site of patient \(p\) \\ \(dur_{p}^{1},dur_{p}^{f}\) & Duration of the first and subsequent fractions for patient \(p\) (minutes) \\ \(\mathcal{F}_{p}=\{1,\ldots,F_{p}\}\) & Set of all fractions for patient \(p\) \\ \(S_{m,d,w}\in\{0,\ldots,L_{w}\}\) & Occupied minutes in each window \(w\in\mathcal{W}\), machine \(m\in\mathcal{M}\) and day \(d\in\mathcal{D}\) \\ \(\mathcal{A}_{h_{p}}\subseteq\mathcal{D}\) & Set of allowed start days for the protocol \(h\) of patient \(p\) \\ \(c_{h_{p}}\in\{10,3\}\) & Weights for patient \(p\)’s protocol \(h\) based on priority group A, B or C \\ \(d_{p}^{\min}\in\mathcal{D}\) & The earliest day for patient \(p\) to be scheduled \\ \(\mathcal{P}^{w\text{-pref}}\subset\mathcal{P}\) & Set of patients that have a time window preference \\ \(w^{\text{pref},p}\in\mathcal{W}\) & The window preference of patient \(p\in\mathcal{P}^{w\text{-pref}}\) \\ \(f_{h}^{\min}\in\{1,\ldots,5\}\) & Minimum number of fractions per week for protocol \(h\) \\ \(\mathcal{G}=\{1,\ldots,G\}\) & Set of all weeks in the planning horizon. \(\mathcal{D}_{g}\) is the set of days in week \(g\in\mathcal{G}\). \\ \(\mathcal{G}_{b_{\mathcal{M}}}^{\mathcal{U}}\subset\mathcal{G}\) & Set of weeks where there is unavailability for machine group \(b_{\mathcal{M}}\in\mathcal{B}_{\mathcal{M}}\) \\ \(\mathcal{D}_{m}^{\mathcal{U}}\subset\mathcal{D}\) & Set of unavailable days for machine \(m\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Notations for the models
Figure 3: The column generation algorithm
secondary treatment is at least one day after the last day of the primary treatment, and (5) states that the first day of the secondary treatment is at most three days after the last day of the primary treatment for each pair \((p_{c,1},p_{c,2})\in\mathcal{P}^{con}\). The full formulation is thus
minimize \[1+\sum_{p\in\mathcal{P}}\sum_{i\in\mathcal{K}_{p}}c_{p}^{i}a_{p}^{i}\] (1) subject to \[\sum_{i\in\mathcal{K}_{p}}a_{p}^{i}=1\,, \forall p\in\mathcal{P}\] (2) \[\sum_{p\in\mathcal{P}}\sum_{i\in\mathcal{K}_{p}}a_{p}^{i}D_{p,m,d,w}^{i}+S_{m,d,w}\leq L_{w}, \forall m\in\mathcal{M},d\in\mathcal{D},w\in\mathcal{W}\] (3) \[\sum_{i\in\mathcal{K}_{p_{c,1}}}a_{p_{c,1}}^{i}d_{\text{end},p_{c,1}}^{i}+1\leq\sum_{i\in\mathcal{K}_{p_{c,2}}}a_{p_{c,2}}^{i}d_{\text{start},p_{ c,2}}^{i}, \forall(p_{c,1},p_{c,2})\in\mathcal{P}^{con}\] (4) \[\sum_{i\in\mathcal{K}_{p_{c,2}}}a_{p_{c,2}}^{i}d_{\text{start},p_{ c,2}}^{i}\leq\sum_{i\in\mathcal{K}_{p_{c,1}}}a_{p_{c,1}}^{i}d_{\text{end},p_{c,1}}^{i}+3, \forall(p_{c,1},p_{c,2})\in\mathcal{P}^{con}\] (5) \[a_{p}^{i}\in\{0,1\} \forall p\in\mathcal{P},i\in\mathcal{K}_{p}.\] (6)
Relaxing the integer assumption and solving the LP yields the dual variables \(\lambda_{p}\) associated with (2), \(\gamma_{m,d,w}\) associated with (3), \(\eta_{c}\) associated with (4) and \(\xi_{c}\) associated with (5), where each \(c\) is the index of the \((p_{c,1},p_{c,2})\)-pair.
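As an illustration only, the restricted master problem could be written with an off-the-shelf LP modeller such as PuLP. The sketch below mirrors (1)-(5) with a relaxed \(a^{i}_{p}\geq 0\) and reads the duals \(\lambda\), \(\gamma\), \(\eta\), \(\xi\) from the solved LP; all data-structure names (`cost`, `duration`, `start_day`, `end_day`, `occupied`) are hypothetical stand-ins for \(c^{i}_{p}\), \(D^{i}_{p,m,d,w}\), \(d^{i}_{\text{start},p}\), \(d^{i}_{\text{end},p}\) and \(S_{m,d,w}\).

```python
import pulp

def solve_relaxed_rmp(patients, columns, machines, days, windows, window_len,
                      occupied, consecutive_pairs, cost, duration, start_day, end_day):
    """LP relaxation of the set-partitioning master problem (sketch, hypothetical data layout)."""
    prob = pulp.LpProblem("RMP", pulp.LpMinimize)
    a = {(p, i): pulp.LpVariable(f"a_{p}_{i}", lowBound=0)   # relaxed schedule-selection variables
         for p in patients for i in columns[p]}

    # (1) total cost of the selected schedules
    prob += pulp.lpSum(cost[p, i] * a[p, i] for p in patients for i in columns[p])

    # (2) exactly one schedule per patient
    for p in patients:
        prob += pulp.lpSum(a[p, i] for i in columns[p]) == 1, f"assign_{p}"

    # (3) capacity of every machine, day and time window
    for m in machines:
        for d in days:
            for w in windows:
                prob += (pulp.lpSum(duration[p, i, m, d, w] * a[p, i]
                                    for p in patients for i in columns[p])
                         <= window_len[w] - occupied[m, d, w]), f"cap_{m}_{d}_{w}"

    # (4)-(5) the secondary treatment starts one to three days after the primary one ends
    for c, (p1, p2) in enumerate(consecutive_pairs):
        end1 = pulp.lpSum(end_day[p1, i] * a[p1, i] for i in columns[p1])
        start2 = pulp.lpSum(start_day[p2, i] * a[p2, i] for i in columns[p2])
        prob += end1 + 1 <= start2, f"con_lo_{c}"
        prob += start2 <= end1 + 3, f"con_hi_{c}"

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    duals = {name: con.pi for name, con in prob.constraints.items()}
    return prob, duals
```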
### Subproblems: Schedule Generation
One subproblem is formed for each patient \(p\in\mathcal{P}\), with the aim to generate a new feasible schedule to add to \(\mathcal{K}_{p}^{\prime}\), i.e., as a column to the RMP. The main variables for the subproblem model are presented in Table 3. As the subproblems are complex, the constraints and objectives are described individually.
#### 4.2.1 Subproblem Constraints
The subproblem constraints should ensure that all medical and technical constraints are fulfilled in schedule \(i\in\mathcal{K}_{p}\) for each patient \(p\in\mathcal{P}\). Constraint (7) forces fraction \(f\) to be scheduled exactly one time for each patient. Constraint (8) states that the first fraction for patient \(p\) is scheduled on machine \(m\) on day \(d\), in any window, whereas constraint (9) also gives the correct time window \(w\) for the first fraction. Furthermore, constraint (10) states that each patient is scheduled in exactly one time window for each fraction.
\[\sum_{m\in\mathcal{M}}\sum_{d\in\mathcal{D}}q_{p,m,d,f}^{i}=1, \forall f\in\mathcal{F}_{p} \tag{7}\] \[q_{p,m,d,1}^{i}=\sum_{w\in\mathcal{W}}t_{p,m,d,w}^{i}, \forall m\in\mathcal{M},d\in\mathcal{D}\] (8) \[t_{p,m,d,w}^{i}\leq x_{p,m,d,w}^{i}, \forall m\in\mathcal{M},d\in\mathcal{D},w\in\mathcal{W}\] (9) \[\sum_{w\in\mathcal{W}}x_{p,m,d,w}^{i}=\sum_{f\in\mathcal{F}_{p}}q _{p,m,d,f}^{i}, \forall m\in\mathcal{M},d\in\mathcal{D} \tag{10}\]
The earliest day to start treatment is \(d_{p}^{\min}\) and a treatment can only start on an allowed start day given by \(\mathcal{A}_{h_{p}}\). The fractions can only be scheduled on a machine for the patient protocol given by \(\mathcal{M}_{h_{p}}\), but not if the machine is unavailable.
\begin{table}
\begin{tabular}{l l} \hline \(q_{p,m,d,f}^{i}\in\{0,1\}\) & 1 if fraction \(f\in\mathcal{F}_{p}\) is scheduled on weekday \(d\in\mathcal{D}\) on machine \(m\in\mathcal{M}\) in schedule \(i\in\mathcal{K}_{p}\) for \(p\in\mathcal{P}\), 0 otherwise \\ \(x_{p,m,d,w}^{i}\in\{0,1\}\) & 1 if patient \(p\) in schedule \(i\in\mathcal{K}_{p}\) is scheduled in window \(w\in\mathcal{W}\) on machine \(m\in\mathcal{M}\) on weekday \(d\in\mathcal{D}\), 0 otherwise \\ \(t_{p,m,d,w}^{i}\in\{0,1\}\) & 1 if patient \(p\) in schedule \(i\in\mathcal{K}_{p}\) starts treatment in window \(w\in\mathcal{W}\) on machine \(m\in\mathcal{M}\) on weekday \(d\in\mathcal{D}\), 0 otherwise \\ \(\nu_{p,g}^{i}\in\{0,\ldots,5\}\) & The number of fractions scheduled in week \(g\in\mathcal{G}\) for patient \(p\in\mathcal{P}\) in schedule \(i\in\mathcal{K}_{p}\) \\ \(\tau_{p}^{i}\in\{1,2,\ldots\}\) & The total number of weeks that each patient is scheduled \\ \(\rho_{p,g}^{i}\in\{0,1\}\) & 0 for week \(g\) if the minimum fraction requirement is not met, or if \(g\) is the first or last week of treatment \\ \hline \end{tabular}
\end{table}
Table 3: Main variables in the subproblem (schedule generation)
In total, this is captured in constraints (11) and (12).
\[q^{i}_{p,m,d,1}=0,\quad\forall m\in\mathcal{M},d\in\mathcal{D}\text{ if }d<d^{\min}_{p}\text{ or }d\notin\mathcal{A}_{h_{p}} \tag{11}\] \[q^{i}_{p,m,d,f}=0,\quad\forall m\in\mathcal{M},d\in\mathcal{D} \text{ if }m\notin\mathcal{M}_{h_{p}}\text{ or }d\in\mathcal{D}^{\mathcal{U}}_{m},f\in \mathcal{F}_{p} \tag{12}\]
Constraint (13) ensures that the treatment fits within each time window \(w\) on machine \(m\) on day \(d\). Since the duration of the first fraction is different from the rest, the first term will evaluate to zero in the first fraction. The duration plus the already occupied time slots \(S_{m,d,w}\) in that window should be less than or equal the window length \(L_{w}\).
\[\left((x^{i}_{p,m,d,w}-t^{i}_{p,m,d,w})\,dur^{f}_{p}+t^{i}_{p,m,d,w}\,dur^{1}_{ p}\right)+S_{m,d,w}\leq L_{w},\quad\forall m\in\mathcal{M},d\in\mathcal{D},w \in\mathcal{W} \tag{13}\]
Planned machine unavailability. Because of planned machine unavailability due to maintenance or holidays, it is not always possible to schedule fractions on consecutive weekdays the way it was done in [33]. Instead, constraint (14) schedules fractions on consecutive days in weeks with no unavailability in the machine group. Furthermore, for patients whose minimum number of fractions per week is less than five, constraint (15) schedules two fractions on consecutive days whenever two days in a row are available for all machines in a machine group (otherwise, two fractions on the same day may be necessary if there is an unavailable day in the same week). All fractions must be in the same machine group for all patients, which is enforced by constraint (16). Finally, since the duration is different for the first fraction, and because the day of the last fraction is needed in the objectives, constraint (17) enforces an ordering of the fractions.
\[\sum_{m\in b_{\mathcal{M}}}q^{i}_{p,m,d,f}=\sum_{m\in b_{\mathcal{ M}}}q^{i}_{p,m,d+1,f+1}, \forall b_{\mathcal{M}}\in\mathcal{B}_{\mathcal{M}},d\in\mathcal{ D}_{g},f\in\mathcal{F}_{p},g\in\mathcal{G}\text{ where }g\notin\mathcal{G}^{\mathcal{U}}_{b_{\mathcal{M}}} \tag{14}\] \[\sum_{m\in b_{\mathcal{M}}}q^{i}_{p,m,d,f}=\sum_{m\in b_{\mathcal{ M}}}q^{i}_{p,m,d+1,f+1}, \text{if }f^{\min}_{h_{p}}<5,\forall b_{\mathcal{M}}\in\mathcal{B}_{ \mathcal{M}},f\in\mathcal{F}_{p},d\in\mathcal{D}\] (15) \[\text{where }d\notin\mathcal{D}^{\mathcal{U}}_{m}\text{ and }d+1\notin \mathcal{D}^{\mathcal{U}}_{m}\text{ for all }m\in b_{\mathcal{M}}\]
\[\sum_{m\in b_{\mathcal{M}}}\sum_{d\in\mathcal{D}}q^{i}_{p,m,d,f}= \sum_{m\in b_{\mathcal{M}}}\sum_{d\in\mathcal{D}}q^{i}_{p,m,d+1,f+1}, \forall b_{\mathcal{M}}\in\mathcal{B}_{\mathcal{M}},f\in\mathcal{F}_{p} \tag{16}\] \[\sum_{d\in\mathcal{D}}d\sum_{m\in\mathcal{M}}q^{i}_{p,m,d,f}\leq \sum_{d\in\mathcal{D}}d\sum_{m\in\mathcal{M}}q^{i}_{p,m,d,f+1}, \forall f=\{1,\ldots,F_{p}-1\} \tag{17}\]
If all machines in a machine group are available in a week, there should be at most one fraction per day, enforced by (18). If a patient requires fewer than five fractions per week, constraint (19) states that there should also be at most one fraction per day. If there is unavailability in a week, there should instead be at most two fractions on one day, enforced by (20). Constraint (21) states that at most one day per week has two fractions scheduled on the same day. There should be at most two gap days between fractions, as stated in constraint (22). Finally, it is not allowed to have two fractions on the first day of treatment, which is enforced by constraint (23).
\[\sum_{m\in b_{\mathcal{M}}}\sum_{f\in\mathcal{F}_{p}}q^{i}_{p,m,d, f}\leq 1, \forall b_{\mathcal{M}}\in\mathcal{B}_{\mathcal{M}},d\in\mathcal{D}_{g},g \in\mathcal{G}\text{ where }g\notin\mathcal{G}^{\mathcal{U}}_{b_{\mathcal{M}}} \tag{18}\] \[\sum_{m\in\mathcal{M}}\sum_{f\in\mathcal{F}_{p}}q^{i}_{p,m,d,f}\leq 1, \text{if }f^{\min}_{h_{p}}<5,\forall d\in\mathcal{D}\] (19) \[\sum_{m\in b_{\mathcal{M}}}\sum_{f\in\mathcal{F}_{p}}q^{i}_{p,m,d, f}\leq 2, \text{if }f^{\min}_{h_{p}}=5,\forall b_{\mathcal{M}}\in\mathcal{B}_{ \mathcal{M}},d\in\mathcal{D}_{g},g\in\mathcal{G}^{\mathcal{U}}_{b_{\mathcal{M}}}\] (20) \[\sum_{m\in\mathcal{M}}\sum_{f\in\mathcal{F}_{p}}(q^{i}_{p,m,d_{1},f}+q^{i}_{p,m,d_{2},f})\leq 3, \forall g\in\mathcal{G},d_{1},d_{2}\in\mathcal{D}_{g}\text{ with }d_{1}<d_{2}\] (21) \[\sum_{d\in\mathcal{D}}d\sum_{m\in\mathcal{M}}(q^{i}_{p,m,d,f+1}-q ^{i}_{p,m,d,f})\leq 3, \forall f=\{1,\ldots,F_{p}-1\}\] (22) \[\sum_{d\in\mathcal{D}}d\sum_{m\in\mathcal{M}}(q^{i}_{p,m,d,2}-q^{i} _{p,m,d,1})\geq 1 \text{if }F_{p}>1 \tag{23}\]
If two fractions are scheduled on the same day, they need to be scheduled with some time apart (typically 6 hours). This is enforced by constraint (24) (note that \(\mathcal{W}=\{1,2,3,4\}\)). If there are no treatments scheduled on day \(d\), (24) simply states that \(0\leq 2\). If there is one fraction scheduled, the fraction can be scheduled in any window that day. If there are two fractions scheduled, the fractions cannot be scheduled in window two or three, since the left-hand side would then be greater than two; this forces them to be scheduled in windows one and four, and thereby far enough apart. Furthermore, constraint (25) forces at most one fraction to be scheduled in each time window.
\[2\sum_{m\in\mathcal{M}}\sum_{w=\{2,3\}}x^{i}_{p,m,d,w}+\sum_{m\in \mathcal{M}}\sum_{w=\{1,4\}}x^{i}_{p,m,d,w}\leq 2, \forall d\in\mathcal{D} \tag{24}\] \[\sum_{m\in\mathcal{M}}x^{i}_{p,m,d,w}\leq 1 \forall w\in\mathcal{W},d\in\mathcal{D} \tag{25}\]
Minimum Fractions per Week. The minimum fractions per week requirement does not apply in the weeks before treatment starts or after it ends. It is allowed to have fewer than the minimum number of fractions in the first or last treatment week, since a treatment could start or end in the middle of the week. The minimum number of fractions per week must be fulfilled in all intermediate weeks between the first week and the last. The number of fractions that are scheduled in week \(g\) is counted in the variable \(\nu^{i}_{p,g}\) in equation (26). The total number of weeks that each patient is scheduled is computed as the variable \(\tau^{i}_{p}\) by subtracting the start week from the end week in (27). This gives the number of intermediate weeks, which have minimum fraction requirements. For patients with \(f^{\min}_{h_{p}}<5\), there can be intermediate weeks if there are six (or more) fractions in total. Therefore, this constraint applies for all patients where \(F_{p}>5\).
\[\nu^{i}_{p,g}=\sum_{m\in\mathcal{M}}\sum_{d\in\mathcal{D}_{g}} \sum_{f\in\mathcal{F}_{p}}q^{i}_{p,m,d,f}, \forall g\in\mathcal{G} \tag{26}\] \[\tau^{i}_{p}=\sum_{g\in\mathcal{G}}g\sum_{d\in\mathcal{D}_{g}} \sum_{m\in\mathcal{M}}(q^{i}_{p,m,d,F_{p}}-q^{i}_{p,m,d,1})-1, \text{if }F_{p}>5 \tag{27}\]
If the minimum fraction requirement is not met, i.e., the number of scheduled fractions \(\nu^{i}_{p,g}<f^{\min}_{h_{p}}\), then (28) forces \(\rho^{i}_{p,g}=0\). Also, \(\rho^{i}_{p,g}=0\) if the week is the start week or the end week by constraints (29) and (30).
\[\nu^{i}_{p,g}\geq f^{\min}_{h_{p}}\rho^{i}_{p,g}, \forall g\in\mathcal{G} \tag{28}\] \[\rho^{i}_{p,g}+\sum_{m\in\mathcal{M}}\sum_{d\in\mathcal{D}_{g}}q^ {i}_{p,m,d,1}\leq 1, \text{if }F_{p}>5,\forall g\in\mathcal{G}\] (29) \[\rho^{i}_{p,g}+\sum_{m\in\mathcal{M}}\sum_{d\in\mathcal{D}_{g}}q^ {i}_{p,m,d,F_{p}}\leq 1, \text{if }F_{p}>5,\forall g\in\mathcal{G} \tag{30}\]
All intermediate weeks must have minimum fraction requirement scheduled. This is enforced by summing all binary variables \(\rho^{i}_{p,g}\) to be the number of intermediate weeks in constraint (31).
\[\sum_{g\in\mathcal{G}}\rho^{i}_{p,g}=\tau^{i}_{p},\quad\text{if }F_{p}>5 \tag{31}\]
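Since the intermediate-week logic in (26)-(31) is easy to misread, it can help to state it procedurally: only weeks strictly between the first and last treatment week must contain at least \(f^{\min}_{h}\) fractions. A small stand-alone checker, with a hypothetical schedule representation, is sketched below.

```python
def satisfies_min_fractions_per_week(fraction_weeks, f_min):
    """Check the intermediate-week rule of (26)-(31) for one candidate schedule.

    fraction_weeks: list with the treatment week index of every scheduled fraction
    (hypothetical schedule representation). The first and last treatment weeks are exempt.
    """
    if not fraction_weeks:
        return True
    first_week, last_week = min(fraction_weeks), max(fraction_weeks)
    for g in range(first_week + 1, last_week):        # intermediate weeks only
        if fraction_weeks.count(g) < f_min:
            return False
    return True

# Example: a course with fractions in weeks [1, 1, 1, 2, 2, 3, 3] and f_min = 3
# is rejected, because the intermediate week 2 only contains two fractions.
```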
Non-Conventional Treatments. The most common type of non-conventional treatment is one where patients are treated with a pause day between fractions. For these patients, constraints (14) and (15) (fractions on consecutive days) do not apply. Since the time horizon is assumed to contain only weekdays, and the weekend is also considered a pause between fractions, the set \(\mathcal{D}^{\text{fr}i}\) is created to include all Fridays in the planning horizon. Constraint (32) states that fractions \(f\) and \(f+1\) should be at least two days apart, unless fraction \(f\) is on a Friday, in which case (33) states that the fractions should instead be at least one weekday apart. Constraint (22) already states that the fractions should be at most three days apart.
\[\sum_{d\in\mathcal{D}\setminus\mathcal{D}^{\text{fr}i}}d\sum_{m \in\mathcal{M}}(q^{i}_{p,m,d,f+1}-q^{i}_{p,m,d,f})\geq 2 \forall f=\{1,\ldots,F_{p}-1\} \tag{32}\] \[\sum_{d\in\mathcal{D}^{\text{fr}i}}d\sum_{m\in\mathcal{M}}(q^{i} _{p,m,d,f+1}-q^{i}_{p,m,d,f})\geq 1 \forall f=\{1,\ldots,F_{p}-1\} \tag{33}\]
Another type of non-conventional treatment is for a group of patients where \(F_{p}=5\) and \(f^{\min}_{h_{p}}=5\), meaning that all fractions must be scheduled in the same week Monday to Friday. For these patients, \(\mathcal{A}_{p}\) is the set of Mondays in the planning horizon, and the ordering is enforced by constraint (34).
\[\sum_{m\in b_{\mathcal{M}}}q^{i}_{p,m,d,f}=\sum_{m\in b_{\mathcal{M}}}q^{i}_{p,m,d+1,f+1},\quad\forall b_{\mathcal{M}}\in\mathcal{B}_{\mathcal{M}},d=\{1,\ldots,D-1\},f=\{1,\ldots,F_{p}-1\} \tag{34}\]
A third type of non-conventional treatment at Iridium Network is total body irradiation (TBI) treatments, that should be delivered twice daily on Monday to Wednesday. For these patients, \(\mathcal{A}_{p}\) is the set of Mondays in the planning horizon and \(f_{h_{p}}^{\min}=6\). Constraints (18) and (19) are removed to allow multiple fractions per day. Constraint (24) already forces the fractions to be scheduled in the first and last time window, but we now allow this for multiple days per week by removing constraint (21). Constraints (14), (15) and (23) are also removed for these patients to never force fractions to be on consecutive days. Furthermore, (35) states that there should be at most two days between first and last fraction.
\[\sum_{d\in\mathcal{D}}d\sum_{m\in\mathcal{M}}(q_{p,m,d,F_{p}}^{i}-q_{p,m,d,1} ^{i})=2 \tag{35}\]
#### 4.2.2 Subproblem Objective Function
The subproblem objective function has two main components: the cost of the schedule, and the cost related to the dual variables from the master problem. The variables used to formulate the objective function are presented in Table 4.
Schedule cost.The objectives presented in Section 3 are formulated as a cumulative cost function, where the different objectives are combined with weights \(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5},\alpha_{6},\alpha_{7}\) in (36) to the total cost \(c_{p}^{i}\) of the schedule \(i\in\mathcal{K}_{p}\).
\[\begin{split} c_{p}^{i}=&\ \alpha_{1}(c_{h_{p}}\sum_{m\in\mathcal{M}}\sum_{d=d_{p}^{\min}}^{D}q_{p,m,d,1}^{i}(d-d_{p}^{\min}))+\alpha_{2}(\sum_{d\in\mathcal{D}}z_{p,d}^{i})+\\ &\ \alpha_{3}(u_{p}^{i})+\alpha_{4}(\sum_{f\in\mathcal{F}_{p}}\sum_{m\in\mathcal{M}}\sum_{d\in\mathcal{D}}q_{p,m,d,f}^{i}\mathds{1}_{(m\notin\mathcal{M}_{h_{p}}^{\text{pref}})})+\alpha_{5}(\sum_{f=1}^{F_{p}-1}s_{p,f}^{i})+\\ &\ \alpha_{6}(\sum_{f\in\mathcal{F}_{p}}\sum_{m\in\mathcal{M}}\sum_{d\in\mathcal{D}}q_{p,m,d,f}^{i}\mathds{1}_{(m\notin\mathcal{S}_{\mathcal{M}}^{\text{pref},p})})+\alpha_{7}o_{p}^{i}\end{split} \tag{36}\]
The most important objective is to minimize a weighted sum of the waiting times. In the first term in (36), the number of waiting days after \(d_{p}^{\min}\) are linearly penalized with weight \(c_{h_{p}}\) corresponding to the priority group of protocol \(h\) for patient \(p\). The second cost is the deviations in treatment time for each patient, which is computed as the number of time window switches. The variable \(z_{p,d}\) is defined according to constraints (37) and (38) to compute the time window switches between two days, and used in the second term in the cost function (36).
\[z_{p,d}^{i}\geq\sum_{m\in\mathcal{M}}(x_{p,m,d,w}^{i}-x_{p,m,d+1,w}^{i}), \forall d=\{1,\ldots,D-1\},w\in\mathcal{W} \tag{37}\] \[z_{p,d}^{i}\geq\sum_{m\in\mathcal{M}}(x_{p,m,d+1,w}^{i}-x_{p,m,d,w}^{i}), \forall d=\{1,\ldots,D-1\},w\in\mathcal{W} \tag{38}\]
Some patients have a preference on treatment time of the day. The variable \(u_{p}^{i}\) is defined in (39) and (40) to be the violation of the time window preference for each patient, and is used in the third term in (36).
\[u_{p}^{i}=\sum_{m\in\mathcal{M}}\sum_{d\in\mathcal{D}}\sum_{w\in\mathcal{W}}x_{p,m,d,w}^{i}\mathds{1}_{(w\neq w^{\text{pref},p})},\quad\text{if }p\in\mathcal{P}^{w\text{-pref}} \tag{39}\] \[u_{p}^{i}=0,\quad\text{if }p\notin\mathcal{P}^{w\text{-pref}} \tag{40}\]
The number of fractions scheduled on a non-preferred machine stated by the treatment protocol are summed in the fourth term in (36). Moreover, there is a cost for the number of switches between machines that are not completely beam-matched. If fraction \(f\) is scheduled on a machine in a group of completely beam-matched machines, but \(f+1\) is not, then it must be scheduled on a partially matched machine. The variable \(s_{p,f}^{i}\) is one if there is a switch to a partially
\begin{table}
\begin{tabular}{l l} \hline \hline \(z_{p,d}^{i}\in\{0,1\}\) & \(1\) if patient \(p\) in schedule \(i\) has switched windows from day \(d\) to \(d+1\), \(0\) otherwise \\ \(u_{p}^{i}\in\{0,1,\ldots\}\) & The number of violations of the time window preference for patient \(p\in\mathcal{P}\) in schedule \(i\) \\ \(s_{p,f}^{i}\in\{0,1\}\) & \(1\) if patient \(p\) in schedule \(i\) switches to a partially beam-matched machine from fraction \(f\) to \(f+1\), \(0\) otherwise, for \(f=\{1,\ldots,F_{p}-1\}\) \\ \(o_{p}^{i}\in\{0,1,\ldots\}\) & The number of excess days in the overall treatment time for patient \(p\) in schedule \(i\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Cost function variables in the subproblem (schedule generation)
matched machine, enforced by constraint (41). All machine switches to partially matched machines are summed in the fifth term in (36).
\[s^{i}_{p,f}\geq\sum_{d\in\mathcal{D}}\sum_{m\in\mathcal{M}}(q^{i}_{p,m,d,f}-q^{i} _{p,m,d,f+1}),\quad\forall c_{\mathcal{M}}\in\mathcal{C}_{\mathcal{M}},f=\{1, \ldots,F_{p}-1\} \tag{41}\]
Most patients have a preference for which hospital to be treated at, based on where they live; \(\mathcal{S}_{\mathcal{M}}^{\text{pref},p}\) is the set of machines at the preferred site of patient \(p\in\mathcal{P}\). In the sixth term in (36), the fractions scheduled on a site other than the preferred one are summed. Finally, there is an objective to keep the overall treatment time as short as possible. The overall treatment time is computed as the number of weekdays from the first to the last fraction. The excess time has a lower bound of zero, and is otherwise the number of extra days needed to complete the fractions, computed by (42) and added to the cost in the seventh term in (36).
\[o^{i}_{p}\geq\sum_{d\in\mathcal{D}}d\sum_{m\in\mathcal{M}}(q^{i}_{p,m,d,F_{p}}- q^{i}_{p,m,d,1})-F_{p} \tag{42}\]
The values of the weights \(\alpha_{1},\ldots,\alpha_{7}\) should reflect the importance of the objectives in relation to each other. In [33], it was shown that the balancing of the different objectives is clearly related to the weights. In this paper, the weights are fixed to the values \(\alpha_{1}=100,\alpha_{2}=1,\alpha_{3}=1,\alpha_{4}=10,\alpha_{5}=10,\alpha_{6}=50,\alpha_{7}=300\). Since the waiting time objective is also weighted by \(c_{h_{p}}\), this objective has the highest weight for priority A patients.
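Read as code, (36) is simply a weighted sum of seven counters. The sketch below evaluates it from pre-computed per-objective counts using the weights fixed above; the tuple layout is a hypothetical illustration.

```python
ALPHA = [100, 1, 1, 10, 10, 50, 300]    # alpha_1, ..., alpha_7 as fixed in this paper

def schedule_cost(counts, c_priority):
    """Evaluate the cost function (36) from pre-computed per-objective counts.

    counts = (waiting_days, window_switches, window_pref_violations,
              fractions_on_non_preferred_machine, partial_machine_switches,
              fractions_on_non_preferred_site, excess_treatment_days)
    c_priority is the protocol weight c_{h_p} that multiplies the waiting-time term.
    (The tuple layout is a hypothetical illustration.)
    """
    a1, a2, a3, a4, a5, a6, a7 = ALPHA
    wait, z_switch, u_pref, non_pref_machine, p_switch, non_pref_site, excess = counts
    return (a1 * c_priority * wait + a2 * z_switch + a3 * u_pref
            + a4 * non_pref_machine + a5 * p_switch + a6 * non_pref_site + a7 * excess)
```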
Objective function.Relaxing the integer assumption in the master problem and solving the LP yields the dual variables \(\lambda_{p}\) associated with (2), \(\gamma_{m,d,w}\) associated with (3), \(\eta_{c}\) associated with (4) and \(\xi_{c}\) associated with (5). The master problem constraints have parameters \(D^{i}_{p,m,d,w}\), corresponding to the duration of the treatment on machine \(m\), day \(d\), time window \(w\) in schedule \(i\in\mathcal{K}_{p}\), and \(d^{i}_{\text{start},p}\) and \(d^{i}_{\text{end},p}\) stating the first and last day of the treatment in schedule \(i\in\mathcal{K}_{p}\). These parameters are computed from the subproblem variables in (43), (44) and (45).
\[D^{i}_{p,m,d,w} =\Big{(}(x^{i}_{p,m,d,w}-t^{i}_{p,m,d,w})\text{\emph{d}ur}^{f}_{p }+t^{i}_{p,m,d,w}\text{\emph{d}ur}^{1}_{p}\Big{)}\quad\forall m\in\mathcal{M},d\in\mathcal{D},w\in\mathcal{W} \tag{43}\] \[d^{i}_{\text{start},p} =\sum_{d\in\mathcal{D}}d\sum_{m\in\mathcal{M}}q^{i}_{p,m,d,1}\] (44) \[d^{i}_{\text{end},p} =\sum_{d\in\mathcal{D}}d\sum_{m\in\mathcal{M}}q^{i}_{p,m,d,F_{p}} \tag{45}\]
For the consecutive treatments, the subproblem objective function has an additional term: for primary treatments, the term is defined in (46), and for secondary treatments, the term is defined in (47), both based on the dual variables from (4) and (5) with \(d^{i}_{\text{start},p}\) and \(d^{i}_{\text{end},p}\) defined in (44), (45), where \(c\) is the index of the \((p_{c,1},p_{c,2})\in\mathcal{P}^{con}\)-pair.
\[-(\eta_{c}-\xi_{c})\sum_{d\in\mathcal{D}}d\sum_{m\in\mathcal{M}} q^{i}_{p,m,d,F_{p}}\quad\text{if }p=p_{c,1}\text{ for }(p_{c,1},p_{c,2})\in\mathcal{P}^{con} \tag{46}\] \[-(\xi_{c}-\eta_{c})\sum_{d\in\mathcal{D}}d\sum_{m\in\mathcal{M}} q^{i}_{p,m,d,1}\quad\text{if }p=p_{c,2}\text{ for }(p_{c,1},p_{c,2})\in\mathcal{P}^{con} \tag{47}\]
The subproblem objective function (48) is the cost of the schedule defined by (36), minus the master dual variables multiplied by the coefficients given from their respective constraints in the master problem. If \(p\in\mathcal{P}^{con}\), the term from (46) or (47) is also added.
\[\text{minimize}\quad c^{i}_{p}-\lambda_{p}-\sum_{m\in\mathcal{M}} \sum_{d\in\mathcal{D}}\sum_{w\in\mathcal{W}}\gamma_{m,d,w}\Big{(}(x^{i}_{p,m,d,w}-t^{i}_{p,m,d,w})\text{\emph{d}ur}^{f}_{p}+t^{i}_{p,m,d,w}\text{\emph{d}ur }^{1}_{p}\Big{)} \tag{48}\]
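For pricing, only the sign of (48) matters. Given the schedule cost (36), the master duals, and the per-slot durations of a candidate schedule, the reduced cost can be evaluated as sketched below (argument layout is hypothetical); the extra terms (46)/(47) are added for the primary and secondary parts of consecutive treatments.

```python
def reduced_cost(schedule_cost_value, lam_p, gamma, durations,
                 role=None, eta_c=0.0, xi_c=0.0, start_day=0, end_day=0):
    """Reduced cost (48) of one candidate schedule (sketch, hypothetical argument layout).

    durations: dict (m, d, w) -> treatment minutes of this schedule in that slot, i.e. D^i_{p,m,d,w}
    role:      None for ordinary patients, "primary" or "secondary" for consecutive treatments,
               in which case the terms (46)/(47) are added using the duals eta_c and xi_c.
    """
    rc = schedule_cost_value - lam_p
    rc -= sum(gamma[m, d, w] * minutes for (m, d, w), minutes in durations.items())
    if role == "primary":        # term (46): depends on the last treatment day d_end
        rc -= (eta_c - xi_c) * end_day
    elif role == "secondary":    # term (47): depends on the first treatment day d_start
        rc -= (xi_c - eta_c) * start_day
    return rc
```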
### Heuristic to Create Initial Schedules
When solving a CG formulation, it is often beneficial to have a heuristically generated set of initial columns [42], which is true also for the CG algorithm as described in Figure 3. For the algorithm to perform well, it is essential that the set of initial schedules is of good _quality_, and that there is _diversity_ in the initial schedules for each patient (i.e., not the same column generated over and over again). Therefore, there must be randomization included in the heuristic. The number of initial schedules is set to 50, because for a larger number it becomes difficult to create new schedules that are not duplicates of the ones already created without sacrificing too much quality, which in general increases solution times.
The overarching idea of the schedule construction algorithm is that it creates 50 _complete_ schedules with all patients, that vary in quality but all fulfill the capacity constraints. In other words, for \(i=1,\ldots,50\) the
capacity constraint in the master problem (3) can be fulfilled by setting \(a_{p}^{i}=1\) for all \(p\in\mathcal{P}\) for one \(i\) at a time. This is achieved in the algorithm by looping through all patients (in partly randomized order), and assigning them to machines, days and time windows according to some specific (partly randomized) order, while making sure that the maximum capacity of the resources is not exceeded, and repeating this from scratch for each \(i\). The time horizon is chosen long enough so that it will always be possible to find feasible schedules on some machine. The schedule construction heuristic is presented in pseudocode in Appendix A.
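A simplified sketch of the construction idea (the full pseudocode is in Appendix A): in each of the 50 rounds, patients are visited in a partly randomized order and each fraction is greedily booked into a partly randomized machine/day/window slot with enough free minutes. The sketch deliberately omits the per-patient sequencing rules (consecutive days, allowed start days, machine compatibility) that the real heuristic enforces; names and the data layout are hypothetical.

```python
import random

def build_initial_schedules(patients, machines, days, windows, window_len,
                            occupied, fraction_data, n_schedules=50, seed=0):
    """Greedy randomized construction of complete initial schedules (simplified sketch).

    fraction_data[p] = {"n_fractions": ..., "first": minutes, "rest": minutes}  (hypothetical layout)
    Each round books every fraction of every patient into some machine/day/window slot
    with enough remaining capacity, so the 50 resulting overall schedules are feasible for (3).
    """
    rng = random.Random(seed)
    all_rounds = []
    for _ in range(n_schedules):
        free = {(m, d, w): window_len[w] - occupied[m, d, w]
                for m in machines for d in days for w in windows}
        round_schedules = {}
        patient_order = list(patients)
        rng.shuffle(patient_order)                  # partly randomized patient order
        for p in patient_order:
            slots = [(m, d, w) for m in machines for d in days for w in windows]
            rng.shuffle(slots)                      # partly randomized slot order
            booked = []
            for f in range(fraction_data[p]["n_fractions"]):
                minutes = fraction_data[p]["first"] if f == 0 else fraction_data[p]["rest"]
                for m, d, w in slots:
                    if free[(m, d, w)] >= minutes:
                        free[(m, d, w)] -= minutes
                        booked.append((f, m, d, w))
                        break
            round_schedules[p] = booked
        all_rounds.append(round_schedules)
    return all_rounds
```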
## 5 Uncertainty in Future Arrivals
A major challenge in RT patient scheduling is the stochastic patient inflow, and that patients of high priority should start treatment as soon as possible after arrival. Figure 4 shows the daily patient arrivals at Iridium in 2020. The two-week rolling average varies between 13.9 and 26.4 patients per day, and between 4.1 and 10.1 priority A-patients per day. Most clinics, including Iridium Network, reserve a proportion of machine capacity for high priority patients each day. This solution can result in poor quality schedules; urgent patients may have to wait for treatment if not enough capacity was reserved, or the treatments for low-priority patients may be delayed as a result of poor machine utilization.
To account for uncertainties in future arrivals, three different methods to reserve time for high priority patients are investigated. These are: not saving time for future arrivals at all (_no reservation_), reserving time on the machines based on average utilization rates for high priority patients, which should mimic the current practice (_static time reservation_), and including expected future patient arrivals in the problem as placeholder (dummy) patients, thereby allowing for trade-offs with the actual patients (_dynamic time reservation_). Assuming that we could see into the future, it is possible to compute an _offline_ solution by using the actual future arrivals for the coming weeks and including them each day when creating the schedule. This is of course not possible in reality, but is used to obtain a best-case schedule.
The _static time reservation_ method uses the historical patient arrivals, and distributes them over the preferred machines for each protocol based on the average number of fractions and the average session time for each protocol, according to Figure 5. The time is reserved for priority A patients by blocking it for priority B and C patients; the input schedule \(S_{m,d,w}\) is adjusted to include the blocked time when schedules for priority B and C patients are generated, both in Algorithm 1 and in constraint (13). However, the right-hand side in (3) remains intact, since that constraint deals with patients of all priority groups. This is not a problem, since no schedules that use the blocked time can be created.
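A sketch of the static blocking step: the reserved share of each machine's capacity (the percentages in Figure 5) is added to the occupied minutes \(S_{m,d,w}\) that priority B and C patients see, while the capacity constraint (3) itself is left untouched. The per-machine reservation shares and the dict layout are hypothetical simplifications of the per-protocol reservation described above.

```python
def blocked_input_schedule(occupied, reserved_share, window_len):
    """Occupied minutes S_{m,d,w} as seen when scheduling priority B and C patients.

    occupied:       dict (m, d, w) -> minutes already booked on machine m, day d, window w
    reserved_share: dict m -> fraction of capacity reserved for priority A patients (cf. Figure 5)
    (Hypothetical per-machine shares; the paper reserves time per protocol and machine.)
    """
    blocked = {}
    for (m, d, w), minutes in occupied.items():
        reserve = reserved_share.get(m, 0.0) * window_len[w]
        # Blocked time is added on top of what is already booked, capped at the window length.
        blocked[(m, d, w)] = min(window_len[w], minutes + reserve)
    return blocked
```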
The _dynamic time reservation_ method adds the expected future priority A patients as _placeholder_ (dummy) patients and includes them in the scheduling. Based on historical arrival rates, 36 priority A patients are expected to arrive each week on average. In the dynamic reservation method, these are distributed over the protocols according to the protocol probabilities, where protocols with similar characteristics are grouped to simplify the problem. An overview of the placeholder patients for each week can be seen in Table 5. In each daily scheduling problem, the placeholder patients are added for each week in the planning horizon where the actual patients are expected to start their treatments, which is usually around 8 weeks. The cost function \(c_{p}^{i}\) in (36) for the future dummy patients does not include costs for time window switches, time window preferences, or machine switches, but it does include waiting time, preferred machines, overall treatment time and site preferences. The site preferences are generated for each future dummy patient based on the historical distribution between the hospitals. Including the expected future patients as placeholder patients in the scheduling reserves capacity on the machines for future patients; the reservation is, however, dynamic, since it allows for trade-offs with the actual patients: if a low-priority patient has already waited a very long time for treatment, there are cases where it is beneficial for the overall schedule to let that patient start before a patient of higher priority.
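As a rough illustration, the snippet below shows how placeholder priority A patients could be generated for each future week from an average arrival rate and a protocol distribution; the names (`PROTOCOLS_PER_WEEK`, `make_placeholders`) and the dictionary-based patient representation are assumptions rather than the implementation used in the model.

```python
import random

# Illustrative weekly protocol counts for the 36 expected priority A arrivals (cf. Table 5).
PROTOCOLS_PER_WEEK = {"Urgent 1": 19, "VMAT 1": 6, "VMAT 2": 5, "STX 1": 3,
                      "Electron": 1, "VMAT 3": 1, "Urgent 2": 1}

def make_placeholders(first_week, last_week, site_distribution):
    """Create one set of dummy priority A patients for every week in the horizon."""
    placeholders = []
    for week in range(first_week, last_week + 1):
        for protocol, count in PROTOCOLS_PER_WEEK.items():
            for _ in range(count):
                placeholders.append({
                    "protocol": protocol,
                    "priority": "A",
                    "earliest_start_week": week,
                    # site preference drawn from the historical hospital distribution
                    "site": random.choices(list(site_distribution),
                                           weights=list(site_distribution.values()))[0],
                })
    return placeholders
```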
Figure 4: Daily arrivals in 2020 at Iridium Network, excluding weekends and holidays
## 6 Results
To test the model, data from Iridium Network from 2020 is used. In 2020, they operated ten linacs. One of them, M9, is a specialized linac and its scheduling can be done separately from the other nine linacs. In 2020, there were 87 days with planned machine unavailability, not counting the weekends. This means that around 34% of all weekdays in the year had some type of machine unavailability planned due to holidays, maintenance or other quality assurance activities. All patients that arrived at Iridium Network in 2019, but had treatments scheduled in 2020, are seen as fixed in the input schedule for the 2020 scheduling. The occupancy in the 2020 schedule resulting from 2019 patients can be seen in Figure 6. On the first day of 2020, this occupancy will make up the input schedule \(S_{m,d,w}\) in the CG model.
The daily batch scheduling at Iridium Network is simulated starting from the 1st of January 2020 by using the _actual_ patient arrivals at Iridium Network (Figure 4) for each day of 2020. A problem instance is made up of the _input schedule_ containing the treatments that are fixed to the schedule due to previous scheduling decisions, together with the list of unscheduled patients _from previous days_, and the _current day's arrivals_. The CG algorithm is run to perform the daily batch scheduling. In the resulting schedule, the patients that start treatment within the notification period are fixed to the input schedule for the next day, while the scheduling decisions are postponed to the next day for the remaining patients.
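A minimal sketch of this rolling-horizon simulation is given below; the helper names (`arrivals_on`, `solve_daily_batch`, `input_schedule.fix`) are hypothetical placeholders for the corresponding steps of the CG algorithm and the data handling, and holidays are ignored for brevity.

```python
import datetime as dt

def simulate_year(input_schedule, start=dt.date(2020, 1, 1), end=dt.date(2020, 12, 31),
                  notification_period=3):
    """Rolling-horizon daily batch scheduling over one year (weekends skipped)."""
    unscheduled = []                       # patients whose decisions were postponed
    day = start
    while day <= end:
        if day.weekday() < 5:              # workdays only; holidays omitted in this sketch
            batch = unscheduled + arrivals_on(day)                     # today's instance
            schedule = solve_daily_batch(input_schedule, batch, day)   # the CG algorithm
            unscheduled = []
            for patient, plan in schedule.items():
                if (plan.start_day - day).days <= notification_period:
                    input_schedule.fix(patient, plan)           # decision becomes final
                else:
                    unscheduled.append(patient)                 # revisit tomorrow
        day += dt.timedelta(days=1)
    return input_schedule
```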
In total, there are 254 problem instances corresponding to the workdays in 2020. The planning horizon is three months. The number of patients to be scheduled each day varies depending on which uncertainty method from Section 5 is used; for the static time reservation method, each daily batch scheduling instance contains a minimum of 27
Figure 5: Percentage capacity reserved for priority A patients for each machine for _static time reservation_ method
Figure 6: Occupancy in input schedule resulting from patients that arrived in 2019
| Protocol | # patients/week | Minimum fractions/week | \(\{dur_{p}^{1},dur_{p}^{f}\}\) | Machines preferred | Number of fractions | Minimum days from CT |
| --- | --- | --- | --- | --- | --- | --- |
| Urgent 1 | 19 | 1 | \(\{24,24\}\) | M1, M2, M3, M4, M5, M6, M7, M8 | 3 | 0 |
| VMAT 1 | 6 | 5 | \(\{24,12\}\) | M2, M3, M5, M6, M10 | 28 | 11 |
| VMAT 2 | 5 | 4 | \(\{24,12\}\) | M1, M2, M3, M5, M6, M7, M8, M10 | 23 | 10 |
| STX 1 | 3 | 3 | \(\{40,40\}\) | M9 | 6 | 5 |
| Electron | 1 | 3 | \(\{24,12\}\) | M1, M4, M5, M6, M8 | 12 | 5 |
| VMAT 3 | 1 | 3 | \(\{24,12\}\) | M1, M4, M5, M6, M8 | 10 | 4 |
| Urgent 2 | 1 | 1 | \(\{24,24\}\) | M9 | 1 | 0 |

Table 5: Placeholder priority A patients added to problem in the dynamic time reservation method
patients and an average of 72.3 patients. All data used in this paper is publicly available1. The numerical experiments are run on a Windows 10 computer, with an Intel(r) Core(tm) i9-7940X X-series processor and 64 GB of RAM. The CG model is created in Python 3.8 and solved using IBM ILOG CPLEX 20.1 in the Python API.
Footnote 1: Access through this link: [https://osf.io/j2bxp/?view_only=e1402382b67f4ad0a4b8a3f4ed28088a](https://osf.io/j2bxp/?view_only=e1402382b67f4ad0a4b8a3f4ed28088a)
### Planned machine unavailability
To the best of our knowledge, the RTSP model presented in this paper is the first to consider planned unavailability on machines. Without the planned unavailability, constraints (14) to (31) can be replaced by a single constraint forcing fractions to be on consecutive days, as done in [33]. Furthermore, since gaps in the schedules induced by the scheduling procedure are only allowed when there is machine unavailability, the minimum fractions per week requirements are unnecessary. Figure 7 shows boxplots of the solution times for the different methods to handle future uncertainty. The solution times for the dynamic time reservation model are shown with and without planned unavailability. The time limit was set to one hour. One can see in Figure 7 that including the planned machine unavailability in the CG model increases the solution times considerably: the average solution time is 3.79 times as high when including unavailability ("DynamicRes") as when not including it ("Without unavailability (DynamicRes)"). Furthermore, the solution quality is also poorer since there are many more timeouts among the instances: when including the unavailability, 11.8% of the 254 instances time out, whereas only 0.8% of the instances time out when not including the unavailability.
Excluding the weekends, the machines have between 16 and 18 unavailable days each in 2020. When not including the unavailability in the CG model, the number of fractions scheduled on unavailable days is on average 375.1 per machine during 2020. It could be an option to not include the unavailability in the model, and to instead manually reschedule the fractions for the patients on these days. However, the number of fractions to manually reschedule is very large, and there is no way to ensure that the minimum fractions per week requirements can be fulfilled. According to the UK guidelines for the management of treatment interruptions [4], there is a wide range of tumor growth rates, and therefore, three different categories with different regulations on maximum number of gaps in the schedule. Category 1 patients have tumor types for which there is evidence that prolongation of treatment affects outcome, and who are being treated radically with curative intent. For these patients, the guidelines state: "Any audit of this category of patient - departmental or national - should show that there was no prolongation of overall treatment time in excess of two days for at least 95% of the group." The schedules generated by the CG algorithm all fulfill this goal.
### Uncertainty in future arrival rates
The methods _no reservation_, _static time reservation_, and _dynamic time reservation_ are compared to the _offline_ solution, which includes the actual future patient arrivals and therefore is the optimal solution. The solution times are presented in Figure 7. Figure 8 shows six of the objectives for all versions. The solution time is shortest when not taking future patients into account at all (no reservation); this is the only setting in which the CG approach never reaches the time limit. However, in Figure 8 it is clear that although it is comparable with the other methods in objectives b and d-f, it clearly has the worst waiting time for priority A patients (objective a), and some patients would have to wait almost three weeks for treatment. Furthermore, the priority A patients also have a higher cost in objective c, i.e., more fractions scheduled on non-preferred machines. In contrast, the offline solution naturally has the best performance in the waiting time objective, since it always optimizes the schedules for the actual future events that will occur. The offline solution also demonstrates that there is a trade-off between the objectives; although it has better performance in the waiting time objective than the other methods, this is possible due to more fractions being scheduled on non-preferred machines (objective c) for the priority A patients.
Figure 7: Time to solve the instances for the different methods to handle unavailability. "Res" here means reservation. The dynamic time reservation is shown both with and without unavailability in the model. Timeout was set to 1 hour
Figure 7 shows that the static time reservation method is much faster than the dynamic time reservation method, and the solution times are comparable to when not reserving time for priority A patients at all. Figure 8a) shows that the static time reservation method has the same performance in waiting time for priority A patients as the offline solution, however, at the cost of priority B and C patients sometimes having to wait unacceptably long for treatment. In objectives b-f, Figure 8 shows that the static and dynamic time reservation methods have similar performance.
The dynamic time reservation method has the longest solution times as seen in Figure 7, which could be expected since there is one subproblem for each patient, and therefore the time to solve the subproblems scales somewhat linearly with the number of patients (recall that the dynamic time reservation method adds future patients as placeholder patients that are included in the optimization). The master problem, however, also grows with the number of patients and its increase is sometimes much more than linear due to a combinatorial explosion. Although the solution times are longer and the method often times out before reaching optimality, the dynamic time reservation is the method closest to the offline solution for the waiting times and has similar performance in the other objective functions. This indicates that if the timeout were longer, the quality of the dynamic time reservation method would likely be even better. Overall, the performance in the objective functions as shown in Figure 8 demonstrates that the dynamic time reservation method outperforms both the method without reservation and the static time reservation method.
### Sensitivity analysis for dynamic time reservation method
For the dynamic time reservation method, the amount of time reserved (i.e., the number of placeholder priority A patients) is varied to analyze the sensitivity of the solution. The different cases are presented in Table 6, where the number of future priority A patients, \(\lambda_{A}\), is varied by \(\pm 6\) patients per week. The dynamic time reservation method adds expected future priority A patients for the weeks in the time horizon where the current (actual) patients are assumed to start treatment based on their earliest start day given by \(d_{p}^{\min}\). This is usually around 8 weeks, but can also be shorter or longer. Therefore, Table 6 also shows the average total number of future placeholder patients per problem instance.
| Number of priority A patients per week | Note | Average total number of future placeholder patients per problem instance |
| --- | --- | --- |
| \(\lambda_{A}=30\) | \(-16.7\%\) | 240.1 (std: 45.0) |
| \(\lambda_{A}=36\) | Expected number from data | 288.5 (std: 54.0) |
| \(\lambda_{A}=42\) | \(+16.7\%\) | 336.7 (std: 63.0) |

Table 6: The variations in expected future arrivals for sensitivity analysis of the dynamic time reservation method
Figure 8: Costs relating to the different objectives for priority group A, B and C, with the top 1% marked as outliers
The results from the sensitivity analysis can be seen in Figure 9. The left plot shows the waiting times in days for the different priority groups. It shows that the dynamic time reservation method is robust to fluctuations in the arrival rates, since all three values of \(\lambda_{A}\) perform similarly, with some variations in the top 1% of the waiting times only. The right plot shows that the computation times for the different values of \(\lambda_{A}\) are also similar; however, for \(\lambda_{A}=42\) the solution times are actually shorter than for \(\lambda_{A}=36\). The most likely explanation for this is that in the CG algorithm, all subproblems (one for each patient) are run only every third iteration, and in the two intermediate iterations only the subproblems that previously had negative reduced costs are run. When \(\lambda_{A}\) is larger, more time is reserved for priority A patients, which means that for the current (actual) patients there is more capacity available in the input schedule, making it more likely that the schedules generated initially by the heuristic can be used in an optimal solution. If the subproblems do not have negative reduced costs, they will not be run in every iteration, saving time considerably.
### Performance of the CG approach
As stated in Section 4, the linear relaxation of the master problem is converted to an integer program in the last step, which does not guarantee that the optimal solution is found. To test the performance of the CG approach, an equivalent IP is formulated and run for a subset of ten problem instances for which the CG approach did not time out (for instances that time out, it is obvious that the solution can be improved). By using the CG solution as a warm start and giving the IP model unlimited runtime, the optimal value of each instance is determined. Table 7 shows the optimality gap between the CG solution and the _proven_ optimal solution for different numbers of initial columns. It also shows the results when warm-starting the IP formulation from a feasible solution generated by the CG heuristic, for which optimality can be proven in four cases within the 24 hour time limit. The results show that the CG approach performs best with 50 initial columns per patient, but that it is robust to changes in this number. On average, the optimality gap between the CG solution and the proven optimum is 2.3% in the 50-column case.
| Instance | CG: 30 columns per patient | CG: 50 columns per patient | CG: 75 columns per patient | CG: 100 columns per patient | IP (24 h time limit) |
| --- | --- | --- | --- | --- | --- |
| 46 | 0.5% (23 min) | 0.1% (19 min) | 0.4% (20 min) | 0.0% (25 min) | 10.3% (24 h) |
| 54 | 0.5% (35 min) | 0.5% (23 min) | 0.3% (22 min) | 0.7% (22 min) | 10.2% (24 h) |
| 74 | 0.0% (23 min) | 0.0% (22 min) | 0.4% (25 min) | 0.0% (29 min) | 57.3% (out of memory at 6.5 h) |
| 96 | 10.4% (31 min) | 10.4% (31 min) | 10.8% (31 min) | 10.7% (36 min) | 0.1% (24 h) |
| 121 | 0.0% (19 min) | 0.0% (18 min) | 0.0% (20 min) | 0.0% (20 min) | 0% (16.2 h) |
| 145 | 2.9% (34 min) | 1.8% (33 min) | 2.5% (31 min) | 2.4% (33 min) | 0% (18.8 h) |
| 165 | 0.0% (27 min) | 0.4% (34 min) | 0.3% (24 min) | 0.4% (35 min) | 7.2% (out of memory at 14.4 h) |
| 183 | 0.4% (18 min) | 0.1% (22 min) | 0.1% (23 min) | 0.1% (24 min) | 50.5% (out of memory at 14.9 h) |
| 238 | 0.8% (20 min) | 0.5% (25 min) | 0.5% (20 min) | 0.5% (26 min) | 0% (1.9 h) |
| 245 | 9.6% (34 min) | 9.3% (35 min) | 10.5% (57 min) | 10.9% (37 min) | 0% (42 min) |
| Average | 2.5% (26 min) | 2.3% (26 min) | 2.6% (27 min) | 2.6% (29 min) | 13.5% (872 min) |

Table 7: The performance of the IP model and the CG approach with different numbers of initial columns per patient. Entries show the relative optimality gap between the current best objective value and the _proven_ optimal value in percent, with the solution time in parentheses.
Figure 9: Sensitivity analysis for dynamic time reservation, with top 1% marked as outliers in the waiting time plot
## 7 Conclusions
It has been shown in numerous studies that long waiting times for RT negatively impact clinical outcomes. Since the number of linacs in a clinic is limited, the waiting time for treatment is often directly linked to the RT patient scheduling problem, which is currently done manually by the staff in most clinics. The main contribution of this paper is an automatic scheduling algorithm for the RTSP based on column generation. The model includes all constraints and objectives necessary for it to work in practice at Iridium Network, a large cancer center with ten linacs in Antwerp, Belgium. To the best of our knowledge, this is the first model to consider planned machine unavailability and the constraints and objectives related to the resulting gaps in the schedules. The model is also the first to include specialized treatments, such as consecutive treatments, and non-conventional treatments. The model also supports multiple hospital locations and allows the patients to have preferences on where they want to be treated. To account for uncertainty in the future arrivals of urgent patients, we present a method for dynamic time reservation, and compare it to the static time reservation method that most clinics use today.
The results show that including planned machine unavailability is not straightforward: many additional constraints must be added to ensure that the patients fulfill the minimum fractions per week requirement while the number of gaps in the schedules is minimized. This leads to a large increase in computational time. However, this addition is necessary for the automatic scheduling algorithm to work in practice when scheduling patients on a rolling time horizon. Furthermore, when comparing different methods to handle uncertainty in future high-priority patient arrivals, the results show that the dynamic time reservation method outperforms the static time reservation method. This is especially true for lower-priority patients, as the static method seems to sometimes be too conservative when reserving time for future urgent patients. The dynamic time reservation method frequently reaches the time limit of one hour, so the quality of the schedules could possibly be improved further if longer computation times were allowed. Since the average arrival rates, together with the distribution of the protocols, are taken from the data from 2020, and the dynamic time reservation method is thereafter tested on the 2020 data, there could be a bias and the results could be overvalued. However, the sensitivity analysis shows that the dynamic time reservation method is robust to fluctuations in arrival rates, which strengthens the conclusion that this method works very well. Finally, by evaluating the performance of the CG approach compared to an exact method, it can be seen that the CG approach generates schedules that are close to optimal in a reasonable time frame.
Future work. The first step is to compare the automatically generated schedules with the actual, manually constructed schedules from Iridium Network. To do this, the manual schedules must be obtained. To make the comparison fair, the same obstacles must be present when using the automatic scheduling algorithm as were present in reality, including unplanned unavailability of machines due to failures.
When accounting for uncertainty in future arrivals, it is possible that the dynamic time reservation method could be improved by not using a static number of placeholder patients to add every week in the time horizon based on average arrivals, but instead using machine learning to predict the future arrivals based on historical arrival rates and the current occupancy in the schedule. To do this, more data is needed, since although a prediction based on the 2020 data can be done, another dataset would be needed for testing.
To improve the quality of the schedules, it is possible that grouping similar treatments to be scheduled after each other could decrease the times needed for machine setup. Furthermore, the fixation devices used during the RT would not need to be moved between treatment rooms, which could also save time and effort for the medical staff.
### Acknowledgements
The authors are grateful to Carole Mercier and Geert de Kerf at Iridium Network for valuable insights into the RT scheduling process and help with data gathering. The authors also thank Mats Carlsson at RISE Research Institutes of Sweden for helpful comments about the manuscript.
## Appendix A Pseudocode for schedule construction
The pseudocode for the construction of the initial schedules in the CG algorithm is presented in Algorithm 1. The function \(R:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) takes a set of elements and returns them in random order. In (49), the different sets of machines used in Algorithm 1 are defined for each patient. The first set of machines \(\mathcal{M}_{1}^{p}\) is the best for the patient, since it adheres to both the site preference and the protocol machine preference; the following sets are, in descending order, less well suited for the patient to be scheduled on.
\[\mathcal{M}_{1}^{p}=\mathcal{S}_{\mathcal{M}}^{\text{pref},p}\cap\mathcal{M}_{h}^{\text{pref}},\qquad\mathcal{M}_{2}^{p}=\mathcal{S}_{\mathcal{M}}^{\text{pref},p}\cap\mathcal{M}_{h},\qquad\mathcal{M}_{3}^{p}=\mathcal{M}_{h}^{\text{pref}},\qquad\mathcal{M}_{4}^{p}=\mathcal{M}_{h} \tag{49}\]
|
2310.01768 | Backdiff: a diffusion model for generalized transferable protein
backmapping | Coarse-grained (CG) models play a crucial role in the study of protein
structures, protein thermodynamic properties, and protein conformation
dynamics. Due to the information loss in the coarse-graining process,
backmapping from CG to all-atom configurations is essential in many protein
design and drug discovery applications when detailed atomic representations are
needed for in-depth studies. Despite recent progress in data-driven backmapping
approaches, devising a backmapping method that can be universally applied
across various CG models and proteins remains unresolved. In this work, we
propose BackDiff, a new generative model designed to achieve generalization and
reliability in the protein backmapping problem. BackDiff leverages the
conditional score-based diffusion model with geometric representations. Since
different CG models can contain different coarse-grained sites which include
selected atoms (CG atoms) and simple CG auxiliary functions of atomistic
coordinates (CG auxiliary variables), we design a self-supervised training
framework to adapt to different CG atoms, and constrain the diffusion sampling
paths with arbitrary CG auxiliary variables as conditions. Our method
facilitates end-to-end training and allows efficient sampling across different
proteins and diverse CG models without the need for retraining. Comprehensive
experiments over multiple popular CG models demonstrate BackDiff's superior
performance to existing state-of-the-art approaches, and generalization and
flexibility that these approaches cannot achieve. A pretrained BackDiff model
can offer a convenient yet reliable plug-and-play solution for protein
researchers, enabling them to investigate further from their own CG models. | Yikai Liu, Ming Chen, Guang Lin | 2023-10-03T03:32:07Z | http://arxiv.org/abs/2310.01768v2 | # Backdiff: a diffusion model for generalized transferable protein backmapping
###### Abstract
Coarse-grained (CG) models play a crucial role in the study of protein structures, protein thermodynamic properties, and protein conformation dynamics. Due to the information loss in the coarse-graining process, backmapping from CG to all-atom configurations is essential in many protein design and drug discovery applications when detailed atomic representations are needed for in-depth studies. Despite recent progress in data-driven backmapping approaches, devising a backmapping method that can be universally applied across various CG models and proteins remains unresolved. In this work, we propose BackDiff, a new generative model designed to achieve generalization and reliability in the protein backmapping problem. BackDiff leverages the conditional score-based diffusion model with geometric representations. Since different CG models can contain different coarse-grained sites which include selected atoms (CG atoms) and simple CG auxiliary functions of atomistic coordinates (CG auxiliary variables), we design a self-supervised training framework to adapt to different CG atoms, and constrain the diffusion sampling paths with arbitrary CG auxiliary variables as conditions. Our method facilitates end-to-end training and allows efficient sampling across different proteins and diverse CG models without the need for retraining. Comprehensive experiments over multiple popular CG models demonstrate BackDiff's superior performance to existing state-of-the-art approaches, and generalization and flexibility that these approaches cannot achieve. A pretrained BackDiff model can offer a convenient yet reliable plug-and-play solution for protein researchers, enabling them to investigate further from their own CG models.
## 1 Introduction
All-atom molecular dynamics (MD) simulations provide detailed insights into the atomic-level interactions and dynamics of proteins (Cicocti et al. (2014)). However, the computational cost associated with these simulations is substantial, especially when considering large biological systems or long simulation timescales (Shaw et al. (2010)). The intricacies of atomic interactions necessitate small time steps and slow atomic force evaluations, making it challenging to model slow biological processes, such as protein folding, protein-protein interaction, and protein aggregation. Coarse-grained (CG) simulations emerge as an essential tool to address these challenges (Kmiecik et al. (2016); Marrink et al. (2007); Rudd & Broughton (1998)). By simplifying and grouping atoms into larger interaction units, CG models significantly reduce degrees of freedom, allowing for larger simulation length- and time-scales. A CG representation can be classified into two components: CG atoms and CG auxiliary variables. CG atoms denote those that are direct all-atom particles, meaning that each CG atom corresponds to an atom in the all-atom configuration. On the other hand, CG auxiliary variables function as mathematical representations to capture aggregate properties of groups of atoms. While many traditional physics-based CG models use the side chain center of mass (COM) as their CG auxiliary variables, recent CG models can adopt optimized nonlinear (Diggins IV et al. (2018)) or data-driven (Fu et al. (2022)) CG auxiliary variables to give a more comprehensive description of proteins.
However, the coarse-graining process sacrifices atomic-level precision, which, in many cases, is essential for a comprehensive understanding of the biomolecular system, such as the drug binding process. Thus, retrieving all-atom configurations by backmapping the CG configurations is important for a more detailed and accurate modeling of proteins. Traditional simulation-based backmapping methods, which perform by equilibrating configurations through MC or MD simulation (Badaczewska-Dawid et al. (2020); Vickery and Stansfeld (2021); Liu et al. (2008)), are computationally expensive and highly intricate, thus diminishing the value of CG simulations. In response, recent data-driven methods (Li et al. (2020); Louison et al. (2021); An and Deshmukh (2020)) employ generative models for more efficient and accurate protein backmapping. These models learn the probability distribution of all-atom configurations conditioned on CG structures, and can efficiently sample from the distribution. Yang and Gomez-Bombarelli (2023) extends the generative backmapping by aiming for transferability in protein spaces. While these methods have shown promising results for backmapping various proteins, they often train and sample under a single, predefined CG model. Wang et al. (2022) illustrates that the method can be adapted to CG models with different resolutions. However, model retraining is needed for each adjustment.
In this study, we introduce BackDiff, a deep generative backmapping approach built upon conditional score-based diffusion models (Song et al. (2020)). BackDiff aims to achieve transferability across different proteins and generalization across CG methods. The high-level idea of BackDiff is to resolve the transferability of CG atoms at the training phase and CG auxiliary variables at the sampling phase. The primary objective of the training is to reconstruct the missing atoms by learning the probability distribution of these atoms conditioned on CG atoms, using conditional score-based diffusion models. The diffusion model gradually transitions the missing atoms from their original states to a noisy distribution via a forward diffusion process, and learns the reverse diffusion process, which recovers the target geometric distribution from the noise. In order to train a model transferable to multiple CG methods, we develop a self-supervised training method that semi-randomly selects CG atoms in each epoch during training. Due to the variability of CG auxiliary functions, it's infeasible to train a single model adaptable to all CG auxiliary variables. By harnessing the unique properties of the score-based diffusion model, we address this challenge through manifold constraint sampling (Chung et al. (2022)), allowing for flexibility across different CG auxiliary variables. The CG auxiliary variables act as a guiding constraint to the data manifold during the reverse diffusion process, ensuring that our sampled data remains within the generative boundaries defined by these CG auxiliary variables.
We employ BackDiff for extensive backmapping experiments across various popular CG models, all without the need for model retraining. Numerical evaluations demonstrate that BackDiff consistently delivers robust performance across diverse CG models, and commendable transferability across protein spaces, even when data is limited.
## 2 Related Work
### Score-based diffusion models
The score-based diffusion model perturbs data with original distribution \(p_{0}(\mathbf{x})\) to noise with a diffusion process over a unit time horizon by a linear stochastic differential equation (SDE):
\[d\mathbf{x}=\mathbf{f}(\mathbf{x},t)dt+g(t)d\mathbf{w},\ t\in[0,T], \tag{1}\]
where \(\mathbf{f}(\mathbf{x},t),g(t)\) are chosen diffusion and drift functions and \(\mathbf{w}\) denotes a standard Wiener process. With a sufficient amount of time steps, the prior distribution \(p_{T}(\mathbf{x})\) approaches a simple Gaussian distribution. For any diffusion process in equation 1, it has a corresponding reverse-time SDE (Anderson (1982)):
\[d\mathbf{x}=[\mathbf{f}(\mathbf{x},t)-g^{2}(t)\nabla_{\mathbf{x}_{t}}\log p_{ t}(\mathbf{x}_{t})]dt+g(t)d\bar{\mathbf{w}}, \tag{2}\]
with \(\bar{\mathbf{w}}\) a standard Wiener process in the reverse-time. The trajectories of the reverse SDE (2) have the same marginal densities as the forward SDE (1). Thus, the reverse-time SDE (2) can gradually convert noise to data. The score-based diffusion model parameterizes the time-dependent score function \(\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})\) in the reverse SDE (2) with a neural network \(\mathbf{s}_{\theta}(\mathbf{x}(t),t)\). The time-dependent score-based model \(\mathbf{s}_{\theta}(\mathbf{x}(t),t)\) can be trained via minimizing the denoising score matching loss:
\[J(\theta)=\operatorname*{arg\,min}_{\theta}\mathbb{E}_{t}\left\{\mathbb{E}_{ \mathbf{x}(0)}\mathbb{E}_{\mathbf{x}(t)|\mathbf{x}(0)}\left[\|\mathbf{s}_{ \theta}(\mathbf{x}(t),t)-\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}(t)\mid \mathbf{x}(0))\|_{2}^{2}\right]\right\}, \tag{3}\]
with \(t\) uniformly sampled between \([0,T]\). To sample from the data distribution \(p(\mathbf{x})\), we first draw a sample from the prior distribution \(p(\mathbf{x}_{T})\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), and then discretize and solve the reverse-time SDE with numerical methods, e.g. Euler-Maruyama discretization.
In this work, we consider the variance preserving (VP) form of the SDE in Denoising Diffusion Probabilistic Model (DDPM) (Ho et al. (2020)):
\[\mathrm{d}\mathbf{x}=-\frac{1}{2}\beta(t)\mathbf{x}dt+\sqrt{\beta(t)}\mathrm{ d}\mathbf{w}, \tag{4}\]
with \(\beta(t)\) representing the variance schedule. In a discretized setting of DDPM, we define \(\beta_{1},\beta_{2},...,\beta_{T}\) as the sequence of fixed variance schedule, \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\), then the forward diffusion process can be written as:
\[\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t }}\mathbf{z},\quad\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I}). \tag{5}\]
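As a concrete illustration of equations 3 and 5, a minimal PyTorch-style sketch of the forward noising and one denoising score-matching training step could look as follows; `score_model` is a placeholder for any parameterization of \(\mathbf{s}_{\theta}(\mathbf{x}_{t},t)\), and the uniform time sampling and loss weighting are simplifying assumptions.

```python
import torch

def ddpm_training_step(score_model, x0, betas):
    """One denoising score-matching step for the VP (DDPM) diffusion of eqs. (3)-(5)."""
    T = betas.shape[0]
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)                    # \bar{alpha}_t

    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)    # uniform time steps
    a_bar = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))

    z = torch.randn_like(x0)                                     # standard Gaussian noise
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * z            # forward process, eq. (5)

    # For a Gaussian kernel, grad_x log p(x_t | x_0) = -z / sqrt(1 - a_bar), so matching
    # the score is equivalent to predicting the injected noise (up to a scale factor).
    target = -z / (1.0 - a_bar).sqrt()
    score = score_model(xt, t)
    return ((score - target) ** 2).sum(dim=-1).mean()            # eq. (3), unweighted
```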
### Conditional score-based diffusion model for imputation problems
Let us consider a general missing value imputation problem: given a sample \(\mathbf{x}\equiv\{\mathbf{x}_{\text{known}},\mathbf{x}_{\text{omit}}\}\), where \(\mathbf{x}_{\text{known}}\) represents observed values and \(\mathbf{x}_{\text{omit}}\) represents missing values, and \(\mathbf{x}_{\text{known}}\), \(\mathbf{x}_{\text{omit}}\) can vary by samples, we want to recover \(\mathbf{x}_{\text{omit}}\) with the conditional observed values \(\mathbf{x}_{\text{known}}\). In the context of protein backmapping, \(\mathbf{x}_{\text{known}}\) represents the atomic coordinates of CG atoms, while \(\mathbf{x}_{\text{omit}}\) denotes the atomic coordinates to be recovered. Thus, the imputation problem can be formulated as learning the true conditional probability \(p(\mathbf{x}_{\text{omit}}|\mathbf{x}_{\text{known}})\) with a parameterized distribution \(p_{\theta}(\mathbf{x}_{\text{omit}}|\mathbf{x}_{\text{known}})\).
We can apply score-based diffusion model to the imputation problem by incorporating the conditional observed values into the reverse diffusion from equation 2:
\[d\mathbf{x}_{\text{omit}}=[\mathbf{f}(\mathbf{x}_{\text{omit}},t)-g^{2}(t) \nabla_{\mathbf{x}_{\text{omit}}}\log p_{t}(\mathbf{x}_{\text{omit}_{t}}| \mathbf{x}_{\text{known}})]dt+g(t)d\bar{\mathbf{w}}. \tag{6}\]
### Manifold constraint sampling for inverse problems
Consider a many-to-many mapping function \(\mathcal{A}:\mathbf{X}\rightarrow\mathbf{Y}\). The inverse problem is to retrieve the distribution of \(\mathbf{x}\in\mathbf{X}\), which can be multimodal, given a measurement \(\mathbf{y}\in\mathbf{Y}\). In the protein backmapping problem, \(\mathbf{y}\) corresponds to the CG auxiliary variables and \(\mathbf{x}\) the atomic coordinates to recover. With the Bayes' rule:
\[p(\mathbf{x}|\mathbf{y})=p(\mathbf{y}|\mathbf{x})p(\mathbf{x})/p(\mathbf{y}), \tag{7}\]
we can take \(p(\mathbf{x})\) as the prior and sample from the posterior \(p(\mathbf{x}|\mathbf{y})\). If we take the score-based diffusion model as the prior, we can use the reverse diffusion from equation 2 as the sampler from the posterior distribution as follows:
\[d\mathbf{x}=[\mathbf{f}(\mathbf{x},t)-g^{2}(t)(\nabla_{\mathbf{x}_{t}}\log p_ {t}(\mathbf{x}_{t})+\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{y}|\mathbf{x}_{ t}))]dt+g(t)d\bar{\mathbf{w}}, \tag{8}\]
using the Bayes' rule:
\[\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t}|\mathbf{y})=\nabla_{\mathbf{ x}_{t}}\log p_{t}(\mathbf{x}_{t})+\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{y}| \mathbf{x}_{t}). \tag{9}\]
By estimating the score function \(\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})\) with a trained score model \(\mathbf{s}_{\theta}\) and computing the likelihood \(\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{y}|\mathbf{x}_{t})\), we can obtain the posterior likelihood \(\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t}|\mathbf{y})\). However, computing \(\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{y}|\mathbf{x}_{t})\) in a closed-form is difficult since there is not a clear dependence between \(\mathbf{y}\) and \(\mathbf{x}_{t}\).
Observing that \(\mathbf{y}\) and \(\mathbf{x}_{t}\) are conditionally independent given \(\mathbf{x}_{0}\), we can factorize \(p(\mathbf{y}|\mathbf{x}_{t})\) as:
\[p(\mathbf{y}|\mathbf{x}_{t})=\int p(\mathbf{y}|\mathbf{x}_{0},\mathbf{x}_{t})p( \mathbf{x}_{0}|\mathbf{x}_{t})d\mathbf{x}_{0}=\int p(\mathbf{y}|\mathbf{x}_{0}) p(\mathbf{x}_{0}|\mathbf{x}_{t})d\mathbf{x}_{0}, \tag{10}\]
which interprets the conditional probability as:
\[p\left(\mathbf{y}\mid\mathbf{x}_{t}\right)=\mathbb{E}_{\mathbf{x}_{0}\sim p( \mathbf{x}_{0}|\mathbf{x}_{t})}\left[p\left(\mathbf{y}\mid\mathbf{x}_{0}\right) \right]. \tag{11}\]
This further implies that we can approximate the conditional probability as:
\[p\left(\mathbf{y}\mid\mathbf{x}_{t}\right)\simeq p\left(\mathbf{y}\mid\hat{ \mathbf{x}}_{0}\right), \tag{12}\]
where \(\hat{\mathbf{x}}_{0}=\mathbb{E}\left[\mathbf{x}_{0}\mid\mathbf{x}_{t}\right]\).
Chung et al. (2022) proves that for DDPM, \(p(\mathbf{x}_{0}|\mathbf{x}_{t})\) has a unique posterior mean as:
\[\hat{\mathbf{x}}_{0}=\frac{1}{\sqrt{\bar{\alpha}(t)}}\left(\mathbf{x}_{t}+(1- \bar{\alpha}(t))\nabla_{\mathbf{x}_{t}}\log p_{t}\left(\mathbf{x}_{t}\right) \right). \tag{13}\]
The conditional probability \(p(\mathbf{y}|\mathbf{x}_{0})\) is a Dirac delta function \(p(\mathbf{y}|\mathbf{x}_{0})=\delta(\mathbf{y}-\mathcal{A}(\mathbf{x}_{0}))\). In practice, we replace the multidimensional \(\delta\) function with a multidimensional Gaussian of small variance, which regularizes the strict constraints \(\mathcal{A}(\mathbf{x}_{0})=\mathbf{y}\) with a tight restraint:
\[p\left(\mathbf{y}\mid\mathbf{x}_{0}\right)=\frac{1}{\sqrt{(2\pi)^{n}\sigma^{ 2n}}}\exp\left[-\frac{\left\|\mathbf{y}-\mathcal{A}\left(\mathbf{x}_{0}\right) \right\|_{2}^{2}}{2\sigma^{2}}\right], \tag{14}\]
where \(n\) is the dimension of \(\mathbf{y}\) and \(\sigma\) is the standard deviation. Taking the approximation from equation 12, we can get:
\[\nabla_{\mathbf{x}_{t}}\log p\left(\mathbf{y}\mid\mathbf{x}_{t}\right)\simeq \nabla_{\mathbf{x}_{t}}\log p\left(\mathbf{y}\mid\hat{\mathbf{x}}_{0}\right)= -\frac{1}{\sigma^{2}}\nabla_{\mathbf{x}_{t}}\left\|\mathbf{y}-\mathcal{A} \left(\hat{\mathbf{x}}_{0}\right)\right\|_{2}^{2}. \tag{15}\]
Finally, by estimating \(\nabla_{\mathbf{x}_{t}}\log p_{t}\left(\mathbf{x}_{t}\right)\) with a neural network \(\mathbf{s}_{\theta}(\mathbf{x}_{t},t)\), we can formulate the conditional reverse diffusion of DDPM modified from equation 8 as:

\[d\mathbf{x}=[\mathbf{f}(\mathbf{x},t)-g^{2}(t)(\mathbf{s}_{\theta}(\mathbf{x}_{t},t)-\zeta\nabla_{\mathbf{x}_{t}}\left\|\mathbf{y}-\mathcal{A}\left(\hat{\mathbf{x}}_{0}\right)\right\|_{2}^{2})]dt+g(t)d\bar{\mathbf{w}}, \tag{16}\]
with \(\hat{\mathbf{x}}_{0}\) expressed as in equation 13, and \(\zeta=\frac{1}{\sigma^{2}}\) is the correction weight.
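To make equation 16 concrete, the sketch below performs one manifold-constrained reverse DDPM step in a simplified ancestral-sampling form; `score_model`, the measurement operator `A`, and the noise schedule handling are assumptions standing in for the quantities defined above, not a definitive implementation.

```python
import torch

def constrained_reverse_step(score_model, A, y, xt, t, betas, alpha_bars, zeta=1.0):
    """One reverse DDPM step with the manifold constraint of eqs. (13)-(16).

    t is an integer time index; A maps a clean sample to the measurement space of y.
    """
    xt = xt.detach().requires_grad_(True)
    beta_t, a_bar_t = betas[t], alpha_bars[t]

    score = score_model(xt, t)                                   # s_theta(x_t, t)
    x0_hat = (xt + (1.0 - a_bar_t) * score) / a_bar_t.sqrt()     # posterior mean, eq. (13)

    # gradient of ||y - A(x0_hat)||^2 w.r.t. x_t, taken through x0_hat
    residual = ((y - A(x0_hat)) ** 2).sum()
    grad = torch.autograd.grad(residual, xt)[0]

    # unconstrained ancestral update followed by the constraint correction
    mean = (xt + beta_t * score) / (1.0 - beta_t).sqrt()
    noise = torch.randn_like(xt) if t > 0 else torch.zeros_like(xt)
    x_prev = mean + beta_t.sqrt() * noise
    return (x_prev - zeta * grad).detach()
```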
## 3 Preliminary
### Notations and Problem Definition
Notations. In this paper, each protein with \(N\) heavy atoms is represented as an undirected graph \(\mathcal{G}=\langle\mathcal{V},\mathcal{E}\rangle\), where \(\mathcal{V}=\{v_{i}\}_{i=1}^{N}\) is the set of nodes representing heavy atoms and \(\mathcal{E}=\{e_{ij}\mid(i,j)\in\mathcal{V}\times\mathcal{V}\}\) is the set of edges representing inter-atomic bonds and nonbonded interactions. An all-atom configuration of the protein can be represented as \(\mathcal{C}=[\mathbf{c}_{1},\mathbf{c}_{2},\cdots,\mathbf{c}_{N}]\in\mathbb{R}^{N\times 3}\), with \(\mathbf{c}_{i}\) the Cartesian coordinates of the \(i\)-th heavy atom. A CG model defines a CG mapping function \(\xi\): \(\mathcal{R}=\xi(\mathcal{C})\), which transforms the all-atom coordinate representation \(\mathcal{C}\) to a lower-dimensional CG representation \(\mathcal{R}\in\mathbb{R}^{n}\) (\(n<3N\)). We further denote \(\mathcal{R}\equiv\{\mathcal{R}_{\text{atm}},\mathcal{R}_{\text{aux}}\}\), where \(\mathcal{R}_{\text{atm}}\) represents the CG atoms and \(\mathcal{R}_{\text{aux}}\) the CG auxiliary variables.
Problem Definition.Given a protein graph \(\mathcal{G}\) and a coarse-grained configuration \(\mathcal{R}\), the task of protein backmapping is to learn and efficiently sample from \(p(\mathcal{C}|\mathcal{R},\mathcal{G})\). This will allow us to conduct CG MD simulations to longer time- and length- scales for any protein with a CG method chosen at will, and recover the lost information by sampling from the posterior distribution, without the need for retraining. In this work, we only require that the atomic coordinates of all alpha carbons (\(C_{\alpha}\)) are included in CG representations \(\mathcal{R}\).
## 4 Backdiff Method
In this section, we elaborate on the proposed BackDiff framework. At a high level, BackDiff addresses the transferability of \(\mathcal{R}_{\text{atm}}\) and \(\mathcal{R}_{\text{aux}}\) distinctly, resolving them in different components of the work. During training, we develop a diffusion-based generative model to learn the distribution \(p(\mathcal{C}|\mathcal{R}_{\text{atm}},\mathcal{G})\). We approach this training by viewing it as a missing-node-imputation problem, and devise a self-supervised training strategy that can accommodate a wide range of missing-node combinations. During the sampling procedure, we enforce the condition of \(\mathcal{R}_{\text{aux}}\) by incorporating a correction term through the reverse diffusion process. This ensures accurate sampling from the distribution \(p(\mathcal{C}|\mathcal{R}_{\text{atm}},\mathcal{R}_{\text{aux}},\mathcal{G})\). Finally, we apply the same manifold constraint sampling technique on bond lengths and bond angles to avoid generating unrealistic protein configurations. In Sec. 4.1 and Sec. 4.2, we present a description of the training process and the self-supervised training strategy. In Sec. 4.3, we explain how BackDiff avoids dealing with equivariance. Finally, we show how to utilize manifold constraint sampling to adapt to arbitrary CG auxiliary variables \(\mathcal{R}_{\text{aux}}\) and
to enforce bond lengths and bond angles in Sec. 4.4 and Sec. 4.5. The high-level schematic of the sampling process is shown in Fig. 1. An equivariant Graph Neural Network is used in this paper to parameterize the score-based diffusion model. We elaborate on details of the GNN architecture used in Appendix G.1.
### Training formulation
Let us denote the target all-atom configuration as \(\mathcal{C}\equiv\{\mathcal{C}_{\text{omit}},\mathcal{R}_{\text{atm}}\}\), with \(\mathcal{C}_{\text{omit}}\) denoting the Cartesian coordinates of atoms omitted during the coarse-graining process. We further denote \(\mathcal{D}\) the displacement of omitted atoms from the \(C_{\alpha}\) of their corresponding residues. Since we require the atomic coordinates of \(C_{\alpha}\) to be incorporated in the CG conditions, we can observe that \(p(\mathcal{C}|\mathcal{R}_{\text{atm}},\mathcal{G})=p(\mathcal{C}_{\text{omit} }|\mathcal{R}_{\text{atm}},\mathcal{G})=p(\mathcal{D}|\mathcal{R}_{\text{atm}},\mathcal{G})\). We choose \(p(\mathcal{D}|\mathcal{R}_{\text{atm}},\mathcal{G})\) as our learning target since compared to the Cartesian coordinates \(\mathcal{C}_{\text{omit}}\), the displacement \(\mathcal{D}\) spans a smaller data range and thus enhances training stability.
We model the conditional distribution \(p(\mathcal{D}|\mathcal{R}_{\text{atm}},\mathcal{G})\) using the score-based diffusion model with a modified reverse diffusion defined in equation 6. We define a parameterized conditional score function \(\mathbf{s}_{\theta}:(\mathbf{D}_{t}\times\mathbb{R}|\mathbf{R}_{\text{atm}}) \rightarrow\mathbf{D}_{t}\) to approximate \(\nabla_{\mathcal{D}_{t}}\log p_{t}(\mathcal{D}_{t}|\mathcal{R}_{\text{atm}})\). We follow the same training procedure for the unconditional score-based diffusion as described in Sec. 2.1: given the Cartesian coordinates of CG atoms \(\mathcal{R}_{\text{atm}}\) and the displacement of omitted atoms from alpha carbons \(\mathcal{D}\), we perturb the displacement \(\mathcal{D}\) with DDPM forward diffusion process defined following equation 4:
\[\mathcal{D}_{t}=\sqrt{\bar{\alpha}_{t}}\mathcal{D}_{0}+\sqrt{1-\bar{\alpha}_{ t}}\mathbf{z},\quad\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I}). \tag{17}\]
Next, we sample perturbed \(\mathcal{D}\) and train \(s_{\theta}\) by minimizing the loss function
\[J(\theta)=\underset{\theta}{\arg\min}\mathbb{E}_{t,\mathcal{D}(0), \mathcal{D}(t)|\mathcal{D}(0)}\left[\left\|\mathbf{s}_{\theta}(\mathcal{D}(t),t|\mathcal{R}_{\text{atm}})-\nabla_{\mathcal{D}(t)}\log p_{0t}(\mathcal{D}(t )\mid\mathcal{D}(0),\mathcal{R}_{\text{atm}})\right\|_{2}^{2}\right]. \tag{18}\]
Inspired by Tashiro et al. (2021), we develop a self-supervised learning framework for the backmapping problem. During each iteration of training, for each all-atom configuration, we choose a set of atoms as CG atoms \(\mathcal{R}_{\text{atm}}\), following a semi-randomized strategy, and leave the rest of the atoms as omitted atoms \(\mathcal{C}_{\text{omit}}\) and compute their displacements \(\mathcal{D}\) from corresponding alpha carbons. During training, the choice of CG atoms will change from iteration to iteration.
Figure 1: The sampling process of BackDiff. The reverse diffusion process gradually converts the noisy configuration into the plausible configuration, conditioned on CG atoms \(\mathcal{R}_{\text{atm}}\). In each diffusion step, the configuration is “corrected” with auxiliary variables, bond lengths and bond angles as manifold constraints.
### Choice of CG atoms in self-supervised learning
In this study, all \(C_{\alpha}\) are enforced as CG atoms. For the rest of the atoms, we provide three strategies for choosing CG atoms during training. Each strategy can be chosen based on information known about the target proteins and CG methods.
(1) Random strategy: we randomly select a certain percentage of atoms as CG atoms. This strategy should be adopted if we do not know the common choices of CG atoms of CG models. The percentage is uniformly sampled from \([0\%,100\%]\) to adapt to various CG resolutions.
(2) Semi-random strategy: for different types of atoms (\(C,N\) on the backbone, \(C_{\beta},C_{\gamma}\) on the side chain, etc.), we assign different percentages to choose as CG atoms. This strategy should be adopted if we know the common choices of CG atoms but want to keep the diversity of training.
(3) Fix strategy: we choose a fixed set of atoms as CG atoms. This strategy is adopted if we want to train BackDiff with respect to a specific CG method.
More detailed descriptions of the algorithms for strategies (1) and (2) are given in Appendix C.2.
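A minimal sketch of the semi-random strategy is given below, expressed as a per-epoch mask over atoms; the atom-type grouping and per-type probabilities are illustrative assumptions (the actual procedures are described in Appendix C.2), and all alpha carbons are always kept as CG atoms.

```python
import numpy as np

def sample_cg_atom_mask(atom_types, keep_prob=None, rng=None):
    """Return a boolean mask of the atoms treated as CG atoms in this training epoch.

    atom_types is a per-atom list of labels (e.g. "CA", "N", "C", "CB", ...);
    alpha carbons are always CG atoms, other types are kept with a per-type probability.
    """
    rng = rng or np.random.default_rng()
    # Illustrative per-type probabilities (semi-random strategy); unlisted types use 0.2.
    keep_prob = keep_prob or {"CA": 1.0, "N": 0.6, "C": 0.6, "O": 0.4, "CB": 0.5}
    mask = np.array([rng.random() < keep_prob.get(a, 0.2) for a in atom_types])
    mask[np.array([a == "CA" for a in atom_types])] = True   # enforce all alpha carbons
    return mask
```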
### Equivariance
Equivariance is a commonly used property in geometric deep learning (Satorras et al. (2021); Batzner et al. (2022); Maron et al. (2018)). A function \(\phi:X\to Y\) is said to be equivariant w.r.t. a group \(G\) if
\[\phi\left(T_{g}(\mathbf{x})\right)=S_{g}(\phi(\mathbf{x})), \tag{19}\]
where \(T_{g}:X\to X\) and \(S_{g}:Y\to Y\) are transformations of \(g\in G\). In this work, we consider \(G\) the SE(3) group, which is the group of rotation and translation.
**Proposition 1**.: _If our training target \(p(\mathcal{C}|\mathcal{R}_{\text{atm}},\mathcal{G})\) is SE(3)-equivariant, i.e., \(p(\mathcal{C}|\mathcal{R}_{\text{atm}},\mathcal{G})=p(T_{g}(\mathcal{C})|T_{g}(\mathcal{R}_{\text{atm}}),\mathcal{G})\), then for all diffusion times \(t\), the time-dependent score function is SE(3)-equivariant:_

\[\nabla_{\mathcal{C}}\log p_{t}(\mathcal{C}|\mathcal{R}_{\text{atm}},\mathcal{G})=\nabla_{\mathcal{C}}\log p_{t}(T(\mathcal{C})|T(\mathcal{R}_{\text{atm}}),\mathcal{G})=S(\nabla_{\mathcal{C}}\log p_{t}(S(\mathcal{C})|S(\mathcal{R}_{\text{atm}}),\mathcal{G})) \tag{20}\]
_for translation \(T\) and rotation \(S\)._
### Manifold constraint sampling on CG auxiliary variables
Let us consider CG auxiliary variables \(\mathcal{R}_{\text{aux}}\) obtained from a many-to-many mapping function \(\xi_{\text{aux}}\):
\[\mathcal{R}_{\text{aux}}=\xi_{\text{aux}}(\mathcal{D},\mathcal{R}_{\text{ atm}}). \tag{21}\]
With a learned \(\mathbf{s}_{\theta}(\mathcal{D}_{t}|\mathcal{R}_{\text{atm}},\mathcal{G})\), our objective is to sample from \(p_{\mathcal{G}}(\mathcal{D}|\mathcal{R}_{\text{atm}},\mathcal{R}_{\text{aux}})\) for an arbitrary CG auxiliary function \(\xi_{\text{aux}}\) with the score-based diffusion model. The sampling process, however, requires knowledge of \(\nabla_{\mathcal{D}_{t}}\log p_{\mathcal{G}}(\mathcal{D}_{t}|\mathcal{R}_{ \text{atm}},\mathcal{R}_{\text{aux}})\). We can compute \(\nabla_{\mathcal{D}_{t}}\log p_{\mathcal{G}}(\mathcal{D}_{t}|\mathcal{R}_{ \text{atm}},\mathcal{R}_{\text{aux}})\) from \(\nabla_{\mathcal{D}_{t}}\log p_{\mathcal{G}}(\mathcal{D}_{t}|\mathcal{R}_{ \text{atm}})\) using Baye's rule:
\[\nabla_{\mathcal{D}_{t}}\log p_{\mathcal{G}}(\mathcal{D}_{t}|\mathcal{R}_{ \text{atm}},\mathcal{R}_{\text{aux}}) =\nabla_{\mathcal{D}_{t}}\log p_{\mathcal{G}}(\mathcal{D}_{t}| \mathcal{R}_{\text{atm}}) \tag{22}\] \[+\nabla_{\mathcal{D}_{t}}\log p_{\mathcal{G}}(\mathcal{R}_{\text{ aux}}|\mathcal{R}_{\text{atm}},\mathcal{D}_{t}).\]
This decomposition allows us to take \(p_{\mathcal{G}}(\mathcal{D}_{t}|\mathcal{R}_{\text{atm}})\) as prior and sample from \(p_{\mathcal{G}}(\mathcal{D}|\mathcal{R}_{\text{atm}},\mathcal{R}_{\text{aux}})\) with the manifold constraint sampling technique. The first term \(\nabla_{\mathcal{D}_{t}}\log p_{\mathcal{G}}(\mathcal{D}_{t}|\mathcal{R}_{\text{atm}})\) is estimated with \(\mathbf{s}_{\theta}(\mathcal{D}_{t}|\mathcal{R}_{\text{atm}},\mathcal{G})\), and the second term \(\nabla_{\mathcal{D}_{t}}\log p_{\mathcal{G}}(\mathcal{R}_{\text{aux}}|\mathcal{R}_{\text{atm}},\mathcal{D}_{t})\) is estimated following equation 15:
\[\nabla_{\mathcal{D}_{t}}\log p_{\mathcal{G}}(\mathcal{R}_{\text{ aux}}|\mathcal{R}_{\text{atm}},\mathcal{D}_{t})\simeq-\zeta\nabla_{\mathcal{D}_{t}} \left\|\mathcal{R}_{\text{aux}}-\xi_{\text{aux}}\left(\hat{\mathcal{D}}_{0}, \mathcal{R}_{\text{atm}}\right)\right\|_{2}^{2}, \tag{23}\]
where \(\hat{\mathcal{D}}_{0}\) is computed according to equation 13:
\[\hat{\mathcal{D}}_{0}=\frac{1}{\sqrt{\bar{\alpha}(t)}}\left(\mathcal{D}_{t}+(1-\bar{\alpha}(t))\nabla_{\mathcal{D}_{t}}\log p_{\mathcal{G}}(\mathcal{D}_{t}|\mathcal{R}_{\text{atm}})\right). \tag{24}\]
We provide the pseudo code of the sampling process in Appendix C.1.
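As one example of a CG auxiliary function \(\xi_{\text{aux}}\), the sketch below computes side-chain centers of mass from reconstructed coordinates in a differentiable way, so that the gradient in equation 23 can be obtained by automatic differentiation; the tensor layout and names are assumptions rather than the exact interface used in BackDiff.

```python
import torch

def sidechain_com(coords, masses, residue_index, n_residues):
    """Differentiable side-chain center of mass per residue (one possible xi_aux).

    coords:        (N, 3) reconstructed side-chain atom positions
    masses:        (N,)   atomic masses
    residue_index: (N,)   long tensor with the residue id of each atom
    """
    weighted = coords * masses[:, None]
    com = torch.zeros(n_residues, 3, device=coords.device).index_add(0, residue_index, weighted)
    total = torch.zeros(n_residues, 1, device=coords.device).index_add(0, residue_index, masses[:, None])
    return com / total.clamp_min(1e-8)

# The correction of eq. (23) can then be obtained with automatic differentiation, e.g.
# torch.autograd.grad(((R_aux - sidechain_com(D0_hat, m, idx, n_res)) ** 2).sum(), D_t)
```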
### Manifold constraint on bond lengths and angles
In proteins, the bond lengths and bond angles exhibit only minor fluctuations due to the strong force constants of covalent bonds. This results in an ill-conditioned probability distribution for Cartesian coordinates. A diffusion model based on Cartesian coordinates faces challenges in accurately learning such a distribution, potentially leading to the generation of unrealistic configurations. In this work, we apply manifold constraints on bond lengths and bond angles in addition to CG auxiliary variables, as the posterior conditions.
## 5 Experiment
**Datasets** Following recent protein backmapping work, we use the protein structural ensemble database PED (Lazar et al. (2021)) as our database. PED contains structural ensembles of 227 proteins, including intrinsically disordered proteins (IDPs). Among the 227 proteins, we choose the 92 ensembles computed from MD simulations or sampling methods for training and testing purposes.
**Evaluation** We conduct several experiments to demonstrate the flexibility, reliability, and transferability of BackDiff. We evaluate the performance of BackDiff on 3 popular CG models: the UNRES model (Livo et al. (2014)), the Rosetta model (Das & Baker (2008)), and the MARTINI model (Souza et al. (2021)). The CG mapping protocol of each model is summarized in Table 5 in Appendix D. We perform both single- and multi-protein experiments, with single-protein experiments training and inferring on one single protein, and multi-protein experiments training and inferring on multiple proteins. Single-protein experiments are conducted on PED00011 (5926 frames) and PED00151 (9746 frames). We randomly split each protein's data into training, validation, and testing sets: PED00011 (3000 frames for training, 2826 frames for validation, 100 frames for testing) and PED00151 (4900 frames for training, 4746 frames for validation, and 100 frames for testing). For multi-protein experiments, we randomly select up to 500 frames for each protein from the dataset as the training dataset. For testing, we randomly select 100 frames other than the ones used in training for PED00011 and PED00151, and 45 frames other than the ones used in training for PED00055. In both single- and multi-protein experiments, we test BackDiff with a fixed training strategy (CG-fixed) and BackDiff with the semi-random training strategy (CG-transferable).
**Baselines** We choose GenZProt (Yang & Gomez-Bombarelli (2023)) and modified Torsional Diffusion (TD) (Jing et al. (2022)) as the state-of-the-art baselines. Since GenZProt and Torsional Diffusion utilize internal coordinates (torsion angles) as training objectives and adapting them to multiple CG methods can be ill-defined, we conduct single- and multi-protein experiments with fixed CG methods for the two baseline models.
**Evaluation Metrics** Since backmapping generates multiple configurations (\(\mathcal{C}_{\text{gen}}\)) from one CG configuration, a good protein backmapping model should be able to generate some samples that match the original all-atom configuration (\(\mathcal{C}_{\text{ref}}\)) (accuracy), produce new configurations (diversity), and generate physically realistic structures. For the accuracy metrics, we identify one generated sample (\(\mathcal{C}_{\text{min}}\)) with the minimum Root Mean Squared Distance (\(\text{RMSD}_{\min}\)) w.r.t. \(\mathcal{C}_{\text{ref}}\), and compute the Mean Square Error (MSE) of \(\mathcal{C}_{\text{min}}\)'s sidechain COMs from \(\mathcal{C}_{\text{ref}}\) (\(\text{SCMSE}_{\min}\)). We report the mean and standard deviation of the \(\text{RMSD}_{\min}\) and \(\text{SCMSE}_{\min}\) across all testing frames. A lower \(\text{RMSD}_{\min}\) and \(\text{SCMSE}_{\min}\) indicate the model's stronger capacity to find \(\mathcal{C}_{\text{ref}}\) as one representative sample. For the diversity metric, we evaluate the generative diversity score (DIV) of \(\mathcal{C}_{\text{gen}}\) and \(\mathcal{C}_{\text{ref}}\), as suggested in Jones et al. (2023): \(\text{DIV}(\mathcal{C}_{\text{gen}},\mathcal{C}_{\text{ref}})\). The full definition of DIV is provided in Appendix F. A lower DIV suggests that the model can generate diverse \(\mathcal{C}_{\text{gen}}\). Finally, we use the steric clash ratio (SCR) to evaluate whether a model can generate physically realistic samples. SCR is defined following the metric in GenZProt: the ratio of steric clash occurrence among all atom-atom pairs within 5.0 Å, where a steric clash is defined as an atom-atom pair with a distance smaller than 1.2 Å.
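For concreteness, the accuracy and clash metrics can be computed roughly as in the NumPy sketch below; the plain (unaligned) coordinate RMSD and the array shapes are simplifying assumptions rather than the exact evaluation script.

```python
import numpy as np

def rmsd_min(generated, reference):
    """Minimum RMSD over generated samples; generated is (K, N, 3), reference is (N, 3)."""
    rmsds = np.sqrt(((generated - reference[None]) ** 2).sum(-1).mean(-1))
    return rmsds.min(), rmsds.argmin()

def steric_clash_ratio(coords, contact_cutoff=5.0, clash_cutoff=1.2):
    """Fraction of atom-atom pairs within 5.0 A that are closer than 1.2 A."""
    dists = np.linalg.norm(coords[:, None] - coords[None], axis=-1)
    iu = np.triu_indices(len(coords), k=1)          # each unordered pair counted once
    pair_d = dists[iu]
    contacts = pair_d < contact_cutoff
    return float((pair_d[contacts] < clash_cutoff).mean()) if contacts.any() else 0.0
```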
We also perform ablation studies to assess the impact of constraining bond lengths and bond angles during BackDiff's sampling. This evaluation uses the Mean Absolute Error (MAE) to compare bond lengths and angles between the ground truth and generated samples. Since GenZProt and Torsional Diffusion construct all-atom configurations from internal coordinates (bond lengths, bond angles, and torsion angles), and inherently prevent unrealistic bond lengths and angles, we exclude their errors from the report.
**Results and discussions** The evaluation metric results on UNRES CG model are summarized in Table 1 and Table 2. As shown in the tables, BackDiff consistently outperforms the state-of-the-art ML models in both single- and multi-protein experiments, and is capable of generating all-atom configurations of higher accuracy, diversity and physical significance. Notably, even when BackDiff is trained for transferability across various CG methods, it maintains performance comparable to training with a fixed CG method. This underscores BackDiff's robust generalization and its reliability in adapting to diverse CG methods. A closer look at the sampled structures, as visualized in Figure 2, reveals that BackDiff more accurately recovers local structures.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Method & PED00011 & PED00151 \\ \hline \multirow{4}{*}{\(\text{RMSD}_{\text{min}}\) (Å)} & **BackDiff (fixed)** & \(\mathbf{0.415(0.107)}\) & \(\mathbf{0.526(0.125)}\) \\ & BackDiff (trans) & \(0.598(0.112)\) & \(0.663(0.182)\) \\ & GenZProt & \(1.392(0.276)\) & \(1.246(0.257)\) \\ & TD & \(1.035(0.158)\) & \(1.253(0.332)\) \\ \hline \multirow{4}{*}{SCR (\(\%\))} & **BackDiff (fixed)** & \(\mathbf{0.100(0.035)}\) & \(\mathbf{0.105(0.063)}\) \\ & BackDiff (trans) & \(0.216(0.178)\) & \(0.320(0.157)\) \\ \cline{1-1} & GenZProt & \(0.408(0.392)\) & \(0.647(0.384)\) \\ \cline{1-1} & TD & \(0.356(0.303)\) & \(0.452(0.187)\) \\ \hline \multirow{4}{*}{\(\text{SCMSE}_{\text{min}}\) (Å\({}^{2}\))} & **BackDiff (fixed)** & \(\mathbf{0.045(0.008)}\) & \(\mathbf{0.049(0.021)}\) \\ & BackDiff (trans) & \(0.061(0.010)\) & \(0.104(0.038)\) \\ \cline{1-1} & GenZProt & \(1.225(0.121)\) & \(1.340(0.182)\) \\ \cline{1-1} & TD & \(1.134(0.125)\) & \(1.271(0.158)\) \\ \hline \multirow{4}{*}{DIV} & **BackDiff (fixed)** & \(\mathbf{0.045(0.027)}\) & \(\mathbf{0.072(0.034)}\) \\ \cline{1-1} & BackDiff (trans) & \(0.144(0.045)\) & \(0.201(0.032)\) \\ \cline{1-1} & GenZProt & \(0.453(0.241)\) & \(0.527(0.185)\) \\ \cline{1-1} & TD & \(0.128(0.064)\) & \(0.146(0.049)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on single-protein experiments backmapping from UNRES CG model. The method labeled “BackDiff (trans)” is CG-transferable, while the other three are CG-fixed. We report the mean and standard deviation for 100 generated samples.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Method & PED00011 & PED00055 & PED00151 \\ \hline \multirow{4}{*}{\(\text{RMSD}_{\text{min}}\)(Å)} & **BackDiff (fixed)** & \(\mathbf{0.652(0.214)}\) & \(1.690(0.372)\) & \(\mathbf{1.292(0.160)}\) \\ & BackDiff (trans) & \(0.708(0.188)\) & \(\mathbf{1.340(0.237)}\) & \(1.435(0.226)\) \\ & GenZProt & \(2.337(0.466)\) & \(2.741(0.515)\) & \(2.634(0.353)\) \\ & TD & \(1.714(0.385)\) & \(2.282(0.400)\) & \(1.634(0.282)\) \\ \hline \multirow{4}{*}{SCR (\(\%\))} & **BackDiff (fixed)** & \(\mathbf{0.626(0.482)}\) & \(0.829(0.546)\) & \(\mathbf{0.463(0.268)}\) \\ & BackDiff (trans) & \(0.918(0.609)\) & \(\mathbf{0.786(0.335)}\) & \(0.820(0.316)\) \\ \cline{1-1} & GenZProt & \(2.347(1.289)\) & \(2.477(0.448)\) & \(1.545(0.602)\) \\ \cline{1-1} & TD & \(0.983(0.476)\) & \(1.584(0.501)\) & \(0.620(0.320)\) \\ \hline \multirow{4}{*}{\(\text{SCMSE}_{\text{min}}\) (Å\({}^{2}\))} & **BackDiff (fixed)** & \(\mathbf{0.076(0.012)}\) & \(0.103(0.026)\) & \(\mathbf{0.100(0.021)}\) \\ \cline{1-1} & BackDiff (trans) & \(0.082(0.027)\) & \(\mathbf{0.088(0.015)}\) & \(0.123(0.040)\) \\ \cline{1-1} & GenZProt & \(1.951(0.327)\) & \(1.784(0.402)\) & \(1.869(0.330)\) \\ \cline{1-1} & TD & \(1.320(0.282)\) & \(1.195(0.318)\) & \(1.717(0.397)\) \\ \hline \multirow{4}{*}{DIV} & **BackDiff (fixed)** & \(0.155(0.069)\) & \(0.276(0.109)\) & \(0.213(0.087)\) \\ \cline{1-1} & **BackDiff (trans)** & \(\mathbf{0.079(0.052)}\) & \(\mathbf{0.143(0.067)}\) & \(\mathbf{0.122(0.060)}\) \\ \cline{1-1} & GenZProt & \(0.636(0.132)\) & \(0.662(0.147)\) & \(0.612(0.143)\) \\ \cline{1-1} & TD & \(0.179(0.066)\) & \(0.252(0.086)\) & \(0.201(0.075)\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on multi-protein experiments backmapping from UNRES CG model.
As noted earlier, one limitation of a Cartesian-coordinate-based diffusion model is its inability to consistently produce realistic bond lengths and bond angles, given that these typically fluctuate within narrow ranges. In Table 3, we present the MAE for both bond lengths and bond angles, illustrating the benefits of constraining the sampling diffusion path using these parameters as posterior conditions. The results clearly indicate that manifold constraint sampling substantially reduces the errors in bond lengths and angles, enhancing the model's overall performance.
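For reference, the bond length MAE can be sketched as below; the `bond_index` array of bonded atom-index pairs is assumed to come from the protein topology (it is not specified in the text), and the bond angle MAE would follow the same pattern using atom triplets.

```python
import numpy as np

def bond_length_mae(gen, ref, bond_index):
    """gen, ref: (n_atoms, 3) coordinates; bond_index: (n_bonds, 2) integer array of bonded pairs."""
    i, j = bond_index[:, 0], bond_index[:, 1]
    d_gen = np.linalg.norm(gen[i] - gen[j], axis=-1)   # generated bond lengths
    d_ref = np.linalg.norm(ref[i] - ref[j], axis=-1)   # reference bond lengths
    return np.abs(d_gen - d_ref).mean()
```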
## 6 Conclusion
In this work, we propose BackDiff, a generative model for recovering proteins' all-atom structures from coarse-grained simulations. BackDiff combines a self-supervised score-based diffusion model with manifold constraint sampling to adapt to different CG models and utilizes geometric representations to achieve transferability across different proteins. Our rigorous experiments across various
\begin{table}
\begin{tabular}{c l c c c} \hline \hline & Method & PED00011 & PED00055 & PED00151 \\ \hline \multirow{2}{*}{\begin{tabular}{c} Bond length MAE \\ (Å) \\ \end{tabular} } & **BackDiff (cons)** & \(<\mathbf{0.001}\) & \(<\mathbf{0.001}\) & \(<\mathbf{0.001}\) \\ & BackDiff (plain) & \(0.542(0.047)\) & \(0.496(0.055)\) & \(0.332(0.032)\) \\ \hline \multirow{2}{*}{
\begin{tabular}{c} Bond angle MAE \\ \end{tabular} } & **BackDiff (cons)** & \(\mathbf{0.167(0.095)}\) & \(\mathbf{0.106(0.088)}\) & \(\mathbf{0.124(0.097)}\) \\ & BackDiff (plain) & \(0.333(0.071)\) & \(0.245(0.070)\) & \(0.251(0.082)\) \\ \hline \multirow{2}{*}{SCR (\(\%\))} & **BackDiff (cons)** & \(\mathbf{0.918(0.609)}\) & \(\mathbf{0.786(0.335)}\) & \(\mathbf{0.820(0.316)}\) \\ & BackDiff (plain) & \(2.884(0.813)\) & \(2.507(0.654)\) & \(2.301(0.344)\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study on the bond lengths and bond angles manifold constraint sampling. Comparing configurations generated with manifold constraint sampling (BackDiff (cons)) and without the manifold constraint sampling (BackDiff (plain)) in multi-protein experiments backmapping from UNRES CG model. Both tests use the same trained CG-transferable BackDiff model.
Figure 2: Visualization of all-atom configurations sampled from different methods in multi-protein experiments backmapping from UNRES CG model.
prominent CG models underscore BackDiff's exceptional performance and unparalleled adaptability. Looking ahead, we aim to improve the sampling efficiency of the diffusion model, refine the manifold constraint sampling process, integrate a more robust dataset to further enhance the model's capabilities, and expand our experimental scope to include recent CG models with data-driven mapping protocols.
|
2310.19127 | Unified Representation for Non-compositional and Compositional
Expressions | Accurate processing of non-compositional language relies on generating good
representations for such expressions. In this work, we study the representation
of language non-compositionality by proposing a language model, PIER, that
builds on BART and can create semantically meaningful and contextually
appropriate representations for English potentially idiomatic expressions
(PIEs). PIEs are characterized by their non-compositionality and contextual
ambiguity in their literal and idiomatic interpretations. Via intrinsic
evaluation on embedding quality and extrinsic evaluation on PIE processing and
NLU tasks, we show that representations generated by PIER result in 33% higher
homogeneity score for embedding clustering than BART, whereas 3.12% and 3.29%
gains in accuracy and sequence accuracy for PIE sense classification and span
detection compared to the state-of-the-art IE representation model, GIEA. These
gains are achieved without sacrificing PIER's performance on NLU tasks (+/- 1%
accuracy) compared to BART. | Ziheng Zeng, Suma Bhat | 2023-10-29T19:28:22Z | http://arxiv.org/abs/2310.19127v1 | # Unified Representation for Non-compositional and Compositional Expressions
###### Abstract
Accurate processing of non-compositional language relies on generating good representations for such expressions. In this work, we study the representation of language non-compositionality by proposing a language model, Pier, that builds on BART and can create semantically meaningful and contextually appropriate representations for English potentially idiomatic expressions (PIEs). PIEs are characterized by their non-compositionality and contextual ambiguity in their literal and idiomatic interpretations. Via intrinsic evaluation on embedding quality and extrinsic evaluation on PIE processing and NLU tasks, we show that representations generated by Pier result in a 33% higher homogeneity score for embedding clustering than BART, and 3.12% and 3.29% gains in accuracy and sequence accuracy for PIE sense classification and span detection, respectively, compared to the state-of-the-art IE representation model, GIEA. These gains are achieved without sacrificing Pier's performance on NLU tasks (+/- 1% accuracy) compared to BART.
## 1 Introduction
Non-compositionality is a characteristic of natural language, where the meaning of the expressions cannot be deduced from its components (Baldwin and Kim, 2010). These non-compositional expressions, often referred to as being _idiomatic_, assume figurative meanings and are collectively a common occurrence appearing in nearly three out of ten sentences in English (Moon et al., 1998) across various genres (Haagsma et al., 2020). The challenges they pose to NLP systems have been acknowledged as the classical 'pain in the neck' (Sag et al., 2002) and are recently found to impact various NLP tasks negatively, such as sentiment analysis (Liu et al., 2017; Biddle et al., 2020), dialog models (Jhamtani et al., 2021), and paraphrase generation (Zhou et al., 2021). Modern NLP systems, however, are primarily driven by the notion of compositionality, which is at the core of several system components, including tokenization (Sennrich et al., 2016; Wu et al., 2016) and the self-attention mechanism (Vaswani et al., 2017). More fundamentally, recent studies (Zeng and Bhat, 2022) reveal that the pre-trained language models (PTLMs), such as GPT-3 (Brown et al., 2020) and BART (Lewis et al., 2020), are ill-equipped to represent (and comprehend) idiomatic expressions' (IE) meanings. This is demonstrated by the lack of correspondence between the IE meanings and their embeddings; IEs with similar meanings are not close in the embedding space. Conversely, IEs close in the embedding space have a significant token or syntactic overlap. From a representation standpoint, this highlights the need for language models (LMs) to handle non-compositionality through valid representations.
Efforts to generate semantically congruent representations for IEs are now coming to the fore. For instance, GIEA (Zeng and Bhat, 2022) uses a frozen pre-trained BART that is injected with trainable adapter layers (Houlsby et al., 2019; Pfeiffer et al., 2020) to generate IE embeddings for non-compositional expressions. With better meaning-representation correspondence, the non-compositional expert GIEA performs better than BART in downstream IE processing tasks. Yet, this advance is limited by the assumption that all IEs occur in their idiomatic sense and ignores their _contextual ambiguity_ that makes them _potentially idiomatic expressions_ (PIEs)-their meanings can be understood either literally or idiomatically in a context-dependent manner (Haagsma et al., 2020)1. For example, the PIE "behind closed doors" should be interpreted literally in _Always lock valuables behind closed doors_ and idiomatically in _They avoided any publicity and made all deals behind
_closed doors_. Ideally, their representations ought to be distinct in these two contexts. However, examining the representation of 235 PIEs that are largely unrelated in their literal and idiomatic context (their literal PIE embeddings and idiomatic definitions have a mean cosine similarity of 0.0047), we notice that their representations generated by the state-of-the-art (Zeng and Bhat, 2022) exhibit a high cosine similarity between their idiomatic and literal PIE embeddings (mean cosine similarity of 0.82).
Towards addressing this discrepancy, this study extends GIEA's ability in two concrete ways. First, through semantically meaningful representations for non-compositional expressions we enable effective handling of _non-compositionality_. Second, by generating context-appropriate PIE representations that are distinct for idiomatic and literal PIEs we enable effective _contextual disambiguation_ of PIEs. Addressing these issues involves attending to the following challenges. (1) BART's and GIEA's abilities should be combined to generate good embeddings for PIEs in a context-dependent manner. (2) With the self-supervised reconstruction task as the sole objective, PTLM parameters are already optimized for token reconstruction from their token embeddings. To represent PIEs, BART's and GIEA's learning objectives should be revamped. To address these challenges, we propose the **P**otentially **I**diomatic **E**xpression **R**epresentation generator (Pier). Inspired by AdapterFusion (Pfeiffer et al., 2021), which has been used to combine task-specific adapters, Pier generates embeddings by combining the output from each GIEA adapter layer and pre-trained BART transformer layer with an attention fusion layer serving as a routing mechanism that passes compositional or non-compositional embeddings based on the context. It is trained under the supervision of external knowledge, e.g., IE dictionary definitions and PIE senses, that helps the model disambiguate and comprehend PIEs' literal and idiomatic meanings via a cosine-similarity-based learning objective and a set of mask-infilling tasks with prompts.
Our main contributions are as follows.
(1) We propose Pier, a unified language model that combines pre-trained BART's compositional and GIEA's non-compositional representation abilities to generate semantically meaningful representations for both literal and idiomatic PIEs in a context-dependent manner.
(2) We perform an intrinsic evaluation of the resulting IE embeddings' semantic quality by clustering them into meaning groups; the idiomatic embeddings of Pier are superior to those of pre-trained BART in terms of homogeneity score (+0.15); additionally, we evaluate the distinctiveness between the literal and idiomatic embeddings and find that Pier better differentiates PIE usage, with, on average, a +0.49 larger cosine distance between idiomatic and literal PIEs in the embedding space than GIEA.
(3) Extrinsic evaluations validate Pier's utility; Pier outperforms both BART and GIEA on two PIE processing tasks: PIE sense classification (accuracy +3.12% over GIEA and +2.67% over BART) and PIE span detection (sequence accuracy +3.29% over GIEA and +28.54% over BART).
In two classic NLU tasks of sentiment classification and paraphrase identification, Pier compares more favorably with BART than GIEA, demonstrating that its NLU capabilities do not suffer at the cost of refining its PIE representation2.
Footnote 2: The code for Pier can be found at [https://github.com/zzeng13/PIER](https://github.com/zzeng13/PIER).
## 2 Related Work
**Non-compositional Phrase Embedding.** Traditional methods for non-compositional phrase embedding include learning adaptive weights to combine the compositional (averaging word embeddings) and non-compositional representation of the phrase (representing phrases with single tokens) (Hashimoto and Tsuruoka, 2016; Li et al., 2018, 2018). These methods cannot be adopted for contextualized embeddings. PTLMs, though producing contextualized representations, are known for their inability to handle non-compositional phrases (Zeng and Bhat, 2022; Liu and Neubig, 2022). GIEA (Zeng and Bhat, 2022), the first contextualized representation model for non-compositional phrases, efficiently adapts BART using adapter modules (Pfeiffer et al., 2020) consisting of simple, parameter-efficient projection layers added between the trained transformer layers, to produce semantically meaningful IE embeddings in a data-efficient manner compared to LM pre-training (\(\sim\)60MB vs. \(\sim\)160GB). Despite outperforming a fine-tuned full BART model, GIEA remains challenged by PIEs' semantic ambiguity. Pier addresses this limitation through architectural modifications and additional prompt-based learning objectives as detailed in Section 3.
**Architectures for Information Fusion.** Prior studies have explored different architectures to fuse information in neural networks. For instance, an attention flow module Seo et al. (2017) is proposed to combine and fuse information from two vectors (query and context) for reading comprehension. Yuan and Liu (2022) infuses external graph knowledge into pre-trained BART by adding a cross-attention module inside each BART decoder layer to infuse graph entity representation. In this work, we follow GIEA and use adapters to combine and route GIEA and BART embeddings. Adapters have also shown effectiveness in multi-task and multi-lingual transfer Pfeiffer et al. (2020); Ansell et al. (2021). Specifically, an AdapterFusion module Pfeiffer et al. (2021) combines multiple trained task-specific adapters with a single attention layer to automatically select appropriate adapters for a given task. In Pier, we utilize an attention fusion layer, a simplified version of an AdapterFusion module, to allow the LM to (a) combine BART and GIEA as the compositional and non-compositional language experts and (b) contextually select proper PIE representation depending on whether the PIE is used idiomatically or literally. The attention fusion layer is explained in Section 3 (See Figure 1).
**Auxiliary Guided Representation Learning.** Auxiliary information to aid learning of language representations has been explored by using phrase knowledge to mask and reconstruct token spans, e.g., noun phrases or named entities, during training to learn phrase representation Joshi et al. (2020), and by including dictionary definitions to learn representations of rare words Yu et al. (2022); Zeng and Bhat (2022). Prior work also suggests that semantically meaningful latent embeddings can be learned by optimizing the cosine similarity between source and target embeddings Radford et al. (2021). Similarly, Pier utilizes dictionary definition for IEs to compensate for the rarity of IEs and the relatively small IE-type training instances. We also guide the PIE representation learning by optimizing the cosine similarity between the PIE embeddings and their corresponding definition/PTLM embeddings.
## 3 Unified PIE Representation Generator
To create a single language model that produces contextually appropriate embeddings for PIEs, in Pier we combine BART's ability to generate embeddings appropriate for compositional meanings with GIEA's ability to do so for non-compositional meanings, such that Pier outputs GIEA-style embeddings when the PIE is used idiomatically and BART-style embeddings otherwise.
We implement an attention layer that acts on the outputs from each frozen GIEA's adapter layer and frozen BART's transformer layer and serves as a "routing" mechanism for the compositional and non-compositional type embeddings. To train the attention layer, the overall loss is the sum of (1) a cosine similarity-based part for the embedding to encode meaning via external dictionary definitions, and (2) a reconstruction cross-entropy part that teaches the embedding of the association between the PIE senses and sentence contexts. Optimizing these two losses jointly allows the model to link PIEs' meanings to their contextual uses. The
Figure 1: Overview of the Pier training framework.
overview of Pier framework is shown in Figure 1.
### Attention Fusion Layer
We implement attention fusion layers to route and propagate BART's and GIEA's outputs layer by layer. As shown in Figure 1, we insert an attention fusion layer after each GIEA adapter layer and BART transformer layer to combine them with attention weights into a single embedding vector that is sent to the next BART transformer layer; the last attention layer outputs the final embedding vector.
Specifically, each attention layer \(l\) has three trainable weight matrices, namely, Key (\(\mathbf{K}_{l}\)), Value (\(\mathbf{V}_{l}\)), and Query (\(\mathbf{Q}_{l}\)). The attention layer \(l\) takes two inputs, namely, GIEA's \(l\)-th adapter layer output at each token position \(i\), \(\mathbf{g}_{l,i}\), and BART's \(l\)-th transformer layer output, \(\mathbf{b}_{l,i}\); it then computes the contextually attention-weighted representation as
\[\mathbf{h}_{l,i} =[\mathbf{b}_{l,i};\mathbf{g}_{l,i}]\] \[\mathbf{a}_{l,i} =\text{softmax}(\mathbf{b}_{l,i}^{\top}\mathbf{Q}_{l}\cdot \mathbf{h}_{l,i}^{\top}\mathbf{K}_{l})\] \[\tilde{\mathbf{h}}_{l,i} =\mathbf{h}_{l,i}^{\top}\mathbf{V}_{l}\] \[\mathbf{o}_{l,i} =\mathbf{a}_{l,i}^{\top}\tilde{\mathbf{h}}_{l,i}\]
Note that our attention fusion layer is a special, simplified case of an AdapterFusion module; instead of fusing outputs from multiple adapters, our module fuses the embeddings from before (BART's transformer layer output) and after a GIEA adapter layer. Intuitively, the attention module at each layer uses a linearly transformed BART transformer layer output as a query to the GIEA and BART representations to determine how much of each token's compositional BART representation needs to be substituted with GIEA's non-compositional representation based on its context. The attention weight \(\mathbf{a}_{l,i}\) acts similarly to the PIE-specific weight used in prior works (Hashimoto and Tsuruoka, 2016; Li et al., 2018, 2018) to adjust the balance between the compositional and non-compositional meaning when computing PIE embeddings, but our layer-wise attention weight is more contextualized and flexible.
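To make the routing concrete, here is a minimal PyTorch sketch of one attention fusion layer operating on the two candidate representations per token. The module name, tensor shapes, and the absence of attention scaling are our own assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class AttentionFusionLayer(nn.Module):
    """Fuses BART (b) and GIEA (g) token representations at one layer."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim, bias=False)   # Q_l
        self.key = nn.Linear(dim, dim, bias=False)     # K_l
        self.value = nn.Linear(dim, dim, bias=False)   # V_l

    def forward(self, b, g):
        # b, g: (batch, seq_len, dim)
        h = torch.stack([b, g], dim=2)                     # (batch, seq, 2, dim)
        q = self.query(b).unsqueeze(2)                     # (batch, seq, 1, dim)
        scores = (q * self.key(h)).sum(-1)                 # (batch, seq, 2)
        a = torch.softmax(scores, dim=-1)                  # weight over {BART, GIEA}
        o = (a.unsqueeze(-1) * self.value(h)).sum(dim=2)   # (batch, seq, dim)
        return o                                           # fed to the next BART layer
```

The softmax over the two candidates is what lets the layer act as a soft, token-wise router between compositional and non-compositional representations.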
With the attention fusion layer, we can train the model using the _copy objective_, where the input and output sequences are identical, just as the GIA model does. However, our experiments later demonstrate that simply adding the attention fusion layer with the copy objective is not enough to effectively learn PIE representations. Therefore, we have developed and incorporated the similarity learning objective and prompt infilling objectives, which we will describe in the subsequent sections.
### Similarity Learning Objective
From prior work Zeng and Bhat (2022), we infer that the quality and quantity of sentences with PIEs are insufficient for unsupervised representation learning. This prompts us to use dictionary definitions for idiomatic PIEs and the original BART embeddings for literal PIEs to create contextual awareness.
Specifically, given a sentence with a PIE at training time, we first generate the _PIE embedding_ by mean pooling Pier final output embeddings of the PIE tokens. Then, we generate two embeddings that aid the refining of the PIE embedding: (1) we generate an _idiomatic embedding_ that encodes the non-compositional meaning of the PIE by using MPNet Song et al. (2020) to produce a sentence embedding on the PIE's idiomatic dictionary definition. We use MPNet because prior work Zeng and Bhat (2022) found the resulting definition embeddings help representation learning more than other models such as BART; and (2) we generate a _literal embedding_ that encodes the compositional meaning of the PIE by mean pooling a regular PTLM's (here, BART) final PIE token embeddings. Note that since the "literal" embeddings are contextualized, they may already encode idiomatic meanings for frequently used idioms, including idioms exclusively used figuratively. Our model should still provide accurate semantics for these idioms since the attention fusion layer could pass the idiomatic semantics through. However, for the vast majority of idioms that are rare in text, BART's embeddings are considered "compositional," not capturing their figurative meanings, hence we refer to them as _literal_ embeddings.
Finally, we introduce a learning objective for sentences with a literal PIE that maximizes the cosine similarity between PIE and literal embeddings while minimizing the cosine similarity between the PIE and the idiomatic embeddings. For sentences with an idiomatic PIE, we do the opposite: encourage higher cosine similarity between PIE and idiomatic embeddings and lower cosine similarity between PIE and literal embeddings.
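A sketch of this sense-dependent objective is shown below, where `pie_emb` is the mean-pooled Pier PIE embedding, `idio_emb` the MPNet embedding of the idiomatic definition, and `lit_emb` the frozen-BART PIE embedding. Taking an unweighted difference of the two similarities is an assumption on our part; the text only specifies which similarity is maximized and which is minimized.

```python
import torch
import torch.nn.functional as F

def similarity_loss(pie_emb, idio_emb, lit_emb, is_idiomatic):
    """pie_emb, idio_emb, lit_emb: (batch, dim); is_idiomatic: (batch,) with values in {0, 1}."""
    sim_idio = F.cosine_similarity(pie_emb, idio_emb, dim=-1)
    sim_lit = F.cosine_similarity(pie_emb, lit_emb, dim=-1)
    sign = 2.0 * is_idiomatic.float() - 1.0    # +1 for idiomatic, -1 for literal sentences
    # pull the PIE embedding toward the embedding matching its sense, push it away from the other
    return (-sign * sim_idio + sign * sim_lit).mean()
```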
### Prompt Infilling
To directly provide the PIE sense information to the model and help it relate PIE senses with sentence contexts, we design two types of prompt-based
mask infilling tasks: (1) _type classification_ prompts and (2) _definition generation_ prompts.
For the type classification prompts, we append the original sentence with another sentence that has a mask token, e.g., _the phrase "see red" is used in its_[mask] _sense._, and ask the model to infill the correct PIE sense, i.e., "idiomatic" or "literal", according to the context of the original sentence. As such, we directly inform the model of the existence and the distinction of the two PIE senses.
For the definition generation prompts, we append a masked sentence, e.g., _the phrase "see red" is used to mean_[mask]., and we ask the model to generate the definition of the idiomatic meaning in the place of the mask token if the PIE is idiomatic in the context; otherwise, the model should fill the mask with the PIE itself since the meaning is compositional. Through these prompts, we allow the model to learn the two PIE senses' meanings and relate them with their contexts.
We pre-defined five prompt templates for each prompt type (see Appendix A) based on our empirical observation that the variety of prompt templates positively influences the evaluation performance (see Section 4). We append these prompts to the end of the original idiomatic or literal sentence. During training, we compute the mean cross-entropy loss over all tokens of the mask-infilled output sentence, which we then add to the cosine similarity losses introduced in the last section to form the final loss. Note that unlike _prompt-based learning_ (Liu et al., 2022), we use prompts to teach LMs informative representations.
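For illustration, the two prompt types can be assembled roughly as follows. Aside from the two example templates quoted above, the wording, mask token, and function signature are placeholders; the actual five templates per type are listed in Appendix A.

```python
def build_prompts(sentence, pie, sense, definition, mask="<mask>"):
    """sense: 'idiomatic' or 'literal'; definition: dictionary definition of the PIE's figurative meaning."""
    # (1) type classification prompt: infill the PIE sense
    cls_prompt = f'{sentence} The phrase "{pie}" is used in its {mask} sense.'
    cls_target = f'{sentence} The phrase "{pie}" is used in its {sense} sense.'

    # (2) definition generation prompt: infill the definition, or the PIE itself when literal
    filler = definition if sense == "idiomatic" else pie
    defn_prompt = f'{sentence} The phrase "{pie}" is used to mean {mask}.'
    defn_target = f'{sentence} The phrase "{pie}" is used to mean {filler}.'
    return (cls_prompt, cls_target), (defn_prompt, defn_target)
```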
## 4 Experiments
### PIE Datasets
Similar to Zeng and Bhat (2022), we use MAGPIE Haagsma et al. (2020), the largest-to-date dataset for English PIEs with sentences sampled from the BNC (BNC Consortium, 2007). We selected all sentences with PIEs that were unanimously labeled as idiomatic or literal by the MAGPIE annotators and have a single idiomatic definition according to Google dictionary and Wiktionary. In all, we had 32,693 sentences (77.4% idiomatic) with 1,480 PIEs in the train set and 4,102 (77.57% idiomatic) sentences with 1,001 idioms in the test set. We use MAGPIE's official random split to divide the data into train and test sets where _all_ the PIEs in the test data appear in the train data. We also use idiom meaning groups proposed by Zeng and Bhat (2022) to perform an intrinsic evaluation of the embeddings; 129 IEs form 20 groups with distinct meanings such that any two IEs from two different groups have different meanings while two IEs from the same group have similar meanings.
### Models
We compare the performances of BART, GIEA, Pier, and its variants to demonstrate the usefulness of the components in Pier.
**BART** is the pre-trained BART-base language model with six encoder and decoder layers.
**GIEA** is the non-compositional embedding generator trained with a BART-base model and adapters using the MAGPIE train set.
**BART-FT** is a BART-base model fine-tuned with the copy, similarity learning, and prompt infilling objectives.
**FusionAttn** is the model that combines BART and GIEA with the attention fusion layer and is trained with the copy objective using the cross-entropy loss.
**FusionSim** combines BART and GIEA with the attention fusion layer and is trained with the copy and similarity learning objectives.
**FusionPrompt** combines BART and GIEA with the attention fusion layer and is trained with the prompt infilling objective. The above four models are used to show the usefulness of the different components of our model.
**Pier and Pier+.** Pier combines BART and GIEA with the attention fusion layer and is trained with the type classification and definition generation prompts with the reconstruction, copy, and similarity learning objectives. Only a _single_ prompt template is provided for each prompt type. Pier+ is similar to Pier, but for each prompt type, we provide five templates. This model shows the benefit of using multiple prompts for each prompt type and is considered our final model.
### Evaluation Tasks
We conduct _intrinsic_ and _extrinsic_ evaluation tasks.
#### 4.3.1 Intrinsic Evaluation
An intrinsic evaluation indicates if PIE embeddings are semantically meaningful and distinctive in the respective literal and idiomatic contexts.
**Embedding Generation.** We evaluate the quality of the embeddings produced by the competing models. For each sentence, we use a candidate model to compute the PIE token embeddings and mean pool them into a single embedding vector. Then, we compute and
mean pool across all idiomatic sentences to get the idiomatic embedding for the PIE. Similarly, we get the literal embeddings for the PIEs. With the embeddings, we perform two intrinsic evaluations.
**Embedding Clustering.** The procedure is in line with Zeng and Bhat (2022). Specifically, given a model, we compute the idiomatic PIE embeddings for 129 idioms and cluster them into 20 distinct meaning groups using agglomerative clustering with complete linkage and pairwise embedding cosine similarity as the distance metric. We measure clustering quality using a _homogeneity score_ and the _mean inter-group cosine distance_ between the embeddings for IEs from different groups. Because the ground truth meaning groups are distinct, the larger the homogeneity scores (the score is 1.0 if all clusters contain only IEs from the same meaning group) and the mean inter-group cosine distances, the better the clustering quality.
**Embedding Differentiation.** The clustering evaluation examines only the model's ability to produce high-quality _idiomatic_ embeddings. As discussed in Section 1, it is important for the language model to become innately aware of the difference between the idiomatic and literal meanings of the same PIE based on their context. Hence, given a model, we generate idiomatic and literal PIE embeddings for PIEs with both idiomatic and literal sentences from the MAGPIE test set. We compute _mean inter-type cosine similarity_ between a pair of idiomatic and literal PIE embeddings across all PIEs with both literal and idiomatic sentences from the MAGPIE test set (there are 235 such PIEs). Assuming a weak correlation between the literal and idiomatic meanings for PIEs, the smaller the mean inter-type cosine similarity, the better the differentiation.
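Both intrinsic evaluations can be sketched with scikit-learn as follows; `idiom_embs` is assumed to be a (129, dim) array of idiomatic PIE embeddings with `group_labels` giving the 20 meaning groups, and `idio_embs`/`lit_embs` are the paired embeddings of the 235 contextually ambiguous PIEs. (In older scikit-learn versions the `metric` argument of `AgglomerativeClustering` is named `affinity`.)

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import homogeneity_score
from sklearn.metrics.pairwise import cosine_distances, cosine_similarity

def clustering_quality(idiom_embs, group_labels, n_groups=20):
    dist = cosine_distances(idiom_embs)
    pred = AgglomerativeClustering(
        n_clusters=n_groups, metric="precomputed", linkage="complete"
    ).fit_predict(dist)
    h_score = homogeneity_score(group_labels, pred)
    labels = np.asarray(group_labels)
    inter_group = dist[labels[:, None] != labels[None, :]]   # pairs from different meaning groups
    return h_score, inter_group.mean()

def mean_inter_type_similarity(idio_embs, lit_embs):
    # paired idiomatic/literal embeddings of the same PIEs: (n_pies, dim)
    return cosine_similarity(idio_embs, lit_embs).diagonal().mean()
```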
#### 4.3.2 Extrinsic Evaluation
We include two classic PIE processing tasks and two NLU tasks for the extrinsic evaluation.
**PIE Sense Classification (SenseCLF)** is a classic PIE processing task (Fazly et al., 2009; Feldman and Peng, 2013; Rajani et al., 2014; Peng and Feldman, 2016; Salton et al., 2016; Liu and Hwa, 2017; Taslimipoor et al., 2018; Peng et al., 2014; Liu and Hwa, 2019), a.k.a. idiom type classification. Each sentence with a PIE is classified into two classes, _idiomatic_ (positive) and _literal_, based on the PIE uses. Given a sentence with a PIE and its location, we first use the model to generate its PIE embedding; then, the PIE embedding is passed to a linear and softmax layer to perform the binary classification. The classifier's linear layer is trained with the MAGPIE train set and is evaluated with F1 score and accuracy on the MAGPIE test set.
**PIE Span Detection (SpanDET)** is a more recent PIE processing task, a.k.a. IE identification (Zeng and Bhat, 2021; Skvore et al., 2022), which is a special case of MWE identification Baldwin and Kim (2010) focusing on PIEs. Given a sentence with a PIE, a token-level classifier is asked to classify every token as either _idiomatic_ (positive) or _literal_; when a PIE is used literally, all tokens are classified as literal; otherwise, the tokens from the PIE are labeled as idiomatic. To succeed, the classifier must correctly classify _every token_ in the input sentence, effectively identifying the presence of an idiomatic PIE and precisely detecting its boundary simultaneously. Since each MAGPIE sentence annotates a single PIE, our models identify one idiomatic PIE per sentence. For the classifier, we input each token embedding generated by the tested LM to a two-layer MLP using ReLU activation, whose input dimension is the embedding dimension, and the hidden dimensions are halved after each layer. The classifier is trained with the MAGPIE train set, and only the MLP weights are trainable while the associated language model's weights are frozen. The performance is evaluated by _sequence accuracy_ and _token recall_. In sequence accuracy, an instance is considered correct if and only if all the tokens in the sequence are classified correctly. To consider a model's ability to classify the sequence partially correct, we consider the token recall score by computing it for each test sequence and then averaging it across all test sequences.
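The SpanDET classifier can be sketched as below; reading "two-layer MLP with hidden dimensions halved after each layer" as two ReLU hidden layers followed by the output logits is our interpretation, and all other details are assumptions.

```python
import torch.nn as nn

class SpanDetClassifier(nn.Module):
    """Token-level binary classifier (idiomatic vs. literal) over frozen LM token embeddings."""

    def __init__(self, emb_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, emb_dim // 2), nn.ReLU(),
            nn.Linear(emb_dim // 2, emb_dim // 4), nn.ReLU(),
            nn.Linear(emb_dim // 4, 2),               # idiomatic / literal logits per token
        )

    def forward(self, token_embs):      # (batch, seq_len, emb_dim)
        return self.mlp(token_embs)     # (batch, seq_len, 2)
```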
To show that Pier does not sacrifice performance on NLU tasks, we consider two NLU tasks.
**Sentiment Classification (SentCLF)** classifies a given sentence into positive or negative sentiment. We use the SST2 (Socher et al., 2013) dataset and its default train and test splits (two classes) with 67,349 and 1,821 instances.
**Paraphrase Identification (ParaID)** classifies a pair of given sentences into paraphrase or non- paraphrase classes. We combine the MRPC (Dolan and Brockett, 2005) and PAWS (Zhang et al., 2019) datasets and their default train/test splits with a total of 53,069 train and 9,725 test instances.
For SentCLF and ParaID, we train a new adapter with the default Pfeiffer configuration (Pfeiffer et al., 2020) stacked atop the testing models, making only the paraphrase classifier adapter trainable
during training. Performances are evaluated using the F1 score and accuracy.
Note that we freeze the testing language model and deliberately constrain the complexity of the classifiers to a linear layer, MLPs, or adapter layer to ensure that the performance primarily reflects the quality of the PIE embedding. See Appendix B for more details on the general setup.
## 5 Results and Analyses
**Intrinsic Evaluation.** As shown in Table 1, BART has the lowest homogeneity score (0.45) and inter-group cosine distance (0.037), indicating that the clusters formed from its idiomatic embeddings do not correspond to the meaning groups. Pier+ has an absolute 0.15-point gain in both the homogeneity score and the inter-group cosine distance. While GIEA has a larger homogeneity score and inter-group cosine distance than Pier+, its inter-type cosine similarity is very high (0.82). This confirms that GIEA cannot generate contextually appropriate embeddings for idiomatic and literal PIEs and instead treats them as idiomatic in all contexts, thus ignoring their contextual ambiguity. In comparison, Pier+ achieves an inter-type cosine similarity that is 0.49 lower, generating more contextually distinctive PIE embeddings. We hypothesize that Pier+ achieves a lower homogeneity score and inter-group cosine distance than GIEA because it contextually fuses BART and GIEA embeddings, making its idiomatic embeddings less distinctive in terms of idiomatic meanings. However, as we show below, Pier+ embeddings encode information that helps it achieve even better performance in PIE processing tasks. Additionally, we emphasize that Pier+ is not a mere interpolation between BART and GIEA, as it goes beyond simple combination techniques: it disambiguates idioms' literal and figurative senses based on the sentence context and generates appropriate embeddings with accurate semantics. A naive interpolation approach, such as concatenating or averaging BART and GIEA's embeddings, would indeed result in high H-scores (0.6214 and 0.6154) and CosDist (0.1503 and 0.1355), but also in high DiffSim scores (0.7934 and 0.7954), which is undesirable.
**Performance on PIE Processing Tasks.** Unsurprisingly, Pier+ outperforms both BART and GIEA in the classic PIE processing tasks, SenseCLF and SpanDET, on all metrics, as shown in Table 2. For SenseCLF, Pier+ outperforms BART by 2.66% and GIEA by 3.12%. Note that BART's type classification accuracy is already high at 93.71%, and GIEA's performance is only comparable with that of BART (not better). This is because BART and GIEA are only compositional and non-compositional expression experts (treating all PIEs as either literal or idiomatic), respectively. As shown by the intrinsic evaluation, neither model's embeddings are distinctive enough between idiomatic and literal PIE uses. Because Pier+ produces different PIE embeddings based on context, it improves over the already high type classification accuracy of BART and GIEA.
Similarly, for SpanDET, a much more demanding task requiring detection and tagging simultaneously, Pier+ has a sequence accuracy that is 28.54% higher than BART's and 3.29% higher than GIEA's. We point out that 22.43% of sentences in the test set have literal PIEs, over which GIEA's sequence accuracy is only 71.84% while Pier+ reaches 85.76%, a gain of 13.91 absolute points. Observing the token-level recalls, Pier+'s performance is only slightly better than that of GIEA, yet it leads to a 3%+ gain in sequence accuracy. Plausibly, this is because of Pier+'s better ability to recognize literal PIEs (as shown by the \(\sim\)14% sequence accuracy gain on literal sentences). So, with its ability to produce meaningful idiomatic embeddings, GIEA achieves high sequence accuracy in SpanDET, whereas Pier+'s ability to also generate appropriate literal embeddings allows it to improve further.
**Performance on NLU tasks.** Pier+ performs competitively with BART and GIEA (F1 and accuracy differing by around +/- 1%). Given that the main purpose of Pier+ is to produce high-quality PIE embeddings and enhance IE processing ability, the results on the NLU tasks lead us to conclude that (1) Pier+ (and GIEA to a lesser extent) adequately processes sentences with or without PIEs and thus performs comparably with BART on classic NLU tasks, i.e., Pier+ does not break down on sentences without PIEs; and (2) given that Pier+ produces PIE embeddings with superior semantic properties and performs very well on PIE processing tasks, we believe Pier+ is overall a better LM than BART for PIE processing.
**Effect of Individual Components.** As shown by FusionAttn's performance in Tables 1 and 2, naively adding an attention fusion layer to combine GIEA and BART with a copy objective does
_not_ work; the homogeneity score is even lower than BART's, while the inter-type cosine similarity is higher than GIEA's; also, FusionAttn's sequence accuracy for SpanDET is only 5.75% higher than BART's yet 19.5% lower than GIEA's. FusionAttn's poor performance highlights the fact that the reconstruction task with the copy objective alone is insufficient for the model to learn the intended embeddings, as discussed in Section 1. Although FusionSim, which adds the similarity learning objective, shows the effectiveness of that objective compared to FusionAttn (e.g., +4.7% in SpanDET sequence accuracy), it still underperforms GIEA. Similarly, FusionPrompt achieves marginal gains over BART (e.g., +3.39% in SpanDET sequence accuracy) by combining attention fusion and prompt infilling objectives, yet it severely underperforms GIEA. Moreover, with both the similarity learning and the prompt infilling objectives, the BART-FT model exhibits an intrinsic quality similar to GIEA or BART, yet, without the attention fusion layer, it underperforms Pier+ in all intrinsic evaluation tasks and PIE processing tasks (i.e., SenseCLF and SpanDET). These results indicate that neither the cosine similarity objective nor the prompt infilling objective alone is sufficient, and that the attention fusion layer is required to route GIEA and BART toward appropriate PIE embeddings.
Finally, we tested the usefulness of the classification and the generation prompts through an ablation study, where we compare PIER with only the type classification prompt (P-Cls) and PIER with only the definition generation prompt (P-Defn). Each prompt type utilizes the same five prompt templates as the Pier+ model. As shown in Tables 1 and 2, even with a single prompt type, our models achieve significant improvements in both intrinsic evaluation and PIE processing tasks while maintaining a competitive performance in NLU tasks. However, when combining both prompt types, PIER+ outperforms P-Cls and P-Defn, especially in the most difficult SpanDET task, with gains of 5.6% and 7.4% in sequence accuracy, respectively. These results lead us to infer that combining the two types of prompts is beneficial and leads to further performance gains.
**Effect of Combined Components.** The salient
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Model** & **H-Score (\(\uparrow\))** & **CosDist (\(\uparrow\))** & **DiffSim (\(\downarrow\))** \\ \hline BART & 0.4546 & 0.0379 & 0.7495 \\ \hline GIEA & **0.6450** & **0.2284** & 0.8224 \\ \hline BART-FT & 0.4510 & 0.0331 & 0.8198 \\ \hline FusionAttn & 0.4306 & 0.0357 & 0.8495 \\ \hline FusionSim & 0.5015 & 0.0924 & 0.6428 \\ \hline FusionPrompt & 0.4160 & 0.0495 & 0.7843 \\ \hline P-Cls & 0.5756 & 0.1527 & 0.3468 \\ \hline P-Defn & 0.5751 & 0.1546 & 0.3547 \\ \hline \hline PIER & 0.5844 & 0.1782 & 0.3272 \\ \hline PIER+ & 0.6095 & 0.1838 & **0.3230** \\ \hline \end{tabular}
\end{table}
Table 1: Intrinsic evaluations measured by clustering homogeneity score (H-Score \(\uparrow\)), mean inter-group cosine distance (CosDist \(\uparrow\)), and mean inter-type cosine similarity (DiffSim \(\downarrow\)). Best performances are **boldfaced**.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**SenseCLF**} & \multicolumn{2}{c|}{**SpanDET**} & \multicolumn{2}{c|}{**SentCLF**} & \multicolumn{2}{c|}{**ParaID**} \\ \cline{2-9} & **FI** & **Acc** & **SA** & **TR** & **FI** & **Acc** & **FI** & **Acc** \\ \hline BART & 0.9589 & 0.9371 & 0.5076 & 0.7545 & 0.9246 & 0.9232 & 0.9165 & 0.9225 \\ \hline GIEA & 0.9573 & 0.9325 & 0.7601 & 0.9075 & 0.9145 & 0.9117 & 0.9046 & 0.9103 \\ \hline BART-FT & 0.9614 & 0.9408 & 0.4720 & 0.7907 & 0.9364 & 0.9346 & 0.9152 & 0.9207 \\ \hline FusionAttn & 0.9605 & 0.9386 & 0.5651 & 0.6343 & 0.9158 & 0.9140 & 0.9031 & 0.9098 \\ \hline FusionSim & 0.9642 & 0.9447 & 0.6131 & 0.6415 & 0.9243 & 0.9232 & 0.9068 & 0.9121 \\ \hline FusionPrompt & 0.9632 & 0.9430 & 0.5405 & 0.8149 & 0.9025 & 0.9084 & **0.9270** & **0.9243** \\ \hline P-Cls & 0.9712 & 0.9558 & 0.7370 & 0.8848 & 0.9208 & 0.9197 & 0.9069 & 0.9128 \\ \hline P-Defn & 0.9720 & 0.9571 & 0.7190 & 0.8810 & 0.9315 & 0.9300 & 0.9060 & 0.9118 \\ \hline \hline PIER & 0.9749 & 0.9612 & 0.7864 & 0.9029 & 0.9181 & 0.9163 & 0.9027 & 0.9096 \\ \hline PIER+ & **0.9765** & **0.9637** & **0.7930** & **0.9101** & **0.9290** & **0.9278** & 0.9068 & 0.9122 \\ \hline \end{tabular}
\end{table}
Table 2: Performances on extrinsic evaluation tasks; binary classification tasks, namely, PIE sense classification (SenseCLF), sentiment classification (SentCLF), and paraphrase identification (ParaID), are measured by F1 score (F1) and accuracy (Acc), and the PIE span detection (SpanDET) is measured by sequence accuracy (SA) and token-level recall (TR). Best performances are **boldfaced**.
effect of combining all components is shown by comparing FusionSim and Pier. Pier gains 16.5% in homogeneity score and improves the inter-type cosine similarity by 0.32 (lower is better), while achieving 1.65% higher accuracy in SenseCLF and 17.3% higher sequence accuracy in SpanDET. Moreover, comparing Pier+ to Pier, we observe a further meaningful gain in all metrics across all tasks, indicating the benefit of including multiple prompt templates.
### Performance and Error Analyses
**Effect of PIE Properties.** Psycholinguistic findings shed light on how idioms' frequency and semantic and syntactic properties affect human IE comprehension (Saban-Bezalel and Mashal, 2019; Lada et al., 2023). Informed by these results, we analyze the effect of PIE training data size and IE semantic/syntactic properties on Pier's IE processing competence through correlational analyses. We found that PIE frequency in the train set affects Pier+'s intrinsic embedding quality but not the downstream PIE processing task performances. Additionally, we examined the correlation of Pier+'s performances on all evaluation tasks with three IE properties: _decomposability_, i.e., the degree to which an IE's constituent words contribute to its figurative meaning; _literalness_, i.e., the extent to which an IE can be used in a literal sense; and _flexibility_, i.e., how flexible IEs are to morphosyntactic or internal modifications. We found little to no correlation between model performance and these properties. See Appendix C for more details.
**PIE Embedding Error Analysis.** Given that linguistic properties of PIEs are not the main contributing factors to Pier+'s embedding quality, we conduct further analyses on PIE embedding errors. Specifically, we analyze the PIEs that are poorly differentiated by Pier+. Selecting from the 235 PIEs used in the embedding differentiation test, we pick PIEs whose Pier+ embeddings have an inter-type cosine similarity larger than 0.7 (highly similar idiomatic and literal embeddings). Observing the resulting 60 (25.5%) PIEs, we find that 44 (73.3%) have a very skewed idiomatic/literal sentence distribution in the training data. Notably, the number of idiomatic sentences divided by the number of training sentences for that PIE is either over 0.85 or lower than 0.15, suggesting that these PIEs appear almost exclusively in their idiomatic sense or almost exclusively in their literal sense in the training data, resulting in low embedding differentiation between the PIE senses. The remaining 16 PIEs exhibit no discernible properties, and we leave a deeper dive into these hard-to-learn PIEs for a future study.
## 6 Conclusion and Future Work
We propose Pier, a language model that uses attention fusion to combine BART and a previously proposed adapter to produce semantically meaningful and contextually appropriate representations for PIEs. Training using a prompt-infilling objective for contextual PIE-type awareness and a cosine similarity objective to guide the generated PIE embeddings toward their idiomatic or literal meanings resulted in Pier generating PIE embeddings with superior semantic properties. IE-aware Pier outperforms both BART and GIEA on IE processing tasks with BART-level performance on classic NLU tasks. These results demonstrate Pier's usefulness as an idiom-aware LM.
Future directions could explore methods to further enhance the IE awareness of broader types of non-compositional constructions beyond idioms (e.g., metaphors and similes).
### Limitations
Pier has two main limitations. First, Pier is not expected to produce embeddings for PIEs unseen during training when used in their figurative sense. This is because each PIE has a conventionalized figurative interpretation stemming from its unique origins and metaphorical linking, which requires external and PIE-specific knowledge (e.g., PIE definitions) for Pier's learning of high-quality PIE embeddings. It is likely that Pier can 'guess' the figurative meanings for certain PIEs with high decomposability or when BART already encodes their semantics during its pre-training. As such, this generalizability to unseen PIEs is not guaranteed with PIER, and we do not currently see a practical way to enable it. Second, Pier requires supervision in the form of sentences with the PIEs classified as literal or figurative. Given that BART's self-supervised learning objective during its pre-training fails to take this into account, we argue that using supervised learning with adapters is a practical alternative to capture IE semantics while reducing the number of trainable parameters and associated training data. With multi-lingual resources for PIEs (Tedeschi et al., 2022) now becoming available, and automatic PIE sense classification methods that generalize to unseen
PIEs (Zeng and Bhat, 2021), we believe this requirement of a training corpus may be less of a bottleneck. More broadly, although we show that PIER does not lose its NLU ability on available tasks, we leave the application of PIER to IE-centered NLU tasks (e.g., NLI with figurative language) to future studies.
## Ethics Statement
The intended use of our system is to serve as a pretrained language model capable of adequately handling idiomatic language. As such, the intended users are those interested in fine-tuning PIER or using PIER's embeddings rich in idiomatic semantics for downstream NLP applications, such as detecting idiomatic expressions in text, sentiment analysis of text with idioms. In case of model failure, PIER may produce embeddings that do not accurately reflect the true figurative meanings of the IEs and thus negatively impact the downstream performance. Therefore, we advise against using PIER as part of decision models in critical situations, such as medical or financial scenarios. To ensure PIER performance as expected, idioms covered by PIER's training should be used. Beyond this, to the best of our knowledge, PIER does not introduce or contain any additional bias. PIER is trained using datasets that are publicly available and reputable. We do not collect or use any data that can violate privacy rights.
## Acknowledgements
This research was supported in part by the National Science Foundation under Grant No. IIS 2230817 and by a U.S. National Science Foundation and Institute of Education Sciences grant (2229612).
|
2302.09125 | JANA: Jointly Amortized Neural Approximation of Complex Bayesian Models | This work proposes ``jointly amortized neural approximation'' (JANA) of
intractable likelihood functions and posterior densities arising in Bayesian
surrogate modeling and simulation-based inference. We train three complementary
networks in an end-to-end fashion: 1) a summary network to compress individual
data points, sets, or time series into informative embedding vectors; 2) a
posterior network to learn an amortized approximate posterior; and 3) a
likelihood network to learn an amortized approximate likelihood. Their
interaction opens a new route to amortized marginal likelihood and posterior
predictive estimation -- two important ingredients of Bayesian workflows that
are often too expensive for standard methods. We benchmark the fidelity of JANA
on a variety of simulation models against state-of-the-art Bayesian methods and
propose a powerful and interpretable diagnostic for joint calibration. In
addition, we investigate the ability of recurrent likelihood networks to
emulate complex time series models without resorting to hand-crafted summary
statistics. | Stefan T. Radev, Marvin Schmitt, Valentin Pratz, Umberto Picchini, Ullrich Köthe, Paul-Christian Bürkner | 2023-02-17T20:17:21Z | http://arxiv.org/abs/2302.09125v3 | # JANA: Jointly Amortized Neural Approximation of Complex Bayesian Models
###### Abstract
This work proposes "jointly amortized neural approximation" (JANA) of intractable likelihood functions and posterior densities arising in Bayesian surrogate modeling and simulation-based inference. We train three complementary networks in an end-to-end fashion: 1) a summary network to compress individual data points, sets, or time series into informative embedding vectors; 2) a posterior network to learn an amortized approximate posterior; and 3) a likelihood network to learn an amortized approximate likelihood. Their interaction opens a new route to amortized marginal likelihood and posterior predictive estimation - two important ingredients of Bayesian workflows that are often too expensive for standard methods. We benchmark the fidelity of JANA on a variety of simulation models against state-of-the-art Bayesian methods and propose a powerful and interpretable diagnostic for joint calibration. In addition, we investigate the ability of recurrent likelihood networks to emulate complex time series models without resorting to hand-crafted summary statistics.
## 1 Introduction
Surrogate modeling (SM) and simulation-based inference (SBI) are two crucial ingredients of a new generation of methods for simulation science (Lavin et al., 2021). From a Bayesian perspective, SM seeks to approximate the intractable likelihood function, whereas SBI aims to approximate the intractable posterior distribution of a complex generative model. Both problems are hard, as they involve multidimensional integrals which cannot be solved with standard analytical or numerical methods. Thus, specialized neural approximators have emerged as novel tools for taming the intractable (Cranmer et al., 2020).
We propose JANA ("Jointly Amortized Neural Approximation"), a Bayesian neural framework for _simultaneous amortized_ SM and SBI, and show how it enables accurate solutions to challenging downstream tasks like the estimation of marginal and posterior predictive distributions (see Figure 1). JANA also presents a major qualitative upgrade to the BayesFlow method (Radev et al., 2020), which was originally designed for amortized SBI alone1.
Footnote 1: All capabilities explored in this work are implemented in the BayesFlow library: [https://github.com/stefanradev93/BayesFlow](https://github.com/stefanradev93/BayesFlow).
It is commonly presumed that amortized SBI is wasteful (Greenberg et al., 2019; Papamakarios and Murray, 2016) and requires much larger simulation budgets than case-based SBI to make up for the much larger prediction domain. Our results challenge this premise. Given identical simulation budgets, JANA outperforms or is on par with ABC-SMC, SNL, SNPE, SNRE, and SNPLA (see Figure 4). We hypothesize that modern neural networks benefit strongly from a broad simulation scope. Thanks to their excellent generalization capabilities, they can exploit outcomes from the entire prior predictive distribution of a simulation to improve local accuracy for each specific case. In this sense, amortized inference seems to be a natural by-product of deep neural modeling, and the initial training effort more than repays with global diagnostics, nearly instant estimation at test time, and no loss in accuracy.
JANA is thus a promising catalyst for the fully Bayesian analysis of massive data sets that used to be out of reach for existing methods. We show that jointly amortized SM and SBI unlock the potential of powerful Bayesian tools for model comparison, validation, and calibration, which are essential in Bayesian workflows (Gelman et al., 2020), but widely underutilized in current simulation-based contexts. For one, JANA offers an efficient way to compute _marginal likelihoods_ via the probabilistic change-of-variables formula (instead of integration over the model's entire prior space) as a prerequisite for _prior predictive_ model selection (i.e., probabilistic Occam's razor). For another, it can rapidly produce
both posterior samples and normalized likelihood estimates of new data instances, as are needed in strong validation procedures of the _posterior predictive_ performance (Vehtari and Ojanen, 2012). In other words, JANA can directly quantify both prior and posterior predictive performance without resorting to Markov chain Monte Carlo (MCMC) sampling or costly model re-fits, in addition to the well-studied advantages of individual posterior or likelihood networks (see Figure 1). Our key contributions are:
1. We develop a neural architecture for fully amortized joint posterior estimation and likelihood emulation;
2. We propose a powerful and interpretable method to test for joint calibration of the networks;
3. We extensively validate our new architecture on analytic toy examples and complex simulation models;
4. We show how our joint architecture solves the challenges of computing both out-of-sample predictive performance and intractable marginal likelihoods;
5. We demonstrate a recurrent neural likelihood for surrogate simulations in a complex time series model.
## 2 Method
### Problem Formulation
**Bayesian Models** We focus on generative Bayesian models specified as a triple \(\mathcal{M}=\big{(}G(\mathbf{\theta},\mathbf{\xi}),p(\mathbf{\xi}\,|\,\mathbf{\theta}),p(\mathbf{\theta})\big{)}\). Such models yield observables \(\mathbf{x}\in\mathcal{X}\) according to the system
\[\mathbf{x}=G(\mathbf{\theta},\mathbf{\xi})\quad\text{with}\quad\mathbf{\xi}\sim p(\mathbf{\xi}\,| \,\mathbf{\theta}),\ \mathbf{\theta}\sim p(\mathbf{\theta}), \tag{1}\]
where \(G\) denotes a simulation program, \(\mathbf{\xi}\in\Xi\) denotes externalized randomness (i.e., noise or pseudorandom program states) with density function \(p(\mathbf{\xi}\,|\,\mathbf{\theta})\), and \(p(\mathbf{\theta})\) encodes prior knowledge about plausible simulation parameters \(\mathbf{\theta}\in\Theta\).
Forward InferenceRunning the simulator \(G\) with a fixed parameter configuration \(\mathbf{\theta}\) and different values of \(\mathbf{\xi}\) is equivalent to random draws from an _implicit likelihood_\(p(\mathbf{x}\,|\,\mathbf{\theta})\):
\[\mathbf{x}\sim p(\mathbf{x}\,|\,\mathbf{\theta})\Longleftrightarrow\mathbf{x}=G(\mathbf{\theta}, \mathbf{\xi})\quad\text{with}\quad\mathbf{\xi}\sim p(\mathbf{\xi}\,|\,\mathbf{\theta}) \tag{2}\]
In theory, implicit likelihoods can be obtained by marginalizing the joint distribution \(p(\mathbf{\xi},\mathbf{x}\,|\,\mathbf{\theta})\) over all possible execution trajectories of the simulation program (i.e., over \(\mathbf{\xi}\)), but this is typically intractable (Cranmer et al., 2020).
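As a toy illustration of Eq. (2), a simulator is simply a deterministic program applied to parameters and externalized noise, so repeated calls with fresh noise yield draws from the implicit likelihood. The exponential-decay model below is a hypothetical stand-in chosen only for concreteness; it is not one of the models studied later.

```python
import numpy as np

def simulator(theta, rng, n_obs=50):
    """Deterministic program G(theta, xi): xi is drawn internally,
    so each call returns one draw x ~ p(x | theta)."""
    decay, noise_scale = theta
    t = np.linspace(0, 10, n_obs)
    xi = rng.normal(size=n_obs)              # externalized randomness
    return np.exp(-decay * t) + noise_scale * xi

rng = np.random.default_rng(42)
prior = lambda: (rng.uniform(0.1, 2.0), rng.uniform(0.05, 0.5))

# One run of the simulator yields a reusable tuple (theta, x)
theta = prior()
x = simulator(theta, rng)
print(theta, x[:5])
```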
Inverse InferenceIn Bayesian analysis, we want to infer a model's latent parameters \(\mathbf{\theta}\) from manifest data \(\mathbf{x}\) through the probabilistic factorization of the joint distribution into prior and (implicit) likelihood:
\[p(\mathbf{\theta}\,|\,\mathbf{x})\propto p(\mathbf{\theta},\mathbf{x})=p(\mathbf{\theta})\int_{ \Xi}p(\mathbf{\xi},\mathbf{x}\,|\,\mathbf{\theta})\,\mathrm{d}\mathbf{\xi}. \tag{3}\]
Since we assume that the likelihood is not available in closed form, we also cannot access the posterior \(p(\mathbf{\theta}\,|\,\mathbf{x})\) and perform parameter estimation through gold-standard Bayesian methods, such as Hamiltonian Monte Carlo (HMC)-MCMC (Carpenter et al., 2017).
Marginal LikelihoodsIn addition to estimating parameters, modelers often want to compare and assign preferences to competing models. From a Bayesian perspective, the
Figure 1: A conceptual illustration of our method for jointly amortized neural approximation (JANA). On the one hand, the summary and posterior network can perform amortized posterior estimation and detect model misspecification. On the other hand, the likelihood network can perform amortized likelihood estimation, surrogate simulations, and interact with probabilistic programming languages (PPLs). Together, the two networks enable posterior predictive and marginal likelihood estimation, which allow for amortized Bayesian model comparison and validation.
canonical measure of evidence for a given model is the _marginal likelihood_ (aka the _prior predictive distribution_),
\[p(\mathbf{x})=\int_{\Theta}\int_{\Xi}p(\mathbf{\theta})\,p(\mathbf{\xi},\mathbf{x}\,|\,\mathbf{\theta })\,\mathrm{d}\mathbf{\xi}\,\mathrm{d}\mathbf{\theta}, \tag{4}\]
which is doubly intractable for complex models because both involved integrals are highly difficult to approximate with sufficient precision (Meng and Wong, 1996). The estimation of the marginal likelihood is central to Bayesian model comparison (BMC), since it naturally embodies a probabilistic version of Occam's razor by penalizing the prior complexity of a model (MacKay, 2003). Thus, it allows us to express our preference for a simpler model over a more complex one, given that both models can account for the observed data equally well.
Posterior Predictive DistributionBayesian models can also be compared and validated on the basis of their posterior predictive performance (Vehtari and Ojanen, 2012). However, many posterior predictive metrics rely on the likelihood density being available analytically. In particular, this is true for the expected log-predictive density (ELPD), which is a widely-applied, general-purpose metric to measure (out-of-sample) posterior predictive performance when no application-specific utilities are known (Vehtari et al., 2017). For \(K\) (new) observations \(\mathbf{x}_{new}^{(k)}\) not previously seen by the model, the ELPD can be defined as
\[\text{ELPD}=\sum_{k=1}^{K}\log\int_{\Theta}p(\mathbf{x}_{\text{new}}^{(k)}\,|\,\bm {\theta})\,p(\mathbf{\theta}\,|\,\mathbf{x})\,\mathrm{d}\mathbf{\theta}. \tag{5}\]
The ELPD has a strong connection to information theory (Vehtari and Ojanen, 2012) and is widely used in Bayesian cross-validation (Vehtari et al., 2017), where it is one of the most prominent sources of computational intractability.
Probabilistic SymmetryOur joint training will leverage the symmetry in the arguments of \(p(\mathbf{\theta}\,|\,\mathbf{x})\) and \(p(\mathbf{x}\,|\,\mathbf{\theta})\), along with the fact that a single run of the simulator (Eq. 1) yields a reusable tuple of parameters and synthetic data \((\mathbf{\theta},\mathbf{x})\). However, many simulation models are characterized by a relatively low-dimensional parameter space \(\Theta\) (e.g., low-dimensional vectors) and a rather high-dimensional data space \(\mathcal{X}\) (e.g., multivariate time series or sets of exchangeable observations). Thus, we need different neural architectures, each separately aligned with the structural properties of \(p(\mathbf{\theta}\,|\,\mathbf{x})\) and \(p(\mathbf{x}\,|\,\mathbf{\theta})\).
### Posterior Network
The posterior network \(\mathcal{P}_{\mathbf{\phi}}\) implements a normalizing flow between \(\mathbf{\theta}\) and a latent variable \(\mathbf{z}_{\mathbf{\theta}}\) with a simple density (e.g., Gaussian) given observed or simulated data \(\mathbf{x}\):
\[p_{\mathbf{\phi}}(\mathbf{\theta}\,|\,\mathbf{x}) =p(\mathbf{z}_{\mathbf{\theta}})\,\left|\det\left(\frac{\partial\mathbf{z}_{ \mathbf{\theta}}}{\partial\mathbf{\theta}}\right)\right| \tag{6}\] \[\mathbf{z}_{\mathbf{\theta}} =\mathcal{P}_{\mathbf{\phi}}(\mathbf{\theta};\mathbf{x}). \tag{7}\]
The normalizing flow is realized via a conditional invertible neural network (cINN) composed by a series of conditional coupling layers, as utilized in Ardizzone et al. (2019) and Radev et al. (2020). Since the observed or simulated data will typically have a complex structure and/or contain varying numbers of observations, the posterior cINN includes a trainable summary network sub-module \(\mathcal{H}_{\mathbf{\psi}}\)(see Radev et al., 2020) which we optimize alongside to extract maximally informative data representations \(\mathcal{H}_{\mathbf{\psi}}(\mathbf{x})\) in an end-to-end manner.
The design of the conditional coupling layers follows the work of (Ardizzone et al., 2019; Ardizzone et al., 2019; Radev et al., 2020), since compositions of such layers exhibit favorable theoretical properties (Draxler et al., 2022) and remarkable empirical performance on high-dimensional unstructured data (Dinh et al., 2016; Kingma and Dhariwal, 2018) or complex Bayesian models in various domains (Bellingette et al., 2022; Bieringer et al., 2021; Radev et al., 2021; von Krause et al., 2022). However, any other coupling design can be used as a plug-in replacement.
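To make the change of variables in Eqs. (6)-(7) concrete, here is a minimal numpy sketch of a single conditional affine coupling layer: one half of the parameter vector passes through unchanged, the other half is scaled and shifted by functions of the first half and of the conditioning vector, and the log-Jacobian is the sum of the log-scales. The random-weight "sub-networks" are placeholders, not the trained coupling networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 4, 8                       # dim(theta) and dim of the conditioning summary vector
W_s = rng.normal(scale=0.1, size=(D // 2 + C, D - D // 2))
W_t = rng.normal(scale=0.1, size=(D // 2 + C, D - D // 2))

def coupling_forward(theta, cond):
    """One conditional affine coupling layer: theta -> z, plus log|det J|."""
    a, b = theta[: D // 2], theta[D // 2:]
    h = np.concatenate([a, cond])
    s = np.tanh(h @ W_s)          # log-scales (bounded for stability)
    t = h @ W_t                   # shifts
    z = np.concatenate([a, b * np.exp(s) + t])
    return z, s.sum()

theta = rng.normal(size=D)
cond = rng.normal(size=C)         # stands in for the summary network output H_psi(x)
z, log_det = coupling_forward(theta, cond)

# Change of variables (Eq. 6): log q(theta | x) = log N(z; 0, I) + log|det J|
log_post = -0.5 * (z @ z) - 0.5 * D * np.log(2 * np.pi) + log_det
print(log_post)
```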
### Likelihood Network
The likelihood network \(\mathcal{L}_{\mathbf{\eta}}\) implements a normalizing flow between \(\mathbf{x}\) and a (multivariate) Gaussian latent variable \(\mathbf{z}_{\mathbf{x}}=\mathcal{L}_{\mathbf{\eta}}(\mathbf{x};\mathbf{\theta})\) given a parameter configuration \(\mathbf{\theta}\),
\[l_{\mathbf{\eta}}(\mathbf{x}\,|\,\mathbf{\theta})=p(\mathbf{z}_{\mathbf{x}})\,\left|\det\left( \frac{\partial\mathbf{z}_{\mathbf{x}}}{\partial\mathbf{x}}\right)\right|. \tag{8}\]
Figure 2: Recurrent likelihood networks can emulate complex Bayesian stochastic differential equation (SDE) models of disease outbreaks (see **Experiment 4**). The top and bottom row each depict \(1\,000\) simulations (same \(\mathbf{\theta}\)) from the surrogate and the actual simulator, respectively.
This formulation is similar to the pushforward expression for the posterior network (Eq. 6), but with \(\mathbf{\theta}\) swapped for \(\mathbf{x}\). The likelihood network, like the posterior network, is also implemented as a cINN. As the conditioning information is now the parameter vector \(\mathbf{\theta}\) (and not a complex data structure), it can be fed directly to the conditional coupling layers of the cINN without an additional summary network.
However, since the data \(\mathbf{x}\) (i.e., simulator outputs) is typically in non-vector form, the design of the coupling layers needs to be tailored according to the probabilistic symmetry of \(p(\mathbf{x}\,|\,\mathbf{\theta})\). Learning \(p(\mathbf{x}\,|\,\mathbf{\theta})\) in its raw form is typically much harder than learning the likelihood \(p(\mathcal{H}(\mathbf{x})\,|\,\mathbf{\theta})\) of some (learned or hand-crafted) summary statistics \(\mathcal{H}(\mathbf{x})\), since the latter are already in a compressed vector form and do not require specialized architectures. JANA can learn either \(p(\mathcal{H}(\mathbf{x})\,|\,\mathbf{\theta})\) or \(p(\mathbf{x}\,|\,\mathbf{\theta})\), as required by the particular application or dictated by the (un-)availability of good summary statistics. In our experiments, we directly target \(p(\mathbf{x}\,|\,\mathbf{\theta})\) and the **Appendix** details how to design likelihood networks for exchangeable or Markovian data.
### Simulation-based training
In contrast to previous joint learning approaches (Wiqvist et al., 2021), we aim for a fully amortized approach: Once the networks have converged, we want to evaluate the normalized densities \(p_{\mathbf{\phi}}(\mathbf{\theta}\,|\,\mathbf{x})\) and \(l_{\mathbf{\eta}}(\mathbf{x}\,|\,\mathbf{\theta})\) for _any_ pair \((\mathbf{\theta},\mathbf{x})\) consistent with a generative model \(\mathcal{M}\). In addition, we want to generate conditional random draws \(\mathbf{\theta}\,|\,\mathbf{x}\) and \(\mathbf{x}\,|\,\mathbf{\theta}\) from both networks for parameter estimation and surrogate modeling. Finally, we want to prescribe a simple distribution to the summary network outputs \(q\big{(}\mathcal{H}_{\mathbf{\psi}}(\mathbf{x})\big{)}\) in order to detect atypical data during inference (i.e., model misspecification) and highlight potential posterior errors (Schmitt et al., 2022). Thus, we minimize the following criterion:
\[\begin{split}\min_{\mathbf{\phi},\mathbf{\psi},\mathbf{\eta}}\mathbb{E}_{p( \mathbf{\theta},\mathbf{x})}\big{[}&-\left(\log p_{\mathbf{\phi}}(\mathbf{\theta }\,|\,\mathcal{H}_{\mathbf{\psi}}(\mathbf{x}))+\log l_{\mathbf{\eta}}(\mathbf{x}\,|\,\mathbf{ \theta})\right)\big{]}\\ &+\lambda\cdot\mathbb{M}\mathbb{M}\mathbb{D}^{2}\big{[}p(\mathcal{ H}_{\mathbf{\psi}}(\mathbf{x}))\,||\mathcal{N}(\mathbb{0},\mathbb{I})\big{]}\end{split} \tag{9}\]
where \(\mathbb{M}\mathbb{M}\mathbb{D}^{2}\) is the maximum mean discrepancy (MMD; Gretton et al., 2012) between the distribution of summary network outputs and a unit Gaussian density. This divergence imposes a probabilistic structure on the summary space learned by \(\mathcal{H}_{\mathbf{\psi}}(\mathbf{x})\) and enables error detection and model criticism during inference (to be explained shortly, see also Schmitt et al., 2022). We approximate the expectation over \(p(\mathbf{\theta},\mathbf{x})\) via simulations from the generative model \(\mathcal{M}\) and repeat this process until the networks converge (i.e., simulation-based training).
Proper minimization of the criterion in Eq. 9 results in correct posterior and likelihood approximation, along with an interpretable summary space. However, the objective promises self-consistency only in the "small world", as it does not guarantee correct posterior inference or likelihood evaluation in the real world when there may be a simulation gap. This is due to the fact that simulation-based training optimizes the expectation with respect to the Bayesian joint model \(p(\mathbf{\theta},\mathbf{x})\), but not (necessarily) the empirical data distribution \(p^{*}(\mathbf{x})\). Thus, the MMD term allows us to detect potential simulation gaps during inference via distribution matching (Schmitt et al., 2022). Moreover, the posterior network can serve as a "critic" for the likelihood network by rejecting surrogate simulations which are judged to be highly unlikely under the true simulator.
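A rough sketch of the criterion in Eq. (9), assuming the per-example log-densities and the summary outputs for a simulated batch are already available (the argument names below are placeholders, not the BayesFlow API): the first term averages the negative log-densities of the posterior and likelihood networks, and the second term is a Gaussian-kernel MMD estimate between the summary outputs and draws from a unit Gaussian.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased Gaussian-kernel estimate of MMD^2 between two sample sets."""
    def gram(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean()

def joint_loss(log_post_vals, log_lik_vals, summary_vals, lam=1.0, rng=None):
    """log_post_vals, log_lik_vals: per-example log-densities from the two networks;
    summary_vals: (B, S) outputs of the summary network on the simulated batch."""
    rng = rng or np.random.default_rng()
    nll = -(log_post_vals + log_lik_vals).mean()
    z = rng.normal(size=summary_vals.shape)          # reference draws from N(0, I)
    return nll + lam * mmd2(summary_vals, z)

# Dummy batch, just to illustrate the call signature
rng = np.random.default_rng(1)
loss = joint_loss(rng.normal(size=32), rng.normal(size=32), rng.normal(size=(32, 8)), rng=rng)
print(loss)
```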
### Validation Methodology: Joint Calibration
Faithful uncertainty representation (i.e., calibration) is an essential precondition for self-consistent and interpretable simulation-based inference. Simulation-based calibration (SBC; Talts et al., 2018) is a general diagnostic method which considers the performance of a sampling algorithm over the entire joint distribution \(p(\mathbf{\theta},\mathbf{x})\), regardless of the specific probabilistic structure of a model.
SBC leverages the generative nature of Bayesian models as well as the self-consistency of the Bayesian joint model \(G(\mathbf{\theta},\mathbf{\xi})\) in the following sense: For all quantiles \(q\in(0,1)\), all uncertainty regions \(U_{q}(\mathbf{\theta}\,|\,\mathbf{x})\) of \(p(\mathbf{\theta}\,|\,\mathbf{x})\) are well calibrated, as long as the generating distribution of the assumed model is equal to true data-generating distribution and posterior computation is exact (Talts et al., 2018). We can formally write this property as
\[q=\int_{\mathcal{X}}\int_{\Theta}\mathbb{I}_{[\mathbf{\theta}^{*}\in U_{q}(\mathbf{ \theta}\,|\,\mathbf{x})]}\,p(\mathbf{x}\,|\,\mathbf{\theta}^{*})\,p(\mathbf{\theta}^{*})\, \mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{\theta}^{*}, \tag{10}\]
where \(\mathbf{\theta}^{*}\) is the true data-generating parameter and \(\mathbb{I}_{[\cdot]}\) is the indicator function. If the posterior network \(\mathcal{P}_{\mathbf{\phi}}\) generates draws from the true posterior and the likelihood network \(\mathcal{L}_{\mathbf{\eta}}\) mimics the simulator perfectly, then the equality implied by Eq. 10 holds regardless of the particular form of the true likelihood or the true posterior. Thus, any violation of this equality indicates some error incurred by joint training, so we refer to our validation procedure as joint simulation-based calibration (JSBC).
The reasons for faulty JSBC can be any combination of (i) inaccurate representation of the posterior; (ii) inaccurate representation of the likelihood; or (iii) an erroneous implementation of the simulation model itself. To differentiate between (i) and (ii), we can first run standard SBC for the posterior network using data draws from the actual simulator instead of the likelihood network. If this check passes, but subsequently JSBC fails, the calibration problems must stem from the likelihood network. Thereby, we can use the posterior network for _model criticism_ of the likelihood network, which would otherwise be infeasible for most Bayesian models.
As part of a Bayesian workflow (Gelman et al., 2020), SBC can quickly become infeasible for case-based methods, as
it requires independent posterior draws from hundreds or thousands of simulated data sets. However, it is effortless for amortized methods, as we can obtain many posterior draws from multiple data sets in a matter of seconds. In practice, we follow Sailynoja et al. (2022) by transforming the posterior draws into fractional rank statistics and computing their empirical cumulative distribution functions (ECDFs). This method provides _simultaneous confidence bands_ and eliminates the need to manually select a binning parameter (e.g., as required by histogram-based methods).
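A minimal sketch of the rank computation underlying (J)SBC: for each simulated data set we record the fractional rank of the true parameter among the posterior draws, and under correct calibration these ranks are uniform. The toy example uses an uninformative "posterior" equal to the prior, which is exactly calibrated by construction; the crude maximum ECDF deviation printed at the end is only a stand-in for the simultaneous confidence bands of Sailynoja et al. (2022).

```python
import numpy as np

def fractional_ranks(theta_true, theta_draws):
    """theta_true: (N,) true parameters; theta_draws: (N, S) posterior draws per data set."""
    return (theta_draws < theta_true[:, None]).mean(axis=1)

rng = np.random.default_rng(0)
N, S = 1000, 250
theta_true = rng.normal(size=N)                 # draws from the prior
theta_draws = rng.normal(size=(N, S))           # "posterior" equal to the prior: calibrated

ranks = np.sort(fractional_ranks(theta_true, theta_draws))
ecdf = np.arange(1, N + 1) / N
max_dev = np.abs(ecdf - ranks).max()            # deviation of the rank ECDF from uniformity
print(max_dev)
```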
### Use Cases for Joint Learning
Posterior Predictive EstimationEstimating the expected predictive performance of a Bayesian model (Eq. 5) requires an analytic expression for the pointwise (i.e., per-observation) likelihood function \(p(\mathbf{x}_{\mathrm{new}}^{(k)}\,|\,\mathbf{\theta})\) at arbitrary new data \(\mathbf{x}_{\mathrm{new}}^{(k)}\)(Burkner et al., 2021). For this reason, the ELPD cannot be computed for Bayesian models with intractable likelihoods or sequential neural estimators.
Moreover, even if the likelihood itself were analytic, the integral in Eq. (5) would still be intractable for most models. It can be efficiently approximated using posterior draws, but doing so in the context of cross-validation requires importance sampling or costly model refits (Vehtari et al., 2017). Hence, evaluating the ELPD for arbitrary cross-validation schemes critically requires both the amortized likelihood and posterior approximator.
Given data used for model fitting \(\mathbf{x}\) and upcoming data \(\mathbf{x}_{\mathrm{new}}^{(k)}\), the two networks can estimate a model's expected predictive performance in two steps. First, we can obtain a large amount of \(S\) random draws from the amortized posterior given \(\mathbf{x}\):
\[\mathbf{\theta}^{(s)}\sim p_{\mathbf{\phi}}(\mathbf{\theta}\,|\,\mathcal{H}_{\mathbf{\psi}}( \mathbf{x}))\text{ for }s=1,...,S. \tag{11}\]
Then, the likelihood network can approximate the ELPD at all \(\mathbf{x}_{\mathrm{new}}^{(k)}\) given \(\{\mathbf{\theta}^{(s)}\}\) via its Monte Carlo estimate:
\[\widehat{\text{ELPD}}=\sum_{k=1}^{K}\log\frac{1}{S}\sum_{s=1}^{S}l_{\mathbf{\eta}}(\mathbf{x}_{\mathrm{new}}^{(k)}\,|\,\mathbf{\theta}^{(s)}) \tag{12}\]
In the context of cross-validation (CV), \(\mathbf{x}\) and \(\mathbf{x}_{\mathrm{new}}\) refer to a random data split, and we can estimate the predictive performance of a Bayesian model by summing over the \(\widehat{\text{ELPD}}\)s from all data splits. Through amortization, JANA can efficiently compute the ELPD for any CV scheme. In **Experiment 3**, we demonstrate this for leave-one-out (LOO)-CV, which is one of the most expensive validation methods.
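The two-step procedure of Eqs. (11)-(12) translates directly into code. In the sketch below the amortized sampler and likelihood density are replaced by an analytic conjugate-Gaussian stand-in (an assumption made purely so the example runs end to end); in practice these calls would be served by the posterior and likelihood networks.

```python
import numpy as np
from scipy.special import logsumexp

def elpd_estimate(x_fit, x_new, sample_posterior, log_lik, S=1000):
    """Monte Carlo ELPD (Eq. 12): average the likelihood of each held-out
    observation over S posterior draws obtained given the fitting data."""
    theta = sample_posterior(x_fit, S)
    return sum(logsumexp([log_lik(x_k, t) for t in theta]) - np.log(S) for x_k in x_new)

# Tiny analytic stand-in: Gaussian observations with unit variance, standard normal prior
rng = np.random.default_rng(0)
x_fit, x_new = rng.normal(0.5, 1.0, size=20), rng.normal(0.5, 1.0, size=5)
post_mean, post_var = x_fit.sum() / (len(x_fit) + 1), 1.0 / (len(x_fit) + 1)
sample_posterior = lambda x, S: rng.normal(post_mean, np.sqrt(post_var), size=S)
log_lik = lambda x_k, t: -0.5 * (x_k - t) ** 2 - 0.5 * np.log(2 * np.pi)

print(elpd_estimate(x_fit, x_new, sample_posterior, log_lik))
```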
Marginal Likelihood EstimationBayesian (prior) predictive model comparison depends on computing a marginal likelihood (Eq. 4). We can leverage the probabilistic change of variable, which results directly from Bayes' rule:
\[\log\widehat{p}(\mathbf{x})=\log l_{\mathbf{\eta}}(\mathbf{x}\,|\,\mathbf{\theta})+\log p(\mathbf{\theta})-\log p_{\mathbf{\phi}}(\mathbf{\theta}\,|\,\mathcal{H}_{\mathbf{\psi}}(\mathbf{x})). \tag{13}\]
Thus, for any data set, we can obtain an estimate of the log marginal likelihood (LML) by evaluating Eq. 6 and Eq. 8, along with the prior density \(p(\mathbf{\theta})\). Evaluating all above terms is infeasible with standard Bayesian methods, since either the normalized posterior, the likelihood, or both quantities are typically intractable. Bridge sampling (Meng and Wong, 1996) enables the approximation of marginal likelihoods from posterior draws, but only works for models with analytical likelihoods and in tandem with non-amortized MCMC.
From a Bayesian perspective, evaluating Eq. 13 across multiple data sets amounts to _amortized bridge sampling_. At the same time, we can use Eq. 13 for assessing non-convergence or problems during inference by evaluating the RHS for a fixed \(\mathbf{x}\) and different \(\mathbf{\theta}\) drawn from the approximate posterior. Under perfect convergence, the RHS of Eq. 13 is independent of \(\mathbf{\theta}\), so any ensuing variation is a measure of pure approximation error.
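A sketch of Eq. (13) as a convergence diagnostic, assuming amortized density evaluators are available (the function arguments below are placeholders): evaluating the right-hand side at several posterior draws gives both a log marginal likelihood estimate and, via the spread across draws, a measure of pure approximation error.

```python
import numpy as np

def log_marginal_likelihood(x, theta_draws, log_lik, log_prior, log_post):
    """Eq. (13) evaluated at several posterior draws of theta.
    Under perfect convergence all values coincide; their spread is approximation error."""
    vals = np.array([log_lik(x, t) + log_prior(t) - log_post(t, x) for t in theta_draws])
    return vals.mean(), vals.std()
```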
Surrogate SimulatorsIn some modeling scenarios, the simulator might be a large-scale computer program implementing a complex generative algorithm (Lavin et al., 2021). Thus, a simulation-based inference workflow might be severely limited by the inability to obtain a large amount of simulations in a reasonable time. In such cases, an amortized surrogate simulator can generate additional data for the posterior network or a black-box optimizer (Gutmann and Corander, 2016). A notable advantage of neural surrogate simulators is that they can directly emulate complex data without summary statistics (see Figure 2). In addition, they can render a non-differentiable simulator differentiable for downstream tasks, such as amortized design optimization (Ivanova et al., 2021) or interact with MCMC samplers (Boelts et al., 2022; Fengler et al., 2021).
## 3 Related Work
Approximate Bayesian ComputationAn established approach to SBI is embodied by approximate Bayesian computation (ABC; Marin et al., 2012; Sisson et al., 2018). ABC is a family of algorithms where the simplest one, "ABC rejection", generates draws from an approximate posterior by repeatedly proposing parameters from the prior distribution, and then simulating a corresponding synthetic data set by running the simulator with the proposed parameters. More sophisticated ABC samplers are Sequential Monte Carlo (ABC-SMC; Beaumont et al., 2009; Del Moral et al., 2012; Picchini and Tamborrino, 2022; Toni, 2011) and Markov chain Monte Carlo ABC (ABC-MCMC; Marjoram et al., 2003; Picchini, 2014). In ABC, raw data are typically reduced via summary functions. However, _hand
crafted_ summary statistics are often insufficient, which results in a leak of information about the parameters (Marin et al., 2018). Recent work has used neural networks to learn informative summary statistics of model parameters in ABC (Chen et al., 2021; Jiang et al., 2017; Wiqvist et al., 2019).
Synthetic Likelihoods and Particle MCMCDespite being intuitive to grasp and use, the above ABC methods are notoriously inefficient, typically requiring millions of model simulations, which can be prohibitive for expensive simulators. Another established SBI alternative, also based on data-reduction via summary statistics, is _synthetic likelihood_(Price et al., 2018; Wood, 2010), which is more suitable for high-dimensional summary statistics. However, since synthetic likelihood is typically implemented as an MCMC sampler where multiple data sets are simulated at each proposed \(\mathbf{\theta}\), it can also be computationally intensive. Particle MCMC (Andrieu et al., 2010) is a simulation-based method for exact Bayesian inference which has found considerable success, especially for state-space models. However, particle MCMC could be infeasible when multiple inference runs are required to separately fit several different data sets.
Neural Posterior EstimationMethods for neural posterior estimation either specialize a neural approximator for inference on a single observation2(Deistler et al., 2022; Durkan et al., 2020; Greenberg et al., 2019; Lueckmann et al., 2017; Papamakarios Murray, 2016), or inference across arbitrary many observations (Ardizzone, Kruse, et al., 2019; Avecilla et al., 2022; Goncalves et al., 2020; Pacchiardi and Dutta, 2022; Radev et al., 2020). The former methods perform _sequential estimation_ by iteratively refining the prior to generate simulations in the vicinity of the observation. Thus, they are _not amortized_, as each new observation necessitates a costly re-training of the neural approximator. In contrast, the latter methods can perform _amortized inference_, as the neural approximator is trained to generalize over the entire prior predictive distribution and can be queried for any observation assumed to arise from the Bayesian model. Importantly, amortization can be performed over any aspect of the model, including data sets (Goncalves et al., 2020) or other contextual factors, such as the number of observations in a data set or the number of time points in a time series (Radev et al., 2020).
Neural Likelihood EstimationA related family of neural methods directly targets the intractable likelihood function instead of the posterior (Boelts et al., 2022; Fengler et al., 2021; Hermans et al., 2020; Lueckmann et al., 2019; Papamakarios et al., 2019). The endpoint of these methods is an _amortized likelihood approximator_ which can mimic a complex simulator or be used in tandem with a (non-amortized) MCMC sampler for posterior estimation. The latter can be prohibitively time-consuming, since it not only requires expensive simulation-based training, but also integrating likelihood approximators into non-amortized MCMC. This makes validating the posteriors (e.g., via simulation-based-calibration; SBC; Sailynoja et al., 2022; Talts et al., 2018) challenging or even impossible in practice. Nevertheless, likelihood approximators have certain advantages over posterior approximators, for instance, they do not need to be retrained for different priors and can emulate the behavior of large-scale simulators (Lavin et al., 2021).
Joint Posterior and Likelihood EstimationIn a pioneering work, Wiqvist et al. (2021) propose the SNPLA method to embody the best of both worlds by jointly training a posterior and a likelihood approximator. However, SNPLA operates in a sequential manner and thus does not yield fully amortized approximators. Moreover, it relies on summary statistics for the likelihood approximation. Accordingly, in the current work, we unify BayesFlow (Radev et al., 2020) and SNPLA (Wiqvist et al., 2021) into a framework for fully amortized joint estimation without manual summary statistics and explore the ensuing benefits for Bayesian modeling.
Figure 3: **Experiment 1.** Example calibration tests for 2 of the more challenging benchmarks. _Top row_: Good posterior and joint calibration of JANA for the Gaussian Mixture model. _Bottom row_: Posterior and joint calibration can be used in tandem to detect an underperforming likelihood network in the SIR model. The posterior network alone induces no systematic deviations when applied to simulator outputs (bottom left), but overestimates the parameters given the outputs of the surrogate network (bottom right).
## 4 Experiments
In the following, we will illustrate the utility of JANA in twelve Bayesian models across four experiments. For **Experiments 1-3**, we train the networks without the MMD criterion in Eq. 9 (i.e., \(\lambda=0\)), because our validations feature no model misspecification. The **Appendix** contains code for reproducing all experiments.
### Ten Benchmark Experiments
SetupThis experiment demonstrates the fidelity of our proposed architecture as well as the utility of our calibration checks to diagnose approximation faults on a set of ten benchmark simulation models proposed by Lueckmann et al. (2021). Since these benchmarks were originally designed for (non-amortized) neural posterior estimation, we deviate from the original problem setting by (i) approximating both posterior and likelihood; and (ii) validating our results on a much larger held-out set of \(1\,000\) simulations (as compared to just 10). Our goal here is _not to propose a better method for posterior estimation_, but to demonstrate the feasibility of joint amortization and the utility of the JSBC diagnostic on a set of popular and rather diverse models.
For each benchmark, we train our networks with a fixed budget of \(10\,000\) simulations, as we consider this a challenging setting with low-to-medium training data availability. All benchmarks and neural approximators are implemented in the BayesFlow library. See the **Appendix** and the accompanying code for more details and complete diagnostics.
ResultsOverall, we observe stable training and good calibration across the ten benchmark models, with Bernoulli GLM Raw and SIR exhibiting systematic joint miscalibration due to likelihood approximation errors (see **Appendix** for detailed results). Figure 3 illustrates the utility of our JSBC diagnostic to reveal both good calibration (i.e., ECDF trajectories completely contained in the confidence ellipse for the Gaussian Mixture benchmark) as well as systematic deviations owing to the likelihood network (i.e., ECDF trajectories partially outside the confidence ellipse for SIR). Moreover, due to the inherent interpretability of JSBC, we can pinpoint the reasons for joint miscalibration of the SIR model. The likelihood network tends to generate more rapid synthetic outbreaks than the actual model, which leads to the posterior network overestimating the parameters of surrogate simulations.
### Two Moons: Method Comparison
SetupHere, we focus specifically on the Two Moons benchmark (Greenberg et al., 2019; Lueckmann et al., 2021) and use the code from Wiqvist et al. (2021) to compare JANA with the popular sequential methods SNL (Papamakarios et al., 2019), SNPE-C (Greenberg et al., 2019), SNRE-B (Durkan et al., 2020), SNPLA (Wiqvist et al., 2021), and a recent ABC-SMC algorithm with "guided particles" (here abbreviated as g-SMC, which is the method called "hybrid" in Picchini & Tamborrino, 2022). The model is characterized by a bimodal posterior with two separated crescent moons for the observed point \(\mathbf{x}_{\text{new}}=(0,0)^{\top}\), which a posterior approximator needs to recover. We train SNL, SNPE-C, SNRE-B, SNPLA, g-SMC, and JANA following the same setup from Wiqvist et al. (2021). For each method, we repeat the experiment ten times using a fixed budget of \(2\,000\), \(6\,000\), and \(10\,000\) simulations and subsequently obtain \(1\,000\) posterior draws from the converged methods. For a numerical evaluation, we apply MMD between the approximate and analytical distributions.
ResultsJANA consistently explores both crescent moons throughout all repetitions, and already captures the local patterns of the posterior after \(2\,000\) training samples (see Figure 4). With respect to posterior performance, JANA even outperforms all sequential methods which are tailored to one observed data set (see Figure 5(a)). The joint performance of our amortized method is comparable to that of the non-amortized sequential methods (see Figure 5(b)). In light of these and previous results, _amortization across data sets_ seems to be a reasonable choice even with limited simulation budgets, especially since sequential (non-amortized) methods may be infeasible for large data sets (Hermans et al., 2021).
Figure 4: **Experiment 2**. Samples from the approximate posterior distribution for one repetition of the Two Moons experiment (repetition #2).
Figure 5: **Experiment 2**. Performance with \(N{=}2\,000\) training simulations, as indexed by sampling-based MMD estimate to analytical distribution (lower is better).
### Exchangeable Diffusion Model
SetupThis example demonstrates amortized marginal likelihood and ELPD estimation based on a mechanistic model of decision making: the diffusion model (DM; Ratcliff and McKoon, 2008). We benchmark our results on state-of-the-art likelihood-based methods. First, we compare our marginal likelihood estimates with those obtained using bridge sampling (Gronau et al., 2017). Second, we compare our ELPD estimates with those obtained using Pareto smoothed importance sampling (PSIS)-LOO (Vehtari et al., 2017). Both methods use random draws obtained via HMC-MCMC as implemented in Stan (Carpenter et al., 2017).
ResultsOur results indicate well-calibrated joint approximation (see Figure 6(b)) as well as accurate posterior and likelihood estimation (see Figure 6(c) and 6(d)). For the approximation of marginal likelihoods, we first perform amortized posterior sampling on the 100 held-out data sets. We then evaluate the approximate likelihood on these samples and finally apply Eq. 13 to compute the log marginal likelihood (LML). Our results show a very close correspondence between our neural LMLs and those obtained via MCMC-based bridge sampling (see Figure 6(c)). Furthermore, our amortized LOO-CV estimates align very closely with the estimates obtained via PSIS-LOO (see Figure 6(d)).
MCMC integrationSurrogate likelihoods provide all information that is needed for MCMC sampling. We provide an interface to PyMC (Salvatier et al., 2016) to allow for easy model building and use of existing samplers. The performance of gradient-based samplers, such as HMC, critically depends on the precision of partial log-likelihood derivatives. Using PyMC's No-U-Turn sampler (NUTS) with our neural likelihood, we obtained results similar to those using Stan. If gradient-based sampling methods fail, we advise to use gradient-free sampling methods, such as slice sampling. For detailed information, see the **Appendix**.
### Markovian Compartmental Model
SetupThis experiment demonstrates surrogate simulations of a complex non-exchangeable model of infectious diseases. The model features 34 parameters and thus represents a considerable extension of the two-parameter toy SIR model (Lueckmann et al., 2021; Radev et al., 2020). We use the model specification and posterior network from Radev et al. (2021). We implement the likelihood network as a recurrent cINN (see Section 2.3) to test its ability to emulate raw and noisy time series. Further, we train the posterior network using the MMD criterion (Eq. 9) with \(\lambda=1\) to _quantify_ the quality of the surrogate simulations.
ResultsUpon convergence, we use the likelihood network to generate synthetic outbreak trajectories and compare them visually with the outputs of the original simulator. We observe good emulation across a variety of different parameter configurations, each leading to a qualitatively different simulated scenario (see Figure 2 for an example and the **Appendix** for detailed results). Moreover, it seems that the surrogate network is not only able to accurately approximate the median trajectory, but also the variability (i.e., _aleatoric uncertainty_) in simulated trajectories.
Beyond purely visual comparisons, we also compute the posterior and joint calibration of the two networks using (J)SBC on \(1\,000\) held-out simulations. We confirm the good posterior calibration observed by (Radev, Graw, et al., 2021). In addition, the joint calibration results help us highlight some subtle deficiencies of the likelihood network. For instance, it tends to overestimate the variability of simulated time series, thus "tricking" the posterior network into estimating higher values for the noise parameters (see **Appendix**). We attribute this deficiency to the extremely wide magnitude range of the simulated data (incidence in the order of millions) which is not captured by our simple input standardization procedure.
Figure 6: **Experiment 3. The true and synthetic likelihood align perfectly (a). The joint approximation of all parameters is well calibrated (b). Both the log marginal likelihood (c) and ELPD (d) estimates of JANA closely approximate those obtained via bridge sampling and PSIS-LOO. Each point in (c) and (d) represents one out of 100 held-out simulations.**
Conclusion
We investigated the utility of JANA for Bayesian surrogate modeling and simulation-based inference within the BayesFlow framework. We believe that JANA can greatly enrich applications of amortized Bayesian inference. Future work should investigate weight sharing schemes for the various network components and advance a framework-independent benchmark database for joint estimation of non-trivial scientific models.
## Acknowledgements
We thank Samuel Wiqvist for the fruitful discussions and his help with running the SNPLA experiments. STR was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2181 - 390900948 (the Heidelberg Cluster of Excellence STRUCTURES) and Google Cloud through the Academic Research Grants program. MS was supported by the Cyber Valley Research Fund (grant number: CyV-RF-2021-16). MS and PCB were supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2075 - 390740016 (the Stuttgart Cluster of Excellence SimTech). VP was supported by the state of Baden-Wurttemberg through bwHPC. UP was supported by the Swedish National Research Council (Vetenskapsradet 2019-03924) and the Chalmers AI Research Centre. UK was supported by the Informatics for Life initiative funded by the Klaus Tschira Foundation. The authors gratefully acknowledge the support and funding.
|
2310.06007 | Anomaly and Brownian fluid particle in Navier-Stokes turbulence | We investigate the Navier-Stokes turbulence driven by a stochastic random
Gaussian force. Using a field-theoretic approach, we uncover an anomaly that
brings hidden structure to the theory. The anomaly is generated by a
non-self-adjoint operator of the Jacobian and it follows the symmetries of the
stochastic Navier-Stokes equation. We calculate the anomaly and demonstrate
that by forcing the anomaly to vanish, the velocity field is constrained and a
monopole-type object with a constant charge is formed. When the viscosity is
zero, the anomaly can be interpreted as the Brownian damping coefficient of a
random fluid particle. We provide the Brownian particle equation and its
solution in the presence of a pump and viscosity. Our results suggest that the
anomaly is an inherent feature of stochastic turbulence and must be taken into
account in all stochastic turbulence calculations. This constitutes an
additional law for the original set of stochastic Navier-Stokes equations. | Timo Aukusti Laine | 2023-10-09T17:31:44Z | http://arxiv.org/abs/2310.06007v3 | # Anomaly and Brownian fluid particle in Navier-Stokes turbulence
###### Abstract
We investigate the Navier-Stokes turbulence driven by a stochastic random Gaussian force. Using a field-theoretic approach, we uncover an anomaly that brings hidden structure to the theory. The anomaly is generated by a non-self-adjoint operator of the Jacobian and it follows the symmetries of the stochastic Navier-Stokes equation. We calculate the anomaly and demonstrate that by forcing the anomaly to vanish, the velocity field is constrained and a monopole-type object with a constant charge is formed. When the viscosity is zero, the anomaly can be interpreted as the Brownian damping coefficient of a random fluid particle. We provide the Brownian particle equation and its solution in the presence of a pump and viscosity. Our results suggest that the anomaly is an inherent feature of stochastic turbulence and must be taken into account in all stochastic turbulence calculations. This constitutes an additional law for the original set of stochastic Navier-Stokes equations.
## I Introduction
Navier-Stokes turbulence is a fascinating and complex phenomenon that represents one of the most challenging and elusive problems in classical physics. It is a branch of fluid dynamics that seeks to describe the chaotic motion of fluids, characterized by rapid and irregular swirling of fluid particles. These phenomena are typically described by the Navier-Stokes equations, partial differential equations that govern the conservation of momentum and mass in fluid flow. Although these equations have greatly advanced our understanding of turbulence, their nonlinearity keeps them an active area of research in fluid dynamics.
Navier-Stokes turbulence within field theory represents an intersection of fluid dynamics and theoretical physics. This approach extends the traditional understanding of the Navier-Stokes equations to field theory and brings a new perspective to the behavior of turbulent flows. It allows us to examine the complex interaction of various physical quantities in a turbulent flow, from velocity and pressure fields to vortices and the energy distribution. This perspective is invaluable in understanding the statistical properties of turbulence that often elude traditional approaches. In this framework, fluid properties are treated as fields, and their behavior is studied through quantum field theory, statistical mechanics, or other field-theoretic approaches, Refs. 1-4.
In this article we will explore the stochastic Navier-Stokes turbulence and extend the results of Burgers turbulence to cover the Navier-Stokes equation, Ref. 5.
## II Navier-Stokes turbulence
In this section, we derive the generating functional of correlation functions based on the stochastic Navier-Stokes equation. This is a basis function that can be used to calculate various types of correlations in turbulent motion, Ref. 3.
**Notations**
We explore the following stochastic Navier-Stokes equations
\[\mathrm{A)}\quad\frac{\partial\vec{U}}{\partial t}+(\vec{U}\cdot \nabla)\vec{U}=\frac{1}{\rho}\nabla p+\nu\nabla^{2}\vec{U}+\vec{f}, \tag{1}\] \[\mathrm{B)}\quad\nabla\cdot\vec{U}=0, \tag{2}\]
where
\[\vec{U}=\vec{U}(t,\vec{x})=u_{1}\hat{i}+u_{2}\hat{j}+u_{3}\hat{k} \tag{3}\]
is the velocity field, \(\rho\) is the mass density, \(p\) is the pressure, \(\nu\) is the viscosity, and \(\vec{f}=\vec{f}(t,\vec{x})\) is the Gaussian random force satisfying the condition
\[\langle f_{i}(t,\vec{x})f_{j}(t^{\prime},\vec{y})\rangle=\kappa(\vec{x}-\vec{y })\delta_{ij}\delta(t-t^{\prime}). \tag{4}\]
Function \(\kappa(\vec{x}-\vec{y})\) defines the spatial correlation of the random forces. Since the density is constant, the fluid is assumed incompressible (2). We introduce the notation
\[x_{\mu}\ \ \ \ \mu=0,1,2,3\ \ \mbox{with}\ \ x_{0}=t,\ \ (x_{1},x_{2},x_{3})=(x_{i})= \vec{x}, \tag{5}\]
i.e.
\[x_{\mu}=(x_{0},x_{i})=(t,\vec{x}),\ \ \ i=1,2,3. \tag{6}\]
We write equation (1) as
\[\partial_{t}u_{i}+(u_{j}\partial_{j})u_{i}=\partial_{i}\tilde{p}+\nu\partial_ {j}\partial_{j}u_{i}+f_{i}, \tag{7}\]
where \(\tilde{p}=p/\rho\), \(\partial_{j}=\partial/\partial x_{j}\) and \(i=1,2,3\). We have also used a shortened notation
\[\vec{U}\cdot\nabla=\sum_{j=1}^{3}u_{j}\partial_{j}=u_{j}\partial_{j}, \tag{8}\]
where repeated indices are summed except when otherwise indicated.
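For intuition about the forcing defined in Eq. (4), the sketch below draws one component of the force at a single time step on a periodic 1D grid with a prescribed spatial correlation \(\kappa(\vec{x}-\vec{y})\); the Gaussian-shaped \(\kappa\) and all numerical values are illustrative choices, not taken from the text.

```python
import numpy as np

n, length = 128, 2 * np.pi
x = np.linspace(0, length, n, endpoint=False)

# Illustrative smooth correlation kappa(x - y) on a periodic 1D grid
dx = x[:, None] - x[None, :]
dx = (dx + length / 2) % length - length / 2        # periodic distance
kappa = np.exp(-dx ** 2 / (2 * 0.3 ** 2))

# With f = C xi, C C^T = kappa and xi white noise, <f(x) f(y)> = kappa(x - y)
C = np.linalg.cholesky(kappa + 1e-6 * np.eye(n))    # small jitter for numerical stability
rng = np.random.default_rng(0)
f = C @ rng.normal(size=n)                          # one force component at one time step
print(f[:5])
```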
**Generating functional**
The generating functional of correlation functions is defined as, Ref. [3],
\[\langle F[\lambda]\rangle=\int DfDuF[\lambda]\exp(-S[f,u]). \tag{9}\]
The \(\langle F[\lambda]\rangle\) can be written with the help of \(\delta\)-functions (when \(i=1\)),
\[\int Df_{1}Du_{1}\delta(\partial_{t}u_{1}+(u_{j}\partial_{j})u_{ 1}-\partial_{1}\tilde{p}-\nu\partial_{j}\partial_{j}u_{1}-f_{1})J[u]\exp\Bigl{(} -\frac{1}{2}\int dtd^{3}xd^{3}yf_{1}(t,\vec{x})\kappa^{-1}(\vec{x}-\vec{y})f_ {1}(t,\vec{y})\Bigr{)} \tag{10}\] \[=\int Du_{1}D\mu_{1}J[u]\exp\Bigl{(}-\frac{1}{2}\int dtd^{3}xd^{3 }y\mu_{1}(t,\vec{x})\kappa(\vec{x}-\vec{y})\mu_{1}(t,\vec{y})+i\int d^{4}x\mu_ {1}(\partial_{t}u_{1}+(u_{j}\partial_{j})u_{1}-\partial_{1}\tilde{p}-\nu \partial_{j}\partial_{j}u_{1})\Bigr{)}.\]
Here, \(J[u]\) is the Jacobian or determinant of the function inside the delta function. Since \(\vec{U}\) is a vector, we could write 3 similar equations for each axis \(1,2,3\). Alternatively, we introduce a matrix or metric \(h^{ij}\) which is zero except for \(h^{11}=\hat{i}\), \(h^{22}=\hat{j}\), and \(h^{33}=\hat{k}\) and has the property
\[\mu^{i}=h^{ij}\mu_{j}. \tag{11}\]
Also, for example,
\[\mu^{i}f_{i}=\mu_{1}f_{1}\hat{i}+\mu_{2}f_{2}\hat{j}+\mu_{3}f_{3}\hat{k}. \tag{12}\]
Then we may write the generating functional as
\[\langle F[\lambda]\rangle=\int DuD\mu F[\lambda]J[u]\exp(-S[u,\mu]), \tag{13}\]
where the action \(S[u(t,\vec{x}),\mu(t,\vec{x})]\) is
\[S[u,\mu]=\frac{1}{2}\int dtd^{3}xd^{3}y\mu^{i}(t,\vec{x})\kappa(\vec{x}-\vec{y})\mu_{i}(t,\vec{y})-i\int d^{4}x\mu^{i}(\partial_{t}u_{i}+u_{j}\partial_{j}u_{i}-\partial_{i}\tilde{p}-\nu\partial_{j}\partial_{j}u_{i}). \tag{14}\]
\(S\) is vector-valued; it should be read as three equations, one for each component, written compactly as a single expression.
**Jacobian**
We determine the Jacobian \(J[u]\). It is not immediately obvious how to take the functional derivative of a vector field; a functional derivative is usually applied to a scalar field. We therefore use the definition
\[\vec{U}=u\vec{e}, \tag{15}\]
where \(\vec{e}=e_{1}\hat{i}+e_{2}\hat{j}+e_{3}\hat{k}\) and write the Jacobian of the stochastic Navier-Stokes equation as
\[J[u]=\det\biggl{|}\frac{\delta\vec{f}}{\delta u}\biggr{|}=\frac{\delta}{ \delta u}\biggl{(}\frac{\partial u}{\partial t}+u(e_{j}\partial_{j})u-\nu \partial_{j}\partial_{j}u\biggr{)}\vec{e}. \tag{16}\]
We have dropped the pressure term because it does not depend on \(u\). The Jacobian is also a vector function consisting of a determinant and a unit vector \(\vec{e}\). The functional derivative can now be calculated and the determinant becomes
\[J[u]=\det|\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{j})-\nu\partial _{j}\partial_{j}|\vec{e}. \tag{17}\]
The value of the determinant is the same for all three directions. By using anticommuting functions, \(\Psi=\Psi(t,\vec{x})\) and \(\tilde{\Psi}=\tilde{\Psi}(t,\vec{x})\), Refs. 3-4, the determinant can be written as
\[J[u]=\int D\tilde{\Psi}D\Psi\exp(-S_{A}), \tag{18}\]
where the action of the determinant is
\[S_{A}=-\vec{e}\int d^{4}x\tilde{\Psi}(\partial_{t}+(e_{j}\partial_{j})u+u(e_{ j}\partial_{j})-\nu\partial_{j}\partial_{j})\Psi. \tag{19}\]
The action is also a vector with 3 components.
**Full action**
We collect the results. Action (14) together with the Jacobian (19) form the complete action,
\[S_{full}[u,\mu,\Psi,\tilde{\Psi}]=\frac{1}{2}\int dtd^{3}xd^{3}y\mu^{i}(t,\vec{x})\kappa(\vec{x}-\vec{y})\mu_{i}(t,\vec{y})-i\int d^{4}x\mu^{i}(\partial_{t}u_{i}+u_{j}\partial_{j}u_{i}-\partial_{i}\tilde{p}-\nu\partial_{j}\partial_{j}u_{i})-\vec{e}\int d^{4}x\tilde{\Psi}(\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{j})-\nu\partial_{j}\partial_{j})\Psi. \tag{20}\]
The action (20) has all the information of the stochastic Navier-Stokes equation in one equation.
## III Effective action when viscosity is zero
In this section we calculate the determinant or Jacobian (17). First, it should be noted that the determinant depends on the velocity field \(u\)
\[\frac{\delta}{\delta u}S_{A}=\vec{e}e_{j}\partial_{j}\bar{\Psi}\Psi\neq 0, \tag{21}\]
so it cannot be ignored. The problem is that the operator in the determinant is non-self-adjoint, and if we want to express the action without the Jacobi fields, i.e. using only \(u\), we need to calculate the square of the determinant. This is analogous to chiral field theories. Therefore we first need to define the "right-handed" and "left-handed" actions or determinants after which the square can be calculated.
**Chiral determinants**
We observe that
\[\int d^{4}x\bar{\Psi}(\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{j})- \nu\partial_{j}\partial_{j})\Psi=\int d^{4}x(-\partial_{t}-u(e_{j}\partial_{j}) -\nu\partial_{j}\partial_{j})\bar{\Psi}\Psi. \tag{22}\]
We get the right-handed determinant, which is
\[S_{R}=-\vec{e}\int d^{4}x\bar{\Psi}(\partial_{t}+(e_{j}\partial_{j})u+u(e_{j} \partial_{j})-\nu\partial_{j}\partial_{j})\Psi, \tag{23}\]
and the left-handed determinant is
\[S_{L}=\vec{e}\int d^{4}x(-\partial_{t}-u(e_{j}\partial_{j})-\nu\partial_{j} \partial_{j})\bar{\Psi}\Psi=-\vec{e}\int d^{4}x\Psi(-\partial_{t}-u(e_{j} \partial_{j})-\nu\partial_{j}\partial_{j})\bar{\Psi}. \tag{24}\]
An additional "\(-\)" sign comes from the integral,
\[\int D\bar{\Psi}D\Psi\exp^{S[\bar{\Psi},\Psi]}=-\int D\Psi D\bar{\Psi}\exp^{- S[\bar{\Psi},\bar{\Psi}]}=\int D\Psi D\bar{\Psi}\exp^{S[\Psi,\bar{\Psi}]}. \tag{25}\]
The square of the chiral determinant is
\[\det|\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{j})-\nu\partial_{j} \partial_{j}|=\Big{[}\det|\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{ j})-\nu\partial_{j}\partial_{j}|\det|-\partial_{t}-u(e_{j}\partial_{j})-\nu \partial_{j}\partial_{j}|\Big{]}^{1/2}. \tag{26}\]
The causality needs to be preserved when calculating the determinant.
**Square of the determinant**
First we set the viscosity to zero. The challenge in calculating the square of the determinant (26) is that the usual two-dimensional chiral techniques are not applicable: they rely on complex values and vectors, whereas here everything is real. Instead, we require that \(S_{R}=S_{L}\) and use the definition of the determinant, which states that the determinant is the product of its eigenvalues. Therefore, we consider the following eigenvalue equations
\[\left\{\begin{array}{l}(\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{ j}))A=\lambda A,\\ (-\partial_{t}-u(e_{j}\partial_{j}))A=\lambda A,\end{array}\right. \tag{27}\]
where \(\lambda=\lambda(t,\vec{x},u)\) is an eigenvalue and \(A=A(t,\vec{x},u)\) is the corresponding eigenfunction. For consistency, both determinants must produce the same eigenvalue functions. Putting the equations (27) together gives
\[\lambda=\frac{1}{2}(e_{j}\partial_{j})u(t,\vec{x}). \tag{28}\]
By analogy with chiral theories, the result is already the "square root". The determinant with the Jacobi fields is then
\[S_{D}(\nu=0)=-\vec{e}\int d^{4}x\bar{\Psi}\frac{1}{2}(e_{j}\partial_{j})u(t, \vec{x})\Psi. \tag{29}\]
One can use symmetries to verify that the result is correct, Ref. [5]. An alternative way to derive the anomaly is to note that
\[\frac{\delta}{\delta u}S_{D}=\frac{\delta}{\delta u}S_{R}+\frac{\delta}{ \delta u}S_{L}. \tag{30}\]
As the definition of the determinant states, the value of the determinant is then
\[\det|\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{j})|=\frac{1}{2}(e_{j }\partial_{j})u(t,\vec{x}). \tag{31}\]
The determinant cannot be zero, because then the generating functional (13) is zero and we have no physics (or some type of regularization is needed to treat the divergence). We write the determinant as a path-integral. By using the identity
\[\text{detM}=\exp\text{Tr}[\text{lnM}], \tag{32}\]
where M is a matrix, we may write
\[\text{detM}=\exp(-S_{D}), \tag{33}\]
and
\[S_{D}(\nu=0)=-\vec{e}\int d^{4}x\ln(\frac{1}{2}e_{j}\partial_{j}u(t,\vec{x})). \tag{34}\]
This is a path-integral representation of the square root of the determinant. The anomaly is due to the second term of the stochastic Navier-Stokes equation (1). This is consistent with the observation that
\[\vec{\mu}\cdot(\vec{U}\cdot\nabla)\vec{U} \tag{35}\]
can also be understood as the interaction Lagrangian of a charged particle with a Dirac monopole or a magnetic point vortex, Ref. [6].
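As a quick sanity check, the identity (32) used above is easy to verify numerically for a matrix with positive eigenvalues; a small self-contained sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
M = A @ A.T + 4 * np.eye(4)                          # symmetric positive definite

lhs = np.linalg.det(M)
rhs = np.exp(np.log(np.linalg.eigvalsh(M)).sum())    # exp(Tr ln M) via the eigenvalues
print(lhs, rhs)                                      # the two values agree to numerical precision
```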
**Constraint**
In path-integrals, local symmetry transformations are particularly interesting because they describe physical phenomena typically observed in nature. We have shown that in general the determinant of the stochastic Navier-Stokes equation depends on the velocity field (31) and cannot be neglected. However, one can consider situations where the anomaly is forced to vanish, i.e. the determinant is required to be a number, which imposes constraints on the field \(u\). Requiring the determinant to be constant gives the condition
\[\det|\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{j})|=\frac{1}{2}e_{j} \partial_{j}u(t,\vec{x})=\text{constant}. \tag{36}\]
We call this "fixing the gauge", i.e. we have chosen the determinant to be a constant number. We could select some other gauge as well. Later we will show that if we want to examine the symmetries of the stochastic Navier-Stokes turbulence equation, this gauge is the only correct choice.
From (36) we derive the constraint condition
\[\frac{\delta}{\delta u}{\rm det}=0, \tag{37}\]
or
\[(e_{j}\partial_{j})\delta u(t,\vec{x})=0. \tag{38}\]
These are the conditions we expect from the anomaly. The variation of the function (34) is
\[\delta S_{D}=-2\vec{e}\int d^{4}x\frac{1}{e_{i}\partial_{i}u(t,\vec{x})}e_{j} \partial_{j}\delta u(t,\vec{x}). \tag{39}\]
This integral is zero if
\[\delta u={\rm constant},\ \ \ \ {\rm or} \tag{40}\] \[e_{j}\partial_{j}\delta u=0,\ \ \ \ {\rm or} \tag{41}\] \[e_{j}\partial_{j}e_{i}\partial_{i}u=0. \tag{42}\]
The equations show the constraint correctly.
**Zero eigenvalue**
Zero eigenvalues are especially interesting because they typically enable some additional physical property or observable. In the stochastic Navier-Stokes turbulence there is a zero eigenvalue in the determinant. We have the eigenvalue equation (27)
\[(\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{j}))A(t,\vec{x},u)=\lambda A (t,\vec{x},u)=\frac{1}{2}(e_{j}\partial_{j})u(t,\vec{x})A(t,\vec{x},u). \tag{43}\]
From (43) we get the zero eigenvalue equation
\[(\partial_{t}+\frac{1}{2}e_{j}\partial_{j}u+ue_{j}\partial_{j})A(t,\vec{x},u)= 0\cdot A(t,\vec{x},u)=0. \tag{44}\]
We note that the constraint imposes no restrictions on \(t\). Therefore we have
\[\partial_{t}A(t,\vec{x},u)=0,\ \ \ \ {\rm and} \tag{45}\] \[ue_{j}\partial_{j}A(t,\vec{x},u)=-\frac{1}{2}e_{j}\partial_{j} uA(t,\vec{x},u). \tag{46}\]
We will use these relations later when calculating the anomaly in the presence of viscosity.
**Effective action**
We collect the results when the viscosity is zero. The effective action is
\[S_{eff}[\nu=0]=\frac{1}{2}\int dtd^{3}xd^{3}y\mu^{i}(t,\vec{x})\kappa(\vec{x}-\vec{y})\mu_{i}(t,\vec{y})-i\int d^{4}x\mu^{i}(\partial_{t}u_{i}+u_{j}\partial_{j}u_{i}-\partial_{i}\tilde{p})-\vec{e}\int d^{4}x\bar{\Psi}\frac{1}{2}e_{j}\partial_{j}u\Psi, \tag{47}\]
or
\[S_{eff}[\nu=0]=\frac{1}{2}\int dtd^{3}xd^{3}y\mu^{i}(t,\vec{x})\kappa(\vec{x}-\vec{y})\mu_{i}(t,\vec{y})-i\int d^{4}x\mu^{i}(\partial_{t}u_{i}+u_{j}\partial_{j}u_{i}-\partial_{i}\tilde{p})-\vec{e}\int d^{4}x\ln(\frac{1}{2}e_{j}\partial_{j}u(t,\vec{x})). \tag{48}\]
The constraint
\[e_{j}\partial_{j}\delta u(t,\vec{x})=0 \tag{49}\]
is a condition for which the anomaly vanishes. Equation (47) is the same as equation (20), but in (47) the determinant is a square root, which can then be expressed in terms of the field \(u\); this gives equation (48).
## IV Effective action when viscosity is non-zero
Next, we consider the situation where the viscosity is non-zero. Again, we need to calculate the square root of the determinant. The action with the Jacobi fields is
\[S_{eff}=\frac{1}{2}\int dtd^{3}xd^{3}y\mu^{i}(t,\vec{x})\kappa(\vec{x}-\vec{y})\mu_{i}(t,\vec{y})-i\int d^{4}x\mu^{i}(\partial_{t}u_{i}+u_{j}\partial_{j}u_{i}-\partial_{i}\tilde{p}-\nu\partial_{j}\partial_{j}u_{i})-\vec{e}\int d^{4}x\bar{\Psi}(\frac{1}{2}e_{j}\partial_{j}u-\nu\partial_{j}\partial_{j})\Psi. \tag{50}\]
We calculate the determinant. We use the following set of eigenvalue equations
\[\left\{\begin{array}{l}(\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{ j})-\nu\partial_{j}\partial_{j})A^{\prime}=\lambda^{\prime}A^{\prime},\\ (-\partial_{t}-u(e_{j}\partial_{j})-\nu\partial_{j}\partial_{j})A^{\prime}= \lambda^{\prime}A^{\prime},\end{array}\right. \tag{51}\]
where \(\lambda^{\prime}=\lambda^{\prime}(t,\vec{x},u)\) is the eigenvalue and \(A^{\prime}=A^{\prime}(t,\vec{x},u)\) is the corresponding eigenfunction. This gives the result
\[\lambda^{\prime}=\frac{1}{2}e_{j}\partial_{j}u(t,\vec{x})-\nu\frac{\partial_{ j}\partial_{j}A^{\prime}}{A^{\prime}}=\lambda-\nu\frac{\partial_{j}\partial_{j}A^{ \prime}}{A^{\prime}}, \tag{52}\]
where \(\lambda\) is the eigenvalue of the non-viscosity equations. We add (52) back into the equation (51). This gives
\[\left\{\begin{array}{l}(\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{ j}))A^{\prime}=\lambda A^{\prime},\\ (-\partial_{t}-u(e_{j}\partial_{j}))A^{\prime}=\lambda A^{\prime},\end{array}\right. \tag{53}\]
from which it follows that \(A^{\prime}=A\), and \(A\) is the eigenvector of the non-viscosity equations. This means that even in the presence of viscosity, the viscosity does not couple to or change the eigenvector. We have then
\[\lambda^{\prime}=\frac{1}{2}e_{j}\partial_{j}u-\nu\frac{\partial_{j}\partial_ {j}A}{A}. \tag{54}\]
The condition (46) can be solved with respect to \(A\)
\[A(t,\vec{x},u)=Au(t,\vec{x})^{-1/2}. \tag{55}\]
The eigenvalue \(\lambda^{\prime}\) can be calculated
\[\lambda^{\prime}=\frac{1}{2}e_{j}\partial_{j}u-\nu F[u]=\frac{1}{2}e_{j} \partial_{j}u-\nu\Big{(}\frac{3}{4}\frac{(\partial_{j}u)^{2}}{u^{2}}-\frac{ \partial_{j}\partial_{j}u}{2u}\Big{)}. \tag{56}\]
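As a consistency check, the quotient \(\partial_{j}\partial_{j}A/A\) appearing in (54) can be evaluated directly from the profile (55). Writing the constant prefactor in (55) as \(A_{0}\) (a label introduced here only for clarity), \(A=A_{0}u^{-1/2}\) gives

\[\partial_{j}A=-\frac{1}{2}A_{0}u^{-3/2}\partial_{j}u,\qquad\partial_{j}\partial_{j}A=A_{0}\Big{(}\frac{3}{4}u^{-5/2}(\partial_{j}u)^{2}-\frac{1}{2}u^{-3/2}\partial_{j}\partial_{j}u\Big{)},\]

so that

\[\frac{\partial_{j}\partial_{j}A}{A}=\frac{3}{4}\frac{(\partial_{j}u)^{2}}{u^{2}}-\frac{\partial_{j}\partial_{j}u}{2u}=F[u],\]

which is exactly the combination entering (56).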
The Jacobian is
\[J[u]=\exp\Bigl{(}\vec{e}\int d^{4}x\ln(\frac{1}{2}e_{j}\partial_{j}u-\nu\Bigl{(} \frac{3}{4}\frac{(\partial_{j}u)^{2}}{u^{2}}-\frac{\partial_{j}\partial_{j}u}{2u }\Bigr{)})\Bigr{)}, \tag{57}\]
and the effective action then becomes
\[S_{eff} = \frac{1}{2}\int dtd^{3}xd^{3}y\mu^{i}(t,\vec{x})\kappa(\vec{x}-\vec {y})\mu_{i}(t,\vec{y})-i\int d^{4}x\mu^{i}(\partial_{t}u_{i}+u_{j}\partial_{j}u _{i}-\partial_{i}\vec{p}-\nu\partial_{j}\partial_{j}u_{i}) \tag{58}\] \[-\vec{e}\int d^{4}x\ln(\frac{1}{2}e_{j}\partial_{j}u-\nu\Bigl{(} \frac{3}{4}\frac{(\partial_{j}u)^{2}}{u^{2}}-\frac{\partial_{j}\partial_{j}u}{ 2u}\Bigr{)}).\]
When the velocity field is complex differentiable in an open set, it can be shown by the power series that
\[F[u]=\frac{3}{4}\frac{(\partial_{j}u)^{2}}{u^{2}}-\frac{\partial_{j}\partial_ {j}u}{2u}>0. \tag{59}\]
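As a simple illustration (the profile is an illustrative choice, not taken from the text), for \(u=u_{0}e^{kx_{1}}\) with \(u_{0}>0\) and \(k\) real,

\[\frac{(\partial_{j}u)^{2}}{u^{2}}=k^{2},\qquad\frac{\partial_{j}\partial_{j}u}{u}=k^{2},\qquad F[u]=\frac{3}{4}k^{2}-\frac{1}{2}k^{2}=\frac{1}{4}k^{2}>0.\]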
The viscosity becomes part of the constraint.
## V Brownian particle
When the viscosity is zero, the action integral can be written without the anomaly term, i.e. by forcing the anomaly to vanish. We reformulate the action of the stochastic Navier-Stokes equation so that it does not contain the anomaly.
**Notations**
We define an object \(L\),
\[L=L[u]=e_{j}\partial_{j}u. \tag{60}\]
\(L\) has the property
\[L=\mbox{constant}\neq 0, \tag{61}\]
and it does not change as \(u\) varies
\[\frac{\delta}{\delta u}L=0,\ \ \ \ \mbox{or}\ \ \ \ \delta L=0. \tag{62}\]
Equations (61) and (62) are the anomaly constraint conditions. Additionally, \(L\) has symmetry properties
\[\tilde{L}[\tilde{t},\tilde{\vec{x}},\tilde{u}]=f_{1}(t,\vec{x},u)L[f_{2}(t, \vec{x},u)], \tag{63}\]
where \(f_{1}\) and \(f_{2}\) are some functions. These are shown in the next section.
**Fluid particle**
We may now write
\[(\vec{U}\cdot\nabla)\vec{U}=ue_{j}\partial_{j}u\vec{e}=uL\vec{e}=L\vec{U}. \tag{64}\]
Note that the equation (64) can be interpreted as an eigenvalue equation for the operator \(\vec{U}\cdot\nabla\), with eigenvalue \(L\) and eigenvector \(\vec{U}\). The stochastic Navier-Stokes equation (1) becomes
\[\frac{\partial\vec{U}}{\partial t}+L\vec{U}=\frac{1}{\rho}\nabla p+\nu\nabla^{2} \vec{U}+\vec{f}. \tag{65}\]
When setting \(p=0\) and \(\nu=0\) in (65) we get
\[\frac{\partial\vec{U}}{\partial t}+L\vec{U}=\vec{f}. \tag{66}\]
This equation describes Brownian motion, the seemingly random motion of a particle in a fluid caused by collisions with molecules in the fluid, Refs. 2-3. \(L\) is the damping coefficient. The general solution of the equation of motion is
\[\vec{U}(t)=\vec{U}(0)e^{-Lt}+\int_{0}^{t}dt^{\prime}\vec{f}(t^{\prime})e^{-L(t-t^{\prime})}. \tag{67}\]
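A minimal numerical sketch of the damped dynamics (66): it integrates \(dU/dt=-LU+f\) with a Gaussian random force by an Euler-Maruyama step and compares the ensemble mean with the decaying first term of (67). The damping value, noise strength, time step, and number of sample paths are illustrative choices, not values taken from the text.

```python
import numpy as np

# Illustrative parameters (not from the text)
L_damp = 2.0        # damping coefficient L
sigma = 0.5         # strength of the Gaussian random force
dt = 1e-3           # time step
n_steps = 500       # total time t = 0.5
n_paths = 2000      # number of independent sample paths

rng = np.random.default_rng(0)
U = np.ones(n_paths)            # U(0) = 1 for every path

for _ in range(n_steps):
    # Euler-Maruyama step for dU/dt = -L U + f, with f a Gaussian white force
    f = sigma * rng.standard_normal(n_paths) / np.sqrt(dt)
    U += (-L_damp * U + f) * dt

t_final = n_steps * dt
print("simulated <U(t)>:", U.mean())                    # noisy estimate of the mean
print("U(0) exp(-L t)  :", np.exp(-L_damp * t_final))   # first term of (67)
```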
We can add the pressure term to the equation (66)
\[\frac{\partial\vec{U}}{\partial t}+L\vec{U}-\frac{1}{\rho}\nabla p=\vec{f}, \tag{68}\]
and the solution is
\[\vec{U}(t)=\vec{U}(0)e^{-Lt}+\int_{0}^{t}dt^{\prime}e^{-L(t-t^{\prime})}\Big{[}\vec{f}(t^{\prime})+\frac{1}{\rho}\nabla p\Big{]}. \tag{69}\]
**Brownian action**
Next, we look at the path-integral representation. The path-integral (50) without the anomaly is
\[S_{eff}[\nu=0]=\frac{1}{2}\int dtd^{3}xd^{3}y\mu^{i}(t,\vec{x})\kappa(\vec{x}- \vec{y})\mu_{i}(t,\vec{y})-i\int d^{4}x\mu^{i}(\partial_{t}u_{i}+Lu_{i}- \partial_{i}\vec{p}). \tag{70}\]
We consider the determinant and use the result (56)
\[\det|\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial_{j})-\nu \partial_{j}\partial_{j}| = \frac{1}{2}L[u]-\nu F[u] \tag{71}\] \[= \mbox{constant}-\nu F[u]\] (72) \[= 1-\nu F[u]. \tag{73}\]
Here a constant has been added to the right-hand side of the determinant, which together with \(L[u]/2\) forms the constant 1. This addition is valid only when \(L[u]\) is constant; for a variation of \(u\) the determinant then obeys
\[\frac{\delta}{\delta u}\det|\partial_{t}+(e_{j}\partial_{j})u+u(e_{j}\partial _{j})-\nu\partial_{j}\partial_{j}|=-\nu\frac{\delta}{\delta u}F[u]. \tag{74}\]
The action of the determinant is then
\[S_{D} = -\vec{e}\int d^{4}x\ln(1-\nu F[u]) \tag{75}\] \[\approx -\vec{e}\int d^{4}x\Big{(}(-\nu F[u])-\frac{1}{2}(-\nu F[u])^{2} +...\Big{)}. \tag{76}\]
In (76) we have used the expansion of the logarithmic function. The path-integral can be written in the form
\[S_{eff} = \frac{1}{2}\int dtd^{3}xd^{3}y\mu^{i}(t,\vec{x})\kappa(\vec{x}- \vec{y})\mu_{i}(t,\vec{y})-i\int d^{4}x\mu^{i}(\partial_{t}u_{i}+L[u]u_{i}- \partial_{i}\tilde{p}-\nu\partial_{j}\partial_{j}u_{i}) \tag{77}\] \[-\vec{e}\int d^{4}x\ln(\frac{1}{2}L[u]-\nu F[u])\] \[\approx \frac{1}{2}\int dtd^{3}xd^{3}y\mu^{i}(t,\vec{x})\kappa(\vec{x}- \vec{y})\mu_{i}(t,\vec{y})-i\int d^{4}x\mu^{i}(\partial_{t}u_{i}+L[u]u_{i}- \partial_{i}\tilde{p})\] (78) \[+\nu\int d^{4}x(i\mu^{i}\partial_{j}\partial_{j}u_{i}+\vec{e}F[u ]).\]
Here we have a fluid particle in random Brownian motion which is affected by the pump (first term) and the viscosity (last term).
## VI Symmetry properties of \(L\)
In this section, we investigate how the damping coefficient \(L\) changes under the Navier-Stokes symmetries. The stochastic Navier-Stokes equation (1), \(f=0\), is invariant under the following symmetries, Ref. [7].
a. Space-translations \[\vec{U}(t,\vec{x})=\vec{U}(t,\vec{x}+\vec{a})\ \ \ \ \vec{a}\in\mathbb{R}^{3}\] b. Time-translations \[\vec{U}(t,\vec{x})=\vec{U}(t+\tau,\vec{x})\ \ \ \ \tau\in\mathbb{R}\] c. Space-reflections (parity) \[P\vec{U}(t,\vec{x})=-\vec{U}(t,-\vec{x})\] d. Galilean transformations \[\vec{U}(t,\vec{x})=\vec{U}(t,\vec{x}+\vec{a}t)-\vec{a}\ \ \ \ \vec{a}\in\mathbb{R}^{3}\] e. Space-rotations \[\vec{U}(t,\vec{x})=R^{T}\vec{U}(t,R\vec{x})\ \ \ \ R\in SO(3)\] f. Scale invariance in space (\(\nu=0\)) \[\vec{U}(t,\vec{x})=\lambda^{h}\vec{U}(t,\lambda^{h}\vec{x})\ \ \ \ \lambda\in\mathbb{R}^{+}\] g. Scale invariance in time (\(\nu=0\)) \[\vec{U}(t,\vec{x})=\lambda^{-h}\vec{U}(\lambda^{h}t,\vec{x})\ \ \ \ \lambda\in\mathbb{R}^{+}\]
Next, we consider how these symmetries affect \(L\).
**a. Space-translations**
The generator of the space translations is
\[\delta u_{i}=a_{j}\partial_{j}u_{i}, \tag{79}\]
where \(a_{j}\) is constant. The variation of the damping coefficient is then
\[\delta L=e_{j}\partial_{j}\delta u_{i}e_{i}=e_{k}\partial_{k}(a_{j}\partial_{ j}u_{i})e_{i}=a_{j}\partial_{j}L=0, \tag{80}\]
and the \(L\) is
\[\tilde{L}(t,\vec{x})=L(t,\vec{x}+\vec{a})=\text{constant}. \tag{81}\]
**b. Time-translations**
In the same way, the generator of the time-translations is
\[\delta u_{i}=a\partial_{t}u_{i}, \tag{82}\]
and the \(L\) is
\[\tilde{L}(t,\vec{x})=L(t+a,\vec{x})=\mbox{constant}. \tag{83}\]
## c. Parity
Parity cannot be represented by a sequence of infinitesimal transformations. It is nevertheless easy to see that
\[\tilde{L}(t,\vec{x})=L(t,-\vec{x})=\mbox{constant}. \tag{84}\]
## d. Galilean symmetry
The infinitesimal change is
\[\delta u_{i} = a_{j}t\partial_{j}u_{i}-a_{i}, \tag{85}\] \[\delta\mu_{i} = a_{j}t\partial_{j}\mu_{i},\] (86) \[\delta\Psi_{i} = a_{j}t\partial_{j}\Psi_{i},\] (87) \[\delta\bar{\Psi}_{i} = a_{j}t\partial_{j}\bar{\Psi}_{i}, \tag{88}\]
and the damping coefficient gives
\[\delta L=e_{j}\partial_{j}\delta u_{i}e_{i}=a_{j}t\partial_{j}L=0. \tag{89}\]
This corresponds to
\[\tilde{L} = L(t,\vec{x}+\vec{a}t)=\mbox{constant}, \tag{90}\] \[\tilde{\vec{u}} = \vec{u}(t,\vec{x}+\vec{a}t)-\vec{a}. \tag{91}\]
## e. Space-rotations
The field variations of the space-rotations are
\[\delta u_{i} = a(q_{kl}x_{l}\partial_{k}u_{i}-q_{ik}u_{k}), \tag{92}\] \[\delta\mu_{i} = a(q_{kl}x_{l}\partial_{k}\mu_{i}+q_{ki}\mu_{k}),\] (93) \[\delta\Psi_{i} = a(q_{kl}x_{l}\partial_{k}\Psi_{i}-q_{ik}\Psi_{k}),\] (94) \[\delta\bar{\Psi}_{i} = a(q_{kl}x_{l}\partial_{k}\bar{\Psi}_{i}+q_{ki}\bar{\Psi}_{k}), \tag{95}\]
where \(q_{ij}\) is the rotation in \(SO(3)\). The change in \(L\) is
\[\delta L=(R\cdot\nabla)L=0. \tag{96}\]
This means that the damping coefficient is invariant under the rotational symmetry,
\[\tilde{L}(t,\vec{x})=L(t,R\vec{x})=\mbox{constant}. \tag{97}\]
**f. Scale invariance in space (\(\nu=0\))**

The transformation rules are
\[\delta u_{i} = a(x_{j}\partial_{j}u_{i}-u_{i}), \tag{98}\] \[\delta\mu_{i} = a(x_{j}\partial_{j}\mu_{i}+2\mu_{i}),\] (99) \[\delta\tilde{p} = a(x_{j}\partial_{j}\tilde{p}-2\tilde{p}),\] (100) \[\delta\Psi_{i} = a(x_{j}\partial_{j}\Psi_{i}+\frac{1}{2}\Psi_{i}),\] (101) \[\delta\tilde{\Psi}_{i} = a(x_{j}\partial_{j}\tilde{\Psi}_{i}+\frac{1}{2}\bar{\Psi}_{i}). \tag{102}\]
The change in \(L\) is
\[\tilde{L}(t,\vec{x})=L(t,\lambda^{h}\vec{x})=\mbox{constant}. \tag{103}\]
**g. Scale invariance in time (\(\nu=0\))**
The transformation rules are
\[\delta u_{i} = a(t\partial_{t}u_{i}+u_{i}), \tag{104}\] \[\delta\mu_{i} = a(t\partial_{t}\mu_{i}-\mu_{i}),\] (105) \[\delta\tilde{p} = a(t\partial_{t}\tilde{p}+2\tilde{p}),\] (106) \[\delta\Psi_{i} = at\partial_{t}\Psi_{i},\] (107) \[\delta\bar{\Psi}_{i} = at\partial_{t}\bar{\Psi}_{i}. \tag{108}\]
The change in \(L\) is
\[\tilde{L}(t,\vec{x})=\lambda^{-h}L(\lambda^{h}t,\vec{x})=\mbox{constant}. \tag{109}\]
We have shown that the damping coefficient obeys all the same symmetries that the stochastic Navier-Stokes equation has. In all transformations, it is constant, as required by the constraint. The symmetry transformations do not change the coefficient except in the scale invariance in time. This is a feature with implications.
**Symmetries of the determinant actions**
In this section, we show that \(L[u]\) must be constant due to the symmetries of the stochastic Navier-Stokes equation. First, suppose that \(L[u]\) is not constant.
We start by applying symmetry transformations to the anomaly of the action, equation (29),
\[S_{D1}(\nu=0)=-\vec{e}\int d^{4}x\bar{\Psi}\frac{1}{2}(e_{j}\partial_{j})u(t, \vec{x})\Psi. \tag{110}\]
As can be verified using the variations of the fields \(u\), \(\Psi\) and \(\bar{\Psi}\), the action (110) is invariant under all local symmetry transformations a.-g. In other words, for these local symmetries
\[\delta S_{D1}(\nu=0)_{\rm a.-g.}=0, \tag{111}\]
and the result is independent of whether \(L[u]\) is constant. We then write (110) as
\[S_{D1}(\nu=0)=-\frac{1}{2}\vec{e}\int d^{4}x\bar{\Psi}L[u]\Psi. \tag{112}\]
The variation of (112) is
\[\delta S_{D1}(\nu=0) = -\frac{1}{2}\vec{e}\int d^{4}x(\delta\bar{\Psi}L[u]\Psi+\bar{\Psi} \delta L[u]\Psi+\bar{\Psi}L[u]\delta\Psi) \tag{113}\] \[= -\frac{1}{2}\vec{e}\int d^{4}x(\bar{\Psi}\delta L[u]\Psi+L[u]( \delta\bar{\Psi}\Psi+\bar{\Psi}\delta\Psi)). \tag{114}\]
An additional condition is needed, and one is available. Looking at the eigenvalue equation (27), we find that
\[\Psi=\bar{\Psi}, \tag{115}\]
and because the \(\Psi\) and \(\bar{\Psi}\) are anticommuting fields, the result is
\[\delta S_{D1}(\nu=0)=-\frac{1}{2}\vec{e}\int d^{4}x\bar{\Psi}\delta L[u]\Psi. \tag{116}\]
Variation (116) is zero only if \(\delta L[u]=0\) or \(L[u]\) is constant.
We derive the same result in a different way. We have also another formula for the anomalous action, equation (34),
\[S_{D2}(\nu=0)=-\vec{e}\int d^{4}x\ln(\frac{1}{2}e_{j}\partial_{j}u(t,\vec{x})). \tag{117}\]
The Jacobi fields, whose variations made the action (110) vanish as in (111), are absent from this action. For consistency, however, the variation of the action (117) must also vanish under the same local symmetry transformations a.-g., i.e.
\[\delta S_{D2}(\nu=0)_{\rm a.-g.}=0. \tag{118}\]
Therefore, the constraint condition
\[L[u]=\rm constant, \tag{119}\]
is the only valid choice of gauge that the stochastic Navier-Stokes turbulence equation can have when examining the symmetries of this equation. Otherwise, the symmetry properties of the anomalous actions (110) and (117) are different.
In summary, we have shown in two different ways that \(L[u]\) must be constant.
## VII Vortices and Monopoles
A vortex is a stable time-independent solution to a set of classical field equations that has finite energy in two spatial dimensions; it is a two-dimensional soliton. In three spatial dimensions, a vortex becomes a string, a classical solution with finite energy per unit length, Ref. [8].
In this section, we examine \(L\) for a Brownian fluid particle solution, (67),
\[\vec{U}(t,\vec{x})=\vec{U}(0,\vec{x})e^{-Lt}+\int_{0}^{t}dt^{\prime}\vec{f}(t^{\prime},\vec{x})e^{-L(t-t^{\prime})}. \tag{120}\]
The constraint \(L\) becomes
\[L\vec{e}=e_{j}\partial_{j}u\vec{e}=\int_{0}^{t}dt^{\prime}e_{j}\partial_{j} \vec{f}(t^{\prime},\vec{x})e^{-L(t-t^{\prime})}=\rm constant\cdot\vec{e}. \tag{121}\]
We have assumed that the first term of (120) can be neglected at large values of time. Taking a derivative with respect to time we have
\[e_{j}\partial_{j}f_{i}(t,\vec{x})=0. \tag{122}\]
We assume a pulse which has a Gaussian shape
\[f_{i}\sim\exp(-a_{j}x_{j}^{2}), \tag{123}\]
where \(a_{i}\) is constant. Equation (122) gives
\[\sum_{j=1}^{3}a_{j}e_{j}x_{j}=0. \tag{124}\]
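The step from (122) to (124) is immediate: inserting the Gaussian profile (123) into \(e_{j}\partial_{j}f_{i}=0\) gives

\[e_{j}\partial_{j}e^{-a_{k}x_{k}^{2}}=-2\Big{(}\sum_{j=1}^{3}a_{j}e_{j}x_{j}\Big{)}e^{-a_{k}x_{k}^{2}}=0,\]

which can only hold if the prefactor appearing in (124) vanishes.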
The condition (124) creates a constraint on the shape of the soliton. In two dimensions, the vortex has a Gaussian shape and it is rotationally symmetric in \(u\). In three spatial dimensions, a vortex has a shape
\[f_{i}\sim\exp(-a_{1}x_{1}^{2}-a_{2}x_{2}^{2}-[(-a_{1}e_{1}x_{1}-a_{2}e_{2}x_{2} )/(a_{3}e_{3})]^{2}), \tag{125}\]
and this is a string.
There are two things to note. First, the stochastic force \(f\) is the source of the anomaly and the vortex. Without a force there are no vortices. And conversely, when a stochastic force is present, there is an anomaly or vortex in the theory. Second, as considered earlier in this article,
\[\vec{\mu}\cdot(\vec{U}\cdot\nabla)\vec{U} \tag{126}\]
indeed describes the interaction term between the field and the vortex.
## VIII Brownian particle in scaling symmetry
As a more concrete example, we consider the following local time reparameterization, \(\beta=\beta(t)\), \(a\) and \(b\) are constants,
\[\tilde{t} = \beta(t)^{a}, \tag{127}\] \[\tilde{\vec{x}} = \vec{x}\beta^{\prime}(t)^{b},\] (128) \[\tilde{u}_{i} = u_{i}\beta^{\prime}(t)^{a-b},\] (129) \[\tilde{\mu}_{i} = \mu_{i}\beta^{\prime}(t)^{2b-a},\] (130) \[\tilde{\vec{p}} = \tilde{p}\beta^{\prime}(t)^{2a-2b}. \tag{131}\]
This combines the scaling of time and space symmetries. The transformation corresponds to the following field variations
\[\delta u_{i} = a\beta\partial_{t}u_{i}+b\beta^{\prime}x_{j}\partial_{j}u_{i}+(a -b)\beta^{\prime}u_{i}, \tag{132}\] \[\delta\mu_{i} = a\beta\partial_{t}\mu_{i}+b\beta^{\prime}x_{j}\partial_{j}\mu_{ i}+(2b-a)\beta^{\prime}\mu_{i},\] (133) \[\delta\tilde{p} = a\beta\partial_{t}\tilde{p}+b\beta^{\prime}x_{j}\partial_{j} \tilde{p}+(2a-2b)\beta^{\prime}\tilde{p}. \tag{134}\]
The change in the action is
\[\frac{\delta S_{eff}}{b} = \frac{3h-1}{2}\int dtd^{3}xd^{3}y\beta^{\prime}\mu^{i}(t,\vec{x} )\kappa(\vec{x}-\vec{y})\mu_{i}(t,\vec{y})-i\int dx^{4}\beta^{\prime\prime} \mu^{i}(x_{j}\partial_{j}u_{i}-hu_{i}) \tag{135}\] \[+i(h+1)\nu\int dx^{4}\beta^{\prime}\mu^{i}\partial_{j}\partial_{ j}u_{i}-\vec{e}\frac{1}{b}\delta\int dx^{4}\Big{(}\frac{1}{2}L[u]-\nu F[u] \Big{)},\]
where \(h=(b-a)/b\). The second term vanishes either if
\[\mbox{a)}\ \ \beta^{\prime\prime}=0,\ \
Conclusions
Navier-Stokes turbulence is a chaotic and turbulent flow of a fluid characterized by rapid and irregular movement of fluid particles. On the other hand, Brownian motion is the random movement of particles suspended in a liquid, which is caused by their collision with fast-moving atoms or molecules in the surrounding medium. Random Brownian motion in Navier-Stokes turbulence represents a complex interaction between two separate phenomena in fluid dynamics and statistical physics. The connection between Brownian motion and Navier-Stokes turbulence arises when looking at small particles or tracer particles suspended in a turbulent fluid.
In this article, we have shown that there is an anomaly in the Navier-Stokes turbulence driven by the Gaussian random force. The anomaly must somehow be handled in stochastic turbulence calculations. If no choice is made, an inherent feature of the anomaly is lost. Therefore, our result suggests a third law for the original Navier-Stokes equations (1)-(2)
\[\mathrm{A)}\quad\frac{\partial\vec{U}}{\partial t}+(\vec{U}\cdot \nabla)\vec{U}=\frac{1}{\rho}\nabla p+\nu\nabla^{2}\vec{U}+\vec{f}, \tag{148}\] \[\mathrm{B)}\quad\nabla\cdot\vec{U}=0,\] (149) \[\mathrm{C)}\quad\mathrm{Rule}\;\mathrm{how}\;\mathrm{to}\; \mathrm{treat}\;\mathrm{the}\;\mathrm{anomaly}\;(=\mathrm{fixing}\;\mathrm{gauge}). \tag{150}\]
We need a rule, Eq. (150), that defines how to fix the "gauge" for the velocity field. In this article we selected the gauge that adds a constant constraint
\[\mathrm{A)}\quad\frac{\partial\vec{U}}{\partial t}+(\vec{U}\cdot \nabla)\vec{U}=\frac{1}{\rho}\nabla p+\nu\nabla^{2}\vec{U}+\vec{f}, \tag{151}\] \[\mathrm{B)}\quad\nabla\cdot\vec{U}=0,\] (152) \[\mathrm{C)}\quad(\vec{e}\cdot\nabla)\vec{U}=\mathrm{constant}. \tag{153}\]
This choice of gauge, Eq. (153), forces the anomaly to be a number and removes the anomalous term from the theory. Instead, a monopole-type object is formed, whose "charge" is "constant". If we want to examine the local symmetries of the stochastic Navier-Stokes equation, this gauge is the only correct choice. The gauge (153) is consistent with the Brownian damping coefficient of a random fluid particle. The difference to the traditional Brownian factor is that in this gauge selection the coefficient depends on the velocity field and it also follows all the symmetries of the Navier-Stokes equation. Another selection of the gauge is also possible, and the choice may depend on the physical problem under investigation.
|
2301.10124 | Dynamics of blood cells during a routine laboratory examination | Centrifugation is a commonly performed laboratory procedure that helps to
separate blood cells such as $RBCs$, $WBCs$, and platelets from plasma or
serum. Although centrifugation is a routine procedure in most medical
laboratories, the factors that affect the efficacy of the centrifugation
process have never been studied analytically. In this paper, we examine the
effect of the centrifugation time on the efficacy of the centrifugation process
by studying the dynamics of the blood cells via the well-known Langevin
equation or equivalently, by solving the Fokker-Plank equation. Our result
depicts that the speed of the centrifuge is one of the determinant factors
concerning the efficacy of the centrifugation process. As the angular speed
increases, the centrifugal force steps up and as a result, the particles are
forced to separate from the plasma or serum. The room temperature also
considerably affects the dynamics of the analytes during centrifugation. Most
importantly, the generation of heat during centrifugation steps up the
temperature within a centrifuge and as a result, not only the stability of the
sample but also the mobility of the analytes is affected. We show that as the centrifuge
temperature steps up, the velocity of the cells as well as the displacement of
the cell in the fluid decreases. We then study the dynamics of the whole blood
during capillary action where in this case the blood flows upward in a narrow
space without the assistance of external forces. Previous investigations show
that the height that the fluid rises increases as the surface tension steps up. | Mesfin Taye | 2023-01-24T16:44:24Z | http://arxiv.org/abs/2301.10124v1 | # Dynamics of blood cells during a routine laboratory examination
###### Abstract
Centrifugation is a commonly performed laboratory procedure that helps to separate blood cells such as \(RBCs\), \(WBCs\), and platelets from plasma or serum. Although centrifugation is a routine procedure in most medical laboratories, the factors that affect the efficacy of the centrifugation process have never been studied analytically. In this paper, we examine the effect of the centrifugation time on the efficacy of the centrifugation process by studying the dynamics of the blood cells via the well-known Langevin equation or equivalently, by solving the Fokker-Plank equation. Our result depicts that the speed of the centrifuge is one of the determinant factors concerning the efficacy of the centrifugation process. As the angular speed increases, the centrifugal force steps up and as result, the particles are forced to separate from the plasma or serum. The room temperature also considerably affects the dynamics of analyse during centrifugation. Most importantly, the generation of heat during centrifugation steps up the temperature within a centrifuge and as a result, not only the stability of the sample but also mobility of analyse is affected. We show that as the centrifuge temperature steps up, the velocity of the cells as well as the displacement of the cell in the fluid decreases. We then study the dynamics of the whole blood during capillary action where in this case the blood flows upward in a narrow space without the assistance of external forces. Previous investigations show that the height that the fluid rises increases as the surface tension steps up. The viscosity of the fluid also affects the capillary action but to date, the dependence of the height on viscosity has never been explored due to the lack of a mathematical correlation between the viscosity of blood and surface tension [1]. In this work, we first examine the correlation between surface tension and viscous friction via data fitting. Our result exhibits that the viscosity of the blood increases linearly as the surface tension increases. The mathematical relation between the height and viscous friction is derived. It is shown that the height of the blood that rises in capillary increases as the viscous friction steps up. As the temperature of the room steps up, the height also decreases. The dependence of erythrocytes sedimentation rate on surface tension is also studied. The results obtained in this work show that the ESR increases as surface tension steps down
## I Introduction
Medical laboratory examinations are vital since these examinations help to diagnose any abnormalities and treat a patient based on the observed results. Particularly, blood tests are routinely performed to evaluate any abnormal conditions. Most of these blood works require sedimentation either via centrifugation or gravity. Often, the observed diagnostic test results are affected by external factors such as temperature. To understand the factors that affect the outcome of the routine examinations, it is vital to explore the dynamics of the whole blood, erythrocytes (RBCs), leukocytes (WBCs), and thrombocytes (platelets). In this work, using physiological parameters, we study the dynamics of blood cells during routine lab exams.
Particularly, centrifugation is one of the commonly performed laboratory procedures that help to separate blood cells such as \(RBCs\), \(WBCs\), and platelets from plasma or serum. When blood is first mixed with an anticoagulant and allowed to be centrifuged for a few minutes, the blood cells sediments by leaving the plasma at the top. In the absence of anticoagulant, the blood clots, and when it is centrifuged, the blood cells sediment leaving the serum at the top. The serum is an ideal sample for diagnostic tests since it lacks leukocytes, erythrocytes, platelets, and other clotting factors. Although centrifugation is a routine procedure in most medical laboratories, the factors that affect the efficacy of the centrifugation process have never been studied analytically. As discussed in the work [1], the centrifugation time, temperature, the length of the test tube, and the speed of the centrifuge are the determinant factors with regards to the efficacy of the centrifugation process. In this paper, via an exact analytical solution, we
study the factors that affect the efficacy of the centrifugation process. First, we examine the effect of the centrifugation time on the efficacy of the centrifugation process. Since blood cells are microscopic in size, their dynamics can be modeled as a Brownian particle walking in a viscous medium. As blood is a highly viscous medium, the chance for the blood cells to accelerate is negligible and the corresponding dynamics can be studied via Langevin equation or Fokker Planck equation [2; 3; 4; 5; 6; 7; 8]. Solving the Fokker Planck equation analytically, we explore how the dynamics of blood cells behave as a function of the model parameters. Because our study is performed by considering real physiological parameters, the results obtained in this work non only agree with the experimental observations but also help to understand most hematological experiments that are conducted in vitro. In a medical laboratory, the standard test tube has a length of \(L=150\)\(mm\) and the cells separate from the plasma or serum at the result of the centrifugal force \(f=Nm\omega^{2}r\) where \(N\) denotes the number of cells that form a cluster while \(m\) designates the mass of \(RBC\) or platelets. Since the \(WBCs\) are heavy, they move way before \(RBCs\), and as an approximation one can disregard their dynamics.
Our result depicts that the speed of the centrifuge is one of the determinant factors concerning the efficacy of the centrifugation process. As the angular speed increases, the centrifugal force steps up and as result, the particles are forced to separate from the plasma or serum. Depending on the size and fragility of the sample, the centrifugation speed should be adjected. To increase the efficacy of the centrifugation process, as the size of the particle decreases, the centrifugation speed should step up. The length of the test tube affects the effectiveness of the centrifugation process. As the length of the test, tube steps up, the centrifugal force increases. The room temperature considerably affects the dynamics of analyse during centrifugation. Most importantly, the generation of heat during centrifugation steps up the temperature within a centrifuge and as a result, not only the stability of the sample but also mobility of analyse is affected. The effect of temperature near the test tube causes difficulties in the current experimental procedure by inducing additional false-positive results. In this regard, developing a mathematical model and exploring the model system using the powerful tools of statistical mechanics provides insight as well as guidance regarding the dynamics of RBCs in vitro. Our result shows that as the centrifuge temperature steps up, the velocity of the cells as well as the average distance of the cell in the fluid decreases. This effect of temperature can be corrected by increasing the centrifugation time. However, a longer centrifugation time might lead to a considerable amount of heat generation. This, in turn, may cause the red blood cells to lyse. Prolonged centrifugation at high speed also causes structural damage to the cells and as a result, hemolysis occurs. On the contrary, low-speed centrifugation leads to insufficient separation of plasma and serum from cellular blood components.
Our analysis also indicates that, when the RBC forms rouleaux (as \(N\) increases), the velocity of the cells increases. This is because as \(N\) steps up, the centrifugal force increases. This also implies since the cells in serum form aggregates due to clotting factors, they need less centrifugation time than plasma. As shown in Fig. 1, the size of WBC is considerably large in comparison with the RBC. Since the platelet has the smallest size, its velocity and displacement along the test tube are significantly small. As a result, the RBCs separate far more than platelets showing that to avoid the effect of clotting factors, one has to increase the centrifugation time.
Furthermore, we study the dynamics of the whole blood during capillary action where in this case the blood flows upward in narrow spaces without the assistance of external forces. Previous investigations show that the height that the fluid rises increases as the surface tension steps up. The viscosity of the fluid also affects the capillary action but to date, the dependence of the height on viscosity has never been explored due to the lack of a mathematical correlation between the viscosity of blood and surface tension [10]. In the past, for non-Newtonian fluids, Pelofsky [11] has studied the relation between surface tension and viscosity. His empirical equation depicts that the surface tension increases as the viscosity steps up. In this work, we first examine the correlation between surface tension and
Figure 1: A blood smear that shows the composition of RBCs, WBCs, and platelets [9].
viscous friction via data fitting. We show that the viscosity of the blood increases as the surface tension increases. The mathematical relation between the height and viscous friction is also derived. It is shown that the height that the blood rises increases as the viscous friction steps up. As the temperature of the room steps up, the height also decreases.
Moreover, in this work, we also explored, the dependence of the erythrocyte sedimentation rate (ESR) on model parameters. As discussed in our recent paper [12], the erythrocyte sedimentation rate (ESR) often measures how fast a blood sample sediments along a test tube in one hour in a clinical laboratory. This analysis is performed by mixing whole blood with an anticoagulant. The blood is placed in an upright Wintrobe or Westergren tube and allowed to sediments for an hour. The normal values of the ESR varies from 0-3mm/hr for men and 0-7 mm/hr for women [13; 14; 15]. High ESR is associated with diseases that cause inflammation. In the case of inflammatory disease, the blood level of fibrinogen becomes too high [16; 15]. The presence of fibrinogen forces the RBCs to stick each other and as a result, they form aggregates of RBC called rouleaux. As the mass of the rouleaux increases, the weight of the rouleaux dominates the vicious friction and as a result, the RBCs start to precipitate. The temperature of the laboratory (blood sample) also significantly affects the test result [17]. As the temperature of the sample steps up, the ESR increases. In the past, a mathematical model was developed by Sharma \(.et.\)\(al\) to study the effect of blood concentration on the erythrocyte sedimentation rate [18]. Later the effect of concentration of nutrients on the red blood cell sedimentation rate was investigated in the work [19]. More recently, the sedimentation rate of RBC was explored via a model that uses Caputo fraction derivative [20]. The theoretical work obtained in this work was compared with the sedimentation rate that was analyzed experimentally. All of these experimental and theoretical works exposed the factors that affect the sedimentation rate of RBCs. In this paper, extending our recent work [12], we study how surface tension as well at the tilt in test tube angle affects the ESR.
The rest of the paper is organized as follows. In section II, we present the model system. In section III, we study the factors that affect the efficacy of the centrifugation process. The dynamics of the whole blood during capillary action is studied in section IV. In section V, the dependence of erythrocytes sedimentation rate on model parameters is studied. Section VI deals with summary and conclusion.
## II The model
Since RBC is microscopic in size, its dynamics (in vitro) can be modeled as a Brownian particle that undergoes a biased random walk on one-dimensional test tube. In the routine hematology test, the erythrocyte dynamics is also affected by gravitational force
\[f=Nmg \tag{1}\]
and centrifugal force
\[f=Nm\omega^{2}r \tag{2}\]
where \(g=9.8m/s^{2}\) is the gravitational acceleration. \(N\) denotes the number of blood cells that form rouleaux. \(\omega\) denotes the angular speed of the centrifuge (see Fig.2) and \(r\) designates the radius of the shaft. The speed of a clinical centrifuge varies from 200 rpm ( \(21rad/s\)) to 21000 rpm ( \(2198rad/s\)). The mass of the red blood cells is \(m=27X10^{-15}\) kg. Since platelets are only 20 percent of the size of \(RBC\), we infer the mass of the platelets to be \(m=5.2X10^{-15}\) kg. The average size of RBC and platelets is given as \(r^{\prime}=4X10^{-6}\) and \(r^{\prime}=7.5X10^{-7}\) meter, respectively. The normal value of RBC on average varies from \(5X10^{6}-6X10^{6}/mm^{3}\)[21; 22; 23].
Figure 2: A centrifuge in a clinical laboratory is used to separate the blood cells such as \(RBCs\), \(WBCs\), and platelets from the plasma or serum. Here \(r\) designates the radius of the shaft.
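To fix orders of magnitude, a single RBC (\(N=1\)) spun at \(\omega=300\ rad/s\) with \(r=0.5\ m\) (the values used in Fig. 3a below) experiences a centrifugal force

\[f=Nm\omega^{2}r=(27X10^{-15}\ kg)(300\ s^{-1})^{2}(0.5\ m)\approx 1.2X10^{-9}\ N,\]

roughly four thousand times larger than its weight \(Nmg\approx 2.6X10^{-13}\ N\).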
_Underdamped case :--_ The dynamics of the RBC in vitro is governed by the well-known Langevin equation
\[m\frac{dV}{dt}\ =\ -\gamma V-f+\sqrt{2k_{B}\gamma T}\xi(t). \tag{3}\]
The random noise \(\xi(t)\) is assumed to be Gaussian white noise satisfying the relations \(\langle\xi(t)\rangle=0\) and \(\langle\xi(t)\xi(t^{\prime})\rangle=\delta(t-t^{\prime})\). The viscous friction \(\gamma\) and \(T\) are assumed to be spatially invariant along with the medium. For a non-Newtonian fluid such blood, it is reasonable to assume that when the temperature of the blood sample increases by 1 degree Celsius, its viscosity steps down by 2 percent [24] as \(\gamma^{\prime}=B-\frac{2B}{100}(T-T^{R})\) where \(B=4X10^{-3}kg/ms\) is the dynamical viscosity of blood at a room temperature (\(T^{R}=20\) degree Celsius) and \(T\) is the temperature [7]. On the other hand, from Stokes's theorem, the viscosity \(\gamma=6r^{\prime}\pi\gamma^{\prime}\). Here \(k_{B}=1.38X10^{-23}m^{2}kgs^{-2}K^{-1}\) is the Boltzmann constant.
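A small numerical helper for the temperature dependence just described; it is a sketch based on the formulas above (a 2 percent drop in \(\gamma^{\prime}\) per degree above 20 degree celsius, and Stokes' theorem with the RBC radius), and the function names are ours, chosen only for illustration.

```python
import numpy as np

B = 4e-3        # dynamic viscosity of blood at room temperature [kg/(m s)]
T_ROOM = 20.0   # room temperature [degree celsius]

def gamma_prime(T):
    """Dynamic viscosity of blood, dropping by 2 percent per degree above 20 C."""
    return B - (2.0 * B / 100.0) * (T - T_ROOM)

def gamma_stokes(T, r_cell=4e-6):
    """Stokes friction coefficient gamma = 6 pi r' gamma'(T) for a cell of radius r'."""
    return 6.0 * np.pi * r_cell * gamma_prime(T)

for T in (20.0, 30.0, 40.0):
    print(f"T = {T:4.1f} C   gamma' = {gamma_prime(T):.2e} kg/(m s)   "
          f"gamma = {gamma_stokes(T):.2e} kg/s")
```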
Alternatively, Eq. (3) can be rewritten as a Fokker-Planck equation
\[\frac{\partial P}{\partial t} = -\frac{\partial(vP)}{\partial x}-\frac{1}{m}\frac{\partial(fP)} {\partial v}+ \tag{4}\] \[\frac{\gamma}{m}\frac{\partial(vP)}{\partial v}+\frac{\gamma T}{ m^{2}}\frac{\partial^{2}P}{\partial v^{2}}\]
where \(P(x,v,t)\) is the probability of finding the particle at particular position \(x\), velocity \(v\) and time \(t\).
For convenience, Eq. (4) can be rearranged as
\[\frac{\partial P}{\partial t}\ =\ -(k+\frac{\partial J^{\prime}}{\partial v}) \tag{5}\]
where
\[k=v\frac{\partial P}{\partial x}=\frac{\partial J}{\partial x} \tag{6}\]
and
\[J^{\prime}=-\frac{\gamma(x,t)}{m}vP+\frac{1}{m}(U^{\prime}P)-\frac{\gamma T}{ m^{2}}\frac{\partial P}{\partial v}. \tag{7}\]
From Eqs. (6) and (7), one gets
\[\frac{\partial P}{\partial v}=-\frac{m^{2}J^{\prime}}{\gamma T}+\frac{mU^{ \prime}P}{\gamma T}-\frac{mvP}{T} \tag{8}\]
and
\[\frac{\partial P}{\partial x}=\frac{k}{v}. \tag{9}\]
After some algebra, the expression for the probability distribution \(P(v,t)\) is given as
\[P(v,t)\ =\ \sqrt{\frac{m}{2\pi T(1-e^{-\frac{2\gamma t}{m}})}}\;e^{-\frac{m\left(v-\frac{f}{\gamma}(1-e^{-\frac{\gamma t}{m}})\right)^{2}}{2T(1-e^{-\frac{2\gamma t}{m}})}}. \tag{10}\]
The velocity of the cell can be evaluated as
\[V(t) = \int_{0}^{\infty}P(v^{\prime},t)v^{\prime}dv^{\prime} \tag{11}\] \[= \left(\frac{1-e^{-\frac{\gamma t}{m}}}{\gamma}\right)f.\]
Once the velocity of the cell is calculated, the position of the cell is then given by
\[x\ =\ \int_{0}^{t}V(t^{\prime})dt^{\prime}. \tag{12}\]
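A short numerical sketch of Eqs. (11)-(12) for a single RBC under centrifugation. The parameter values mirror those quoted in the text and in the caption of Fig. 4a (\(\omega=100\ rad/s\), \(r=0.5\ m\), room temperature), and the closed-form displacement used below follows from integrating Eq. (11) in time.

```python
import numpy as np

# Physiological and centrifuge parameters quoted in the text
m = 27e-15        # RBC mass [kg]
r_cell = 4e-6     # RBC radius [m]
B = 4e-3          # blood dynamic viscosity at 20 C [kg/(m s)]
omega, r_shaft, N = 100.0, 0.5, 1

gamma = 6.0 * np.pi * r_cell * B          # Stokes friction coefficient [kg/s]
f = N * m * omega**2 * r_shaft            # centrifugal force, Eq. (2)

t = 180.0                                 # three minutes of centrifugation [s]
V = (f / gamma) * (1.0 - np.exp(-gamma * t / m))                      # Eq. (11)
x = (f / gamma) * (t - (m / gamma) * (1.0 - np.exp(-gamma * t / m)))  # Eq. (12)

print(f"terminal velocity f/gamma = {f/gamma:.2e} m/s")
print(f"displacement after {t:.0f} s = {x:.3f} m  (tube length L = 0.15 m)")
```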
Here one should note that the solutions for the overdamped and underdamped cases are not new, since both cases are described by the well-known Ornstein-Uhlenbeck process.
_Overdamped case:--_ Blood is a highly viscous medium, as the result, the chance for the RBC to accelerate is negligible. One can then neglect the inertia effect and the corresponding dynamics can be studied via the Langevin equation
\[\gamma\frac{dx}{dt} = -f+\sqrt{2k_{B}\gamma T}\xi(t). \tag{13}\]
One can also write Eq. (13) as a Fokker-Plank equation
\[\frac{\partial P(x,t)}{\partial t}=\frac{\partial}{\partial x}\left[\frac{f}{ \gamma}P(x,t)+\frac{\partial}{\partial x}\left(\frac{k_{B}T}{\gamma}P(x,t) \right)\right] \tag{14}\]
where \(P(x,t)\) is the probability density of finding the particle (the cell) at position \(x\) and time \(t\).
To calculate the desired thermodynamic quantity, let us first find the probability distribution. After imposing a periodic boundary condition \(P(0,t)=P(L,t)\), we solve Eq. (14). After some algebra, one finds the probability distribution as
\[P(x,t) = \sum_{n=0}^{\infty}\cos[\frac{n\pi}{L}(x+t\frac{f}{\gamma})]e^{-( \frac{n\pi}{L})^{2}t\frac{k_{B}T}{\gamma}} \tag{15}\]
where \(T\) is the temperature of the medium. For detailed mathematical analysis, please refer to my previous work [7]. Next we use Eq. (15) to find the particle current, the velocity as well as the position of the particle. The particle current is then given by
\[J(x,t) = -\left[fP(x,t)+k_{B}T\frac{\partial P(x,t)}{\partial x}\right]. \tag{16}\]
After substituting \(P(x,t)\) shown in Eq. (15), one can find the velocity of the cells at any time as
\[V(x,t) = \int_{0}^{x}J(x^{\prime},t)dx^{\prime} \tag{17}\]
while the position of the cells can be found via
\[x = \int_{0}^{x}P(x^{\prime},t)x^{\prime}dx^{\prime}. \tag{18}\]
Because blood is a highly viscous medium (\(\gamma\) is considerably high), the numerical value of the velocity calculated via Eq. (11) is approximately the same as the velocity calculated via Eq. (17). At steady state (in the long time limit), the velocities (Eqs. (11) and (17)) approach \(V=f/\gamma\). One should also note that the diffusion constant for the model system is given by \(D=\frac{k_{B}T}{\gamma}\). This expression remains valid when the viscous friction is temperature-dependent, which shows that the effect of temperature on the mobility of the cells is significant. When the temperature increases, the viscous friction gets attenuated and as a result the diffusibility of the particle increases. Various experimental studies also showed that the viscosity of the medium tends to decrease as the temperature of the medium increases [25]. This is because increasing the temperature steps up the speed of the molecules, and this in turn creates a reduction in the interaction time between neighboring molecules. As a result, the intermolecular force between the molecules decreases, and hence the magnitude of the viscous friction decreases.
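For orientation, plugging the quoted room-temperature values into \(D=k_{B}T/\gamma\) for a single RBC (\(T\approx 293\ K\), \(r^{\prime}=4X10^{-6}\ m\), \(\gamma^{\prime}=4X10^{-3}\ kg/ms\)) gives

\[\gamma=6\pi r^{\prime}\gamma^{\prime}\approx 3.0X10^{-7}\ kg/s,\qquad D=\frac{k_{B}T}{\gamma}\approx\frac{(1.38X10^{-23})(293)}{3.0X10^{-7}}\approx 1.3X10^{-14}\ m^{2}/s,\]

so free diffusion alone displaces a cell only about \(\sqrt{2Dt}\approx 1\ \mu m\) in a minute; the directed motion of the cells is therefore dominated by the external force.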
## III The dynamics of blood cells during centrifugation
As a common standard practice, in a clinical laboratory, centrifugation is vital to separate the blood cells such as \(RBCs\), \(WBCs\), and platelets from the plasma or serum. When blood is first mixed with an anticoagulant and allowed to be centrifuged for a few minutes, the blood cells separate from the plasma. In the absence of anticoagulant, the blood clots, and when it is centrifuged, the blood cells segregate from the serum. In this section, we study the factors that affect the efficacy of the centrifugation process analytically. We show that the centrifugation time, temperature, the length of the test tube, and the speed of the centrifuge are the determinant factors with regard to the efficacy of the centrifugation process.
First, let us examine the effect of the centrifugation time on the efficacy of the centrifugation process. This can be investigated by tracking the dynamics of the blood cells during centrifugation. The dynamics of the cells in vitro is governed by the well-known Langevin equations (3) or (13). Equivalently, by solving the Fokker-Planck equations (4) or (14), the information regarding the mobility of the RBCs can be extracted. In a medical laboratory, the standard test tube has a length of \(L=150\ mm\) and the cells separate from the plasma or serum as a result of the centrifugal force \(f=Nm\omega^{2}r\) where \(N\) denotes the number of cells that form a cluster while \(m\) designates the mass of \(RBC\) or platelets. Since the \(WBCs\) are heavy, they settle well before the \(RBCs\), and as an approximation one can disregard their dynamics.
Exploiting Eqs. (11) or (17) one can see that the velocity of the RBC steps up and saturates to a constant value (see Fig. 3a). In the figure, we fix the parameters as \(N=1\) (at 20 degree celsius), \(\omega=300rad/s\), \(r=0.5m\) and \(L=0.15m\). Via Eqs. (12) or (18), one can track the position of RBC along the test tube during the centrifugation process. As depicted in Fig. 4a, the particle moves towards the bottom of the test tube as time steps up. Unlike serum, plasma contains platelets and this indicates that more centrifugation time is needed for plasma since platelets are small in size. Fig. 4a is plotted by fixing \(N=1\), \(T=20\) degree celsius, \(\omega=300rad/s\), \(r=0.5m\) and \(L=0.15m\). The three-dimensional plot depicted in Fig. 5 also confirms that as the centrifugation time steps up, the cells segregate and move towards the bottom of the test tube.
The speed of the centrifuge also affects the efficacy of the centrifugation process. As the angular speed increases (see Eq. (2) ), the centrifugal force steps up and as result, the particles are forced to separate from the plasma or serum. Depending on the size and fragility of the analyse, the centrifugation speed should be adjusted. To increase the efficacy of the centrifugation process, as the size of the particle decreases, the centrifugation speed should step up. The dependence of the speed of the particle on angular speed is explored. As depicted in Fig. 3b, the velocity of the RBC steps up monotonously as the angular speed steps up. In the figure we fix the parameters as \(N=1\) (at 20 degree celsius), \(\omega=300rad/s\), \(r=0.5m\) and \(L=0.15m\). One can also track the position of RBC along the test tube as a function of angular speed. As shown in Fig. 4b, the particle moves towards the bottom of the test tube as angular speed steps up. The figure is plotted by fixing \(N=1\), \(T=20\) degree celsius, \(\omega=300rad/s\), \(r=0.5m\) and \(L=0.15m\).
The length of the test tube affects the effectiveness of the centrifugation process. As the length of the test tube steps up, the centrifugal force increases. The room temperature considerably affects the dynamics of the analytes during centrifugation. Most importantly, the generation of heat during centrifugation steps up the temperature within a centrifuge and as a result, not only the stability of the sample but also the mobility of the analytes is affected. The effect of temperature near the test tube is unavoidable and causes difficulties in the current experimental procedure by inducing additional false-positive results. In this regard, developing a mathematical model and exploring the model system using the powerful tools of statistical mechanics provides insight as well as guidance regarding the dynamics of RBCs in vitro.
The role of temperature on the mobility of the cells during centrifugation can be appreciated by analyzing Eq. (11). Substituting Eq. (2) into Eq. (11), one gets
\[V(t)\ =\ Nm\omega^{2}r\left(\frac{1-e^{-\frac{\gamma t}{m}}}{\gamma}\right). \tag{19}\]
The viscosity \(\gamma\) of the blood is a function of temperature, \(\gamma=6r^{\prime}\pi(B-\frac{2B}{100}(T-T^{R}))\), where \(r^{\prime}\) is the radius of the cells. This implies that the velocity \(V(t)\) and the position \(x\) are also functions of temperature. From Eq. (19) one can
Figure 3: (Color online)(a) The velocity (\(V(m/s)\)) of RBC as a function of time \(t\) (in seconds) for a single RBC \(N=1\), \(T=20\) degree celsius, \(\omega=300rad/s\), \(r=0.5m\) and \(L=0.15m\). (b) The velocity (\(V(m/s)\)) of RBC as a function of angular velocity \(\omega\) (in rad/s) for fixed values of \(T=20\) degree celsius, \(t=180s\), \(N=1\), \(r=0.5m\) and \(L=0.15m\).
see that as the centrifuge temperature increases, the velocity of the cells as well as the displacement of the cell in the fluid decreases. As discussed before, the effect of temperature can be corrected by increasing the centrifugation time. However, a longer centrifugation time might lead to a considerable amount of heat generation. This, in turn, may cause the red blood cells to lyse. Prolonged centrifugation at high speed also causes structural damage to the cells and as a result, hemolysis occurs. On the contrary, low-speed centrifugation leads to insufficient separation of plasma and serum from cellular blood components. By analyzing Eq. (19), one can also deduce that when the RBCs form rouleaux (as \(N\) increases), the velocity and the position of the particle step up. This is because as \(N\) steps up, the centrifugal force increases. This also implies that, since the RBC in serum forms aggregates due to clotting factors, it needs less centrifugation time than plasma. As shown in Fig. 1, the size of WBC is considerably large in comparison with the RBC. Since the platelet has the smallest size, its velocity and displacement along the test tube are significantly small. As depicted in Fig. 6a, the velocity for platelets is considerably lower than that of red blood cells due to the small size of platelets. Moreover, as shown in Fig. 6b, the RBC moves far more than platelets, showing that to avoid the effect of clotting factors, one has to increase the centrifugation time.
## IV The dynamics of whole blood during capillary action
In this section, we study the dynamics of the whole blood during capillary action where in this case the blood flows upward in narrow spaces without the assistance of external forces. Previous investigations show that the height that the fluid rises increases as the surface tension steps up. The viscosity of the fluid also affects the capillary action but to date, the dependence of the height on viscosity has never been explored due to the lack of mathematical correlation between the viscosity of blood and surface tension [10]. In the past, for non-Newtonian fluids, Pelofsky [11] has studied the relation between surface tension and viscosity. His empirical equation depicts that the surface tension increases as the viscosity steps up. In this work, we first analyzed the correlation between surface tension and viscous friction via data fitting.
Figure 4: (Color online)(a) The sedimentation displacement (\(x(m)\)) of RBC as a function of time \(t\) (in seconds) for fixed \(\omega=100rad/s\). (b) The sedimentation distance (\(x(m)\)) of RBC as a function of \(\omega\) (in rad/s) for a given \(t=180s\). For both figures, we fix \(T=20\) degree celsius, \(r=0.5m\) and \(L=0.15m\).
Figure 5: (Color online) The displacement \(x\) as a function of time \(t\) and \(\omega\) for fixed values of \(T=20\) degree celsius, \(r=0.5m\) and \(L=0.15m\).
The surface tension of blood is a function of temperature \(T\) and it is given by [10]
\[S=(-0.473T+70.105)X10^{-3}N/m. \tag{20}\]
As discussed before, the dynamical viscous friction of blood depends on the temperature as
\[\gamma^{\prime}=B-\frac{2B}{100}(T-T^{R}) \tag{21}\]
where \(B=4X10^{-3}kg/ms\) is the dynamical viscosity of blood at room temperature (\(T^{R}=20\) degree Celsius) and \(T\) is the temperature [7]. The correlation between surface tension and viscous friction can be inferred from data fitting. Via Eqs. (20) and (21), the dependence of \(S\) and \(\gamma^{\prime}\) on \(T\) is explored as shown in Table I. Since the experiments show the surface tension \(S\) to have a functional dependence on \(\gamma^{\prime}\), by fitting the data depicted in Table I, one finds
\[S = 0.03699+5.9125(B-\frac{2B}{100}(T-T^{R})) \tag{22}\] \[= 0.03699+5.9125\gamma^{\prime}.\]
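A small sketch of the fit behind Eq. (22): it generates \((\gamma^{\prime},S)\) pairs from Eqs. (20)-(21) on a temperature grid (the grid itself is an assumption, since the individual rows of Table I are not reproduced here) and fits a straight line, recovering a slope close to 5.9125 and an intercept close to 0.037.

```python
import numpy as np

B = 4e-3                         # viscosity of blood at 20 C [kg/(m s)]
T = np.linspace(20.0, 45.0, 26)  # assumed temperature grid [degree celsius]

gamma_p = B - (2.0 * B / 100.0) * (T - 20.0)   # Eq. (21)
S = (-0.473 * T + 70.105) * 1e-3               # Eq. (20)

slope, intercept = np.polyfit(gamma_p, S, 1)   # linear fit S = slope*gamma' + intercept
print(f"S ~ {intercept:.5f} + {slope:.4f} * gamma'")
```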
Furthermore, the surface tension is responsible for a liquid to rise up in the test tube against the downward
\begin{table}
\begin{tabular}{||c|c|c||} \hline \(S\) & \(\gamma^{\prime}\) & \(T\) (\({}^{\circ}\)C) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data that show the dependence of surface tension \(S\) and viscous friction \(\gamma^{\prime}\) on \(T\).
Figure 6: (Color online) (a) The sedimentation velocity \(V\) of RBC (red line) and platelet (black line) as a function of time \(t\). (b) The displacement of RBC (red line) and platelet (black line) as a function of time \(t\). In the figures, we fix \(T=20\) degree celsius, \(\omega=200rad/s\), \(r=0.5m\) and \(L=0.15m\).
gravitational force. As a result of the capillary action, the fluid rises to the height
\[h=\frac{2Scos(\theta)}{\rho gz} \tag{23}\]
where \(S\) is the surface tension of the blood, \(\rho=1060kg/m^{3}\) is the density of the fluid, \(g=9.8m/s^{2}\) is the gravitational acceleration and \(z\) is the radius of the test tube. \(\theta\) is the contact angle between the blood and the test tube while \(h\) is the height that the blood rises up in the test tube. For the case where \(\theta=0\), we rewrite Eq. (23) as
\[s=\frac{h\rho gz}{2}. \tag{24}\]
From Eqs. (22) and (24), after some algebra we get
\[h=\frac{0.074+11.824\gamma^{\prime}}{gz\rho}. \tag{25}\]
Equation (25) exhibits that the height \(h\) is a function of the test tube radius \(z\) and the density of the fluid \(\rho\). In a clinical laboratory, the Haematocrit tube has a height of \(75mm\) and an inner diameter of \(0.4mm\), i.e. a radius \(z=0.2mm\). Substituting these parameters in Eq. (25), the height that the blood rises is found to be \(58.4mm\) at 20 degree Celsius. The height \(h\) that the blood rises in the capillary tube is also sensitive to temperature. Via Eqs. (21) and (25), one can see that as the room temperature steps up, \(h\) decreases (see Fig. 7).
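A brief numerical sketch of Eq. (25), using the haematocrit-tube radius \(z=0.2mm\) and the blood density quoted in the text; it reproduces the 58.4 mm value at 20 degree celsius and shows the decrease of \(h\) with temperature.

```python
import numpy as np

B = 4e-3        # blood viscosity at 20 C [kg/(m s)]
rho = 1060.0    # blood density [kg/m^3]
g = 9.8         # gravitational acceleration [m/s^2]
z = 0.2e-3      # inner radius of the haematocrit tube [m]

def height(T):
    """Capillary rise h(T) from Eq. (25), with gamma'(T) from Eq. (21)."""
    gamma_p = B - (2.0 * B / 100.0) * (T - 20.0)
    return (0.074 + 11.824 * gamma_p) / (g * z * rho)

for T in (20.0, 30.0, 40.0):
    print(f"T = {T:4.1f} C   h = {height(T)*1e3:.1f} mm")
```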
The above analysis indicates that an increase in the room temperature results in a false positive or negative result. As shown in Fig. 7, a drop of up to 8 mm in the capillary height can be observed when the temperature of the room varies from 20 to 40 degree celsius. Note that the shape of RBC, plasma viscosity, and inclination of the test tube also affect the magnitude of surface tension. In anemic patients, a low hematocrit level is observed, and consequently the surface tension or viscosity of the blood decreases, which in turn results in a lower \(h\). Excessive use of anticoagulants also results in a lower \(h\). This is because adding too much anticoagulant decreases the viscosity or surface tension of the blood.
## V The role of surface tension on the erythrocyte sedimentation rate and the dynamics of blood cells during centrifugation
In this section, we explore how the surface tension of the blood affects the mobility of RBC by solving the model system analytically.
_Case 1: The effect of surface tension on the erythrocyte sedimentation rate.--_ The erythrocyte sedimentation rate (ESR) often measures how fast a blood sample sediments along a test tube in one hour in a clinical laboratory as shown in Fig. 11. By mixing whole blood with an anticoagulant, the blood is placed in an upright Wintrobe or Westergren tube and allowed to sediments for an hour. In the Westergren method, the test tube is \(200mm\) long while in the Wintrobe method the tube is only \(100mm\) long. To investigate the effect of surface tension, let us rewrite Eq. (22) as
\[\gamma^{\prime}=\frac{(S-0.03699)}{5.9125} \tag{26}\]
Figure 7: (Color online) The height \(h\) that the blood rises in the capillary tube as a function of temperature for fixed-parameters \(z=0.4mm\) and \(\rho=1060kg/m^{3}\).
and after some algebra we get
\[\gamma=6r^{\prime}\pi\frac{(S-0.03699)}{5.9125}. \tag{27}\]
Substituting Eqs. (1) and (27) in Eq. (11), one gets
\[V(t) = Nmg\left(\frac{1-e^{-\frac{6r^{\prime}\pi\frac{(S-0.03699)}{m}t}{5.9125} }}{6r^{\prime}\pi\frac{(S-0.03699)}{5.9125}}\right). \tag{28}\]
The position of the cell is then given by
\[x = \int_{0}^{t}V(t^{\prime})dt^{\prime}. \tag{29}\]
From Eqs. (28) and (29), it is evident that as the surface tension steps up, the velocity and displacement of the cells step down. Since the surface tension (Eq. 20) depends on temperature, the dynamics of the cells are also affected by the room temperature. Exploring Eqs. (28) and (29), one can see that the velocity of the cell is significantly affected by the temperature of the room and the size of the particle, as shown in Fig. 8a. As the temperature steps up, the velocity increases. The same figure also depicts that the RBC precipitates faster than platelets, which is reasonable since platelets are small in size. The plot of the position of the cell as a function of time is depicted in Fig. 8b. The figure exhibits that the RBC has a faster sedimentation rate than platelets.
From Eqs. (28) and (29), one can see that as the RBC forms rouleaux (as \(N\) increases) the sedimentation rate increases (see Fig. 9). Figure 9 depicts the plot of ESR as a function of the number of RBCs (at 20 degree celsius). As shown in the figure, the ESR increases as \(N\) steps up.
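A short numerical sketch of Eqs. (28)-(29) for a single, non-aggregated RBC at 20 degree celsius; the one-hour displacement it returns is of the order of a few millimetres, i.e. of the same order as the normal ESR values quoted in the introduction.

```python
import numpy as np

m = 27e-15        # RBC mass [kg]
r_cell = 4e-6     # RBC radius [m]
g = 9.8           # gravitational acceleration [m/s^2]
T = 20.0          # room temperature [degree celsius]
N = 1             # a single, non-aggregated RBC

S = (-0.473 * T + 70.105) * 1e-3                        # surface tension, Eq. (20)
gamma = 6.0 * np.pi * r_cell * (S - 0.03699) / 5.9125   # friction, Eq. (27)

t = 3600.0                                              # one hour [s]
V = N * m * g * (1.0 - np.exp(-gamma * t / m)) / gamma  # Eq. (28)
x = V * t   # the exponential saturates almost immediately, so x ~ V t, Eq. (29)

print(f"settling velocity V = {V:.2e} m/s")
print(f"one-hour displacement x = {x*1e3:.1f} mm")
```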
The background temperature of the fluid also affects the viscous friction of the fluid. As temperature increases, the viscous friction decreases, and on the contrary, the diffusibility of the cells increases as depicted in the works [3; 25].
Figure 8: (Color online)(a) The velocity (\(V\)) of the cell as a function of temperature \(T\) for a single cell \(N=1\) and \(t=1000s\). (b) The displacement of the cell as a function of time \(t\) for a single cell \(N=1\) and \(T=20\) degree celsius. In the figures, the red and black lines indicate RBC and platelets, respectively.
Figure 9: (Color online) Erythrocyte sedimentation rate in one hour at 20 degree celsius. The ESR steps up as the number of red blood cells that form rouleaux increases.
For highly viscous fluid such as blood, when the temperature of the blood sample increases by 1 degree celsius, its viscosity steps down by 2 percent. Exploiting Eqs. (28) and (29), the dependence of the erythrocyte sedimentation rate as a function of temperature (in degree celsius) is depicted in Figure 10. In the figure, the number of RBCs that form clusters is fixed as \(N=15X10^{3}\), \(N=1X10^{4}\), and \(N=5X10^{3}\) from top to bottom, respectively. The figure depicts that the ESR steps up as the number of red blood cells (\(N\)) that form rouleaux increases as well as when the temperature of the room steps up. The same figure shows that up to 3mm/hr sedimentation rate difference can be observed when the temperature of the room varies from 20 to 45 degree celsius.
_Case 2: The role of surface tension on the dynamics of blood cells during centrifugation.--_ As blood cells are microscopic, they undergo a random motion in vitro when there is no external force exerted on them. Since Blood is a highly viscous medium, the chance for the RBC to accelerate is negligible even in the presence of external force. In the presence of external force (centrifugal forces), these cells undergo one-directional motion and they separate from the serum or plasma as the angular speed increases. The surface tension also affects the dynamics of the blood cells. This can be appreciated by substituting Eqs. (2) and (27) in Eq. (11). After some algebra one gets
\[V(t) = Nm\omega^{2}r\left(\frac{1-e^{-\frac{6r^{\prime}\pi\frac{(S-0.03699)}{m}t}{5.9125}}}{6r^{\prime}\pi\frac{(S-0.03699)}{5.9125}}\right). \tag{30}\]
The position of the cell is then given by \(x=\int_{0}^{t}V(t^{\prime})dt^{\prime}.\) Exploiting Eq. (30), one can see that \(V(t)\) or \(x\) decreases as the surface tension steps up.
Moreover, the erythrocyte sedimentation rate depends on how the test tube is positioned. An inclination of the test tube by 3 degrees increases the ESR up to 30 percent [28]. Since blood is a highly viscous fluid, its viscosity is strongly related to the surface tension as shown in Eq. (22). On the other hand, the surface tension has a functional dependence on the degree of tilt. When the test tube becomes tilted, the surface tension of the blood decreases, and consequently its viscosity decreases. As a result a higher ESR is observed.
Figure 11: Figure that shows the sedimentation rate of RBC on Westergren pipet [27]. This hematology test is performed by mixing whole blood with an anticoagulant. The blood is then placed in an upright Wintrobe or Westergren tube. The sedimentation rate of the red blood cells is measured in millimeters (mm) at the end of one hour.
Figure 10: (Color online) Erythrocyte sedimentation rate in one hour as a function of temperature in degrees Celsius. The number of RBCs that form clusters is fixed as \(N=15\times10^{3}\), \(N=1\times10^{4}\), and \(N=5\times10^{3}\) from top to bottom, respectively. The ESR steps up as the number of red blood cells (\(N\)) that form rouleaux increases as well as when the temperature of the room steps up.
## Summary and conclusion
In this paper, via an exact analytical solution, we study the factors that affect the efficacy of the centrifugation process. The effect of the centrifugation time on the efficacy of the centrifugation process is explored by studying the dynamics of the blood cells via the well-known Langevin equation or, equivalently, by solving the Fokker-Planck equation. As blood cells are microscopic in size, their dynamics can be modeled as a Brownian particle walking in a viscous medium. Since blood is a highly viscous medium, the chance for the blood cells to accelerate is negligible and the corresponding dynamics can be studied via the Langevin equation or the Fokker-Planck equation. Solving the Fokker-Planck equation analytically, we explore how the dynamics of blood cells behave as a function of the model parameters. Because our study is performed by considering real physiological parameters, the results obtained in this work not only agree with the experimental observations but also help to understand most hematological experiments that are conducted in vitro.
In a medical laboratory, the standard test tube has a length of \(L=150mm\) and the cells separate from the plasma or serum as a result of the centrifugal force \(f=Nm\omega^{2}r\), where \(N\) denotes the number of cells that form a cluster while \(m\) designates the mass of an RBC or platelet. Since the WBCs are heavy, they separate well before the RBCs, and as an approximation, we disregard their dynamics.
The speed of the centrifuge is one of the main factors concerning the efficacy of the centrifugation process. It is shown that as the angular speed increases, the centrifugal force steps up and, as a result, the particles are forced to separate quickly from the plasma or serum. Based on the size and fragility of the sample, the centrifugation speed should be adjusted. To increase the efficiency of the centrifugation process, as the size of the particle decreases, the centrifugation speed should step up. For instance, our work depicts that the velocity of platelets is considerably lower than that of red blood cells due to the small size of platelets. The RBCs separate far more than the platelets, showing that, to avoid the effect of clotting factors, one has to increase the centrifugation time. We also show that as the length of the test tube steps up, the centrifugal force increases. The dynamics of the analyte during centrifugation is also affected by the room temperature. The generation of heat during centrifugation steps up the temperature within the centrifuge and, as a result, not only the stability of the sample but also the mobility of the sample is affected. Our result shows that as the centrifuge temperature steps up, the velocity of the cells as well as the average distance of the cell in the fluid decreases. This effect of temperature can be corrected by increasing the centrifugation time. However, a longer centrifugation time might lead to a considerable amount of heat generation. This, in turn, may cause the red blood cells to lyse. Prolonged centrifugation at high speed also causes structural damage to the cells and, as a result, hemolysis occurs. On the contrary, low-speed centrifugation leads to insufficient separation of the plasma and serum from the cellular blood components. When the RBCs form rouleaux (as \(N\) increases), the velocity and the position of the particle step up. This is because as \(N\) increases, the centrifugal force increases. This also implies that, since the RBCs in serum form aggregates due to clotting factors, serum needs less centrifugation time than plasma.
The dynamics of the whole blood during capillary action is studied where in this case the blood flows upward in narrow spaces without the assistance of external forces. In this work, we first analyzed the correlation between surface tension and viscous friction via data fitting. We show that the viscosity steps up linearly as the surface tension increases. The mathematical relation between the height and viscous friction is derived. It is shown that the height the blood rises increases as the viscous friction increases. As the temperature of the room steps up, the height decreases. The dependence of the erythrocyte sedimentation rate (ESR) on model parameters is also studied.
Most medical laboratory examinations are affected by external factors such as temperature. To understand the factors that affect the outcome of these routine examinations, it is vital to explore the dynamics of the whole blood, erythrocytes (RBCs), leukocytes (WBCs), and thrombocytes (platelets). Since our mathematical analysis is performed by using physiological parameters, all of the results depicted in this work can be reconfirmed experimentally. The simplified model presented in this work can also help to understand most hematological experiments that are conducted in vitro.
_Acknowledgment.--_ I would like to thank Mulu Zebene and Blyanesh Bezabih for the constant encouragement.
_Author contribution statements.--_ Mesfin Taye conceived the research idea, developed the theory, and performed the analytical computations. He also contributes to the writing of the manuscript.
|
2301.07458 | On the $ Γ$-convergence of the Allen-Cahn functional with boundary
conditions | We study minimizers of the Allen-Cahn system. We consider the $ \varepsilon
$-energy functional with Dirichlet values and we establish the $ \Gamma
$-limit. The minimizers of the limiting functional are closely related to
minimizing partitions of the domain. Finally, utilizing that the triod and the
straight line are the only minimal cones in the plane together with regularity
results for minimal curves, we determine the precise structure of the
minimizers of the limiting functional, and thus the limit of minimizers of the
$ \varepsilon $-energy functional as $ \varepsilon \rightarrow 0 $. | Dimitrios Gazoulis | 2023-01-18T12:04:52Z | http://arxiv.org/abs/2301.07458v2 | # On the \(\Gamma-\)convergence of the Allen-Cahn functional with boundary conditions
###### Abstract.
We study minimizers of the Allen-Cahn system. We consider the \(\varepsilon-\)energy functional with Dirichlet values and we establish the \(\Gamma\)-limit. The minimizers of the limiting functional are closely related to minimizing partitions of the domain. Finally, utilizing that the triod and the straight line are the only minimal cones in the plane together with regularity results for minimal curves, we determine the precise structure of the minimizers of the limiting functional, and thus the limit of minimizers of the \(\varepsilon\)-energy functional as \(\varepsilon\to 0\).
## 1. Introduction
In this work we are concerned with the study of vector minimizers of the Allen-Cahn \(\varepsilon\)-functional,
\[J_{\varepsilon}(u,\Omega) =\int_{\Omega}(\frac{\varepsilon}{2}|\nabla u|^{2}+\frac{1}{ \varepsilon}W(u))dx\] \[\quad u:\Omega\to\mathbb{R}^{m} \tag{1.1}\]
where \(\Omega\subset\mathbb{R}^{n}\) is an open set and \(W\) is a \(N\)-well potential with \(N\) global minima.
Let
\[u_{\varepsilon}:=\min_{v\in W^{1,2}(\Omega;\mathbb{R}^{m})}\{J_{\varepsilon}( v,\Omega):v|_{\partial\Omega}=g_{\varepsilon}|_{\partial\Omega}\}\,\ \text{where}\ \ g_{\varepsilon}\in W^{1,2}(\Omega;\mathbb{R}^{m}) \tag{1.2}\]
Thus \(u_{\varepsilon}\) is a weak solution of the system
\[\begin{cases}\varepsilon\Delta u_{\varepsilon}-\frac{1}{\varepsilon}W_{u}(u_ {\varepsilon})=0\ \,\ \text{in}\ \Omega\\ u_{\varepsilon}=g_{\varepsilon}\ \,\ \text{on}\ \ \partial\Omega\end{cases} \tag{1.3}\]
We study the asymptotic behavior of \(u_{\varepsilon}\) within the framework of \(\Gamma\)-convergence. Moreover, we analyze the relationship between minimizers of the Allen-Cahn system and minimizing partitions subject to Dirichlet boundary conditions. For some particular choices of boundary conditions, we will determine the structure of the minimizers of the limiting functional.
We now briefly introduce some of the well known results in the scalar case. The notion of \(\Gamma\)-convergence was introduced by E. De Giorgi and relates phase transition type problems with the theory of minimal surfaces. One application of \(\Gamma\)-convergence is the proof of existence of minimizers of a limiting functional, say \(F_{0}\), by utilizing an appropriate sequence of functionals
\(F_{\varepsilon}\) that we know admit minimizers and whose \(\Gamma\)-limit is \(F_{0}\), and also vice versa ([11]). We can think of this notion as a generalization of the Direct Method in the Calculus of Variations, i.e. if \(F_{0}\) is lower semicontinuous and coercive we can take \(F_{\varepsilon}=F_{0}\) and then \(\Gamma-\)lim \(F_{\varepsilon}=F_{0}\).
There are many other ways of thinking of this notion, such as a proper tool in studying the asymptotic behavior of minimizers of functionals.
Let \(X\) be the space of the measurable functions endowed with the \(L^{1}\) norm and
\[F_{\varepsilon}(u,\Omega)=\begin{cases}\int_{\Omega}(\frac{\varepsilon}{2}|\nabla u|^{2}+\frac{1}{\varepsilon}W(u))dx\;\;,\;\;u\in W^{1,2}(\Omega;\mathbb{R})\cap X\\ +\infty\;\;,\;\;\text{otherwise}\end{cases}\]
**Acknowledgements:** I wish to thank my advisor Professor Nicholas Alikakos for his guidance and for suggesting this topic as a part of my thesis for the Department of Mathematics and Applied Mathematics at the University of Crete. Also, I would like to thank Professor P. Sternberg and Professor F. Morgan for their valuable comments on a previous version of this paper, which let to various improvements.
## 2. Hypotheses and Basic Lemmas
**Hypothesis on \(W\):**
**(H1)**\(W\in C^{2}(\mathbb{R}^{m};[0,+\infty))\), \(\{W=0\}=\{a_{1},a_{2},...,a_{N}\}\), \(N\in\mathbb{N}\),\(a_{i}\) are the global minima of \(W\) and such that
\[\xi^{T}W_{uu}(a_{i})\xi\geq 2c^{2}|\xi|^{2}\ \ \,\ i=1,2,...,N.\]
Assume also that \(\ W_{u}(u)\cdot u>0\) and \(W(u)\geq c_{1}|u|^{2}\), if \(|u|>M\).
**Hypothesis on the Dirichlet Data:**
**(H2) (i)**\(g_{\varepsilon}\in C^{1,\alpha}(\overline{\Omega})\), \(|g_{\varepsilon}|\leq M\) (uniformly in \(\varepsilon\)), \(|g_{\varepsilon}|_{1,\alpha}\leq\frac{M}{\varepsilon^{1+\alpha}}\) and \(g_{\varepsilon}\to g_{0}\) in \(L^{1}(\Omega)\).
**(ii)**\(J_{\varepsilon}(g_{\varepsilon},\Omega_{2}\setminus\Omega)\leq C_{0}\) (independent of \(\varepsilon>0\)), where \(\Omega_{\rho}:=\{\rho x:x\in\Omega\}\), \(\rho>0\) and for simplicity suppose \(\Omega\) is a convex set that contains the origin in \(\mathbb{R}^{n}\). \(g_{\varepsilon}\) is trivially extended in each \(\Omega_{\rho}\), for \(\rho>1\), being constant in each line segment that connects \(\partial\Omega\) with \(\partial\Omega_{\rho}\) and the extension of this line segment passes through the origin.
For \(i\neq j\), \(i,j\in\{1,2,...,N\}\), let \(U\in W^{1,2}(\mathbb{R};\mathbb{R}^{m})\) be the 1D minimizer of the action
\[\begin{gathered}\sigma_{ij}:=\min\int_{-\infty}^{+\infty}(\frac{1 }{2}|U^{\prime}|^{2}+W(U))dt<+\infty\ \,\\ \lim_{t\rightarrow-\infty}U(t)=a_{i}\ \,\ \ \lim_{t\rightarrow+ \infty}U(t)=a_{j}\ \,\ U(\mathbb{R})\in\mathbb{R}^{m}\setminus\{W=0\}\end{gathered} \tag{2.1}\]
where \(U\) is the connection that connects \(a_{i}\) to \(a_{j}\), \(i,j\in\{1,2,...,N\}\), and we suppose that all of these connections have equal energy, i.e. \(\sigma_{ij}=\sigma>0\), \(i,j\in\{1,2,3\}\). If we have a symmetric potential \(W\), for example, then \(\sigma_{ij}=\sigma\) in (2.1) holds. 1
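For orientation, a classical scalar example of (2.1) (not the vector setting studied here): for the double-well potential \(W(u)=\frac{1}{4}(1-u^{2})^{2}\) with minima \(a_{1}=-1\), \(a_{2}=1\), the connection is \(U(t)=\tanh(t/\sqrt{2})\) and, using the equipartition \(\frac{1}{2}|U^{\prime}|^{2}=W(U)\),

\[\sigma=\int_{-\infty}^{+\infty}\Big(\frac{1}{2}|U^{\prime}|^{2}+W(U)\Big)dt=\int_{-\infty}^{+\infty}2W(U(t))\,dt=\frac{1}{2}\int_{-\infty}^{+\infty}\mathrm{sech}^{4}\big(t/\sqrt{2}\big)\,dt=\frac{2\sqrt{2}}{3}.\]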
**Note:** We note that the convexity assumption of the domain in **(H2)(ii)** could be relaxed by considering the convex hull of a set in general as we see in the example in Figure 1 below. However, we assume convexity for convenience.
Figure 1: Example of a domain \(\Omega\) with boundary data \(g_{0}\) taking the values \(a_{1}\) and \(a_{2}\) on different parts of \(\partial\Omega\).
**Lemma 2.2**.: _Let \(u_{\varepsilon}\) defined in (1.2), then \(J_{\varepsilon}(u_{\varepsilon})=\int_{\Omega}(\frac{\varepsilon}{2}|\nabla u_{ \varepsilon}|^{2}+\frac{1}{\varepsilon}W(u_{\varepsilon}))dx\,\leq\,C\ \,\ C\) independent of \(\varepsilon>0\), if \(\Omega\) is bounded._
Proof.: Without loss of generality we will prove Lemma 2.2 for \(\Omega=B_{1}\) (or else we can cover \(\Omega\) with finite number of unit balls and the outside part is bounded by **(H2)**(ii)).
Substituting \(y=\frac{x}{\varepsilon}\),
\[J_{\varepsilon}(u_{\varepsilon})=\int_{B_{\frac{1}{\varepsilon}}}(\frac{ \varepsilon}{2}|\nabla_{y}\tilde{u}_{\varepsilon}|^{2}\,\frac{1}{\varepsilon ^{2}}+\frac{1}{\varepsilon}W(\tilde{u}_{\varepsilon}))\varepsilon^{n}dy\]
where \(\tilde{u}_{\varepsilon}=u_{\varepsilon}(\varepsilon y)\) and for \(\varepsilon=\frac{1}{R}\),
\[\Rightarrow J_{\varepsilon}(u_{\varepsilon})=\varepsilon^{n-1}\int_{B_{ \frac{1}{\varepsilon}}}(\frac{1}{2}|\nabla_{y}\tilde{u}_{\varepsilon}|^{2}+W( \tilde{u}_{\varepsilon}))dy=\frac{1}{R^{n-1}}\int_{B_{R}}(\frac{1}{2}|\nabla_ {y}\tilde{u}_{R}|^{2}+W(\tilde{u}_{R}))dy=\frac{1}{R^{n-1}}\tilde{J}_{R}( \tilde{u}_{R})\]
So, \(\tilde{u}_{R}\) is minimizer of \(\tilde{J}_{R}(v)=\int_{B_{R}}(\frac{1}{2}|\nabla v|^{2}+W(v))dx\) and from Lemma 2.1, it holds \(|\tilde{u}_{R}|+|\nabla\tilde{u}_{R}|\leq M\) and via the comparison function (see [1] p.135), for \(r>1\)
\[v(x)=\begin{cases}a_{1}\;,&\text{for}\;\;|x|\leq r-1\\ (r-|x|)a_{1}+(|x|-r+1)\tilde{u}_{R}(x)\;,&\text{for}\;\;|x|\in(r-1,r]\\ \tilde{u}_{R}(x)\;,&\text{for}\;\;|x|>r\end{cases}\]
we have
\[\tilde{J}_{R}(\tilde{u}_{R})\leq J(v)\leq CR^{n-1}\ \,\ C\ \ \text{ independent of}\;\,R.\]
Thus
\[J_{\varepsilon}(u_{\varepsilon})=\frac{1}{R^{n-1}}\tilde{J}_{R}(\tilde{u}_{R })\leq C\ \ (C\ \text{independent of}\;\,\varepsilon>0)\]
**Lemma 2.3**.: _Let \(u_{\varepsilon}\) defined in (1.2), then \(u_{\varepsilon}\to u_{0}\) in \(L^{1}\) along subsequences and \(u_{0}\in BV(\Omega;\mathbb{R}^{m})\). In addition, \(u_{0}=\sum_{i=1}^{N}a_{i}\chi_{\Omega_{i}}\ \,\ \mathcal{H}^{n-1}(\partial\Omega_{i})<\infty\) and \(|\Omega\setminus\cup_{i=1}^{N}\Omega_{i}|=0\)._
Proof.: **Claim:**\(||u_{\varepsilon}||_{BV(\Omega;\mathbb{R}^{m})}<C\).
This claim together with the \(L^{1}\) convergence of \(u_{\varepsilon}\) holds by Proposition 4.1 in [4].2 Also
\(u_{0}\in BV(\Omega;\mathbb{R}^{m}).\)
From Lemma 2.2, we have
\[\frac{1}{\varepsilon}\int_{\Omega}W(u_{\varepsilon}(x))dx\leq C\ \ \ (C\ \text{ independent of}\ \varepsilon>0)\]
Since \(|u_{\varepsilon}|\leq M\) and \(W\) is continuous in \(\overline{B}_{M}\subset\mathbb{R}^{m}\ \Rightarrow W(u_{\varepsilon})\leq\tilde{M}\), therefore by the dominated convergence theorem we obtain
\[\int_{\Omega}W(u_{0}(x))dx=0\Rightarrow u_{0}\in\left\{W=0\right\}\ a.e.\ \ \Rightarrow u_{0}=\sum_{i=1}^{N}a_{i}\chi_{\Omega_{i}}\]
where \(\chi_{\Omega_{i}}\) have finite perimeter since \(u_{0}\in BV(\Omega;\mathbb{R}^{m})\) (see [9]).
The proof of Lemma 2.3 is complete. \(\square\)
**Proposition 2.4**.: _It holds that_
\[\int_{\Omega^{\prime}}|D(\phi_{k}\circ u_{0})|=\sum_{i=1,i\neq k}^{N}\sigma \mathcal{H}^{n-1}(\partial^{*}\Omega_{k}\cap\partial^{*}\Omega_{i}\cap\Omega^ {\prime}) \tag{2.2}\]
\[k=1,2,..,N\ \,\text{for every open}\ \Omega^{\prime}\subset\Omega\]
_where \(\partial^{*}\Omega_{k}\) is the reduced boundary of \(\Omega_{k}\) and \(\phi_{k}(z)=d(z,a_{k})\,\)\(k=1,2,...,N,\) where \(a_{k}\) are the zeros of \(W\) and \(d\) is the riemannian metric derived from \(W^{1/2}\), that is_
\[d(z_{1},z_{2})=\inf\{\int_{0}^{1}\sqrt{2}W^{1/2}(\gamma(t))|\gamma^{\prime}(t )|dt:\gamma\in C^{1}([0,1];\mathbb{R}^{2}),\gamma(0)=z_{1},\ \gamma(1)=z_{2}\} \tag{2.3}\]
Proof.: By Lemma 2.3, \(u_{0}=\sum_{i=1}^{N}a_{i}\chi_{\Omega_{i}}\,\)\(\Omega_{i}\cap\Omega_{j}=\emptyset\,\ i\neq j\) and \(|\Omega\setminus\cup_{i=1}^{N}\Omega_{i}|=0.\)
Arguing as in the proof of Proposition 2.2 in [4], we utilize the coarea formula for \(BV\) functions (see [9] Theorem 5.9)
\[\int_{\Omega^{\prime}}|D(\phi_{k}\circ u_{0})|=\int_{-\infty}^{+\infty} \mathcal{P}_{\Omega^{\prime}}(\{x\in\Omega^{\prime}:\phi_{k}(u_{0}(x))\leq t\})dt \tag{2.4}\]
for every open \(\Omega^{\prime}\subset\Omega\) where \(\mathcal{P}_{\Omega^{\prime}}(V)=\mathcal{H}^{n-1}(\partial^{*}V\cap\Omega^{ \prime}).\)
We now observe that \(\phi_{i}(a_{j})=d(a_{j},a_{i})=\sigma>0\) (see (2.1)), for every \(i\neq j\,i,j\in\{1,...,N\}\) (and \(\phi_{i}(a_{i})=0\)). This can be seen by the equipartition of the energy for 1D minimizers (see Theorem 2.1 in [1]).
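A one-line justification of this observation (Young's inequality combined with equipartition): for any admissible curve \(\gamma\) in (2.3) one has

\[\sqrt{2W(\gamma)}\,|\gamma^{\prime}|\;\leq\;\frac{1}{2}|\gamma^{\prime}|^{2}+W(\gamma),\]

with equality exactly when \(\frac{1}{2}|\gamma^{\prime}|^{2}=W(\gamma)\); evaluating along the 1D minimizer \(U\) of (2.1), which satisfies this equipartition, shows that the weighted length of \(U\) equals its action \(\sigma\), which is the mechanism behind \(\phi_{i}(a_{j})=\sigma\).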
It holds that
\[\{x\in\Omega^{\prime}:\phi_{k}(u_{0}(x))\leq t\}=\begin{cases}\Omega^{\prime}&, \;\text{for}\;\;t\geq\sigma\\ \Omega_{k}\cap\Omega^{\prime}&,\;\text{for}\;\;t\in[0,\sigma)\\ \emptyset&,\;\text{for}\;\;t<0\end{cases} \tag{2.5}\]
So, we have
\[\begin{split}\int_{\Omega^{\prime}}|D(\phi_{k}\circ u_{0})|& =\int_{0}^{\sigma}\mathcal{P}_{\Omega^{\prime}}(\{x\in\Omega^{ \prime}:\phi_{k}(u_{0}(x))\leq t\})dt=\sigma\mathcal{H}^{n-1}(\partial^{*} \Omega_{k}\cap\Omega^{\prime})\\ &\qquad=\sigma\sum_{i=1,i\neq k}^{N}\mathcal{H}^{n-1}(\partial^{*} \Omega_{k}\cap\partial^{*}\Omega_{i}\cap\Omega^{\prime})\end{split} \tag{2.6}\]
and for the last equality we utilized the fact that \(\partial^{*}\Omega_{k}=\cup_{i=1,i\neq k}^{N}(\partial^{*}\Omega_{k}\cap \partial^{*}\Omega_{i})\cup N\) where \(\mathcal{H}^{n-1}(N)=0\) (see Proposition 2.2 in [4]).
The proof of Proposition 2.4 is complete. \(\square\)
Let \(\mu\) and \(\nu\) be two regular positive Borel measures on \(\Omega\). We denote by \(\mu\bigvee\nu\) the smallest regular positive measure which is greater than or equal to \(\mu\) and \(\nu\) on all Borel subsets of \(\Omega\). We have
\[(\mu\bigvee\nu)(\Omega)=\sup\{\mu(A)+\nu(B):A\cap B=\emptyset,\;A\cup B\subset \Omega,\;A\;\text{and}\;\;B\;\text{are open sets in}\;\;\Omega\}\]
Now let
\[\begin{split}\bigvee_{k=1}^{N}\int_{\Omega}|D(\phi_{k}\circ u_{0 })|&:=\sup\{\sum_{k=1}^{N}\int_{A_{k}}|D(\phi_{k}\circ u_{0})|: \cup_{k=1}^{N}A_{k}\subset\Omega,\\ &\qquad\qquad\qquad\qquad\qquad A_{i}\cap A_{j}=\emptyset\,,\;i \neq j,\;A_{i}\;\text{ open sets in}\;\;\Omega\}\end{split}\]
Furthermore, reasoning again as in the proof of Proposition 2.2 in [4] we have,
\[\bigvee_{k=1}^{N}\int_{\Omega}|D(\phi_{k}\circ u_{0})|=\frac{\sigma}{2}\sum_{i,j=1\,,\,i\neq j}^{N}\mathcal{H}^{n-1}(\partial^{*}\Omega_{i}\cap\partial^{*}\Omega_{j}\cap\Omega) \tag{2.7}\]
## 3. The \(\Gamma\)-limit with boundary conditions
Let \(J_{\varepsilon}\) defined in (1.1), we define for \(\rho>0\),
\[\tilde{J}_{\varepsilon}(u,\Omega_{\rho})=\begin{cases}J_{\varepsilon}(u,\Omega_ {\rho})\;\;,\;\text{if}\;\;u=g_{\varepsilon}\;\;\text{on}\;\;\mathbb{R}^{n} \setminus\Omega_{\rho}\;\;,\;u\in H^{1}_{loc}(\mathbb{R}^{n};\mathbb{R}^{m}) \\ +\infty\qquad\;,\;\;\text{otherwise}\end{cases} \tag{3.1}\]
where \(\Omega_{\rho}\) is as in **(H2)(ii)**, \(\,g_{\varepsilon}\to g_{0}\,\) in \(\,L^{1}(\Omega_{\rho})\;\), and \(\,g_{\varepsilon}\) is as in **(H2)(i),(ii)**, and \(g_{0}\) takes values on \(\{W=0\}\), arguing as in Lemma 2.3. Also, \(g_{0}\) is initially defined in the trace sense for BV functions and then trivially extended in \(\mathbb{R}^{n}\setminus\Omega_{\rho}\), being constant in each line that passes through \(\partial\Omega_{\rho}\) and the origin of \(\mathbb{R}^{n}\) and belongs in \(\mathbb{R}^{n}\setminus\Omega_{\rho}\). \(\Omega\) is convex, so the intersection of such a line with \(\partial\Omega_{\rho}\) is a single point (since the origin belongs in \(\Omega\), we have that \(\Omega_{\rho}\subset\Omega\) for \(\rho<1\) and \(\Omega\subset\Omega_{\rho}\) for \(\rho>1\)).
Let
\[J_{0}(u,\Omega_{\rho})=\begin{cases}\bigvee_{k=1}^{N}\int_{\Omega_{\rho}}|D( \phi_{k}\circ u)|\;\;,\;\text{if}\;\;u\in BV(\Omega_{\rho};\{W=0\})\\ +\infty\qquad\;,\;\;\text{otherwise}\end{cases} \tag{3.2}\]
and we obtain that, if \(J_{0}(u,\Omega_{\rho})<+\infty\), then
\[J_{0}(u,\Omega_{\rho})=\sigma\sum_{1\leq i<j\leq N}\mathcal{H}^{n-1}(\partial ^{*}\Omega_{i}\cap\partial^{*}\Omega_{j}\cap\Omega_{\rho})=\sigma\mathcal{H}^ {n-1}(S(u)\cap\Omega_{\rho}) \tag{3.3}\]
where \(S(u)\) is the interface of \(u\) separating the phases.
Finally we define
\[\tilde{J}_{0}(u,\Omega_{\rho})=\begin{cases}J_{0}(u,\Omega_{\rho})\;\;,\; \text{if}\;\;u\in BV(\Omega_{\rho};\{W=0\})\;\;\text{and}\;u=g_{0}\;\;\text{on }\;\;\mathbb{R}^{n}\setminus\Omega_{\rho}\\ +\infty\qquad\;,\;\;\text{otherwise}\end{cases} \tag{3.4}\]
We can write \(J_{\varepsilon},J_{0},\tilde{J}_{\varepsilon},\tilde{J}_{0}:L^{1}(\Omega_{\rho};\mathbb{R}^{m})\to\overline{\mathbb{R}}\), where \(\overline{\mathbb{R}}=\mathbb{R}\cup\{\infty\}\), and the \(\Gamma\)-convergence will be with respect to the \(L^{1}\) topology.
In [4] it has been proved that \(J_{\varepsilon}\;\;\Gamma-\)converges to \(J_{0}\) with mass constraint, but it also holds without mass constraint. We will point out this more clearly in the proof of Theorem 3.1 below, in which we are going to prove that \(\tilde{J}_{\varepsilon}(u,\Omega)\;\;\Gamma-\)converges to \(\tilde{J}_{0}(u,\overline{\Omega})\). Also, in [3, Theorem 3.7] there is a \(\Gamma\)-convergence result with boundary conditions in the scalar case which we utilize for proving Theorem 3.1. In other words, in Theorem 3.1 below, we prove that provided \(J_{\varepsilon}\;\Gamma\)-converges to \(J_{0}\), then we establish the \(\Gamma\)-limit of \(\tilde{J}_{\varepsilon}\), that is, the functional \(J_{\varepsilon}\) with the constraint of Dirichlet values.
**Theorem 3.1**.: _Let \(J_{\varepsilon}\) be defined by (1.1) and \(\tilde{J}_{\varepsilon}\;,\;\tilde{J}_{0}\) defined in (3.1) and (3.4) respectively. Then_
\[\Gamma-\lim_{\varepsilon\to 0}\tilde{J}_{\varepsilon}(u,\Omega)=\tilde{J}_{0}(u,\overline{\Omega}) \tag{3.5}\]
_where we extend \(u\) by setting \(u=g_{0}\) on \(\mathbb{R}^{n}\setminus\Omega.\)_
**Remark 3.2**.: _Note that the domain of \(\tilde{J}_{0}\) is the closure of \(\Omega\), which means that there is a boundary term (see also (2.9) in [16] for the analog in the scalar case). More precisely, by Proposition 2.4 above and Theorem 5.8 in [9] we can write_
\[\begin{split}\tilde{J}_{0}(u,\overline{\Omega})=\frac{1}{2}\sum_{i=1}^{N}\int_{\overline{\Omega}}|D(\phi_{i}\circ u)|\\ =\frac{1}{2}\sum_{i=1}^{N}\int_{\Omega}|D(\phi_{i}\circ u)|+\frac{1}{2}\sum_{i=1}^{N}\int_{\partial\Omega}|T(\phi_{i}\circ u)-T(\phi_{i}\circ g_{0})|\;d\mathcal{H}^{n-1}\\ \text{where }\,T\,\text{ is the trace operator for }\,BV\,\text{ functions}.\end{split} \tag{3.6}\]
Proof.: We begin by proving the \(\Gamma-\liminf\) inequality.
Let \(u_{\varepsilon}\in L^{1}(\Omega;\mathbb{R}^{m})\) such that \(u_{\varepsilon}\to u\) in \(L^{1}(\Omega;\mathbb{R}^{m})\). If \(u_{\varepsilon}\notin H^{1}_{loc}\) or \(u_{\varepsilon}\neq g_{\varepsilon}\) on \(\mathbb{R}^{n}\setminus\Omega\), then \(\tilde{J}_{\varepsilon}(u_{\varepsilon},\Omega)=+\infty\) and the liminf inequality holds trivially. So, let \(u_{\varepsilon}\in H^{1}_{loc}(\Omega;\mathbb{R}^{m})\) such that \(u_{\varepsilon}\to u\) in \(L^{1}\) and \(u_{\varepsilon}=g_{\varepsilon}\) on \(\mathbb{R}^{n}\setminus\Omega\).
Let \(\rho>1\), we have
\[\tilde{J}_{\varepsilon}(u_{\varepsilon},\Omega)=J_{\varepsilon}(u_{ \varepsilon},\Omega_{\rho})-J_{\varepsilon}(g_{\varepsilon},\Omega_{\rho} \setminus\Omega) \tag{3.7}\]
and
\[J_{\varepsilon}(g_{\varepsilon},\Omega_{\rho}\setminus\Omega)\leq c(\rho^{n- 1}-1)\int_{-\infty}^{+\infty}\frac{1}{2}|U^{\prime}|^{2}+W(U)=O(\rho^{n-1}-1) \tag{3.8}\]
(by **(H2)(ii)** and (2.1))
Hence, by (3.7), for every \(u_{\varepsilon}\) converging to \(u\) in \(L^{1}\) such that \(u_{\varepsilon}=g_{\varepsilon}\) on \(\mathbb{R}^{n}\setminus\Omega\) and \(\liminf_{\varepsilon\to 0}\tilde{J}_{\varepsilon}(u_{\varepsilon},\Omega)<+\infty\), we have that
\[\liminf_{\varepsilon\to 0}\tilde{J}_{\varepsilon}(u_{\varepsilon},\Omega) \geq\liminf_{\varepsilon\to 0}J_{\varepsilon}(u_{\varepsilon},\Omega_{\rho})-O( \rho^{n-1}-1) \tag{3.9}\]
Also, by the liminf inequality for \(J_{\varepsilon}\) (see Theorem 2.5 in [4]), we can obtain
\[\liminf_{\varepsilon\to 0}J_{\varepsilon}(u_{\varepsilon},\Omega_{\rho})\geq \bigvee_{k=1}^{N}\int_{\Omega_{\rho}}|D(\phi_{k}\circ u)|=J_{0}(u,\Omega_{ \rho}) \tag{3.10}\]
Thus, by (3.9) and (3.10), passing the limit as \(\rho\) tends to \(1\) we have the liminf inequality
\[\liminf_{\varepsilon\to 0}\tilde{J}_{\varepsilon}(u_{\varepsilon},\Omega)\geq J_{0}(u,\overline{\Omega}) \tag{3.11}\]
We now prove the \(\Gamma-\)limsup inequality. Let \(u\in BV(\Omega;\{a_{1},a_{2},...,a_{N}\})\) be such that \(u=g_{0}\) on \(\mathbb{R}^{n}\setminus\Omega\).
a) We first assume that \(u=g_{0}\) on \(\mathbb{R}^{n}\setminus\Omega_{\rho_{1}}\) with \(\rho_{1}<1\).
As we observe in the proof of Theorem 2.5 in [4] (see in particular the proof of Lemma 3.1), the \(\Gamma\)-limsup inequality for \(J_{\varepsilon}\) also holds without the mass constraint. Also, since the \(\Gamma\)-liminf inequality holds, the \(\Gamma\)-limsup inequality is equivalent with
\[J_{0}(u,\Omega)=\lim_{\varepsilon\to 0}J_{\varepsilon}(u_{\varepsilon},\Omega) \tag{3.12}\]
for some sequence \(u_{\varepsilon}\) converging to \(u\) in \(L^{1}(\Omega_{\rho_{1}};\mathbb{R}^{m})\). So let \(u_{\varepsilon}\) be a sequence converging to \(u\) in \(L^{1}(\Omega_{\rho_{1}};\mathbb{R}^{m})\) such that (3.12) is satisfied. In particular \(u_{\varepsilon}\) converges to \(g_{0}\) on \(\mathbb{R}^{n}\setminus\Omega_{\rho_{1}}\).
Now, utilizing the sequence \(u_{\varepsilon}\) obtained from (3.12), we will modify it by a cut-off function so that the boundary condition is satisfied. Let \(\phi_{\varepsilon}\) be a cut-off function between \(U=\Omega_{\frac{1+\rho_{1}}{2}}\) and \(U^{\prime}=\Omega\) and let \(V=\Omega\setminus\overline{\Omega}_{\rho_{1}}\). By Lemma 3.2 in [3]3, we have
Footnote 3: Lemma 3.2 in [3] can be extended in the vector case with minor modifications.
\[J_{\varepsilon}(u_{\varepsilon}\phi_{\varepsilon}+(1-\phi_{\varepsilon})g_{ \varepsilon},\Omega)\leq J_{\varepsilon}(u_{\varepsilon},\Omega)+J_{ \varepsilon}(g_{\varepsilon},V)+\delta_{\varepsilon}(u_{\varepsilon},g_{ \varepsilon},U,U^{\prime},V) \tag{3.13}\]
(where \(g_{\varepsilon}\) is extended in \(V\) trivially).
By the assumptions on \(u_{\varepsilon}\) and **(H2)(ii)** we also have
\[u_{\varepsilon}\to g_{0}\;,\;\;\;\;\;g_{\varepsilon}\to g_{0}\;\;\;\mbox{in }\;L^{1}(V).\]
Hence we get (again by Lemma 3.2 in [3])4
Footnote 4: The condition \(\sup_{\varepsilon>0}(J_{\varepsilon}(u_{\varepsilon},U^{\prime})+J_{ \varepsilon}(g_{\varepsilon},V))<+\infty\) in Lemma 3.2 in [3] holds from Lemma 2.2 and **(H2)**.
\[\lim_{\varepsilon\to 0}\delta_{\varepsilon}(u_{\varepsilon},g_{\varepsilon},U,U^{ \prime},V)=0\]
and by (3.7), (3.8) and (3.13)
\[\Gamma-\limsup_{\varepsilon\to 0}\tilde{J}_{\varepsilon}(\tilde{u}_{ \varepsilon},\Omega)\leq\tilde{J}_{0}(u,\Omega)\]
where \(\tilde{u}_{\varepsilon}=u_{\varepsilon}\phi_{\varepsilon}+(1-\phi_{ \varepsilon})g_{\varepsilon}\) and \(\tilde{u}_{\varepsilon}=g_{\varepsilon}\) in \(\mathbb{R}^{n}\setminus\Omega\).
b) In the general case we consider \(\rho_{1}<1\) and we define \(u_{\rho_{1}}(x)=u(\frac{1}{\rho_{1}}x)\). By the previous
case (a) and (3.3)
\[\begin{split}\Gamma-\limsup_{\varepsilon\to 0}\tilde{J}_{ \varepsilon}(u_{\rho_{1}},\Omega)\leq\tilde{J}_{0}(u_{\rho_{1}},\Omega)=\sigma \mathcal{H}^{n-1}(S(u_{\rho_{1}})\cap\Omega)\\ \leq\sigma\mathcal{H}^{n-1}(S(u)\cap\overline{\Omega})+O(1-\rho_ {1}^{n-1})\\ =\tilde{J}_{0}(u,\overline{\Omega})+O(1-\rho_{1}^{n-1})\end{split} \tag{3.14}\]
Since \(u_{\rho_{1}}\) converges to \(u\) as \(\rho_{1}\) tends to \(1\), if we denote
\[J^{\prime}(u_{\rho_{1}},\Omega):=\Gamma-\limsup_{\varepsilon\to 0}\tilde{J}_{ \varepsilon}(u_{\rho_{1}},\Omega)\]
then by the lower semicontinuity of the \(\Gamma-\)upper limit (see e.g. Proposition 1.28 in [5]) and (3.14)
\[\Gamma-\limsup_{\varepsilon\to 0}\tilde{J}_{\varepsilon}(u_{\rho_{1}}, \Omega)\leq\liminf_{\rho_{1}\to 1}J^{\prime}(u_{\rho_{1}},\Omega)\leq \tilde{J}_{0}(u,\overline{\Omega}) \tag{3.15}\]
Hence by (3.11) and (3.15) we get the required equality (3.5).
## 4. Minimizing partitions and the structure of the minimizer
In this section we begin with the basic definitions of minimizing partitions. Then we underline the relationship of minimizing partitions in \(\mathbb{R}^{2}\) with the minimizers of the functional \(\tilde{J}_{0}\) and, by imposing the appropriate Dirichlet conditions, we analyze the structure of the minimizer of \(\tilde{J}_{0}\) that we obtain from the \(\Gamma\)-limit. Utilizing a Bernstein-type theorem for minimizing partitions we can explicitly compute the energy of the minimizer in Proposition 4.4, and by regularity results in [14] we can determine the precise structure of a minimizer subject to the particular boundary conditions in Theorem 4.6. In the last subsection we note that we can extend these results to the mass constraint case. Finally, in subsection 4.2 we make some comments on the limiting minimizers in dimension three.
Let \(\Omega\subset\mathbb{R}^{n}\) open, occupied by \(N\) phases. Associated to each pair of phases \(i\) and \(j\) there is a surface energy density \(\sigma_{ij}\), with \(\sigma_{ij}>0\) for \(i\neq j\) and \(\sigma_{ij}=\sigma_{ji}\), with \(\sigma_{ii}=0\). Hence, if \(A_{i}\) denoted the subset of \(\Omega\) occupied by phase \(i\), then \(\Omega\) is the disjoint union
\[\Omega=A_{1}\cup A_{2}\cup...\cup A_{N}\]
and the energy of the partition \(A=\{A_{i}\}_{i=1}^{N}\) is
\[E(A)=\sum_{1\leq i<j\leq N}\sigma_{ij}\mathcal{H}^{n-1}(\partial A_{i}\cap\partial A_{j}) \tag{4.1}\]
where \(\mathcal{H}^{n-1}\) is the \((n-1)\)-Hausdorff measure in \(\mathbb{R}^{n}\). If \(\Omega\) is unbounded, for example \(\Omega=\mathbb{R}^{n}\) (we say then that \(A\) is complete), the quantity above in general will be infinity. Thus, for each \(W\) open, with \(W\subset\subset\Omega\), we consider the energy
\[E(A;W)=\sum_{0<i<j\leq N}\sigma_{ij}\mathcal{H}^{n-1}(\partial A_{i}\cap \partial A_{j}\cap W) \tag{4.2}\]
**Definition 4.1**.: _The partition \(A\) is a minimizing \(N\)-partition if given any \(W\subset\subset\Omega\) and any \(N\)-partition \(A^{\prime}\) of \(\Omega\) with_
\[\bigcup_{i=1}^{N}(A_{i}\triangle A_{i}^{\prime})\subset\subset W \tag{4.3}\]
_we have_
\[E(A;W)\leq E(A^{\prime};W)\]
the symmetric difference \(A_{i}\triangle A_{i}^{\prime}\) is defined as their union minus their intersection, that is, \(A_{i}\triangle A_{i}^{\prime}=(A_{i}\cup A_{i}^{\prime})\setminus(A_{i}\cap A _{i}^{\prime})\).
To formulate the Dirichlet problem, given a partition \(C\) of \(\partial\Omega\) up to a set of \(\mathcal{H}^{n-1}\)-measure zero, we may prescribe the boundary data for \(A\):
\[(\partial_{\Omega}A)_{i}=\partial A_{i}\cap\partial\Omega=C_{i}\;,\quad i=1,...,N.\]
Now the energy is minimized subject to such a prescribed boundary.
**Remark 4.2**.: _Note that the minimization of the functional \(\tilde{J}_{0}(u,\Omega)\) is equivalent to minimizing the energy \(E(A;\Omega)\) under the appropriate Dirichlet conditions._
We now state a well known Bernstein-type theorem in \(\mathbb{R}^{2}\).
**Theorem:** Let \(A\) be a complete minimizing partition in \(\mathbb{R}^{2}\) with \(N=3\) (three phases), with surface tension coefficients satisfying
\[\sigma_{ik}<\sigma_{ij}+\sigma_{jk}\ \ \,\ \text{for}\ \ j\neq i,k\ \ \text{with}\ \ i,j,k\in\{1,2,3\}. \tag{4.4}\]
Then \(\partial A\) is a triod.
For a proof and related material we refer to [20] and the expository [2].
In Figure 2 we show a triod with angles \(\theta_{1},\theta_{2},\theta_{3}\), and the corresponding triangle with their supplementary angles \(\hat{\theta}_{i}=\pi-\theta_{i}\). For these angles Young's law holds, that is,
\[\frac{\sin\!\hat{\theta}_{1}}{\sigma_{23}}=\frac{\sin\!\hat{\theta}_{2}}{ \sigma_{13}}=\frac{\sin\!\hat{\theta}_{3}}{\sigma_{12}} \tag{4.5}\]
In our case we have \(\sigma_{ij}=\sigma>0\) for \(i\neq j\), therefore we have by Young's law \(\theta_{i}=\frac{2\pi}{3}\ \,\ i=1,2,3\). As a result of Theorem 2 above, we expect that, by imposing the appropriate boundary conditions, the minimizer \(u_{0}\) of \(\tilde{J}_{0}(u,\overline{B}_{1})\,\ B_{1}\subset\mathbb{R}^{2}\) which we obtain from the \(\Gamma\)-limit will be a triod with angles \(\frac{2\pi}{3}\) restricted in \(B_{1}\) and centered at a point \(x\in B_{1}\).
We now recall _Steiner's problem_ that gives us some geometric intuition about this fact.
Let us take three points \(A\), \(B\) and \(C\), arranged in any way in the plane. The problem is to find a fourth point \(P\) such that the sum of distances from \(P\) to the other three points is a minimum; that is we require \(AP+BP+CP\) to be a minimum length.
Figure 2:
If the triangle \(ABC\) possesses internal angles which are all less than \(120^{\circ}\), then \(P\) is the point such that each side of the triangle, i.e. \(AB,\ BC\) and \(CA\), subtends an angle of \(120^{\circ}\) at \(P\). However, if one angle, say \(A\hat{C}B\), is greater than \(120^{\circ}\), then \(P\) must coincide with \(C\).
The _Steiner's problem_ is a special case of the Geometric median problem and has a unique solution whenever the points are not collinear.
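The Fermat point can also be located numerically; the short sketch below uses a Weiszfeld-type fixed-point iteration (the three sample points are arbitrary illustrative choices whose triangle has all internal angles below \(120^{\circ}\)).

```python
import numpy as np

# Steiner's (Fermat point) problem: given three non-collinear points A, B, C whose
# triangle has all internal angles below 120 degrees, find the point P minimizing
# |AP| + |BP| + |CP| via a Weiszfeld-type fixed-point iteration.
# The sample points below are arbitrary illustrative choices.

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.9]])  # vertices A, B, C

P = pts.mean(axis=0)                      # start from the centroid
for _ in range(200):
    d = np.linalg.norm(pts - P, axis=1)   # distances from P to the vertices
    if np.any(d < 1e-12):                 # P landed on a vertex; stop
        break
    w = 1.0 / d
    P = (w[:, None] * pts).sum(axis=0) / w.sum()

# At the minimizer each side subtends 120 degrees at P; check the angle A-P-B.
u, v = pts[0] - P, pts[1] - P
angle = np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))
print("Fermat point:", P, "  angle A-P-B (deg):", round(angle, 2))
```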
### The structure of the minimizer in the disk
In order to obtain precise information about the minimizer of the limiting functional \(\tilde{J}_{0}(u,\overline{B}_{1}),\ B_{1}\subset\mathbb{R}^{2}\), we impose some particular boundary conditions that "fit well" with the geometric intuition that we have from the minimizing partitions. We will also prove that the minimizer is unique.
So, we assume the following,
**(H2) (iii)**\(g^{\varepsilon}(1,\theta)=a_{1}\,\ \theta\in(C\varepsilon,\frac{2\pi}{3}-C \varepsilon)\,\ g^{\varepsilon}(1,\theta)=a_{2}\,\ \theta\in(\frac{2\pi}{3}+C \varepsilon,\frac{4\pi}{3}-C\varepsilon)\) and \(g^{\varepsilon}(1,\theta)=a_{3}\,\ \theta\in(\frac{4\pi}{3}+C\varepsilon,2\pi-C\varepsilon)\) (in polar coordinates) and connecting \(a_{1}\) with \(a_{2}\) in \((\frac{2\pi}{3}-C\varepsilon,\frac{2\pi}{3}+C\varepsilon)\) and similarly in the remaining intervals.
We see that \(g_{\varepsilon}\to g_{0}\) in \(L^{1}\), where \(g_{0}(\theta)=a_{1}\chi_{(0,\frac{2\pi}{3})}+a_{2}\chi_{(\frac{2\pi}{3},\frac{4\pi}{3})}+a_{3}\chi_{(\frac{4\pi}{3},2\pi)}\,\ \theta\in(0,2\pi)\).
**Remark 4.3**.: _For the mass constraint case, by classical results of Almgren's improved and simplified in White [19] for minimizing partitions with surface tension coefficients \(\sigma_{ij}\) satisfying the strict triangle inequality (see (4.4)), \(\Omega_{j}\) can be taken open with \(\partial\Omega_{j}\) real analytic except possibly for a singular part with Hausdorff dimension at most \(n-2\). Therefore \(\partial^{*}\Omega_{i}\cap\partial^{*}\Omega_{j}=\partial\Omega_{i}\cap \partial\Omega_{j}\,\ \mathcal{H}^{n-1}\)-a.e., where \(u_{0}=\sum_{i=1}^{N}a_{i}\chi_{\Omega_{i}}\) is the minimizer of \(J_{0}\) with a mass constraint. Also, Morgan in [15] has proved regularity of minimizing partitions in the plane subject to mass constraint. However, we deal with the problem with boundary conditions, so we cannot apply these regularity results directly._
The problem of minimizing partitions subject to boundary conditions, in contrast to the mass constraint case, might not always admit a minimum; we provide two examples in Figure 3 below.
However, a minimizer will exist for the minimization problem \(\min_{u\in BV(\Omega;\{W=0\})}\tilde{J}_{0}(u,\overline{\Omega})\), for instance the one we obtain from the \(\Gamma\)-limit, which will form a "boundary layer" on the boundary of the domain instead of an internal layer (i.e. the interface separating the phases). Particularly, in both (a) and (b) in Figure 3 above, \(u_{0}=a_{1}\) a.e. will be a minimizer of \(\tilde{J}_{0}\) and
\[\tilde{J}_{0}(u_{0},\overline{\Omega})=\frac{1}{2}\sum_{i=1}^{3}\int_{\partial \Omega}|T(\phi_{i}\circ u_{0})-T(\phi_{i}\circ g_{0})|d\mathcal{H}^{1}=\sigma| AB|=\tilde{J}_{0}(u_{0},\overline{\Omega^{\prime}})\]
When there are no line segments in the boundary of the domain or when \(g_{0}\) does not admit jumps nearby such line segments, then we expect that there are no boundary layers and the boundary term in the energy of \(\tilde{J}_{0}\) vanishes (see Remark 3.2), otherwise we could find a minimizer with strictly less energy. In the cases where the boundary term vanishes we can write \(\tilde{J}_{0}(u_{0},\overline{\Omega})=\tilde{J}_{0}(u_{0},\Omega)\). This can be proved rigorously in the case where \(\Omega=B_{1}\) and assuming **(H2)(iii)**, utilizing also Proposition 3.2 in [14] as we will see in Theorem 4.6.
**Proposition 4.4**.: _Let \((u_{\varepsilon})\) be a minimizing sequence of \(\tilde{J}_{\varepsilon}(u,B_{1})\). Then \(u_{\varepsilon}\to u_{0}\) in \(L^{1}\) along subsequence with \(u_{0}\in BV(B_{1};\{a_{1},a_{2},a_{3}\})\) and \(u_{0}\) is a minimizer of \(\tilde{J}_{0}(u,\overline{B}_{1})\) subject to the limiting Dirichlet values **(H2)(iii)**, where we extend \(u\) by setting \(u=g_{0}\) on \(\mathbb{R}^{2}\setminus B_{1}\)._
_In addition, we have_
\[\sum_{1\leq i<j\leq 3}\mathcal{H}^{1}(\partial^{*}\Omega_{i}\cap\partial^{*} \Omega_{j}\cap\overline{B}_{1})=3 \tag{4.6}\]
_where \(u_{0}=a_{1}\chi_{\Omega_{1}}+a_{2}\chi_{\Omega_{2}}+a_{3}\chi_{\Omega_{3}}\)._
Proof.: From Lemma 2.2, 2.3 it holds that if \(u_{\varepsilon}\) is a minimizing sequence for \(\tilde{J}_{\varepsilon}(u,B_{1})\), then \(\tilde{J}_{\varepsilon}(u_{\varepsilon},B_{1})\leq C\) and thus \(u_{\varepsilon}\to u_{0}\) in \(L^{1}\) along subsequence. The fact that \(u_{0}\) is a minimizer of \(\tilde{J}_{0}\) is a standard fact from the theory of \(\Gamma-\)convergence. It can be seen as follows.
Let \(w\in BV(\overline{B_{1}},\{a_{1},a_{2},a_{3}\})\) such that \(w=g_{0}\) on \(\mathbb{R}^{2}\setminus B_{1}\), then from the limsup inequality in Theorem 3.1, we have that there exists \(w_{\varepsilon}\in H^{1}_{loc}(\mathbb{R}^{2};\mathbb{R}^{m})\;,\;w_{ \varepsilon}=g_{\varepsilon}\) on \(\mathbb{R}^{2}\setminus B_{1}\) such that \(w_{\varepsilon}\to w\) in \(L^{1}\) and \(\limsup_{\varepsilon\to 0}\tilde{J}_{\varepsilon}(w_{\varepsilon},B_{1})\leq \tilde{J}_{0}(w,\overline{B}_{1})\). Now since \(u_{\varepsilon}\) is a minimizing sequence for \(\tilde{J}_{\varepsilon}(u,B_{1})\) and from the liminf inequality in Theorem 3.1, we have
\[\begin{split}\tilde{J}_{0}(u_{0},\overline{B}_{1})\leq\liminf_{ \varepsilon\to 0}\tilde{J}_{\varepsilon}(u_{\varepsilon},B_{1})\leq\liminf_{ \varepsilon\to 0}\tilde{J}_{\varepsilon}(w_{\varepsilon},B_{1})\\ \leq\limsup_{\varepsilon\to 0}\tilde{J}_{\varepsilon}(w_{ \varepsilon},B_{1})\leq\tilde{J}_{0}(w,\overline{B}_{1})\end{split} \tag{4.7}\]
For proving (4.6), we utilize Theorem 2 above (i.e. Theorem 2 in [2]). Since the triod is a minimizing 3-partition in \(\mathbb{R}^{2}\) we have that for any \(W\subset\subset\mathbb{R}^{2}\) and any partition it holds that \(E(A,W)\leq E(V,W)\), where suppose that \(A=\{A_{1},A_{2},A_{3}\}\) is the partition of the triod and \(V=\{V_{1},V_{2},V_{3}\}\) is a 3-partition in \(\mathbb{R}^{2}\).
We have \(u_{0}=a_{1}\chi_{\Omega_{1}}+a_{2}\chi_{\Omega_{2}}+a_{3}\chi_{\Omega_{3}}\) such that \(u_{0}=g_{0}\) on \(\partial B_{1}\) and extend \(u_{0}\) in \(\mathbb{R}^{2}\), being the triod with \(\theta_{i}=\dfrac{2\pi}{3}\) in \(\mathbb{R}^{2}\setminus B_{1}\). This defines a 3-partition in \(\mathbb{R}^{2}\), noted as \(\tilde{\Omega}=\{\tilde{\Omega}_{i}\}_{i=1}^{3}\). Since the triod is a minimizing 3-partition in the plane, we take any \(W\subset\subset\mathbb{R}^{2}\) such that \(B_{2}\subset\subset W\) and \(\bigcup_{i=1}^{3}(A_{i}\triangle\tilde{\Omega}_{i})\subset\subset W\), so we have
\[E(A,W)=E(A,\overline{B}_{1})+E(A,W\setminus\overline{B}_{1})\leq E(\tilde{ \Omega},W)=E(\tilde{\Omega},\overline{B}_{1})+E(\tilde{\Omega},W\setminus \overline{B}_{1}) \tag{4.8}\]
where \(A\) is the partition of the triod.
Now since \(E(A,W\setminus\overline{B}_{1})=E(\tilde{\Omega},W\setminus\overline{B}_{1})\) (by the way we extended \(u_{0}\) in \(\mathbb{R}^{2}\)) and
\(E(A,\overline{B}_{1})=\sigma\sum_{1\leq i<j\leq 3}\mathcal{H}^{1}(\partial A_{i}\cap\partial A_{j}\cap\overline{B}_{1})=3\sigma\) (since \(\partial A_{i}\cap\partial A_{j}\cap\overline{B}_{1}\) are radii of \(B_{1}\)), we conclude
\[\begin{split} 3\sigma\leq E(\tilde{\Omega},\overline{B}_{1})= \tilde{J}_{0}(u_{0},\overline{B}_{1})\\ \Leftrightarrow 3\leq\sum_{1\leq i<j\leq 3}\mathcal{H}^{1}( \partial^{*}\Omega_{i}\cap\partial^{*}\Omega_{j}\cap\overline{B}_{1})\end{split} \tag{4.9}\]
The inequality \(\sum_{1\leq i<j\leq 3}\mathcal{H}^{1}(\partial^{*}\Omega_{i}\cap\partial^{*} \Omega_{j}\cap\overline{B}_{1})\leq 3\) can be obtained by the minimality of \(u_{0}\) in comparison with the partition of the triod in \(B_{1}\) (in particular, consider as a test function \(\tilde{u}=a_{1}\chi_{A_{1}}+a_{2}\chi_{A_{2}}+a_{3}\chi_{A_{3}}\)).
**Remark 4.5**.: _Arguing similarly as in Proposition 4.4 above, we can obtain that for every ball \(B_{R}\), the energy of the limiting minimizer will be \(\tilde{J}_{0}(u_{0},\overline{B}_{R})=3\sigma R\), for every \(R>0\), thus
_we can obtain an entire minimizer in the plane (by a diagonal argument) and the partition that it defines will be a minimal cone, since \(\dfrac{\mathcal{H}^{1}(\partial\Omega_{i}\cap\partial\Omega_{j}\cap B_{R})}{ \omega_{1}R}=C_{0}\) (see [20])._
Finally, we will prove that the minimizer of \(\tilde{J}_{0}\) in \(\overline{B}_{1}\) is unique, that is, the only minimizer is the triod restricted to \(B_{1}\) centered at the origin. For this result, we will need some regularity results from [14].
**Theorem 4.6**.: _Let \(u_{0}=a_{1}\chi_{\Omega_{1}}+a_{2}\chi_{\Omega_{2}}+a_{3}\chi_{\Omega_{3}}\) be a minimizer of \(\tilde{J}_{0}(u,\overline{B}_{1})\) subject to the limiting Dirichlet values **(H2)(iii)**. Then \(\partial\Omega_{i}\cap\partial\Omega_{j}\) are radii of \(B_{1}\), \(|\Omega_{i}|=\frac{1}{3}|B_{1}|\)\((i\neq j)\) and the minimizer is unique (as in Figure 4 below)._
Proof.: Firstly, we show that the minimizing partition of \(B_{1}\) with respect to the boundary conditions defined from \(g_{0}\) is an \((M,0,\delta)\)-minimal set for \(\delta>0\), and therefore an \((M,cr^{\alpha},\delta)\)-minimal set (see Definition 2.1 in [14]). If not, let \(S\) be the partition defined from \(u_{0}\); we can find a Lipschitz function \(\phi:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\) such that
\[\mathcal{H}^{1}(S\cap W)>\mathcal{H}^{1}(\phi(S\cap W))\]
with \(W=\mathbb{R}^{2}\cap\{x:\phi(x)\neq x\}\), \(\text{diam}(W\cup\phi(W))<\delta\) and \(\text{dist}(W\cup\phi(W),\mathbb{R}^{2}\setminus B_{1})>0\).
So if we consider the partition
\[\tilde{S}=\begin{cases}S\;\;,\;S\cap W=\emptyset\\ \phi(S\cap W)\;\;,\;S\cap W\neq\emptyset\end{cases}\]
Figure 4.
then the boundary of the partition defined by \(\tilde{S}\) will satisfy the boundary conditions (since \(\text{dist}(W\cup\phi(W),\mathbb{R}^{2}\setminus B_{1})>0\)) and also \(\mathcal{H}^{1}(\tilde{S})<\mathcal{H}^{1}(S)\) which contradicts the minimality of \(S\).
Thus, by Proposition 3.2 in [14], we have that the unique smallest \((M,cr^{\alpha},\delta)\)-minimal set consists of three line segments from the three vertices defined from \(g_{0}\) (i.e. the jump points in \(\partial B_{1}\)) meeting at \(\frac{2\pi}{3}\). The meeting point is unique and it is the origin of \(B_{1}\). Thus, the line segments \(\partial\Omega_{i}\cap\partial\Omega_{j}=\partial^{*}\Omega_{i}\cap\partial^{ *}\Omega_{j}\) (since the boundary of the partition is piecewise smooth) are radii of \(B_{1}\) and \(|\Omega_{i}|=\frac{1}{3}|B_{1}|\).
**Remark 4.7**.: _If we modify the hypothesis **(H2)(iii)** in the Dirichlet conditions such that the limit of \(g_{\varepsilon}\) becomes \(g_{0}(\theta)=\sum_{i=1}^{3}a_{i}\:\chi_{I_{i}}(\theta)\:\:,\:\:\theta\in[0,2 \pi)\:\:,\:\:I_{i}\subset[0,2\pi)\) connected and \(\cup_{i=1}^{3}I_{i}=[0,2\pi)\), then we have an analogous structure and uniqueness of the minimizer as illustrated in the Figure 5 below. The proof of Proposition 4.4 and Theorem 4.6 will be similar, the only difference is that the center of the triod will be a different point in \(B_{1}\) than the origin such that the angle will remain \(\frac{2\pi}{3}\), if the largest angle of the triangle defined by the points \(\partial I_{i}\cap\partial I_{j}\) is less than \(\frac{2\pi}{3}\) (otherwise see (a) in Proposition 3.2 in [14]). Also, the sum of the interfaces in (4.6) will be different than three, depending on the particular points that appear in the limiting boundary conditions (i.e. \(\partial I_{i}\cap\partial I_{j}\))._
### Minimizers in dimension three
In this subsection we will briefly make some comments for the structure of minimizers in \(\mathbb{R}^{3}\). If we impose the appropriate boundary conditions in \(B_{R}\subset\mathbb{R}^{3}\) and \(\{W=0\}=\{a_{1},a_{2},a_{3}\}\), \(g_{\varepsilon}\to g_{0}\:\:\:\text{in}\:\:\:L^{1}(B_{R};\mathbb{R}^{3})\) such that the
Figure 5.
partition in \(\partial B_{R}\) defined by \(g_{0}\) is equal to the partition of \((C_{tr}\times\mathbb{R})\cap\partial B_{R}\), where \(C_{tr}\) is the triod as in Figure 2 (with equal angles), then by Theorem 3 in [2], arguing as in Proposition 4.4 (see also Remark 4.5), we can obtain
\[\tilde{J}_{0}(u,B_{R})=\frac{3}{2}\sigma\pi R^{2}\]
which gives
\[\frac{\mathcal{H}^{2}(\partial\Omega_{i}\cap\partial\Omega_{j}\cap B_{R})}{ \omega_{2}R^{2}}=\frac{3}{2}\]
where \(\omega_{2}\) is the volume of the 2-dimensional unit ball (see [20]). That is, the partition that the minimizer defines can be extended to a minimal cone in \(\mathbb{R}^{3}\). Now since the only minimizing minimal cones are the triod and the tetrahedral cone (see [18]), then the minimizer of \(\tilde{J}_{0}\) is such that \(u_{0}=\sum_{i=1}^{3}a_{i}\chi_{\Omega_{i}}\), where \(\Omega=\{\Omega_{i}\}_{i=1}^{3}\) is the partition of \((C_{tr}\times\mathbb{R})\cap B_{R}\).
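A quick geometric check of the value \(\frac{3}{2}\sigma\pi R^{2}\): each of the three half-planes forming \(C_{tr}\times\mathbb{R}\) passes through the center of \(B_{R}\) and meets the ball in a half-disc of radius \(R\), so

\[\sum_{1\leq i<j\leq 3}\mathcal{H}^{2}(\partial\Omega_{i}\cap\partial\Omega_{j}\cap B_{R})=3\cdot\frac{\pi R^{2}}{2}=\frac{3}{2}\,\omega_{2}R^{2},\]

in agreement with the ratio displayed above.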
Similarly, if \(\{W=0\}=\{a_{1},a_{2},a_{3},a_{4}\}\) and we impose the Dirichlet conditions such that \(g_{0}\) defines the partition of the tetrahedral cone intersection with \(\partial B_{R}\), then again \(u_{0}=\sum_{i=1}^{4}a_{i}\chi_{\Omega_{i}}\), where \(\Omega=\{\Omega_{i}\}_{i=1}^{4}\) is the partition of the tetrahedral cone restricted in \(B_{R}\).
### Minimizers in the disc for the mass constraint case
In this last subsection, we note that the result in Theorem 4.6 above can be extended also to the mass constraint case (see [4]). However, in this case the uniqueness will be up to rigid motions of the disc (see Theorem 3.6 and Theorem 4.1 in [6]).
Let \(u_{0}\) be a minimizer of \(J_{0}(u,B_{1})\), \(B_{1}\subset\mathbb{R}^{2}\) defined in (3.3) subject to the mass constraint \(\int_{B_{1}}udx=m\) (or consider the minimizer \(u_{0}\) of Theorem p.70 in [4]) and \(\{W=0\}=\{a_{1},a_{2},a_{3}\}\). Then \(u_{0}=\sum_{i=1}^{3}a_{i}\chi_{\Omega_{i}}\), where \(\Omega_{1},\Omega_{2},\Omega_{3}\) is a partition of \(B_{1}\) which minimizes the quantity
\[\sum_{1\leq i<j\leq 3}\sigma\mathcal{H}^{1}(\partial^{*}\Omega_{i}\cap\partial ^{*}\Omega_{j})\]
among all other partitions of \(B_{1}\) such that \(\sum_{i=1}^{3}|\Omega_{i}|a_{i}=m\).
If \(a_{i}\), \(a_{j}\) are linearly independent for \(i\neq j\), \(i,j\in\{1,2,3\}\) and \(m=\frac{1}{3}|B_{1}|\sum_{i=1}^{3}a_{i}\), then we obtain that \(|\Omega_{i}|=\frac{1}{3}|B_{1}|\) (also by the fact that \(\sum_{i=1}^{3}|\Omega_{i}|=|B_{1}|\)). Now by Theorem 4.1 in [6] we conclude that the minimizer is a standard graph (that is, \(\partial^{*}\Omega_{i}\cap\partial^{*}\Omega_{j}=\partial\Omega_{i}\cap\partial\Omega_{j}\) are line segments in this case) and unique up to a rigid motion of the disc.
This result holds more generally for some choices of \(m>0\) (i.e. \(m_{1},m_{2}>0\)), by Theorem 4.1 in [6], and the boundary of the partition consists of three circular arcs or line segments meeting at an interior vertex at 120 degree angles, reaching \(\partial B_{1}\) orthogonally, and such that the sum of the geodesic curvatures is zero. |
2301.04151 | Blowup Equations for Little Strings | We propose blowup equations for 6d little string theories which generalize
Nakajima-Yoshioka's blowup equations for the 4d/5d instanton partition
functions on Omega background. We find that unlike the blowup equations for
standard SQFTs, we need to sum over auxiliary magnetic fluxes on the blown-up $
\mathbb{P}^1$ for a non-dynamical 2-form gauge field which plays a role in
canceling the mixed anomalies of the gauge symmetries. We demonstrate with
explicit examples that the blowup equations, when combined with the modular
properties, can be solved in order to determine the elliptic genera of little
strings. | Hee-Cheol Kim, Minsung Kim, Yuji Sugimoto | 2023-01-10T19:00:01Z | http://arxiv.org/abs/2301.04151v1 | # Blowup Equations for Little Strings
###### Abstract
We propose blowup equations for 6d little string theories which generalize Nakajima-Yoshioka's blowup equations for the 4d/5d instanton partition functions on Omega background. We find that unlike the blowup equations for standard SQFTs, we need to sum over auxiliary magnetic fluxes on the blown-up \(\mathbb{P}^{1}\) for a non-dynamical 2-form gauge field which plays a role in canceling the mixed anomalies of the gauge symmetries. We demonstrate with explicit examples that the blowup equations, when combined with the modular properties, can be solved in order to determine the elliptic genera of little strings.
## 1 Introduction
Since the introduction of supersymmetric quantum field theories (SQFTs), numerous advancements have been made. From a classification perspective, five-dimensional supersymmetric field theories are classified by examining the consistency of physics on the Coulomb branch of moduli space [1; 2; 3; 4], utilizing geometric descriptions [5; 6; 7; 8; 9; 10], and analyzing the RG-flows of 6d superconformal field theories (SCFTs) on a circle [11; 12; 13; 14; 15; 16; 17]. Six-dimensional supersymmetric field theories are classified based on the types of non-compact bases and the methods of gluing them together in F-theory
compactified on non-compact elliptically fibered Calabi-Yau threefolds [18; 19; 20; 21]. The classification of 6d little string theories (LSTs) is also discussed in [21; 22] which will be explained right after.
There have been significant quantitative studies conducted on higher dimensional SQFTs. For instance, it is possible to calculate the supersymmetric partition functions of these theories on \(\Omega\)-deformed \(\mathbb{R}^{4}\) using various methods. This partition function is a type of Witten index that counts BPS states on the Coulomb branch. For theories with classical gauge groups, it can be computed using supersymmetric localization based on ADHM constructions of the instanton moduli space [23; 24] or using the topological vertex formalism introduced in [25; 26; 27]. We can also calculate the partition function in the presence of codimension two or four defects in these theories [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. While the SUSY localization and topological vertex formalism are effective methods for quantitatively studying higher dimensional SQFTs, one disadvantage of these methods is that we need to know ADHM constructions for the instanton moduli spaces of gauge theories or brane web descriptions of these SQFTs in Type IIB string theory.
An alternative approach to calculating the instanton partition functions is by utilizing the blowup equation, which was initially developed for the study of Donaldson invariants in mathematics. In [45], a systematic method for formulating and solving the blowup equations to obtain Nekrasov's instanton partition functions of 4d \(\mathcal{N}=2\)\(SU(N)\) gauge theories was proposed, and this method was subsequently generalized to 5d \(\mathcal{N}=1\)\(SU(N)\) gauge theories in [46; 47]. More recently, several extensions have been made to compute various observables in higher dimensional field theories: the instanton partition functions of 5d SUSY gauge theories with generic gauge groups and matter representations were computed in [48; 49; 50], the blowup equations for refined topological strings on certain local Calabi-Yau 3-folds were formulated in [51; 50], and the elliptic genera of self-dual strings in 6d SCFTs on tensor branch were calculated using elliptic blowup equations in [51; 52; 53; 54]. Also, as another extension, the blowup equations for 5d and 6d supersymmetric field theories with codimension four defects were proposed in [55]. One of the key benefits of the blowup approach is its capability to systematically calculate the partition functions, including non-perturbative contributions, through the use of the effective prepotential and consistent magnetic fluxes on the blowup background which can be systematically obtained for any 5d or 6d supersymmetric field theory [51; 50]. While we now have a large amount of examples for the blowup formalism, it remains to be verified whether it can be applied to little string theories.
Little string theories were originally introduced as worldvolume theories of NS5-branes in the gravity decoupling limit [56; 57; 58; 59; 60]. Depending on the space in which the NS5-branes reside, there are \(\mathcal{N}=(2,0),(1,1)\), and \((1,0)\) LSTs in 6 dimensions. The decoupling limit is achieved by taking the string coupling constant to zero while keeping the intrinsic string scale finite. The resulting theory becomes a
non-local theory without gravity, and it has some stringy properties such as T-duality. In this sense, this theory is an intermediate theory between local quantum field theories and the usual string theories, so a deep understanding of the LSTs may help us understand both subjects. Additionally, since NS5-branes are known to be among the most challenging and interesting non-perturbative objects to study, a better understanding of these objects is desired. Besides this, there are also various motivations for studying LSTs, including the investigation of discrete light-cone quantization and holography in a linear dilaton background, among others. For a brief overview of LSTs in these contexts, we refer readers to [61; 62].
A systematic construction of LSTs based on the geometric phases of F-theory was proposed in [22]. This construction allows us to classify LSTs according to the types of base curves and the manner in which they are connected, generalizing the geometric classification of 6d SCFTs. In quantitative studies, the partition functions of A-type LSTs engineered by NS5-branes on A-type singularities have been obtained using the localization method applied to the worldvolume theories of 2d instantonic strings [63]. Similarly, the elliptic genera of LSTs on some D-type singularities have been computed based on the localization method [64]. Also, in [65], the elliptic genera of some LSTs were calculated using T-dualities and the modular ansatz, which is based on the modular properties of the elliptic genera. However, these computations have only been carried out in a few simple cases, as the localization method requires ADHM-like constructions of little string worldvolume theories that are currently unavailable in most cases. The modular ansatz method also has some limitations, including a rapid increase in the number of unknown coefficients as the string number increases and the need for precise knowledge of T-duality for LSTs. As such, it is important to find alternative methods for calculating the partition functions of LSTs.
In this paper, we propose a systematic method for constructing blowup equations for LSTs and provide examples of its application in the explicit calculation of partition functions. The blowup equations can be formulated by using two key ingredients: the effective prepotential evaluated on the \(\Omega\)-background, which can be obtained from the effective cubic and mixed Chern-Simons terms on the tensor branch, and a set of magnetic fluxes on the blown-up \(\mathbb{P}^{1}\). We will explain how to obtain these ingredients for arbitrary LSTs.
It turns out that the blowup equations for LSTs are rather different from those for 5d/6d SQFTs. Interestingly, the blowup equations for LSTs involve summation over magnetic fluxes for an auxiliary gauge field as well as those for the dynamical gauge fields. This auxiliary gauge field is a non-dynamical 2-form field used to cancel mixed anomalies of gauge symmetries. We will explain the precise role of this auxiliary gauge field in the next section. However, we note that the summation over auxiliary magnetic fluxes is not convergent in terms of Kahler parameters. This is essentially due to the absence of a quadratic kinetic term for the auxiliary gauge field. Despite this, we show that the partition functions of the LSTs calculated from other
methods satisfy the blowup equations that we have proposed when certain upper and lower bounds are placed on the power of Kahler parameters coupled to the auxiliary magnetic fluxes. Furthermore, we will show that the elliptic genera of the LSTs can be calculated by solving the blowup equations when combined with the modular ansatz. We illustrate our approach using the \(\hat{A}_{1}\) type IIA and IIB LSTs, the \(E_{8}\times E_{8}\) and \(SO(32)\) heterotic LSTs as rank-1 LSTs, and the \(SU(3)\) gauge theory with a symmetric and an antisymmetric hypermultiplet as a rank-2 LST1.
Footnote 1: Here we use the word “rank” for the number of dynamical parameters, which differs from the convention of [22].
The rest of this paper is organized as follows. In Section 2, we provide a review of the blowup equations and modular bootstrap approach for 6d SCFTs, and present the proposed blowup equations for LSTs. In Section 3, we demonstrate how our proposal works with several examples. In Section 4, we summarize our results and discuss some future directions. In Appendix A, we collect some facts about the elliptic functions used in the main context. In Appendix B, we present the computations of the elliptic genera of some LSTs using ADHM constructions of instantonic strings.
## 2 Blowup equations and Modular bootstrap
In this section, we propose the blowup equations for the partition functions of 6d little string theories on \(T^{2}\times\mathbb{R}^{4}\). We begin by reviewing the formulation of blowup equations for 6d SCFTs and then extend this approach to construct the blowup equations for 6d LSTs. We also describe how to calculate the elliptic genera of strings in LSTs using their modular properties and by solving the blowup equations.
The elliptic genus of strings in 6d LSTs on a torus \(T^{2}\) times \(\Omega\)-deformed \(\mathbb{R}^{4}\), which is a Witten index, is defined as
\[Z_{k}(\tau,\phi,m;\epsilon_{1,2})=\mathrm{Tr}_{RR}\left[(-1)^{F}e^{2\pi i(\tau H _{L}-\bar{\tau}H_{R})}e^{2\pi i\epsilon_{1}(J_{1}+J_{R})}e^{2\pi i\epsilon_{2} (J_{2}+J_{R})}e^{2\pi i\phi\cdot\Pi}e^{2\pi im\cdot F}\right], \tag{1}\]
where \(\tau\) is the complex structure of the torus, \(H_{L}\) and \(H_{R}\) are the left-moving and right-moving Hamiltonians on the 2d worldsheet, \(J_{1}\) and \(J_{2}\) are Cartan generators of the \(SO(4)\) Lorentz rotation on \(\mathbb{R}^{4}\), \(J_{R}\) is the Cartan for the \(SU(2)_{R}\) R-symmetry, \(\epsilon_{1}\) and \(\epsilon_{2}\) are the \(\Omega\)-deformation parameters, \(\Pi\) and \(F\) are gauge and flavor charges, and \(\phi\) and \(m\) collectively denote chemical potentials for gauge and flavor symmetries, respectively. The supercharge \(Q\) and its conjugate \(Q^{\dagger}\) commute with \(J_{1}+J_{R}\) and \(J_{2}+J_{R}\), and the right-moving Hamiltonian is given by \(2H_{R}=\{Q,Q^{\dagger}\}\). This elliptic genus counts the BPS spectrum of the 2d worldsheet SCFT living on strings with tensor charge \(k\); since the counted states are in the Ramond sector and annihilated by \(Q\) and \(Q^{\dagger}\), the index is independent of \(\bar{\tau}\).
### Blowup equations for 6d SCFTs
Let us review the blowup equations for 6d SCFTs, which are functional equations for the full partition function defined by
\[Z(\tau,\varphi,\phi,m;\epsilon_{1,2})=e^{-2\pi i\mathcal{E}}Z_{ \text{pert}}\times\sum_{k}e^{-k\varphi}Z_{k}\, \tag{2}\]
with \(Z_{k=0}=1\) where \(Z_{\text{pert}}\) is the perturbative contribution of the partition function on \(T^{2}\times\mathbb{R}^{4}\), \(\mathcal{E}\) is called the _effective prepotential_ and \(\varphi\) here denotes the tension of the self-dual strings with charge \(k\) parameterized by the scalar vacuum expectation values in the tensor multiplets. We note that the partition function (2) can be written as a factorized expression of
\[Z=e^{-2\pi i\mathcal{E}}Z_{\text{GV}}=e^{-2\pi i\mathcal{E}} \operatorname{PE}\Bigg{[}\sum_{j_{l},j_{r},\mathbf{d}}(-1)^{2(j_{l}+j_{r})}N^{ \mathbf{d}}_{j_{l},j_{r}}\frac{\sqrt{p_{1}p_{2}}\chi_{j_{l}}(\epsilon_{-}) \chi_{j_{r}}(\epsilon_{+})}{(1-p_{1})(1-p_{2})}e^{2\pi i\mathbf{d}\cdot\mathbf{ m}}\Bigg{]}, \tag{3}\]
where \(Z_{\text{GV}}\) is the refined Gopakumar-Vafa (GV) partition function [66; 67] counting the BPS degeneracies \(N^{\mathbf{d}}_{j_{l},j_{r}}\) on the \(\Omega\)-background. Here, \(\operatorname{PE}[f(\mu)]=\exp\bigl{[}\sum_{n=1}^{\infty}\frac{1}{n}f(\mu^{n})\bigr{]}\) is the Plethystic exponential, \(\mathbf{m}\) collectively denotes the chemical potentials \((\varphi,\phi,m)\) for tensor, gauge and flavor symmetries, \(\mathbf{d}\) is the electric charge of the BPS state with Lorentz spin \((j_{l},j_{r})=(\frac{J_{1}-J_{2}}{2},\frac{J_{1}+J_{2}}{2})\), \(\chi_{j}\) is the \(SU(2)\) character of the spin \(j\) representation, \(\epsilon_{\pm}=\frac{\epsilon_{1}\pm\epsilon_{2}}{2}\), and \(p_{1,2}=e^{2\pi i\epsilon_{1,2}}\).
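Since most of the computations below are organized as expansions of such Plethystic exponentials, it may be helpful to see how PE is evaluated in practice. The following is a minimal sketch of a truncated Plethystic exponential (our own illustration, not code from the paper); the function name and the toy single-letter index are ours.

```python
# Minimal sketch: Plethystic exponential PE[f] = exp( sum_{n>=1} f(mu^n)/n ),
# truncated at a finite order in the fugacities. Illustrative only.
import sympy as sp

q, x = sp.symbols('q x')

def plethystic_exp(f, fugacities, order):
    """Return PE[f] expanded up to (and excluding) q**order.
    `f` must have no constant term, so the sum over n truncates consistently."""
    log_pe = sum(f.subs({v: v**n for v in fugacities}, simultaneous=True) / n
                 for n in range(1, order + 1))
    return sp.expand(sp.series(sp.exp(log_pe), q, 0, order).removeO())

# Toy single-letter index of one bosonic mode with fugacity q*x:
# PE[q*x] reproduces the multi-particle generating function 1/(1 - q*x).
print(plethystic_exp(q * x, [q, x], 5))   # 1 + q*x + q**2*x**2 + q**3*x**3 + q**4*x**4
```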
The effective prepotential \(\mathcal{E}\) in the prefactor arises from the classical action and the regularization factors in the path integral. We can compute it by evaluating the low energy effective action on the \(\Omega\)-background in the presence of non-trivial background gauge fields. The low energy effective action of 6d SCFTs on a circle, which was computed in [68; 69; 70; 14], contains the Green-Schwarz couplings of the self-dual tensor fields \(B_{\alpha}\) to the 4-forms \(X_{4\alpha}\), of the schematic form \(\Omega^{\alpha\beta}B_{\alpha}\wedge X_{4\beta}\),
where the 4-form \(X_{4\alpha}\), which appears in the Bianchi identity \(dG_{\alpha}=X_{4\alpha}\), is given by
\[X_{4\alpha}=-\frac{1}{4}a_{\alpha}p_{1}(T_{6})+\frac{1}{4}\sum_{a}b_{a,\alpha} \operatorname{Tr}F_{a}^{2}+c_{\alpha}c_{2}(R)\,. \tag{6}\]
Here, \(p_{1}(T_{6})\) and \(c_{2}(R)\) are the first Pontryagin class of the 6d spacetime tangent bundle and the second Chern class of \(SU(2)_{R}\) R-symmetry bundle, respectively, and \(F_{a}\) is the field strength of the \(a\)-th symmetry group including all gauge and flavor symmetries. The coefficients \(a_{\alpha}\), \(b_{a,\alpha}\) and \(c_{\alpha}\) are determined from anomaly cancellation conditions.
The classical action provides non-trivial contributions to the effective prepotential. The characteristic classes in the Green-Schwarz terms can be replaced by the \(\Omega\)-deformation parameters as
\[p_{1}(T_{6})\mapsto-(\epsilon_{1}^{2}+\epsilon_{2}^{2})\,,\quad c_{2}(R)\mapsto \epsilon_{+}^{2}\,, \tag{7}\]
with \(\epsilon_{\pm}=\frac{\epsilon_{1}\pm\epsilon_{2}}{2}\). Using this, the tree-level effective prepotential on the \(\Omega\)-deformed background is evaluated as
\[\mathcal{E}_{\rm tree}=\frac{1}{\epsilon_{1}\epsilon_{2}}\biggl{[}-\frac{\tau }{2}\Omega^{\alpha\beta}\phi_{\alpha,0}\phi_{\beta,0}-\Omega^{\alpha\beta}\phi _{\alpha,0}\biggl{(}\frac{a_{\beta}}{4}(\epsilon_{1}^{2}+\epsilon_{2}^{2})+ \frac{b_{a,\beta}}{2}K_{a,ij}\phi_{a,i}\phi_{a,j}+c_{\beta}\epsilon_{+}^{2} \biggr{)}\biggr{]}\,, \tag{8}\]
where \(\phi_{\alpha,0}=i\varphi_{\alpha}/2\pi\) denotes the scalar VEV in the tensor multiplet, \(K_{a,ij}\) is the Killing form of the \(a\)-th symmetry group, and \(\phi_{a,i}\) are the holonomies for gauge and flavor symmetries.
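Since (8) is purely algebraic in the chemical potentials, it can be evaluated directly once the data \((\Omega^{\alpha\beta},a_{\beta},b_{a,\beta},c_{\beta},K_{a})\) of a theory are specified. The snippet below is a schematic numerical transcription of (8) that we add for illustration; the argument names and array layout are our own choices, not notation from the paper.

```python
# Schematic transcription of the tree-level effective prepotential, eq. (8).
# Inputs are plain numpy arrays; indices follow the notation in the text.
import numpy as np

def E_tree(tau, Omega, phi0, a, b, c, K, phi, eps1, eps2):
    """Omega[alpha,beta]: intersection form;   phi0[alpha]: tensor scalar VEVs
    a[beta], c[beta]    : GS coefficients;     b[a_idx, beta]: gauge/flavor GS coefficients
    K[a_idx]            : Killing form (matrix) of the a-th symmetry group
    phi[a_idx]          : holonomy vector of the a-th symmetry group"""
    eps_p = (eps1 + eps2) / 2
    quad = np.array([sum(b[ai, beta] / 2 * phi[ai] @ K[ai] @ phi[ai]
                         for ai in range(len(K))) for beta in range(len(a))])
    bracket = a / 4 * (eps1**2 + eps2**2) + quad + c * eps_p**2
    return (-tau / 2 * (phi0 @ Omega @ phi0) - phi0 @ Omega @ bracket) / (eps1 * eps2)

# Toy call with one tensor node and one U(1) symmetry (all numbers are arbitrary):
print(E_tree(0.5j, np.array([[-2.0]]), np.array([0.3]), np.array([2.0]),
             np.array([[1.0]]), np.array([1.0]), [np.eye(1)], [np.array([0.2])], 0.1, 0.2))
```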
There are also 1-loop contributions to the effective prepotential \(\mathcal{E}\) which can be calculated as follows. The 6d SCFT compactified on a circle leads to a 5d Kaluza-Klein (KK) theory. The low energy theory of this 5d KK theory on a Coulomb branch is characterized by topological Chern-Simons couplings which involve contributions from the Kaluza-Klein momentum states along the circle as well as zero momentum states. The 1-loop Chern-Simons terms at low energy can be written as
\[S_{\rm 1-loop}=\int\left(\frac{C_{ijk}}{24\pi^{2}}A_{i}\wedge F_{j}\wedge F_{k} -\frac{1}{48}C_{i}^{G}A_{i}\wedge p_{1}(T_{6})+\frac{1}{2}C_{i}^{R}A_{i} \wedge c_{2}(R)\right), \tag{9}\]
where the first term is the cubic Chern-Simons term for the gauge and the flavor symmetries, the second and third terms are the mixed gauge-gravity and gauge-\(SU(2)_{R}\) R-symmetry Chern-Simons terms, respectively.
The cubic Chern-Simons terms are determined by the cubic prepotential given by [3; 68; 69; 70]
\[\mathcal{F}_{\rm 1-loop}=\frac{1}{12}\sum_{n\in\mathbb{Z}}\left(\sum_{e\in \mathbf{R}}|n\tau+e\cdot\phi|^{3}-\sum_{f}\sum_{w\in\mathbf{w}_{f}}|n\tau+w \cdot\phi+m_{f}|^{3}\right)\!, \tag{10}\]
where \({\bf R}\) is the set of root vectors of the 6d gauge group, and \({\bf w}_{f}\) and \(m_{f}\) are the set of weights and the mass parameter, respectively, of the \(f\)-th charged hypermultiplet. The summation over all integer KK charges \(n\) can be performed by using the zeta function regularization2. The mixed Chern-Simons coefficients \(C_{i}^{G}\) and \(C_{i}^{R}\) can be computed in a similar manner. In this case, the contributions of the positive and negative KK charge states cancel each other, and we find
Footnote 2: For a 6d SCFT with twist, we can have fractional KK-momentum states. See [50].
\[C_{i}^{G}=-\partial_{i}\left(\sum_{e\in{\bf R}}|e\cdot\phi|-\sum_{f}\sum_{w\in {\bf w}_{f}}|w\cdot\phi+m_{f}|\right),\quad C_{i}^{R}=\frac{1}{2}\partial_{i} \sum_{e\in{\bf R}}|e\cdot\phi|\,. \tag{11}\]
The 1-loop contribution to the effective prepotential is given by the collection of these Chern-Simons contributions:
\[{\cal E}_{\rm 1-loop}=\frac{1}{\epsilon_{1}\epsilon_{2}}\bigg{(}{\cal F}_{ \rm 1-loop}+\frac{\epsilon_{1}^{2}+\epsilon_{2}^{2}}{48}C_{i}^{G}\phi_{i}+ \frac{\epsilon_{+}^{2}}{2}C_{i}^{R}\phi_{i}\bigg{)}\,. \tag{12}\]
The full effective prepotential of a 6d SCFT on a torus times \(\Omega\)-deformed \(\mathbb{R}^{4}\) is then given by the sum of the classical and the 1-loop contributions:
\[{\cal E}={\cal E}_{\rm tree}+{\cal E}_{\rm 1-loop}\,. \tag{13}\]
This explains how to compute the effective prepotentials \({\cal E}\) for arbitrary 6d SCFTs. We remark that when the 6d SCFT is compactified on a circle with automorphism twists, the intersection form \(\Omega^{\alpha\beta}\), the Killing form \(K_{a,ij}\), and the gauge and flavor algebra appearing in the effective prepotential should be replaced by those of the twisted theory. See [50] for explicit calculations of \({\cal E}\) for many interesting 6d SCFTs on \(T^{2}\times\mathbb{R}^{4}\) with/without twists.
Let us now explain how to formulate the blowup equations for 6d SCFTs. Consider a 6d SCFT on a blowup geometry \(\hat{\mathbb{C}}^{2}\) obtained by replacing the origin of the \(\mathbb{C}^{2}\) by a 2-sphere \(\mathbb{P}^{1}\). The partition function on this \(\hat{\mathbb{C}}^{2}\) background, which we will call \(\hat{Z}\), is factorized under the supersymmetric localization as a product of two contributions coming from the north and south pole of the \(\mathbb{P}^{1}\) at the origin [45; 46]. It turns out that the partition function is independent of the volume of the \(\mathbb{P}^{1}\), and thus blowing down the \(\mathbb{P}^{1}\) results in a smooth transition from \(\hat{Z}\) to the ordinary partition function \(Z\) on \(\mathbb{C}^{2}\) without the \(\mathbb{P}^{1}\) at the origin. More precisely, as indicated in [50], the partition function \(\hat{Z}\) defined on \(\hat{\mathbb{C}}^{2}\) is related after the blowdown transition to the ordinary partition function \(Z\) on \(\mathbb{C}^{2}\) by replacing \((-1)^{F}\) in (1) and (2) by \((-1)^{2J_{R}}\). This replacement of the fermion number operator can be implemented by shifting the \(\Omega\)-deformation parameter \(\epsilon_{1}\) for the angular momentum to \(\epsilon_{1}+1\). Thus one finds
\[\begin{split}\hat{Z}(\phi,m,\epsilon_{1},\epsilon_{2})& =e^{-2\pi i{\cal E}(\phi,m,\epsilon_{1},\epsilon_{2})}\hat{Z}_{ \rm GV}(\phi,m,\epsilon_{1},\epsilon_{2})\,,\\ \hat{Z}_{\rm GV}(\phi,m,\epsilon_{1},\epsilon_{2})&= Z_{\rm GV}(\phi,m,\epsilon_{1}+1,\epsilon_{2})\,.\end{split} \tag{14}\]
Note that \(\epsilon_{1}\) in the prefactor \(\mathcal{E}\) remains the same because shifting \(\epsilon_{1}\) in the GV-invariant does not affect the regularization factor.
Now, by identifying the blowup partition function \(\hat{Z}\) on \(\hat{\mathbb{C}}^{2}\), which takes a factorized expression under localization, with the partition function \(Z\) on the ordinary \(\mathbb{C}^{2}\) background, we can find a functional equation, the so-called blowup equation, as follows [45; 46; 47] (see also [49; 50; 51; 52; 53; 54; 55; 72; 73] for various generalizations):
\[\Lambda(m,\epsilon_{1},\epsilon_{2})\hat{Z}(\phi,m,\epsilon_{1},\epsilon_{2})= \sum_{\vec{n}}(-1)^{|\vec{n}|}\hat{Z}^{(N)}(\vec{n},\vec{B})\hat{Z}^{(S)}( \vec{n},\vec{B})\,, \tag{15}\]
where \(|\vec{n}|=\sum_{i}n_{i}\) denotes the sum of magnetic fluxes \(n_{i}\) for the dynamical tensors and gauge symmetry groups on \(\mathbb{P}^{1}\), \(\vec{B}\) denotes the background magnetic fluxes for the global symmetries, and \(\Lambda\) is a constant prefactor independent of the dynamical Kahler parameters \(\phi\). Here, \(\hat{Z}^{(N)}\) and \(\hat{Z}^{(S)}\) are localized partition functions near the north and south poles of the \(\mathbb{P}^{1}\). They can be obtained, since the local geometries can be approximated as \(\mathbb{C}^{2}\), from the ordinary partition function \(Z\) by shifting the chemical potentials as
\[\begin{split}\hat{Z}^{(N)}(\vec{n},\vec{B})&=\hat{ Z}(\phi_{i}+\epsilon_{1}n_{i},m_{j}+\epsilon_{1}B_{j},\epsilon_{1},\epsilon_{2}- \epsilon_{1})\,\\ \hat{Z}^{(S)}(\vec{n},\vec{B})&=\hat{Z}(\phi_{i}+ \epsilon_{2}n_{i},m_{j}+\epsilon_{2}B_{j},\epsilon_{1}-\epsilon_{2},\epsilon_{ 2})\,.\end{split} \tag{16}\]
Here \(\phi_{i}\) collectively denotes the scalar VEVs in the tensor and gauge multiplets.
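Schematically, the right-hand side of (15) is assembled from a single function by the argument shifts (16) and a sum over fluxes; in practice one then expands both sides in the Kahler parameters and solves for the BPS degeneracies order by order. The sketch below is our own structural illustration, not tied to any specific theory, with a generic callable standing in for the partition function.

```python
# Structural sketch (ours) of the right-hand side of the blowup equation (15)-(16).
# `Z` is any callable Z(phi, m, eps1, eps2); `fluxes` stands in for the set of
# consistent magnetic fluxes (n, B).
import numpy as np

def blowup_rhs(Z, phi, m, eps1, eps2, fluxes):
    total = 0.0
    for n, B in fluxes:
        sign = (-1) ** int(round(np.sum(n)))                       # (-1)^{|n|}
        ZN = Z(phi + eps1 * n, m + eps1 * B, eps1, eps2 - eps1)    # north pole, eq. (16)
        ZS = Z(phi + eps2 * n, m + eps2 * B, eps1 - eps2, eps2)    # south pole, eq. (16)
        total += sign * ZN * ZS
    return total

# Toy usage with a dummy "partition function" and a few unit fluxes:
Z_dummy = lambda phi, m, e1, e2: np.exp(-np.sum(phi**2) / (e1 * e2))
phi, m = np.array([0.3]), np.array([0.1])
fluxes = [(np.array([n]), np.array([0.5])) for n in (-1, 0, 1)]
print(blowup_rhs(Z_dummy, phi, m, 0.2, 0.3, fluxes))
```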
The prefactor \(\Lambda\) can be zero. In this case, the blowup equation is called a vanishing blowup equation. For example, when the 6d SCFT contains a half hypermultiplet that does not form a full hypermultiplet, the theory admits only vanishing blowup equations. We will not discuss vanishing blowup equations in this paper.
We cannot turn on arbitrary magnetic fluxes \((\vec{n},\vec{B})\) on the \(\mathbb{P}^{1}\), but they must be correctly quantized. The proper quantization conditions for the magnetic fluxes are [51]
\[(\vec{n},\vec{B})\cdot e\text{ is integral/half-integral}\;\Leftrightarrow\;2(j _{l}+j_{r})\text{ is odd/even}, \tag{17}\]
for all BPS particles of the gauge and flavor charge \(e\) and spin \((j_{l},j_{r})\). Among \((\vec{n},\vec{B})\) satisfying the quantization conditions, a set of special magnetic fluxes called _consistent magnetic fluxes_ can give a blowup equation (15) which the partition function \(Z\) obeys. We refer the reader to [50] for a detailed discussion on the process of identifying the consistent magnetic fluxes and solving the blowup equations to calculate the BPS spectra of 5d and 6d SQFTs.
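In practice, the condition (17) is checked state by state for a candidate flux. A minimal sketch (ours), with a purely hypothetical list of BPS states, could look as follows.

```python
# Minimal sketch (ours) of the flux quantization condition (17): for each BPS
# state of charge e and spin (jl, jr), the pairing (n,B).e must be an integer
# when 2(jl+jr) is odd, and lie in Z+1/2 when 2(jl+jr) is even.
from fractions import Fraction as F

def satisfies_quantization(flux, bps_states):
    for e, jl, jr in bps_states:
        pairing = sum(f * c for f, c in zip(flux, e))    # (n, B) . e
        odd = int(2 * (jl + jr)) % 2 == 1
        integral = pairing.denominator == 1
        half_integral = (pairing - F(1, 2)).denominator == 1
        if (odd and not integral) or (not odd and not half_integral):
            return False
    return True

# Hypothetical two-state spectrum: (charge vector e, jl, jr)
states = [((1, 1), F(0), F(1, 2)),    # 2(jl+jr) odd  -> needs integral pairing
          ((1, 0), F(0), F(0))]       # 2(jl+jr) even -> needs half-integral pairing
print(satisfies_quantization((F(1, 2), F(1, 2)), states))   # True
```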
It is more convenient to express the blowup equation (15) in terms of the GV-invariant as follows:
\[\Lambda(m,\epsilon_{1},\epsilon_{2})\hat{Z}_{\text{GV}}(\phi,m,\epsilon_{1}, \epsilon_{2})=\sum_{\vec{n}}(-1)^{|\vec{n}|}e^{-2\pi iV}\hat{Z}^{(N)}_{\text{ GV}}(\vec{n},\vec{B})\hat{Z}^{(S)}_{\text{GV}}(\vec{n},\vec{B}), \tag{18}\]
where
\[\begin{split} V&=\mathcal{E}(\phi_{i},m_{j},\epsilon_{1},\epsilon_{2})-\mathcal{E}(\phi_{i}+\epsilon_{1}n_{i},m_{j}+\epsilon_{1}B_{j}, \epsilon_{1},\epsilon_{2}-\epsilon_{1})\\ &\quad-\mathcal{E}(\phi_{i}+\epsilon_{2}n_{i},m_{j}+\epsilon_{2} B_{j},\epsilon_{1}-\epsilon_{2},\epsilon_{2})\,.\end{split} \tag{19}\]
For a 6d SCFT, we can split the GV-invariant part as
\[Z_{\text{GV}}=Z_{\text{pert}}\times Z_{\text{str}}=Z_{\text{pert}}\times\sum_{ \vec{k}}e^{-2\pi i\Omega^{\alpha\beta}k_{\alpha}\phi_{\beta,0}}Z_{\vec{k}}\,. \tag{20}\]
Here, \(Z_{\text{pert}}\) is the 1-loop perturbative contributions from the tensor, vector and hypermultiplets. \(Z_{\text{str}}\) is the self-dual string contributions which are given by a summation over the elliptic genera \(Z_{\vec{k}}\) of the worldsheet SCFTs on self-dual strings with tensor charge \(\vec{k}\equiv(k_{1},k_{2},\cdots,k_{N})\) where \(k_{\alpha}\in\mathbb{Z}_{\geq 0}\) for all \(\alpha\). The explicit form of the 1-loop contributions is given by
\[Z_{\text{pert}}=\text{PE}\left[I_{\text{tensor}}+I_{\text{vector}}+I_{\text{ hyper}}\right], \tag{21}\]
where the single-letter contributions \(I_{\text{tensor}}\), \(I_{\text{vector}}\), \(I_{\text{hyper}}\) of tensor, vector and hypermultiplet are
\[I_{\text{tensor}} =-\frac{p_{1}+p_{2}}{(1-p_{1})(1-p_{2})}\frac{1}{1-q}\,, \tag{22}\] \[I_{\text{vector}} =-\frac{1+p_{1}p_{2}}{(1-p_{1})(1-p_{2})}\frac{1}{2}\sum_{n\in \mathbb{Z}}\sum_{\rho\in\mathbf{R}}e^{2\pi i|n\tau+\rho\cdot\phi|}\,,\] (23) \[I_{\text{hyper}} =\frac{\sqrt{p_{1}p_{2}}}{(1-p_{1})(1-p_{2})}\sum_{n\in\mathbb{Z} }\sum_{f}\sum_{w\in\mathbf{w}_{f}}e^{2\pi i|n\tau+w\cdot\phi+m_{f}|}\,, \tag{24}\]
where \(q=e^{2\pi i\tau}\).
For a given 6d SCFT, we can systematically compute the effective prepotential \(\mathcal{E}\) and find the consistent magnetic fluxes \((\vec{n},\vec{B})\), and thus formulate the blowup equations as in (18). We then expand the blowup equations in terms of the Kahler parameters \(e^{2\pi i\mathbf{d}\cdot\mathbf{m}}\) and solve them iteratively to calculate the BPS degeneracies \(N^{\mathbf{d}}_{j_{l},j_{r}}\) of the 6d theory.
### Blowup equations for LSTs
We will now extend the blowup formalism for 6d SCFTs to 6d LSTs. Little string theories are characterized by a collection of 2-form tensor fields whose intersection pairing, represented by \(\Omega^{\alpha\beta}\), is negative semi-definite and has a single null direction. This means that there exists a unit vector \(\ell_{\alpha}\) in the string charge lattice such that \(\Omega^{\alpha\beta}\ell_{\beta}=0\). As a result, the tensor field corresponding to the null direction \(\ell_{\alpha}\) in the string charge lattice is non-dynamical. We will refer to the strings with tensor charges proportional to \(\ell_{\alpha}\) as the full winding strings. The tension of these full winding strings,
represented by \(T\sim M_{\rm string}^{2}\), is always finite, and it defines the intrinsic scale of the LST. The full winding strings in LSTs are therefore distinguished from the self-dual strings in 6d SCFTs, which admit a tensionless limit.
We are interested in the elliptic genera of 2d worldsheet SCFTs of LSTs on tensor branch. We can write the contributions from the dynamical strings to the partition function as a collection of the elliptic genera of the strings as follows:
\[Z_{\rm str}=\sum_{\vec{k}}v_{1}^{k_{1}}\cdots v_{N}^{k_{N}}Z_{ \vec{k}}\, \tag{25}\]
where
\[v_{\alpha}\equiv e^{-2\pi i\Omega^{\alpha\beta}\phi_{\beta,0}} \,\quad v_{N}\equiv e^{2\pi i(w-\Omega^{N\beta}\phi_{\beta,0})}\, \tag{26}\]
with \(\alpha=1,\cdots,N-1\) and \(\beta=1,\cdots,N\). Here, the scalar VEVs of the \(N-1\) tensor multiplets \(\phi_{\beta,0}\) and the little string tension \(w\sim T\) play the role of chemical potentials for the string charges. The full winding string states are represented by the fugacity \(e^{2\pi iw}\), but are independent of other tensor scalar VEVs \(\phi_{\beta,0}\).
There is a natural limit \(w\to i\infty\) while keeping \(\phi_{\alpha,0}\) finite. In this limit the full winding string states are truncated and the LST is reduced to a 6d SCFT with \(N-1\) tensor multiplets. From this point of view, the LST can be considered as an affine extension of the 6d SCFT by attaching an affine tensor node to the tensor quiver diagram. This leads to the intersection form \(\Omega^{\alpha\beta}\) with \(N\) tensor nodes of the LST [22]. The partition function (25) under this 6d SCFT limit becomes that of the self-dual strings in the 6d SCFT and it satisfies the blowup equation discussed in the previous subsection.
We now proceed to construct the blowup equations for the partition function of the LSTs and use them to compute their elliptic genera. One feature distinguishing the LSTs from 6d SCFTs is that the mixed gauge-global anomalies in LSTs are not completely canceled by the standard Green-Schwarz mechanism. The anomaly 8-form for the mixed anomalies should take a factorized form as
\[I_{8}^{\rm mixed}=Y_{4}\wedge X_{4,0}\, \tag{27}\]
where the first factor \(Y_{4}\) is a 4-form given in terms of the second Chern classes for the dynamical gauge fields
\[Y_{4}=\frac{1}{4}\sum_{\alpha=1}^{N}\ell_{\alpha}{\rm Tr}F_{G_{ \alpha}}^{2}\, \tag{28}\]
and the second factor \(X_{4,0}\) is a 4-form independent of the dynamical gauge field which can be written as
\[X_{4,0}=-\frac{1}{4}a_{0}p_{1}(T_{6})+\frac{1}{4}\sum_{a}b_{a,0} \,{\rm Tr}\,F_{a}^{2}+c_{0}c_{2}(R)\,. \tag{29}\]
Here, \(F_{G_{\alpha}}\) and \(F_{a}\) are the field strengths for the gauge group \(G_{\alpha}\) and for the \(a\)-th flavor group, respectively. We normalize the instanton number as \(k_{\alpha}=\frac{1}{4}{\rm Tr}F_{G_{\alpha}}^{2}\in\mathbb{Z}\) when integrated over a 4-manifold, and it parametrizes the \(\alpha\)-th direction in the string charge lattice. The coefficients \(a_{0}\), \(b_{a,0}\), and \(c_{0}\) are fixed by the 1-loop and the Green-Schwarz anomaly calculations. When the theory has an F-theory construction on an elliptic Calabi-Yau threefold, we can identify them as
\[a_{0}=K\cdot\Sigma_{\rm LST}\,\quad b_{a,0}=\Sigma_{F_{a}}\cdot\Sigma_{ \rm LST}\,\quad c_{0}=\ell_{\alpha}h_{\alpha}^{\vee}\,, \tag{30}\]
where \(K\) is the canonical class of the base \(B\) in the CY 3-fold, \(\Sigma_{\rm LST}\) is the curve class associated to the little string scale satisfying \(\Omega\cdot\Sigma_{\rm LST}=0\), \(\Sigma_{F_{a}}\) is the curve class supporting the 7-brane with \(a\)-th flavor symmetry, \(h_{\alpha}^{\vee}\) is the dual Coxeter number of the group \(G_{\alpha}\), and the dot \(\cdot\) between two curve classes stands for the intersection number of the curves.
The non-vanishing mixed anomalies are inconsistent with the dynamical gauge symmetries in the presence of background gauge fields for the global symmetries. Therefore, there must be a regularization scheme that cancels these mixed gauge anomalies while preserving the dynamical gauge symmetries, even in the presence of non-trivial background fields for the global symmetries.
There are some choices of regularization scheme. For instance, in [74], the mixed gauge anomalies were canceled by adding Green-Schwarz counterterms involving a 2-form background gauge field coupled to the 2-form instanton currents \(J\sim\star{\rm Tr}F_{G}\wedge F_{G}\). Here, the 2-form background gauge field transforms under the background global symmetry transformation and also under the local Lorentz transformation. This results in a continuous 2-group global symmetry.
In this paper we will introduce another counterterm which leads to the consistent blowup equations for LSTs as we will explain below. We shall introduce the counterterm defined as
\[\Delta S=-\int B_{0}\wedge X_{4,0}\, \tag{31}\]
with a 2-form gauge field \(B_{0}\) which transforms under the dynamical gauge transformation parametrized by \(\Lambda_{G}\) as
\[B_{0}\ \rightarrow\ B_{0}+\frac{1}{4}\ell_{\alpha}\,{\rm Tr} \Lambda_{G_{\alpha}}F_{G_{\alpha}}. \tag{32}\]
This modifies the Bianchi identity for the 3-form field strength \(H_{0}=dB_{0}\) as
\[dH_{0}=Y_{4}. \tag{33}\]
Then the gauge variation of the counterterm cancels the gauge anomalies arising from \(I_{8}^{\rm mixed}\) in the presence of the background fields for the global symmetries. Let us emphasize that the 2-form field \(B_{0}\) here is not a fixed background field since it
transforms non-trivially under the dynamical gauge transformation. We need to integrate this field in the path integral although it has no kinetic term in the action. This 2-form field can be considered as a kind of Lagrange multiplier introduced to cancel the mixed gauge-global anomalies.
We are now ready to formulate the blowup equations for the LSTs. We first need to prepare the effective prepotential on the \(\Omega\)-background. The tree-level action for a LST is almost the same as that of 6d SCFTs, but now there are additional contributions from the gauge kinetic terms coupled to the little string tension \(w\) and the counterterm (31). We propose that the tree-level effective prepotential for a LST is
\[\mathcal{E}^{\text{LST}}_{\text{tree}} =\mathcal{E}^{\text{SCFT}}_{\text{tree}}+\mathcal{E}^{(0)}_{\text {tree}}\;, \tag{34}\] \[\mathcal{E}^{(0)}_{\text{tree}} =\frac{1}{\epsilon_{1}\epsilon_{2}}\bigg{[}\frac{w}{2}\ell_{ \alpha}K_{\alpha,ij}\phi_{\alpha,i}\phi_{\alpha,j}-\phi_{0,0}\bigg{(}\frac{a_{ 0}}{4}(\epsilon_{1}^{2}+\epsilon_{2}^{2})+\frac{b_{a,0}}{2}K_{a,ij}m_{a,i}m_{ a,j}+c_{0}\epsilon_{+}^{2}\bigg{)}\bigg{]}\,.\]
Here, we introduced an auxiliary scalar VEV \(\phi_{0,0}\) to take into account the magnetic flux of the 2-form \(B_{0}\) in (31) on the blowup background, which will be explained in more detail below. The first term with \(w\) in \(\mathcal{E}^{(0)}_{\text{tree}}\) is the gauge kinetic term evaluated on the \(\Omega\)-background and the second term with \(\phi_{0,0}\) is the contribution from the counterterm (31). The 1-loop contributions to the effective prepotential \(\mathcal{E}_{\text{1-loop}}\) and to the GV-invariant \(Z_{\text{pert}}\) can be calculated in the same way as those for 6d SCFTs, presented in (12) and (21), respectively, in the previous subsection.
Now we claim that the partition function \(Z=e^{-2\pi i\mathcal{E}}\times Z_{\text{GV}}\) of a little string theory satisfies the blowup equation
\[\begin{split}\Lambda(m;\epsilon_{1},\epsilon_{2})\hat{Z}(\phi,m; \epsilon_{1},\epsilon_{2})&=\sum_{\vec{n}}(-1)^{|\vec{n}|}\hat{Z }^{(N)}(\vec{n},\vec{B})\hat{Z}^{(S)}(\vec{n},\vec{B})\\ \Leftrightarrow\Lambda(m,\epsilon_{1},\epsilon_{2})\hat{Z}_{ \text{GV}}(\phi,m,\epsilon_{1},\epsilon_{2})&=\sum_{\vec{n}}(-1 )^{|\vec{n}|}e^{-2\pi iV}\hat{Z}^{(N)}_{\text{GV}}(\vec{n},\vec{B})\hat{Z}^{(S )}_{\text{GV}}(\vec{n},\vec{B})\,,\end{split} \tag{35}\]
with a set of consistent magnetic fluxes \(\vec{n},\vec{B}\) satisfying the quantization in (17). \(\hat{Z}\) is again the partition function with the \(\epsilon_{1}\) shift given in (14), and \(\hat{Z}^{(N)}\) and \(\hat{Z}^{(S)}\) are the local partition functions near the north pole and south pole of the \(\mathbb{P}^{1}\) defined by (16).
There are a few remarks for the blowup equations for LSTs. Firstly, the magnetic fluxes \(\vec{n}\) on the blownup \(\mathbb{P}^{1}\) in the blowup equation involve not only the magnetic fluxes for the dynamical tensor and gauge symmetry groups, but also the magnetic flux for the 2-form gauge field \(B_{0}\) that is added to cancel the mixed gauge-global anomalies. As explained above, the 2-form field \(B_{0}\) behaves like a Lagrange multiplier and we should sum over its magnetic fluxes on the blowup background. Otherwise, it will not be possible to activate background fields for the symmetries that have mixed anomalies with gauge symmetries. In the blowup equation, turning on a
flux \(n_{0,0}\) for \(B_{0}\) is implemented by a shift of the auxiliary scalar field in the form \(\phi_{0,0}\to\phi_{0,0}+n_{0,0}\epsilon_{1,2}\) with \(n_{0,0}\in\mathbb{Z}\). The summation of these auxiliary magnetic fluxes is crucial to construct a consistent blowup equation for LSTs that have mixed gauge anomalies. We note that the auxiliary field \(\phi_{0,0}\) only serves the purpose of activating the fluxes \(n_{0,0}\epsilon_{1,2}\) and ultimately disappears in the blowup equation through the use of the combination \(V=\mathcal{E}^{(N)}+\mathcal{E}^{(S)}-\mathcal{E}\).
Secondly, the sum over the auxiliary magnetic flux \(n_{0,0}\) on \(\mathbb{P}^{1}\) in the blowup equation is not convergent. It turns out that the \(n_{0,0}\) dependent terms appear only in the exponent \(V\) in the blowup equation and they are all linear in \(n_{0,0}\). Namely, the right side of the blowup equation contains a sum over \(n_{0,0}\) of the form
\[\sum_{n_{0,0}\in\mathbb{Z}}e^{-n_{0,0}f(m;\epsilon_{1,2})+\cdots}\times\cdots\, \tag{36}\]
with a function \(f(m;\epsilon_{1,2})\) independent of the dynamical Kahler parameters \(\phi\). This sum is obviously divergent, so it seems that the blowup equation is not well-defined.
Nevertheless, we assert that the blowup equation of a LST is still valid in the following sense. As we will demonstrate explicitly with examples in the next section, the LST partition functions satisfy the blowup equations if we first expand them in terms of Kahler parameters and then sum over the auxiliary magnetic fluxes \(n_{0,0}\). Surprisingly, if one sums up the fluxes \(|n_{0,0}|\leq n_{\rm max}\), one finds that every order in the Kahler parameter expansion is exactly canceled, leaving a few terms coming from the maximum flux \(|n_{0,0}|=n_{\rm max}\). These remaining terms are also canceled iteratively by new terms appearing when the maximum flux is increased as \(n_{\rm max}\to n_{\rm max}+1\to n_{\rm max}+2\to n_{\rm max}+3\), and so on. Hence, if sufficiently large fluxes are summed up, all terms arising from smaller \(n_{0,0}\) fluxes are canceled out. This is how the blowup equation works for LSTs and is rather different from the structure of the typical blowup equations for 4d/5d/6d SCFTs.
In particular, without the sum over the auxiliary flux \(n_{0,0}\), the above blowup equation does not hold at all. This is related to the fact that the LSTs possess mixed gauge anomalies in the presence of background fields such as \(\vec{B}\) and \(\epsilon_{1,2}\) for the global and Lorentz symmetries which we need to activate to formulate a consistent blowup equation and that we need to introduce the 2-form \(B_{0}\) and the counterterm (31) associated to the auxiliary flux to cancel such mixed gauge anomalies. We have checked this for a number of examples that we will discuss in detail in the next section. Therefore, we propose that the auxiliary magnetic flux \(n_{0,0}\) must be taken into account in the construction of the blowup equations for LSTs. The counterterm (31) with the auxiliary 2-form field \(B_{0}\) is required in this sense.
Importantly, we can use the blowup equations, combining them with the modular ansatz, to determine the elliptic genera of 2d worldsheet SCFTs on strings in LSTs. To show this, let us now illustrate how to bootstrap the BPS spectra of LSTs using the blowup equations and the modular properties of the elliptic genera.
### Bootstrapping LSTs
We first review how to formulate a general ansatz for the elliptic genus of BPS strings in 6d theories by exploiting its properties under the modular transformation. The modular property of the elliptic genus defined in (2.1) is governed by the 't Hooft anomalies of the worldsheet SCFT. Under the modular transformation, the elliptic genus transforms as [75],
\[Z_{\vec{k}}\bigg{(}\frac{a\tau+b}{c\tau+d},\frac{z}{c\tau+d}\bigg{)}=\epsilon(a,b,c,d)^{c_{R}-c_{L}}\exp\biggl{(}\frac{2\pi ic}{c\tau+d}f(z)\biggr{)}Z_{\vec{k }}(\tau,z)\,, \tag{2.37}\]
where \((\begin{smallmatrix}a&b\\ c&d\end{smallmatrix})\in\mathrm{SL}(2,\mathbb{Z})\), \(\epsilon(a,b,c,d)\) is a phase factor, \(c_{L,R}\) are chiral central charges of the worldsheet SCFT, and \(z\) collectively denotes chemical potentials for the symmetries. The _modular anomaly_ \(f(z)\) is closely related to the anomaly polynomial \(I_{4}\) of the 2d SCFT [76; 77]. In fact, it agrees with the supersymmetric Casimir energy of the 2d SCFT defined in [78], which is given by an equivariant integral of the anomaly polynomial \(I_{4}\),
\[f(z)=\int_{\mathrm{eq}}I_{4}. \tag{2.38}\]
The equivariant integration here can be implemented by the replacement rules for the characteristic classes as
\[p_{1}(T_{2})\mapsto 0\,,\quad c_{2}(l)\mapsto\epsilon_{-}^{2}\,,\quad c_{2}(r),c_{2}(R)\mapsto\epsilon_{+}^{2}\,,\quad\frac{1}{2}\operatorname{Tr}F_{a}^{2} \mapsto K_{a,ij}\phi_{a,i}\phi_{a,j}\,. \tag{2.39}\]
Knowing the anomaly polynomial of the worldsheet SCFT and the modular transformation in (2.37), we can formulate an ansatz for the elliptic genus in terms of elliptic functions.
The anomaly polynomial of the 2d SCFTs living on self-dual strings in 6d SCFTs has been calculated in [79; 80] by using anomaly inflow mechanism. For a 6d SCFT with an intersection form \(\Omega^{\alpha\beta}_{\mathrm{cft}}\), the anomaly polynomial of the worldsheet CFT on a self-dual string with charge \(\vec{k}=\{k_{\alpha}\}\) is
\[I_{4}=\Omega^{\alpha\beta}_{\mathrm{cft}}k_{\alpha}\bigg{(}X_{4\beta}+\frac{1} {2}k_{\beta}\chi_{4}(T_{4})\bigg{)}\,, \tag{2.40}\]
where \(X_{4\beta}\) is a 4-form defined in (2.6), \(\chi_{4}(T_{4})\) is the Euler class of the transverse \(SO(4)=SU(2)_{l}\times SU(2)_{r}\) Lorentz rotation which can be written as \(\chi_{4}(T_{4})=c_{2}(l)-c_{2}(r)\) in terms of the second Chern classes for the \(SU(2)_{l}\times SU(2)_{r}\) bundle. The first Pontryagin class \(p_{1}(T_{6})\) of the 6d tangent bundle in \(X_{4\alpha}\) is decomposed as \(p_{1}(T_{6})=p_{1}(T_{2})-2c_{2}(l)-2c_{2}(r)\).
Similarly, the anomaly polynomials of 2d SCFTs on BPS strings in a number of LSTs were calculated in [65]. We will generalize this computation and provide a universal expression for the anomaly polynomials of the 2d SCFTs on strings in LSTs.
The 't Hooft anomalies on the 2d worldsheet of the self-dual strings in the 6d SCFTs embedded in a LST should be the same as (40). However, there is another contribution to the 't Hooft anomalies coming from the full winding strings in the LST. This extra contribution can be captured by integrating the mixed gauge anomaly 8-form \(I_{8}^{\rm mixed}\) on the full winding string background [74]. Let us define the number of full winding strings \(\kappa\in\mathbb{Z}\) as a maximal integer satisfying \(k_{\alpha}-\kappa\ell_{\alpha}\geq 0\) for all \(\alpha\). We propose that the anomaly polynomial of the 2d SCFT on strings in a little string theory is
\[I_{4}=\Omega^{\alpha\beta}k_{\alpha}\bigg{(}X_{4\beta}+\frac{1} {2}k_{\beta}\chi_{4}(T_{4})\bigg{)}+\kappa X_{4,0}\,. \tag{41}\]
Here, \(\Omega^{\alpha\beta}\) is the Dirac pairing of the \(N\)-dimensional string charge lattice and \(X_{4,0}\) is defined in (29). We can use this anomaly polynomial to compute the modular anomaly \(f(z)\) in (37) for the worldsheet SCFTs for strings with a charge \(\vec{k}\).
A function which transforms as (37) under the modular transformation is known as a _Jacobi form_3. In the language of Jacobi forms, (37) implies that the elliptic genus has weight 0 and that its indices are fixed by the 't Hooft anomaly coefficients of the global symmetries of the worldsheet theory. To write down an ansatz for the elliptic genus of \(\vec{k}\) strings using the Jacobi forms, we need to use the modular property in (37) and the pole structure of \(Z_{\vec{k}}\).
Footnote 3: We summarize the definition, terminologies, and properties of Jacobi forms in Appendix A
We propose a _modular ansatz_ for the elliptic genus for the strings in LSTs, which will be of the form
\[Z_{\vec{k}}=\frac{1}{\eta(\tau)^{2|c_{L}-c_{R}|}}\frac{\Phi_{\vec{k}}(\tau,\epsilon_{\pm},\phi,m)}{\prod_{\alpha}\mathcal{D}_{\alpha}^{\rm cm}(\tau,\epsilon_{\pm})\mathcal{D}_{\alpha}^{\mathfrak{g}_{\alpha}}(\tau,\epsilon_{\pm},\phi)}\bigg{(}\frac{1}{\mathcal{D}_{\kappa}^{\rm bulk}(\tau,\epsilon_{\pm},m_{0})}\bigg{)}\,, \tag{42}\]
where \(\phi\) and \(m\) collectively denote chemical potentials for gauge and flavor symmetries, respectively. This is an extension of the ansatzes introduced in [65; 76; 77; 81; 82]. Here the denominator factors in the ansatz will be fixed by pole structure expected for the moduli space of strings, which we will explain now.
Firstly, the factor \(\eta(\tau)^{2|c_{L}-c_{R}|}\), where \(\eta(\tau)\) is the Dedekind eta function defined in Appendix A, fixes the leading behavior of the elliptic genus in the \(q\)-expansion, which is determined by the vacuum Casimir energy of the 2d SCFT.
The second factor of the form
\[\mathcal{D}_{\alpha}^{\rm cm}(\tau,\epsilon_{\pm})=\prod_{s=1}^{ k_{\alpha}}\varphi_{-1,1/2}(\tau,s\epsilon_{1})\varphi_{-1,1/2}(\tau,s\epsilon_{2 })\,, \tag{43}\]
is the contribution coming from the transverse motions of the strings [83; 84], where \(\varphi_{-1,1/2}\) is the weight \(-1\) and index \(1/2\) Jacobi form
\[\varphi_{-1,1/2}(\tau,z)=i\frac{\theta_{1}(\tau,z)}{\eta(\tau)^{3 }}\, \tag{44}\]
with the Jacobi theta function \(\theta_{1}(\tau,z)\) given in (A.11). Notice that the leading order of \(\varphi_{-1,1/2}(\tau,z)\) in \(q\)-expansion is \(\mathcal{O}(1)\), so that this does not change the leading behavior of the elliptic genus in \(q\)-expansion.
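This leading behavior can be checked directly from a truncated \(q\)-series. The snippet below is a small sketch (ours), assuming the standard product representations \(\theta_{1}(\tau,z)=-iq^{1/8}(y^{1/2}-y^{-1/2})\prod_{k\geq 1}(1-q^{k})(1-q^{k}y)(1-q^{k}y^{-1})\) and \(\eta(\tau)=q^{1/24}\prod_{k\geq 1}(1-q^{k})\) with \(q=e^{2\pi i\tau}\), \(y=e^{2\pi iz}\); the conventions of Appendix A may differ by overall phases, which do not affect the leading order.

```python
# Sketch (ours): q-expansion of phi_{-1,1/2}(tau,z) = i*theta1(tau,z)/eta(tau)^3.
# The fractional powers q^{1/8} and (q^{1/24})^3 cancel, so the result is a power
# series in q whose leading term is of order q^0, as stated in the text.
import sympy as sp

q, y = sp.symbols('q y')   # q = e^{2 pi i tau}, y = e^{2 pi i z}
N = 4                      # truncation order of the infinite products

theta1_red = -sp.I * (sp.sqrt(y) - 1 / sp.sqrt(y)) * sp.Mul(
    *[(1 - q**k) * (1 - q**k * y) * (1 - q**k / y) for k in range(1, N + 1)])
eta_red = sp.Mul(*[(1 - q**k) for k in range(1, N + 1)])

phi_m1_half = sp.series(sp.I * theta1_red / eta_red**3, q, 0, 3).removeO()
print(sp.collect(sp.expand(phi_m1_half), q))
# leading term: sqrt(y) - 1/sqrt(y), i.e. O(q^0)
```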
The third factor, \(\mathcal{D}_{\alpha}^{\mathfrak{g}_{\alpha}}\), arises from the bosonic zero modes of instantons for the 6d gauge algebra \(\mathfrak{g}_{\alpha}\). For instance, when the gauge algebra is \(\mathfrak{g}_{\alpha}=\mathfrak{su}(2)\), we have
\[\mathcal{D}_{\alpha}^{A_{1}}=\prod_{s=1}^{k_{\alpha}}\prod_{l=0}^{s-1}\varphi_ {-1,1/2}((s+1)\epsilon_{+}+(s-1-2l)\epsilon_{-}\pm e\cdot\phi)\,, \tag{45}\]
where \(e\) is the positive root of \(\mathfrak{su}(2)\) and \(\varphi(x\pm y)\equiv\varphi(\tau,x+y)\varphi(\tau,x-y)\). For a general gauge algebra \(\mathfrak{g}_{\alpha}\), we use an embedding \(\mathfrak{su}(2)\subset\mathfrak{g}_{\alpha}\) which maps three \(SU(2)\) generators to generators \(T_{e}^{a=1,2,3}\) associated with a positive root \(e\) of \(\mathfrak{g}_{\alpha}\)[85; 86]. This embedding gives the denominator factor \(\mathcal{D}_{\alpha}^{\mathfrak{g}_{\alpha}}\) as [65]
\[\mathcal{D}_{\alpha}^{\mathfrak{g}_{\alpha}}=\prod_{e\in\mathbf{R}_{\mathfrak{ g}_{\alpha}}^{+}}\mathcal{D}_{\lfloor k_{\alpha}/\xi_{e}\rfloor,e}^{A_{1}}\,, \tag{46}\]
where \(\mathbf{R}_{\mathfrak{g}_{\alpha}}^{+}\) is the set of positive roots of \(\mathfrak{g}_{\alpha}\), \(\mathcal{D}_{k,e}^{A_{1}}\) is (45) by replacing an \(\mathfrak{su}(2)\) positive root with a given root of \(\mathfrak{g}_{\alpha}\), \(\lfloor\cdot\rfloor\) is the floor function and
\[\operatorname{tr}\!\left(T_{e}^{a}T_{e}^{b}\right)=\xi_{e}\delta^{ab}\,,\quad \xi_{e}=\left\{\begin{array}{ll}1&\text{ if $e$ is a long root or $\mathfrak{g}=A_{n},D_{n},E_{n}$}\\ 2&\text{ if $e$ is a short root and $\mathfrak{g}=B_{n},C_{n},F_{4}$}\\ 3&\text{ if $e$ is a short root and $\mathfrak{g}=G_{2}$}\end{array}\right.\,. \tag{47}\]
Lastly, some LSTs have a denominator factor \(\mathcal{D}_{\kappa}^{\text{bulk}}\), which depends on the winding number \(\kappa\), as mentioned in [65]. This factor is associated to certain full winding string states that are decoupled from the 6d LST, which means that these states do not possess dynamical gauge charges, and presumably escape to the bulk spacetime in which the LST is embedded. In the examples we present in section 3.2, for example, the LSTs can be embedded in 10d heterotic string theories and the modular ansatz for strings in these theories includes a denominator factor of the form
\[\mathcal{D}_{\kappa}^{\text{bulk}}=\prod_{s=1}^{\kappa}\varphi_{-1,1/2}(\pm s \lambda(m_{0}-\epsilon_{+}))\, \tag{48}\]
where \(m_{0}\) is the chemical potential for the \(SU(2)_{m}\subset SU(2)_{R}\times SU(2)_{m}\) rotational symmetry transverse to the 6d spacetime of the LSTs. These are chosen to match the ADHM constructions for the strings. Specifically, the value of \(\lambda\) is set to 1 for \(SO(32)\) heterotic LST in section 3.2.2 and to 2 for \(E_{8}\times E_{8}\) heterotic LST in section 3.2.1. However, we find that it is possible to select alternative denominator factors that do not alter the dynamical string spectrum, while leading to different full winding string states that decouple from the LST. For example, as we will show in section 3.2.1, the same string spectrum for the \(E_{8}\times E_{8}\) heterotic LST can be obtained by using a
different denominator factor with \(\lambda=1\). We postulate that, for a generic LST, it is always possible to choose the factor \(\mathcal{D}_{\kappa}^{\rm bulk}\) to be either trivial or in the form specified in (48) with a certain \(\lambda\in\mathbb{Z}\), provided that a modular ansatz of the form (42) can be established. This modular ansatz will be consistent with the dynamical string spectrum of the LST, though the decoupled states that are independent of dynamical gauge symmetries may vary.
After factoring out the denominator factors, the numerator \(\Phi_{\vec{k}}\) in the modular ansatz (42) starts at \(q^{0}\) order in \(q\)-expansion and becomes a Weyl-invariant Jacobi form whose weight and indices are fixed by the 't Hooft anomalies of a given theory and the structure of the denominator in the modular ansatz. For every simple Lie algebra, the Weyl-invariant Jacobi forms with given weight and index can be written as a linear combination of finite generators [87; 88]. Thus, the numerator is given by the finite linear combination of the generators \(\varphi_{k_{jl},m_{jl}}(z_{l})\) of the Weyl invariant Jacobi forms with weight \(k_{jl}\) and index \(m_{jl}\) for \(l\)'th symmetry algebra \(\mathfrak{g}_{l}\):
\[\Phi_{\vec{k}}=\sum_{i}C_{i}E_{4}^{a_{4}^{(i)}}E_{6}^{a_{6}^{(i)}}\prod_{j,l} \varphi_{k_{jl}m_{jl}}(z_{l})^{b_{jl}^{(i)}}\,, \tag{49}\]
where \(z_{l}\) denotes a chemical potential for \(l\)-th symmetry, \(C_{i}\in\mathbb{C}\), and \(E_{4}\) and \(E_{6}\) denote the Eisenstein series of weight 4 and 6, respectively. The exponents \(a_{4,6}^{(i)}\) and \(b_{jl}^{(i)}\) are constrained by the condition requiring that the elliptic genus \(Z_{\vec{k}}\) be transformed as (37) under the modular transformation. They are thus non-negative integers satisfying two conditions:
\[4a_{4}^{(i)}+6a_{6}^{(i)}+\sum_{j,l}k_{jl}b_{jl}^{(i)}-|c_{L}-c_{R}|+\sum_{ \alpha}2k_{\alpha}+\sum_{\alpha}\sum_{e\in\mathbb{R}_{\mathfrak{g}\alpha}^{+} }\sum_{s=1}^{\lfloor k_{\alpha}/\xi_{e}\rfloor}2s-(\text{weight}(\mathcal{D }_{\kappa}^{\rm bulk}))=0\,, \tag{50}\]
and
\[f(z) =\sum_{j,l}m_{jl}b_{jl}^{(i)}\langle z_{l},z_{l}\rangle_{ \mathfrak{g}_{l}}-\frac{1}{2}\sum_{\alpha}\sum_{s=1}^{k_{\alpha}}s^{2}(\epsilon _{1}^{2}+\epsilon_{2}^{2}) \tag{51}\] \[\qquad-\frac{1}{2}\sum_{\alpha}\sum_{e\in\mathbb{R}_{\mathfrak{ g}\alpha}^{+}}\sum_{s=1}^{\lfloor k_{\alpha}/\xi_{e}\rfloor}((s+1)\epsilon_{+}+(s-1- 2l)\epsilon_{-}\pm e\cdot\phi)^{2}-(\text{index}(\mathcal{D}_{\kappa}^{\rm bulk }))\]
for each \(i\), where \(\langle\cdot,\cdot\rangle_{\mathfrak{g}_{l}}\) is a symmetric bilinear form for \(\mathfrak{g}_{l}\) defined by its Killing form. The index \(l\) runs for all symmetries of the 2d worldsheet SCFT, while \(\mathfrak{g}_{\alpha}\) denotes gauge symmetry for \(\alpha\)-th node. Hence the modularity fixes the elliptic genus up to finitely many constants \(C_{i}\) in (49).
The unknown constants \(C_{i}\)'s can be fixed by imposing the GV-invariant ansatz (3) of the elliptic genus and by solving the blowup equation. Note that the modular ansatz (42) for \(\sum_{\alpha}k_{\alpha}>1\) can have higher order poles at \(\epsilon_{1}=0\) and \(\epsilon_{2}=0\) arising
from the center of mass contribution (43) for generic \(C_{i}\)'s. However, the single letter index in the Plethystic exponential of the GV-invariant ansatz can have only simple poles at \(\epsilon_{1}=0\) and \(\epsilon_{2}=0\). This imposes strong constraints on \(C_{i}\)'s.
Furthermore, we demand that the partition function satisfies the blowup equation. In contrast to the 5d/6d SCFT cases, the blowup equations for LSTs involve a divergent summation over an auxiliary magnetic flux \(n_{0,0}\), as explained in the previous subsection. Due to this structure, it seems that the partition function of an LST cannot be determined solely by solving the blowup equations and the GV-invariant ansatz (3). However, the modular ansatz in (42) places further constraints on the partition function and, by combining it with the blowup equations, it should be feasible to completely determine the partition functions of LSTs in Kahler parameter expansion. We will demonstrate this with several interesting examples in the next section.
## 3 Examples
In this section, the partition functions of several low rank LSTs are calculated. We first compute elliptic genera of strings using the ADHM constructions. We then construct the blowup equations for the partition functions of the LSTs and verify that the results from the ADHM constructions satisfy the blowup equations. Lastly, we formulate the modular ansatz for the elliptic genera of strings in the LSTs and fix the unknown coefficients in the ansatz by solving the blowup equations. We show that the partition functions obtained through this method are consistent with the results obtained using ADHM constructions.
### \(\hat{A}_{1}\) LSTs
Our first example is the little string theories on \(N\) parallel NS5-branes in type II string theories in gravity decoupling limit introduced in [56; 57]. In the IIA theory, the little string theory is the \(\mathcal{N}=(2,0)\) LST with \(N\) tensor multiplets. This LST is realized in F-theory by an elliptically fibered Calabi-Yau threefold whose base surface contains a loop of \(N\) rational curves of self-intersection number \(-2\)[22]. The intersection matrix \(\Omega^{\alpha\beta}\) of the \(-2\) curves is given by the minus of the Cartan matrix of the affine Lie algebra \(\hat{A}_{N-1}=A_{N-1}^{(1)}\). On the other hand, the LST in the IIB theory is the \(\mathcal{N}=(1,1)\) Yang-Mills theory with \(U(N)\) gauge group which is realized in F-theory by an elliptic CY 3-fold with a base containing a genus one curve of self-intersection number \(0\). These two LSTs in IIA and in IIB, which we call \(\hat{A}_{N-1}\) LSTs, are related via T-duality under a circle compactification. In this subsection, we consider the partition functions and the blowup equations of these LSTs for \(N=2\).
#### 3.1.1 IIA picture
Let us first consider the \(\mathcal{N}=(2,0)\)\(\hat{A}_{1}\) little string theory for 2 NS5-branes in type IIA string theory. This theory has two tensor multiplets (for one dynamical tensor field and one free tensor field) with the intersection form
\[\Omega^{\alpha\beta}=\begin{pmatrix}-2&2\\ 2&-2\end{pmatrix}. \tag{3.1}\]
The index part of the partition function is factorized as
\[Z_{\rm GV}^{\rm IIA}=Z_{\rm pert}^{\rm IIA}\cdot Z_{\rm str}^{\rm IIA}\,, \tag{3.2}\]
where the perturbative partition function is given by the contributions coming from two \(\mathcal{N}=(2,0)\) tensor multiplets
\[Z_{\rm pert}^{\rm IIA}={\rm PE}\left[\bigg{(}\frac{\sqrt{p_{1}p_{2}}}{(1-p_{1} )(1-p_{2})}\big{(}M+M^{-1}\big{)}-\frac{p_{1}+p_{2}}{(1-p_{1})(1-p_{2})}\bigg{)} \frac{2q}{1-q}\right]. \tag{3.3}\]
Here \(M=e^{2\pi im}\) is the fugacity for the \(SU(2)_{m}\subset SU(2)_{R}\times SU(2)_{m}\) rotational symmetry of the \(\mathbb{R}^{4}\) plane transverse to the NS5-branes.
The partition function \(Z_{\rm str}^{\rm IIA}\) is the contribution from the strings carrying tensor charges, defined as
\[Z_{\rm str}^{\rm IIA}=\sum_{k_{1},k_{2}\geq 0}Q^{k_{1}}\bigg{(}\frac{e^{2\pi iw }}{Q}\bigg{)}^{k_{2}}Z_{(k_{1},k_{2})}^{\rm IIA}, \tag{3.4}\]
where \(Q\equiv e^{2\pi i(2\phi_{1,0}-2\phi_{2,0})}\) is the fugacity for the dynamical tensor charge and \(w\) is the chemical potential for the winding number. We will now study two distinct methods for calculating the elliptic genus \(Z_{(k_{1},k_{2})}^{\rm IIA}\): the ADHM construction based on 2d gauged linear sigma model (GLSM) on the strings, and the blowup approach with the modular ansatz.
**GLSM.** We start with the brane construction studied in [63]. The brane construction for the \(\hat{A}_{1}\) LST is depicted in Figure 1(a) and (b). Here, we compactify the 9-th direction on a circle, and put two NS5-branes extended along 012345 directions at \(x^{9}=\phi_{1,0}\) and \(x^{9}=\phi_{2,0}\), respectively. The strings in the LST arise from the \(k_{1}\) D2-branes and \(k_{2}\) D2-branes stretched between two NS5-branes. We also put a single D6-brane, which becomes trivial in the M-theory uplift, to explicitly provide \(U(1)_{m}\) symmetry in the 2d GLSM. See [63] for a detailed study of this brane configuration.
The 2d GLSM on D2-branes has \(U(k_{1})\times U(k_{2})\) gauge symmetry, \(SU(2)_{l}\times SU(2)_{r}\) symmetry which rotates 2345 directions, \(SO(3)\) symmetry for 678 directions, and \(U(1)_{m}\) symmetry. At low energy, we expect the \(SO(3)\) and \(U(1)_{m}\) symmetries to be enhanced to \(SU(2)_{R}\times SU(2)_{m}\). There are an \(\mathcal{N}=(0,4)\) vector multiplet \((A_{\mu}^{(i)},\lambda_{+(i)}^{\alpha A})\) and adjoint hypermultiplets \((a_{\alpha\beta}^{(i)},\lambda_{-(i)}^{\alpha A})\) for each gauge node, bifundamental twisted
hypermultiplets \((\varphi^{(i)}_{A},\chi^{\dot{\alpha}}_{-(i)})\) and Fermi multiplets \((\chi^{\alpha}_{+(i)})\) from D2-D2 string modes, and hypermultiplets \((q^{(i)}_{\dot{\alpha}},\psi^{A(i)}_{-})\) and Fermi multiplets \((\Psi^{(i)}_{+})\), \((\tilde{\Psi}^{(i)}_{+})\) from the D2-D6 string modes. Here, \(i=1,2\) denotes each gauge node, \(\pm\) represent 2d chirality of fermions, \(\{\alpha,\beta,\cdots\}\), \(\{\dot{\alpha},\dot{\beta},\cdots\}\) and \(\{A,B,\cdots\}\) are doublet indices for the \(SU(2)_{l}\), \(SU(2)_{r}\), and \(SU(2)_{R}\), respectively. We summarize the matter content of the 2d GLSM in Figure 1(c).
The gauge theory description for the 2d worldsheet theory allows us to express the elliptic genus by a contour integral of 1-loop determinants from the supermultiplets, and the contour integral can be evaluated by using the JK-residue prescription as discussed in [75; 89]. The result is [63]
\[Z^{\rm IIA}_{(k_{1},k_{2})}=\sum_{\{Y_{1},Y_{2}\},|Y_{i}|=k_{i}}\prod_{i=1}^{2} \prod_{(a,b)\in Y_{i}}\frac{\theta_{1}(\tau,E^{(a,b)}_{i,i+1}-m+\epsilon_{-}) \theta_{1}(\tau,E^{(a,b)}_{i,i-1}+m+\epsilon_{-})}{\theta_{1}(\tau,E^{(a,b)}_{ i,i}+\epsilon_{1})\theta_{1}(\tau,E^{(a,b)}_{i,i}-\epsilon_{2})}\,, \tag{3.5}\]
with
\[E^{(a,b)}_{i,j}=(Y_{i,a}-b)\epsilon_{1}-(Y^{T}_{j,b}-a)\epsilon_{2}\,, \tag{3.6}\]
where \(Y_{1}\) and \(Y_{2}\) are Young diagrams, and \(Y_{i,a}\) and \(Y^{T}_{i,b}\) are the length of \(a\)-th row and \(b\)-th column of \(Y_{i}\), respectively.
Figure 1: (a) and (b) are brane configurations for \(\hat{A}_{1}\) LST in the type IIA string theory where the \(x^{9}\)-direction is compactified on a circle. (c) is the \(\mathcal{N}=(0,4)\) matter contents in the 2d GLSM on the worldsheet of strings.
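To make the structure of (3.5) concrete, the following is a small numerical sketch (our own, not code from the original computation) that enumerates the Young-diagram pairs and evaluates the product of theta functions. We use the standard product form of \(\theta_{1}\); since each box contributes equally many theta factors in the numerator and the denominator, overall normalization conventions cancel in the ratio. The parameter values at the end are arbitrary test values.

```python
# Numerical sketch of the ADHM formula (3.5) for the (k1,k2)-string elliptic genus.
# Partitions (Young diagrams) label the poles of the contour integral; theta1 is
# implemented as a truncated product with q = e^{2 pi i tau}, y = e^{2 pi i z}.
import cmath

def partitions(n, max_part=None):
    """Generate partitions of n as weakly decreasing tuples of row lengths."""
    if n == 0:
        yield ()
        return
    if max_part is None or max_part > n:
        max_part = n
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def theta1(tau, z, terms=30):
    q = cmath.exp(2j * cmath.pi * tau)
    y = cmath.exp(2j * cmath.pi * z)
    val = 2 * q**0.125 * cmath.sin(cmath.pi * z)
    for k in range(1, terms + 1):
        val *= (1 - q**k) * (1 - q**k * y) * (1 - q**k / y)
    return val

def col_len(Y, b):
    """Length of the b-th column of the Young diagram Y, i.e. Y^T_b (1-indexed)."""
    return sum(1 for row in Y if row >= b)

def elliptic_genus_IIA(k1, k2, tau, m, eps1, eps2):
    eps_m = (eps1 - eps2) / 2                       # epsilon_-
    total = 0
    for Y1 in partitions(k1):
        for Y2 in partitions(k2):
            Y = {1: Y1, 2: Y2}
            other = {1: 2, 2: 1}                    # i+1 = i-1 = other node for A^hat_1
            term = 1
            for i in (1, 2):
                for a in range(1, len(Y[i]) + 1):
                    for b in range(1, Y[i][a - 1] + 1):
                        E = lambda j: (Y[i][a - 1] - b) * eps1 - (col_len(Y[j], b) - a) * eps2
                        term *= theta1(tau, E(other[i]) - m + eps_m) * theta1(tau, E(other[i]) + m + eps_m)
                        term /= theta1(tau, E(i) + eps1) * theta1(tau, E(i) - eps2)
            total += term
    return total

# Evaluation at an arbitrary test point (tau purely imaginary so that |q| < 1):
print(elliptic_genus_IIA(1, 1, tau=0.4j, m=0.17, eps1=0.11, eps2=0.07))
```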
**Modularity.** The modular properties of the elliptic genus can be obtained from the anomalies of the 2d worldsheet CFT. The chiral fermions in the GLSM contribute to the anomaly polynomial as
\[\lambda^{\dot{\alpha}A}_{+(i)}+\lambda^{\alpha A}_{-(i)} \rightarrow \sum_{i=1}^{2}2k_{i}^{2}\bigg{(}\frac{c_{2}(r)+c_{2}(R)}{2}-\frac {c_{2}(l)+c_{2}(R)}{2}\bigg{)}\,, \tag{3.7}\] \[\chi^{\dot{\alpha}}_{-(i)}+\chi^{\alpha}_{+(i)} \rightarrow 4k_{1}k_{2}\frac{c_{2}(l)-c_{2}(r)}{2}\,,\] (3.8) \[\psi^{A(i)}_{-}+\Psi^{(i)}_{+}+\tilde{\Psi}^{(i)}_{+} \rightarrow (k_{1}+k_{2})\bigg{(}c_{2}(R)+\frac{1}{4}\operatorname{Tr}F_{m} ^{2}\bigg{)}\,, \tag{3.9}\]
where \(F_{m}\) is the field strength for \(U(1)_{m}\) symmetry. The same anomaly polynomial can also be obtained from the anomaly inflow given in (2.41):
\[I_{4}=-(k_{1}-k_{2})^{2}(c_{2}(l)-c_{2}(r))+(k_{1}+k_{2})\bigg{(}-c_{2}(R)+ \frac{1}{4}\operatorname{Tr}F_{m}^{2}\bigg{)}\,. \tag{3.10}\]
Hence, the modular anomaly of the \((k_{1},k_{2})\) elliptic genus is
\[\int_{\text{eq}}I_{4}=(k_{1}-k_{2})^{2}(-\epsilon_{-}^{2}+\epsilon_{+}^{2})+(k _{1}+k_{2})(-\epsilon_{+}^{2}+m^{2})\,, \tag{3.11}\]
where we use the replacement rule in (2.39).
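The passage from (3.10) to (3.11) is a simple substitution of equivariant parameters for characteristic classes. As a quick sanity check, the sketch below implements the replacement rule read off by comparing the two equations, namely \(c_{2}(l)\to\epsilon_{-}^{2}\), \(c_{2}(r)\to\epsilon_{+}^{2}\), \(c_{2}(R)\to\epsilon_{+}^{2}\) and \(\frac{1}{4}\operatorname{Tr}F_{m}^{2}\to m^{2}\); this inferred rule and the symbol names are our own, intended only to mirror (2.39).

```python
import sympy as sp

k1, k2, em, ep, m = sp.symbols('k1 k2 epsilon_minus epsilon_plus m')
c2l, c2r, c2R, TrFm2 = sp.symbols('c2l c2r c2R TrFm2')

# Anomaly polynomial (3.10) of the (k1, k2)-string worldsheet theory.
I4 = -(k1 - k2)**2 * (c2l - c2r) + (k1 + k2) * (-c2R + sp.Rational(1, 4) * TrFm2)

# Replacement rule inferred from comparing (3.10) with (3.11).
rule = {c2l: em**2, c2r: ep**2, c2R: ep**2, TrFm2: 4 * m**2}

modular_anomaly = sp.expand(I4.subs(rule))
expected = sp.expand((k1 - k2)**2 * (-em**2 + ep**2) + (k1 + k2) * (-ep**2 + m**2))
print(modular_anomaly)                                  # matches (3.11)
print(sp.simplify(modular_anomaly - expected) == 0)     # True
```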
We can then establish a modular ansatz for \((k_{1},k_{2})\)-string elliptic genus as
\[Z^{\text{IIA}}_{(k_{1},k_{2})}=\frac{\Phi_{(k_{1},k_{2})}(\tau,\epsilon_{\pm},m)}{\prod_{s_{1}=1}^{k_{1}}\varphi_{-1,1/2}(s_{1}\epsilon_{1,2})\cdot\prod_{s _{2}=1}^{k_{2}}\varphi_{-1,1/2}(s_{2}\epsilon_{1,2})}\,. \tag{3.12}\]
The numerator \(\Phi_{(k_{1},k_{2})}\) can be written in terms of the Eisenstein series \(E_{4}\), \(E_{6}\) and the \(SU(2)\) Weyl invariant Jacobi forms \(\varphi_{-2,1}\), \(\varphi_{0,1}\) for \(\epsilon_{\pm}\) and \(m\) as we explained in (2.49):
\[\begin{split}\Phi_{(k_{1},k_{2})}=\sum_{i}C_{i}^{(k_{1},k_{2})}E _{4}^{a_{4}^{(i)}}E_{6}^{a_{6}^{(i)}}\varphi_{-2,1}(\epsilon_{+})^{b_{1}^{(i)} }\varphi_{0,1}(\epsilon_{+})^{b_{2}^{(i)}}\varphi_{-2,1}(\epsilon_{-})^{b_{3}^ {(i)}}\\ \cdot\varphi_{0,1}(\epsilon_{-})^{b_{4}^{(i)}}\varphi_{-2,1}(m)^{b _{5}^{(i)}}\varphi_{0,1}(m)^{b_{6}^{(i)}}\,.\end{split} \tag{3.13}\]
We need to determine the unknown coefficients in the modular ansatz for the numerator \(\Phi_{(k_{1},k_{2})}\). For this, we first impose the consistency conditions (2.50) and (2.51) and then use the GV-invariant ansatz (2.3). For instance, let us consider \((k_{1},k_{2})=(1,0)\) case. By using (2.50) and (2.51), the numerator has weight \(-2\) and the modular anomaly \(f(z)=\epsilon_{+}^{2}+m^{2}\). Then the ansatz reduces to
\[\Phi_{(1,0)}=C_{1}^{(1,0)}\varphi_{0,1}(\epsilon_{+})\varphi_{-2,1}(m)+C_{2}^{ (1,0)}\varphi_{-2,1}(\epsilon_{+})\varphi_{0,1}(m)\,, \tag{3.14}\]
where \(C_{1}^{(1,0)}\) and \(C_{2}^{(1,0)}\) are unknown constants. Now by expanding \(Z_{(1,0)}\) in terms of \(q=e^{2\pi i\tau}\) up to \(q^{1}\) order and comparing it with the GV-invariant form (2.3), one can
find that BPS state degeneracies \(N^{\rm d}_{j_{l},j_{r}}\) appearing in the \((1,0)\)-string elliptic genus can be non-negative integers only if
\[C_{1}^{(1,0)}=-C_{2}^{(1,0)}\in\mathbb{Z}/12\,,\quad C_{1}^{(1,0)}\geq 0\,. \tag{3.15}\]
Similarly, \(\Phi_{(1,1)}\) has weight \(-4\) and the modular anomaly \(f(z)=2\epsilon_{-}^{2}+2m^{2}\), and thus it can be written with \(4\) unknown constants as
\[\begin{split}\Phi_{(1,1)}&=C_{1}^{(1,1)}\varphi_{0,1}(\epsilon_{-})^{2}\varphi_{-2,1}(m)^{2}+C_{2}^{(1,1)}\varphi_{-2,1}(\epsilon_{-})\varphi_{0,1}(\epsilon_{-})\varphi_{-2,1}(m)\varphi_{0,1}(m)\\ &\quad+C_{3}^{(1,1)}\varphi_{-2,1}(\epsilon_{-})^{2}\varphi_{0,1}(m)^{2}+C_{4}^{(1,1)}E_{4}\varphi_{-2,1}(\epsilon_{-})^{2}\varphi_{-2,1}(m)^{2}\,.\end{split} \tag{3.16}\]
In order to have only simple poles at \(\epsilon_{1}=0\) and \(\epsilon_{2}=0\) at \((1,1)\)-string order after taking the plethystic logarithm as in (2.3), the coefficients \(C_{i}^{(1,1)}\) should satisfy
\[C_{1}^{(1,1)}=\left(C_{1}^{(1,0)}\right)^{2},\ C_{2}^{(1,1)}=2C_{1}^{(1,0)}C_ {2}^{(1,0)}\,,\ C_{3}^{(1,1)}=\left(C_{2}^{(1,0)}\right)^{2},\ C_{4}^{(1,1)}=0\,. \tag{3.17}\]
Therefore, all the coefficients are fixed by one coefficient \(C_{1}^{(1,0)}\).
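The number of unknown constants in the ansatz (3.13) at a given order is simply the number of monomials in \(E_{4}\), \(E_{6}\), \(\varphi_{-2,1}\), \(\varphi_{0,1}\) with the required weight and indices. A short enumeration, sketched below with our own function name, reproduces the two terms of (3.14) and the four terms of (3.16); the only inputs are the standard weights \(4,6,-2,0\) and index \(1\) of these forms.

```python
from itertools import product

def ansatz_monomials(weight, idx_ep, idx_em, idx_m, max_exp=4):
    """Exponents (a4, a6, b1, ..., b6) in (3.13) with the given weight and
    indices in epsilon_+, epsilon_- and m.  phi_{-2,1} and phi_{0,1} both
    carry index 1 and have weights -2 and 0; E_4 and E_6 have weights 4 and 6."""
    sols = []
    for a4, a6 in product(range(max_exp + 1), repeat=2):
        for b1, b3, b5 in product(range(max_exp + 1), repeat=3):
            # The indices fix the remaining exponents of the index-1 Jacobi forms.
            b2, b4, b6 = idx_ep - b1, idx_em - b3, idx_m - b5
            if min(b2, b4, b6) < 0:
                continue
            if 4*a4 + 6*a6 - 2*(b1 + b3 + b5) == weight:
                sols.append((a4, a6, b1, b2, b3, b4, b5, b6))
    return sols

# (k1,k2) = (1,0): weight -2, anomaly eps_+^2 + m^2  ->  2 monomials, cf. (3.14)
print(len(ansatz_monomials(-2, 1, 0, 1)))   # 2
# (k1,k2) = (1,1): weight -4, anomaly 2 eps_-^2 + 2 m^2  ->  4 monomials, cf. (3.16)
print(len(ansatz_monomials(-4, 0, 2, 2)))   # 4
```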
We can perform a similar computation for \((k_{1},k_{2})=(2,0)\), which has \(44\) unknown constants in the modular ansatz. Requiring that the partition function takes the correct GV-invariant form (2.3) at this order, we can express all \(C_{i}^{(2,0)}\) in terms of \(C_{1}^{(1,0)}\). Moreover, we find only two solutions at this order: one is \(C_{1}^{(1,0)}=0\), which leads to the trivial solution \(Z_{(1,0)}^{\rm IIA}=Z_{(1,1)}^{\rm IIA}=Z_{(2,0)}^{\rm IIA}=0\), and the other is \(C_{1}^{(1,0)}=1/12\). The latter non-trivial solution reproduces the result (3.5) from the ADHM computation at \((k_{1},k_{2})=(1,0),(1,1),(2,0)\).
Furthermore, we also check that the \(110\) unknown constants in the \((k_{1},k_{2})=(2,1)\) modular ansatz can be completely fixed by requiring the GV-invariant ansatz (2.3). We report the results in Table 1. In the table, we write an ordered list of \(C_{i}^{(k_{1},k_{2})}\), where \(C_{i}^{(k_{1},k_{2})}\) appears earlier than \(C_{j}^{(k_{1},k_{2})}\) in the list if \((a_{4}^{(i)},a_{6}^{(i)},b_{1}^{(i)},\cdots,b_{6}^{(i)})\) in the ansatz (3.13) appears before \((a_{4}^{(j)},a_{6}^{(j)},b_{1}^{(j)},\cdots,b_{6}^{(j)})\) in the ascending order defined in footnote 4.
Footnote 4: We define the ascending order as follows. Suppose the modular ansatz is given as (2.49), where we label the weights and indices of the Jacobi forms such that \(j_{1}<j_{2}\) if \(k_{j_{1},l}<k_{j_{2},l}\), or \(k_{j_{1},l}=k_{j_{2},l}\) and \(m_{j_{1},l}<m_{j_{2},l}\). We also define a set of the exponents in the ansatz as
\[L^{(i)}:=\big{\{}a_{1}^{(i)},a_{2}^{(i)},b_{j_{1},l_{1}}^{(i)},\cdots,b_{j_{N_{1}},l_{1}}^{(i)},\cdots,b_{j_{1},l_{n}}^{(i)},\cdots,b_{j_{N_{n}},l_{n}}^{(i)}\big{\}}.\]
Then, if we have \((L^{(i)})_{1}=(L^{(j)})_{1},\ldots,(L^{(i)})_{s-1}=(L^{(j)})_{s-1}\) and \((L^{(i)})_{s}<(L^{(j)})_{s}\), we order \(L^{(i)}\) before \(L^{(j)}\), i.e. as \(\{L^{(i)},L^{(j)}\}\). In this way, we fix the ordering of the \(L^{(i)}\) and obtain the ordered set \(\{L^{(1)},...,L^{(N)}\}\), which we call the ascending order. The ordering of the \(C_{i}\) follows the ordering of the \(L^{(i)}\). For instance, in the case of \(\Phi_{(1,1)}\), we have
\[(a_{4}^{(i)},a_{6}^{(i)},b_{1}^{(i)},\cdots,b_{6}^{(i)})=\left\{\begin{array}{ll}(0,0,0,0,0,2,2,0)&(i=1)\\ (0,0,0,0,1,1,1,1)&(i=2)\\ (0,0,0,0,2,0,0,2)&(i=3)\\ (1,0,0,0,2,0,2,0)&(i=4)\end{array}\right. \tag{3.18}\]
When we look at \(a_{4}^{(i)}\), \(a_{4}^{(4)}\) is the largest value, so \(C_{4}^{(1,1)}\) is the last element. Similarly, comparing \(b_{3}^{(i)}\), we find \(b_{3}^{(1)}<b_{3}^{(2)}<b_{3}^{(3)}=b_{3}^{(4)}\), so \(C_{1}^{(1,1)}\) is the first element and \(C_{2}^{(1,1)}\) is the second. Therefore, the ascending order of \(\{C_{i}\}\) in this case is
\[\{C_{1}^{(1,1)},C_{2}^{(1,1)},C_{3}^{(1,1)},C_{4}^{(1,1)}\}. \tag{3.19}\]
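In practice, the ascending order of footnote 4 is nothing but a lexicographic comparison of the exponent tuples. A minimal illustration for the \((1,1)\) case of (3.18), relying only on Python's built-in tuple ordering:

```python
# Exponent tuples (a4, a6, b1, ..., b6) of the four monomials in Phi_{(1,1)}, cf. (3.18).
exponents = {
    'C1': (0, 0, 0, 0, 0, 2, 2, 0),
    'C2': (0, 0, 0, 0, 1, 1, 1, 1),
    'C3': (0, 0, 0, 0, 2, 0, 0, 2),
    'C4': (1, 0, 0, 0, 2, 0, 2, 0),
}

# Lexicographic comparison of the tuples realizes the ascending order of footnote 4.
print(sorted(exponents, key=exponents.get))   # ['C1', 'C2', 'C3', 'C4'], cf. (3.19)
```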
**Blowup equation.** Finally, we consider the blowup equation for the (2,0) \(\hat{A}_{1}\) LST. As explained in section 2.2, the tree level contribution to the effective prepotential consists of two parts. The first one is from the Green-Schwarz term for the dynamical tensor multiplet and the second one is the contribution from the auxiliary 2-form field \(B_{0}\) to cancel the mixed gauge-global anomalies. We can write the effective prepotential as
\[\mathcal{E}=\frac{1}{\epsilon_{1}\epsilon_{2}}\big{(}\tau(\phi_{1, 0}-\phi_{2,0})^{2}+(\phi_{1,0}-\phi_{2,0})(-m^{2}+\epsilon_{+}^{2})\big{)}+ \mathcal{E}_{\rm tree}^{(0)}\,, \tag{3.20}\]
where \(\phi_{1,0}-\phi_{2,0}\) is the scalar vacuum expectation value (VEV) of the dynamical tensor multiplet. The second contribution \(\mathcal{E}_{\rm tree}^{(0)}\) from the auxiliary 2-form field \(B_{0}\) is given by
\[\mathcal{E}_{\rm tree}^{(0)}=\frac{1}{\epsilon_{1}\epsilon_{2}} \big{(}-2m^{2}+2\epsilon_{+}^{2}\big{)}\phi_{0,0}\,, \tag{3.21}\]
where \(\phi_{0,0}\) is the auxiliary scalar associated with the \(B_{0}\) field.
To formulate the blowup equation, we have to sum over magnetic fluxes for both the dynamical tensor field and the auxiliary 2-form field which can be realized by shifting the parameters as
\[\phi_{1,0}-\phi_{2,0}\to\phi_{1,0}-\phi_{2,0}+n_{1,0}\epsilon_{1,2}\,,\quad\phi_{0,0}\to\phi_{0,0}+n_{0,0}\epsilon_{1,2}\,,\quad n_{1,0},n_{0,0}\in\mathbb{Z}\,. \tag{3.22}\]
We do not turn on the background magnetic fluxes for \(\tau\) and \(w\): \(B_{\tau}=B_{w}=0\). We propose the blowup equation for this LST as
\[\Lambda\hat{Z}_{\rm str}^{\rm IIA}=\sum_{n_{1},n_{2}\in\mathbb{Z}}(-1)^{n_{1}+n_{ 2}}q^{(n_{1}-n_{2})^{2}}(M\sqrt{p_{1}p_{2}})^{n_{1}+n_{2}}\hat{Z}_{\rm str}^{ \rm IIA(N)}\hat{Z}_{\rm str}^{\rm IIA(S)}\,, \tag{3.23}\]
where \(n_{1}\equiv n_{0,0}+n_{1,0}\) and \(n_{2}\equiv n_{0,0}\). We absorbed the perturbative part of the partition function into \(\Lambda\) as it is independent of the parameters \(\phi_{0,0},\phi_{1,0}-\phi_{2,0}\).
To begin with, we will demonstrate how the known elliptic genera, as given in (3.5), can be a solution to the blowup equation, although this equation becomes singular along the summation direction \(n_{1}=n_{2}\), as mentioned in section 2.2. At \((k_{1},k_{2})=(1,0)\) order, the blowup equation (3.23) is given by
\[\sum_{n_{1},n_{2}\in\mathbb{Z}}F(n_{1},n_{2})\coloneqq \sum_{n_{1},n_{2}\in\mathbb{Z}}(-1)^{n_{1}+n_{2}}q^{(n_{1}-n_{2})^ {2}}(M\sqrt{p_{1}p_{2}})^{n_{1}+n_{2}} \tag{3.24}\] \[\qquad\qquad\cdot\Big{(}p_{1}^{2(n_{1}-n_{2})}\hat{Z}_{(1,0)}^{ \rm IIA(N)}+p_{2}^{2(n_{1}-n_{2})}\hat{Z}_{(1,0)}^{\rm IIA(S)}-\hat{Z}_{(1,0)} ^{\rm IIA}\Big{)}=0\,,\]
where we choose \(\Lambda=\sum_{k}\Lambda_{k}e^{2\pi ikw}\) with
\[\Lambda_{0}=\sum_{n_{1},n_{2}\in\mathbb{Z}}(-1)^{n_{1}+n_{2}}q^{(n_{1}-n_{2})^ {2}}(M\sqrt{p_{1}p_{2}})^{n_{1}+n_{2}}\,. \tag{3.25}\]
Suppose we consider only magnetic fluxes \((n_{1},n_{2})=(0,0)\). Then (3.24) becomes
\[\sum_{(n_{1},n_{2})=(0,0)}F(n_{1},n_{2})= \left(\frac{1}{M^{2}}+\frac{(1+p_{1})(1+p_{2})}{M\sqrt{p_{1}p_{2 }}}+\big{(}2+p_{1}+p_{1}^{-1}+p_{2}+p_{2}^{-1}\big{)}\right.\] \[\qquad+\frac{M(1+p_{1})(1+p_{2})}{\sqrt{p_{1}p_{2}}}+M^{2}\bigg{)} q+\mathcal{O}(q^{2}), \tag{3.26}\]
in the double expansion of \(q\) and \(M\). Now we add the contributions coming from the magnetic fluxes \(|n_{1,2}|\leq 1\). We then find that \(M^{0}\) and \(M^{\pm 1}\) terms at \(q^{1}\) order are all canceled and the remaining terms are
\[\sum_{|n_{1,2}|\leq 1}F(n_{1},n_{2})= \left(\frac{1}{M^{4}p_{1}p_{2}}+\frac{(1+p_{1})(1+p_{2})}{M^{3}( p_{1}p_{2})^{3/2}}+\frac{1+p_{1}+p_{2}}{M^{2}p_{1}p_{2}}+M^{2}(p_{1}+p_{2}+p_{1}p_{2})\right.\] \[\qquad+M^{3}(1+p_{1})(1+p_{2})\sqrt{p_{1}p_{2}}+M^{4}p_{1}p_{2} \bigg{)}q+\mathcal{O}(q^{2})\,. \tag{3.27}\]
Again, if we consider the summation of the magnetic fluxes up to \(|n_{1}|,|n_{2}|\leq 2\), the \(M^{\pm 2}\) and \(M^{\pm 3}\) terms are canceled and only higher order terms with \(M^{\pm 4,\pm 5,\pm 6}\) remain at \(q^{1}\) order. In this way, if we sum over sufficiently large magnetic fluxes, the blowup equation is satisfied at every order in the Kähler parameter expansion. Using the elliptic genera (3.5) and
\[\Lambda=\frac{e^{\pi iw/12}}{\eta(w)}\Lambda_{0}\,, \tag{3.28}\]
we checked that such cancellation occurs up to \(k_{1},k_{2}\leq 2\) string numbers and \(q^{3}\) order.
Now, we will solve the blowup equation and determine the unknown coefficients in the modular ansatz. For \(Z_{(k,0)}\) and \(Z_{(0,k)}\), we can use the elliptic genera for \(k\) M-strings in [90], which satisfy the blowup equations for the M-string theory as discussed in [50; 53]. The \((k_{1},k_{2})=(1,1)\) order of the blowup equation is independent of the dynamical Kähler parameter, and thus we cannot fix the four unknown coefficients in the modular ansatz at this order. What we can determine is \(\Lambda\) at this order, expressed in terms of \(\tau\), \(m\), \(\epsilon_{1,2}\) and the unknown constants in the ansatz. Next, we solve the \((k_{1},k_{2})=(2,1)\) order of the blowup equation, which now contains the dynamical Kähler parameter \(\phi_{1,0}-\phi_{2,0}\). We need to fix the \(4+110\) undetermined coefficients arising from the \((k_{1},k_{2})=(1,1),(2,1)\) elliptic genera. For this we substitute the modular ansatz and \(\Lambda_{1}\) into the blowup equation at \((k_{1},k_{2})=(2,1)\) order, and solve it order by order in the \(q\) and \(M\) double expansion as previously described. This determines all \(4+110\) unknown coefficients as well as the \(\Lambda_{1}\) factor. The result is in perfect agreement with Table 1. We expect that higher order elliptic genera can be calculated in the same way.
#### 3.1.2 IIB picture
The LST theory on two NS5-branes in type IIB string theory is the \(\mathcal{N}=(1,1)\)\(U(2)\) Yang-Mills theory. The partition function of this LST is factorized as
\[Z_{\rm GV}^{\rm IIB}=Z_{\rm pert}^{\rm IIB}\cdot Z_{\rm str}^{\rm IIB}\,, \tag{3.29}\]
where the perturbative contribution coming from the \(U(2)\) vector multiplet and an adjoint hypermultiplet is given by
\[\begin{split} Z_{\rm pert}^{\rm IIB}&=\text{PE} \left[-\frac{1+p_{1}p_{2}}{(1-p_{1})(1-p_{2})}\big{(}Q^{2}+2q+qQ^{-2}\big{)} \frac{1}{1-q}\right]\\ &\quad\cdot\text{PE}\left[\frac{\sqrt{p_{1}p_{2}}}{(1-p_{1})(1- p_{2})}\big{(}Q^{2}+2q+qQ^{-2}\big{)}\big{(}M+M^{-1}\big{)}\frac{1}{1-q} \right],\end{split} \tag{3.30}\]
where \(Q=e^{2\pi i\phi_{1}}\) is the \(SU(2)\) gauge fugacity and \(M=e^{2\pi im}\) is the fugacity for the \(SU(2)_{m}\) symmetry of the adjoint hypermultiplet. The partition function of the instanton strings is given by
\[Z_{\rm str}^{\rm IIB}=\sum_{k=0}^{\infty}e^{2\pi ikw}Z_{k}^{\rm IIB}\,, \tag{3.31}\]
where the little string tension \(w\) is identified with the square of the inverse gauge coupling \(1/g_{\rm YM}^{2}\) in the low energy Yang-Mills theory.
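Here and below, PE denotes the plethystic exponential, \(\text{PE}[f(x)]=\exp\big{(}\sum_{n\geq 1}\frac{1}{n}f(x^{n})\big{)}\), which converts a single-particle index into a multi-particle one, as in (3.30). A minimal truncated implementation (our own function name and truncation scheme) is:

```python
import sympy as sp

q = sp.symbols('q')

def plethystic_exp(f, var, order=6):
    """Truncated plethystic exponential PE[f] = exp(sum_{n>=1} f(var^n)/n),
    expanded as a power series in `var` up to the given order."""
    total = sum(f.subs(var, var**n) / n for n in range(1, order))
    return sp.series(sp.exp(total), var, 0, order).removeO()

# PE[q] = 1/(1-q): a single bosonic oscillator.
print(sp.expand(plethystic_exp(q, q)))    # 1 + q + q^2 + q^3 + q^4 + q^5
# PE[-q] = 1 - q: a single fermionic oscillator.
print(sp.expand(plethystic_exp(-q, q)))   # 1 - q
```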
**GLSM.** Upon applying S-duality, the system of NS5-branes and F1-strings in the type IIB string theory is transformed into a system of D1- and D5-branes.
For \(k\)-instanton strings, we consider a configuration where \(k\) D1-branes are bound to 2 D5-branes as illustrated in Figure 2(a). The 2d theory on the D1-branes is described by a \(\mathcal{N}=(4,4)\)\(U(k)\) gauge theory with \(U(2)\) flavor symmetry. The theory also has an \(SO(4)=SU(2)_{l}\times SU(2)_{r}\) symmetry which rotates the 2345 directions, and an \(SO(4)=SU(2)_{R}\times SU(2)_{m}\) which rotates the 6789 directions. This brane configuration is studied in [63; 91], and we summarize the 2d gauge theory description and its matter content in \(\mathcal{N}=(0,4)\) language in Figure 2(b) and (c). In Figure 2(c), we denote by \(\{a,b\cdots\}\) doublet indices for \(SU(2)_{m}\), and other indices have been already introduced in Section 3.1.1.
Based on the 2d gauge theory description, the elliptic genus of the LST can be computed by evaluating the JK-residue of the contour integral of the 1-loop determinants from all the supermultiplets. The result for \(k\)-strings is [63]
\[Z_{k}^{\text{IIB}}=\sum_{|Y_{1}|+|Y_{2}|=k}\prod_{i,j=1}^{2}\prod_{s\in Y_{i}} \frac{\theta_{1}(\tau,E_{ij}(s)+m-\epsilon_{-})\theta_{1}(\tau,E_{ij}(s)-m- \epsilon_{-})}{\theta_{1}(\tau,E_{ij}(s)-\epsilon_{1})\theta_{1}(\tau,E_{ij}(s )+\epsilon_{2})}\,, \tag{3.32}\]
where \(Y_{1}\) and \(Y_{2}\) are Young diagrams, \(s\) is a box in the Young diagram, and
\[E_{ij}(s)=a_{i}-a_{j}-\epsilon_{1}h_{i}(s)+\epsilon_{2}v_{j}(s), \tag{3.33}\]
Figure 2: (a) A brane configuration of the \(\hat{A}_{1}\) LST in the type IIB string theory which consists of 2 D5-branes and \(k\) D1-branes. (b) The 2d \(\mathcal{N}=(0,4)\) gauge theory description for the \(k\) D1-branes, where a circle represents gauge symmetry, a square means flavor symmetry, and solid, dashed and zigzag lines denote hypermultiplets, Fermi multiplets and twisted hypermultiplets, respectively. (c) The \(\mathcal{N}=(0,4)\) matter content of the 2d gauge theory.
with \(a_{1}=-a_{2}=\phi_{1}\). Here, \(h_{i}(s)\) and \(v_{j}(s)\) are the arm length and leg length of a box \(s\) in \(Y_{i}\) and \(Y_{j}\), respectively.
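For concreteness, the arm and leg lengths entering (3.33) can be computed directly from the row lengths of the Young diagrams. A small sketch (our own helper names; boxes are 1-indexed as in the text):

```python
def arm(Y, a, b):
    # Arm length of the box s=(a,b) with respect to the diagram Y: Y_a - b.
    return Y[a - 1] - b

def col_len(Y, b):
    # Column length Y^T_b: number of rows of Y of length >= b.
    return sum(1 for row in Y if row >= b)

def leg(Y, a, b):
    # Leg length of the box s=(a,b) with respect to the diagram Y: Y^T_b - a.
    return col_len(Y, b) - a

def E(s, Yi, Yj, a_i, a_j, eps1, eps2):
    # E_ij(s) of (3.33) for a box s=(a,b) of Y_i, measured against Y_j.
    a, b = s
    return a_i - a_j - eps1 * arm(Yi, a, b) + eps2 * leg(Yj, a, b)

# Example: Y1 = [2,1] (3 boxes), Y2 = [1]; box s=(1,1) of Y1.
Y1, Y2 = [2, 1], [1]
print(arm(Y1, 1, 1), leg(Y2, 1, 1))              # 1, 0
print(E((1, 1), Y1, Y2, 0.5, -0.5, 0.1, 0.2))    # generic numerical values
```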
Since the IIA LST and the IIB LST are related by T-duality, they should have the same BPS spectra when placed on a spatial circle. This means that the partition function \(Z_{\rm GV}^{\rm IIA}\) of the type IIA LST is the same as \(Z_{\rm GV}^{\rm IIB}\) for the type IIB LST under the exchange of \(\tau\) and \(w\), up to extra factors which are independent of the dynamical parameter. More precisely, the following relation has been checked explicitly by expanding both sides in terms of \(w\), \(q\), \(Q\) and \(M\) in [63]:
\[Z_{\rm GV}^{\rm IIA}\big{|}_{\tau\leftrightarrow w}=Z_{\rm GV}^{\rm IIB}\,. \tag{3.34}\]
**Modularity.** The modular properties of the elliptic genus can be read off from the anomalies of the 2d chiral matter fields given in Figure 2(c). The chiral fermions contribute to the 2d anomaly polynomial as
\[\lambda_{+}^{\dot{\alpha}A}+\lambda_{-}^{\alpha A} \rightarrow 2k^{2}\bigg{(}\frac{c_{2}(r)+c_{2}(R)}{2}-\frac{c_{2}(l)+c_{2}(R )}{2}\bigg{)}\,, \tag{3.35}\] \[\lambda_{+}^{\alpha a}+\lambda_{-}^{\dot{\alpha}a} \rightarrow 2k^{2}\bigg{(}\frac{c_{2}(l)+c_{2}(m)}{2}-\frac{c_{2}(r)+c_{2}(m )}{2}\bigg{)}\,,\] (3.36) \[\Psi_{+}^{a}+\psi_{-}^{A} \rightarrow 2k(c_{2}(m)-c_{2}(R))\,, \tag{3.37}\]
where \(c_{2}(m)\) is the second Chern class of the \(SU(2)_{m}\) symmetry. Summing up these contributions gives the anomaly polynomial of the 2d gauge theory
\[I_{4}=2k\bigg{(}-c_{2}(R)+\frac{1}{4}\operatorname{Tr}F_{m}^{2}\bigg{)}\,. \tag{3.38}\]
This indeed agrees with the anomaly inflow result in (2.41), which takes into account the contribution from the counterterm in (2.31). This serves as indirect evidence to support the use of the counterterm in (2.31) for cancelling the mixed gauge-global anomaly of the 6d LST.
From the modular anomaly \(f(z)=\int I_{4}=2k(m^{2}-\epsilon_{+}^{2})\), we can set a modular ansatz for the \(k\) instanton string as
\[Z_{k}^{\rm IIB}=\frac{\Phi_{k}(\tau,\epsilon_{\pm},2\phi_{1},m)}{\prod_{s=1}^{ k}\varphi_{-1,1/2}(s\epsilon_{1,2})\prod_{l=0}^{s-1}\varphi_{-1,1/2}((s+1) \epsilon_{+}+(s-1-2l)\epsilon_{-}\pm 2\phi_{1})}\,. \tag{3.39}\]
The numerator \(\Phi_{k}\) can be written in terms of the Eisenstein series \(E_{4}(\tau)\), \(E_{6}(\tau)\) and the \(SU(2)\) Weyl invariant Jacobi forms for \(\epsilon_{\pm}\), \(2\phi_{1}\) and \(m\). One can explicitly check that the elliptic genus (3.32) has the same denominator structure as that in (3.39). We summarize the coefficients in the modular ansatz in Table 2 obtained by comparing two expressions (3.32) and (3.39). The ordering of the coefficients is ascending order with respect to \(\{\epsilon_{+},\epsilon_{-},2\phi_{1},m\}\) for \(k=1\) as defined in footnote 4. Here, for \(k=2\), we set \(\epsilon_{+}=0\) for simplicity and the order of coefficients in the modular ansatz is ascending order with respect to \(\{\epsilon_{-},2\phi_{1},m\}\).
**Blowup equation.** We can fix the unknown constants in the modular ansatz using the blowup equation. To begin with, let us evaluate the effective prepotential. The 1-loop prepotential from the \(SU(2)\) vector multiplet, an adjoint hypermultiplet, and their KK towers is given by
\[\epsilon_{1}\epsilon_{2}\mathcal{E}_{\rm 1-loop}=\frac{1}{12}\sum_{n\in\mathbb{Z}}\big{(}|n\tau\pm 2\phi_{1}|^{3}-|n\tau\pm 2\phi_{1}+m|^{3}\big{)}+\epsilon_{+}^{2}\phi_{1}=(-m^{2}+\epsilon_{+}^{2})\phi_{1}. \tag{3.40}\]
Here, we use the zeta function regularization for the infinite sum. Then the effective prepotential is given by
\[\mathcal{E}=\frac{1}{\epsilon_{1}\epsilon_{2}}(-m^{2}+\epsilon_{+}^{2})\phi_{1}+\mathcal{E}_{\rm tree}^{(0)}\,,\quad\mathcal{E}_{\rm tree}^{(0)}=\frac{1}{\epsilon_{1}\epsilon_{2}}\big{(}w\phi_{1}^{2}+(-2m^{2}+2\epsilon_{+}^{2})\phi_{0,0}\big{)}\,, \tag{3.41}\]
where the first term in \(\mathcal{E}_{\rm tree}^{(0)}\) is from the \(SU(2)\) gauge kinetic term and \(\phi_{0,0}\) is an auxiliary scalar for the non-dynamical 2-form field. One notices that under the reparametrization \(w\to\tau\), \(\phi_{1}\to\phi_{1,0}-\phi_{2,0}\), the effective prepotential is the same as the type IIA prepotential in (3.20).
To formulate the blowup equation, we choose magnetic fluxes for \(\phi_{1}\), \(\phi_{0,0}\), \(m\), \(\tau\) and \(w\) as
\[n_{1}\in\mathbb{Z}\,,\quad n_{0,0}\in\mathbb{Z}\,,\quad B_{m}=1/2\,,\quad B_{\tau}=B_{w}=0\,. \tag{3.42}\]
Since the effective prepotential in (3.41) and the elliptic genus in (3.32) are the same as those for the type IIA picture up to reparametrization and overall factor, the same blowup equation should hold for the partition function in the type IIB picture:
\[\Lambda\hat{Z}_{\rm str}^{\rm IIB}=\sum_{n_{0,0},n_{1}\in\mathbb{Z}}(-1)^{n_{1}}e^{-2\pi iV}\frac{\hat{Z}_{\rm pert}^{\rm IIB(N)}\hat{Z}_{\rm pert}^{\rm IIB(S)}}{\hat{Z}_{\rm pert}^{\rm IIB}}\hat{Z}_{\rm str}^{\rm IIB(N)}\hat{Z}_{\rm str}^{\rm IIB(S)}\,. \tag{3.43}\]
We checked that this blowup equation holds up to 2-strings and the third order in the \(q\)-expansion. We also checked that inserting the 1-string modular ansatz (3.39) into the blowup equation and solving it allows us to determine all 32 unknown constants given in Table 2 within the ansatz.
### Heterotic LSTs
The second example is the \(\mathcal{N}=(1,0)\) LSTs on \(N\) parallel NS5-branes in the \(E_{8}\times E_{8}\) and \(SO(32)\) heterotic string theories which we call rank \(N\) heterotic LSTs [56; 57]. Again, these two LSTs are T-dual to each other under a circle compactification. In this subsection, we study the elliptic genera and the blowup equations of the rank 1 heterotic LSTs.
#### 3.2.1 \(E_{8}\times E_{8}\) picture
The \(E_{8}\times E_{8}\) heterotic LST is the worldvolume theory on a single M5-brane placed between two M9-branes at each end of the interval \(S^{1}/\mathbb{Z}_{2}\). Under the circle reduction, the M5-brane and the M9-branes reduce to an NS5-brane and two sets of \(\text{O}8^{-}+8\text{D}8\)-branes located at two ends of the interval as illustrated in Figure 3[92]. This theory can also be realized in F-theory by two \(-1\) curves \(\Sigma^{1}\) and \(\Sigma^{2}\) in the base surface of an elliptic CY3 with the intersection matrix given by [22]
\[\Omega^{\alpha\beta}=\begin{pmatrix}-1&1\\ 1&-1\end{pmatrix}. \tag{3.44}\]
The partition function of this LST can be factorized into the perturbative part \(Z^{\text{HE}}_{\text{pert}}\) for a single tensor multiplet and the contribution from strings \(Z^{\text{HE}}_{\text{str}}\) as
\[Z^{\text{HE}}_{\text{GV}}=Z^{\text{HE}}_{\text{pert}}\cdot Z^{\text{HE}}_{\text {str}}=Z^{\text{HE}}_{\text{pert}}\cdot\sum_{k_{1},k_{2}\geq 0}Q^{k_{1}} \bigg{(}\frac{e^{2\pi iw}}{Q}\bigg{)}^{k_{2}}Z^{\text{HE}}_{(k_{1},k_{2})}\,, \tag{3.45}\]
where \(Q\equiv e^{2\pi i(\phi_{1,0}-\phi_{2,0})}\) and \(\phi_{1,0}-\phi_{2,0}\) is the scalar VEV for the dynamical tensor multiplet.
**GLSM.** The \(E_{8}\times E_{8}\) LST contains non-perturbative strings arising from the D2-branes stretched between the D8-branes, the \(\text{O}8^{-}\)-plane and the NS5-brane in Figure 3(b). The worldvolume theory on the D2-branes at low energy can be described by a 2d \(\mathcal{N}=(0,4)\) gauge theory. For \((k_{1},k_{2})\)-strings, the gauge group is \(O(k_{1})\times O(k_{2})\). There are an \(\mathcal{N}=(0,4)\) vector multiplet and a symmetric hypermultiplet coming from
Figure 3: (a) Branes for the \(E_{8}\times E_{8}\) LST in the M-theory setup. (b) The \(E_{8}\times E_{8}\) LST realized in type IIA string theory.
the D2-D2 string modes for each gauge node, the Fermi multiplets in the bifundamental representations of \(O(k_{1})\times SO(16)\) and \(O(k_{2})\times SO(16)\) coming from the D2-D8 string modes, and the \(O(k_{1})\times O(k_{2})\) bifundamental Fermi multiplets and twisted hypermultiplets from the strings between two adjacent D2-branes. These multiplets form representations of the \(SU(2)_{l}\times SU(2)_{r}\) global symmetry which rotates 2345 directions, and those of the \(SO(3)\sim SU(2)_{R}\) rotational symmetry for 678 directions. In the strong coupling limit, the 678 directions and the M-theory circle become an \(\mathbb{R}^{4}\), so we expect the \(SO(3)\) symmetry enhances to \(SO(4)=SU(2)_{R}\times SU(2)_{m_{0}}\). We summarize the matter content of the 2d theory in Figure 4. When \(k_{1}=0\) or \(k_{2}=0\), this 2d gauge theory reduces to that for self-dual strings in the 6d E-string theory studied in [92; 93].
We can compute \(Z_{(k_{1},k_{2})}\) of the 2d gauge theory using the localization method [75; 89]. Here, we give explicit expressions of the elliptic genera up to \((k_{1},k_{2})=(k_{2},k_{1})=(2,1)\) order. The contour integral expressions for the elliptic genera and the detailed computations are presented in Appendix B.1. When \(k_{2}=0\), the elliptic
Figure 4: Quiver diagram (a) and matter content (b) of the 2d \(\mathcal{N}=(0,4)\) gauge theory for the \(E_{8}\times E_{8}\) LST. Here, solid, dashed and zigzag lines denote hypermultiplets, Fermi multiplets and twisted hypermultiplets, respectively, and \(i=1,2\) labels each gauge node.
genera reduce to those for the E-strings obtained in [93]:
\[Z^{\rm HE}_{(1,0)} =-\frac{1}{2}\sum_{I=1}^{4}\frac{\prod_{l=1}^{8}\theta_{I}(m_{l})}{ \eta^{6}\theta_{1}(\epsilon_{1})\theta_{1}(\epsilon_{2})}\,, \tag{3.46}\] \[Z^{\rm HE}_{(2,0)} =\frac{1}{4\eta^{12}\theta_{1}(\epsilon_{1})\theta_{1}(\epsilon_ {2})}\sum_{I=1}^{4}\left(\frac{\prod_{l=1}^{8}\theta_{I}(m_{l}\pm\frac{ \epsilon_{1}}{2})}{\theta_{1}(2\epsilon_{1})\theta_{1}(\epsilon_{2}-\epsilon_ {1})}+\frac{\prod_{l=1}^{8}\theta_{I}(m_{l}\pm\frac{\epsilon_{2}}{2})}{\theta _{1}(2\epsilon_{2})\theta_{1}(\epsilon_{1}-\epsilon_{2})}\right)\] \[\quad+\sum_{I=1}^{4}\sum_{J=I+1}^{4}\frac{\theta_{\sigma(I,J)}(0 )\theta_{\sigma(I,J)}(2\epsilon_{+})\prod_{l=1}^{8}\theta_{I}(m_{l})\theta_{J }(m_{l})}{4\eta^{12}\theta_{1}(\epsilon_{1,2})^{2}\theta_{\sigma(I,J)}( \epsilon_{1})\theta_{\sigma(I,J)}(\epsilon_{2})}\,, \tag{3.47}\]
where \(\theta_{I}\) are the Jacobi theta functions defined in (A.11), \(m_{1,\cdots,8}\) are chemical potentials for the \(SO(16)\) global symmetry, and \(\sigma(I,J)\) is defined as
\[\sigma(I,J)=\sigma(J,I)\,,\quad\sigma(I,I)=0\,,\quad\sigma(1,I)= I\,, \tag{3.48}\] \[\sigma(2,3)=4\,,\quad\quad\quad\sigma(2,4)=3\,,\quad\sigma(3,4)= 2\,.\]
Here we use a shorthand notation \(\theta_{I}(x\pm y)=\theta_{I}(x+y)\theta_{I}(x-y)\). For \((k_{1},k_{2})=(1,1)\),
\[Z^{\rm HE}_{(1,1)}=\frac{1}{4}\sum_{I,J=1}^{4}\frac{\prod_{l=1}^{8}\theta_{I} (m_{l})\cdot\prod_{l=9}^{16}\theta_{J}(m_{l})}{\eta^{12}\theta_{1}(\epsilon_ {1})^{2}\theta_{1}(\epsilon_{2})^{2}}\frac{\theta_{\sigma(I,J)}(\pm m_{0}+ \epsilon_{-})}{\theta_{\sigma(I,J)}(\pm m_{0}-\epsilon_{+})}\,, \tag{3.49}\]
where \(m_{l=9,\cdots,16}\) are chemical potentials for the other \(SO(16)\) symmetry and \(m_{0}\) is a chemical potential for \(SU(2)_{m_{0}}\). The \((k_{1},k_{2})=(2,1)\)-string elliptic genus is
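All of the expressions (3.46)–(3.52) are assembled from the Jacobi theta functions \(\theta_{I}\), the Dedekind eta function and the index table \(\sigma(I,J)\). The sketch below provides these building blocks and evaluates the single-string genus (3.46) at a generic point; we assume standard series conventions for \(\theta_{I}\) and \(\eta\) (intended to mirror (A.11), which is not reproduced here), and the function names and cutoffs are ours.

```python
import cmath

def theta(I, tau, z, cutoff=20):
    # Jacobi theta functions theta_{1..4}(tau, z) as truncated series.
    q = cmath.exp(2j * cmath.pi * tau)
    y = cmath.exp(2j * cmath.pi * z)
    ns = range(-cutoff, cutoff + 1)
    if I == 1:
        return -1j * sum((-1)**n * q**((n + 0.5)**2 / 2) * y**(n + 0.5) for n in ns)
    if I == 2:
        return sum(q**((n + 0.5)**2 / 2) * y**(n + 0.5) for n in ns)
    if I == 3:
        return sum(q**(n**2 / 2) * y**n for n in ns)
    if I == 4:
        return sum((-1)**n * q**(n**2 / 2) * y**n for n in ns)

def eta(tau, cutoff=50):
    # Dedekind eta function, truncated product.
    q = cmath.exp(2j * cmath.pi * tau)
    out = q**(1 / 24)
    for n in range(1, cutoff + 1):
        out *= 1 - q**n
    return out

# sigma(I, J) of (3.48), as a symmetric lookup table.
SIGMA = {(1, 2): 2, (1, 3): 3, (1, 4): 4, (2, 3): 4, (2, 4): 3, (3, 4): 2}
def sigma(I, J):
    return 0 if I == J else SIGMA[tuple(sorted((I, J)))]

def Z_HE_10(tau, eps1, eps2, masses):
    # (1,0)-string elliptic genus (3.46) of the E8 x E8 heterotic LST.
    total = 0
    for I in (1, 2, 3, 4):
        num = 1
        for m in masses:          # the eight SO(16) chemical potentials
            num *= theta(I, tau, m)
        total += num / (eta(tau)**6 * theta(1, tau, eps1) * theta(1, tau, eps2))
    return -total / 2

print(Z_HE_10(0.4 + 1.3j, 0.11, -0.07, [0.01 * k for k in range(1, 9)]))
```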
\[Z^{\rm HE}_{(2,1)}=\frac{1}{4}Z^{(0)}_{(2,1)}+\frac{1}{8}\sum_{K=1}^{4}\sum_{I <J}^{4}Z^{(I,J,K)}_{(2,1)}\,, \tag{3.50}\]
where
\[Z^{(0)}_{(2,1)} =\sum_{I,J=1}^{4}\frac{-\prod_{l=1}^{8}\theta_{I}(m_{l}\pm\frac{ \epsilon_{1}}{2})\cdot\prod_{l=9}^{16}\theta_{J}(m_{l})}{2\eta^{18}\theta_{1 }(\epsilon_{1,2})^{2}\theta_{1}(2\epsilon_{1})\theta_{1}(\epsilon_{2}- \epsilon_{1})}\frac{\theta_{\sigma(I,J)}(\pm m_{0}+\epsilon_{1}-\frac{ \epsilon_{2}}{2})}{\theta_{\sigma(I,J)}(\pm m_{0}-\epsilon_{1}-\frac{ \epsilon_{2}}{2})}+(\epsilon_{1}\leftrightarrow\epsilon_{2})\] \[\quad+\sum_{I=1}^{4}\frac{-\prod_{l=1}^{8}\theta_{I}(m_{l}\pm(m_{ 0}+\epsilon_{+}))\cdot\prod_{l=9}^{16}\theta_{I}(m_{l})}{\eta^{18}\theta_{1}( \epsilon_{1,2})\theta_{1}(2m_{0})\theta_{1}(2m_{0}+2\epsilon_{+})\theta_{1}( 2m_{0}+2\epsilon_{+}+\epsilon_{1,2})}+(m_{0}\rightarrow-m_{0}), \tag{3.51}\]
and
\[Z^{(I,J,K)}_{(2,1)} =-\frac{\theta_{\sigma(I,J)}(0)\theta_{\sigma(I,J)}(2\epsilon_{+ })}{\eta^{18}\theta_{1}(\epsilon_{1,2})^{3}\theta_{\sigma(I,J)}(\epsilon_{1,2} )}\frac{\theta_{\sigma(I,K)}(\pm m_{0}+\epsilon_{-})\theta_{\sigma(J,K)}(\pm m _{0}+\epsilon_{-})}{\theta_{\sigma(I,K)}(\pm m_{0}-\epsilon_{+})\theta_{ \sigma(J,K)}(\pm m_{0}-\epsilon_{+})}\] \[\quad\cdot\prod_{l=1}^{8}\theta_{I}(m_{l})\theta_{J}(m_{l}) \cdot\prod_{l=9}^{16}\theta_{K}(m_{l})\,. \tag{3.52}\]
A few remarks are in order. First, we expect the two \(SO(16)\) flavor symmetries, which can be seen in Figure 4 and from the matter content, to be enhanced to the
\(E_{8}\times E_{8}\) symmetry at low energy. One can check this enhancement from the elliptic genera by expanding them in terms of \(q\). Second, although the worldsheet theory seems to have an \(SU(2)_{m_{0}}\) flavor symmetry, the bulk 6d LST does not have any matter fields charged under this symmetry. In fact, this symmetry only acts on the string modes stretched between the two O8-planes, which correspond to 10d bulk modes moving along the direction parallel to the orientifold planes in Figure 3(b). These modes are decoupled from the 6d LST. Therefore the BPS spectrum of strings in the LST should not depend on \(m_{0}\). Let us check these expectations.
First, the \((1,0)\)-string and \((2,0)\)-string elliptic genera (3.46) and (3.47) do not contain \(m_{0}\). Second, the \((1,1)\)-string elliptic genus (3.49) has \(m_{0}\) dependence. However, the \((1,1)\)-string order is independent of the dynamical parameter \(Q\), and we can therefore consider it as a contribution from the bulk modes not involved in the LST spectrum. Lastly, the \((2,1)\)-string elliptic genus (3.50) does contain \(m_{0}\), which seems to be a problem. Quite surprisingly, however, if we express the partition function in the GV-invariant form given in (2.3) and extract the BPS spectrum, the \(m_{0}\) dependence at \((2,1)\)-string order disappears completely. We find that the single letter index at \((2,1)\)-string order is
\[f_{(2,1)} =q^{1/2}\big{[}\chi_{1,1}(\epsilon_{\pm})+\chi_{1/2,1/2}(\epsilon _{\pm})(\chi_{\bf 248}^{(1)}+1)+(\chi_{\bf 3875}^{(1)}+\chi_{\bf 248}^{(1)}+2) \big{]} \tag{3.53}\] \[\quad+q^{3/2}\big{[}\chi_{2,2}(\epsilon_{\pm})+\chi_{3/2,3/2}( \epsilon_{\pm})(\chi_{\bf 248}^{(1)}+3)+2\chi_{3/2,1/2}(\epsilon_{\pm})\] \[\qquad\qquad+\chi_{1,1}(\epsilon_{\pm})(\chi_{\bf 3875}^{(1)}+4 \chi_{\bf 248}^{(1)}+\chi_{\bf 248}^{(2)}+5)+\chi_{1,0}(\epsilon_{\pm})(2 \chi_{\bf 248}^{(1)}+3)+2\chi_{1/2,3/2}(\epsilon_{\pm})\] \[\qquad\qquad+\chi_{1/2,1/2}(\epsilon_{\pm})(\chi_{\bf 30380}^{(1)}+ \chi_{\bf 248}^{(1)}\chi_{\bf 248}^{(2)}+\chi_{\bf 248}^{(2)}+4\chi_{\bf 3875}^{( 1)}+7\chi_{\bf 248}^{(1)}+10)\] \[\qquad\qquad+\chi_{0,1}(\epsilon_{\pm})(2\chi_{\bf 248}^{(1)}+3)+( \chi_{\bf 147250}^{(1)}+2\chi_{\bf 30380}^{(1)}+\chi_{\bf 3875}^{(1)}\chi_{\bf 2 48}^{(2)}+\chi_{\bf 248}^{(1)}\chi_{\bf 248}^{(2)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+4\chi _{\bf 3875}^{(1)}+8\chi_{\bf 248}^{(1)}+2\chi_{\bf 248}^{(2)}+7\big{]}+ \cdots,\]
where \(f_{(k_{1},k_{2})}\) is defined via \(Z_{\rm str}^{\rm HE}={\rm PE}[\frac{\sqrt{p_{1}p_{2}}}{(1-p_{1})(1-p_{2})}\sum Q^{k_{1}}(e^{2\pi iw}/Q)^{k_{2}}f_{(k_{1},k_{2})}]\), \(\chi_{j_{l},j_{r}}(\epsilon_{\pm})=\chi_{j_{l}}(\epsilon_{-})\chi_{j_{r}}(\epsilon_{+})\) represents a spin \((j_{l},j_{r})\) state, and \(\chi_{\bf R}^{(i)}\) is the character of representation \({\bf R}\) in the \(i\)-th \(E_{8}\) symmetry algebra. This is indeed independent of \(m_{0}\), showing that the dynamical BPS states in the spectrum of the LST are independent of the \(U(1)_{m_{0}}\) symmetry. We expect this to hold for higher order computations.
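The characters \(\chi_{j}\) appearing in (3.53) are ordinary \(SU(2)\) characters, \(\chi_{j}(x)=\sum_{k=-j}^{j}e^{2\pi ikx}\), and \(\chi_{j_{l},j_{r}}(\epsilon_{\pm})\) is simply their product as stated above. A two-function sketch, assuming this standard fugacity convention:

```python
import cmath

def su2_char(spin, x):
    # SU(2) character chi_j(x) = sum_{k=-j}^{j} exp(2*pi*i*k*x), k in integer steps.
    two_j = int(round(2 * spin))
    return sum(cmath.exp(2 * cmath.pi * 1j * (-spin + n) * x) for n in range(two_j + 1))

def chi(jl, jr, em, ep):
    # chi_{j_l, j_r}(eps_pm) = chi_{j_l}(eps_-) * chi_{j_r}(eps_+), as defined above.
    return su2_char(jl, em) * su2_char(jr, ep)

# Sanity check of the character algebra: chi_{1/2}^2 = chi_1 + chi_0 at a generic point.
x = 0.137
print(abs(su2_char(0.5, x)**2 - (su2_char(1, x) + su2_char(0, x))) < 1e-12)   # True
```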
**Modularity.** The chiral fermions in the 2d \(\mathcal{N}=(0,4)\) gauge theory given in Figure 4(b) contribute to the 2d anomaly polynomial \(I_{4}\) as
\[\begin{split}\lambda^{\dot{\alpha}A}_{+(1)}+\lambda^{\dot{\alpha}A}_{+(2)}&\to\sum_{i=1}^{2}k_{i}(k_{i}-1)\bigg{(}\frac{c_{2}(r)+c_{2}(R)}{2}+\frac{p_{1}(T_{2})}{24}\bigg{)}\,,\\ \lambda^{\alpha A}_{-(1)}+\lambda^{\alpha A}_{-(2)}&\to-\sum_{i=1}^{2}k_{i}(k_{i}+1)\bigg{(}\frac{c_{2}(l)+c_{2}(R)}{2}+\frac{p_{1}(T_{2})}{24}\bigg{)}\,,\\ \chi^{\dot{\alpha}}_{-}+\chi^{\alpha}_{+}&\to 2k_{1}k_{2}\bigg{(}\frac{c_{2}(l)-c_{2}(r)}{2}\bigg{)}\,,\\ \Psi^{(1)}_{l}+\Psi^{(2)}_{l}&\to\bigg{(}\frac{k_{1}}{4}\operatorname{Tr}F_{1}^{2}+\frac{k_{2}}{4}\operatorname{Tr}F_{2}^{2}+(k_{1}+k_{2})\frac{p_{1}(T_{2})}{3}\bigg{)}\,,\end{split} \tag{3.54}\]
where \(F_{1}\) and \(F_{2}\) are the 2-form field strengths for the two \(SO(16)\) global symmetries. This agrees with the anomaly polynomial computed from the anomaly inflow using (2.41):
\[I_{4}=-\frac{(k_{1}-k_{2})^{2}}{2}(c_{2}(l)-c_{2}(r))-\frac{k_{1}+k_{2}}{2}\bigg{(}c_{2}(l)+c_{2}(r)+2c_{2}(R)-\frac{1}{2}p_{1}(T_{2})\bigg{)}+\frac{k_{1}}{4}\operatorname{Tr}F_{1}^{2}+\frac{k_{2}}{4}\operatorname{Tr}F_{2}^{2}\,. \tag{3.55}\]
We notice that the elliptic genera above for \(k_{1},k_{2}\geq 1\) have additional poles at \(m_{0}=\epsilon_{+}\) besides the center of mass contributions. These poles come from the bulk modes decoupled from the 6d LST. Based on this observation, we write the modular ansatz as
\[Z^{\text{HE}}_{(k_{1},k_{2})}=\frac{1}{\eta^{12(k_{1}+k_{2})}}\frac{\Phi_{(k_{1},k_{2})}(\tau,\phi,m_{0},m_{l=1,\cdots,16})}{\mathcal{D}^{\text{cm}}_{(k_{1},k_{2})}\cdot\prod_{s=1}^{\kappa}\varphi_{-1,1/2}(\pm s\lambda(m_{0}-\epsilon_{+}))}\,, \tag{3.56}\]
with \(\kappa=\min(k_{1},k_{2})\). The \((1,1)\)-string elliptic genus (3.49) from the ADHM computation and the identity \(\prod_{I=1}^{4}\theta_{I}(\pm m_{0}-\epsilon_{+})=\eta^{6}\theta_{1}(\pm 2m_{0}-2\epsilon_{+})\) suggest \(\lambda=2\) here.
Let us first consider the case where the flavor chemical potentials are switched off \(m_{l=1,\cdots,16}=0\). In this case the numerator \(\Phi_{(k_{1},k_{2})}\) can be written in terms of the Eisenstein series and the \(SU(2)\) Jacobi forms for \(\epsilon_{\pm}\) and \(m_{0}\), and we find that this ansatz is compatible with the elliptic genera from the ADHM construction. We explicitly check this up to \((2,1)\)-string order.
However, the cases with generic flavor chemical potentials turn out to be rather subtle. It has been shown that the \(Z^{\text{HE}}_{(k_{1},0)}\) for the E-strings can be expressed in terms of \(E_{8}\) Weyl invariant Jacobi forms [76; 93], which demonstrates the enhancement of symmetry from \(SO(16)\) to \(E_{8}\) at low energy. Similarly, we expect the symmetry enhancement \(SO(16)\times SO(16)\to E_{8}\times E_{8}\) in the 2d CFTs for the strings in the LST. This can be verified by checking if the spectrum of the BPS strings forms \(E_{8}\times E_{8}\) representations. Indeed, we explicitly checked in (3.53) that the single letter index
at \((k_{1},k_{2})=(2,1)\) can be written in terms of \(E_{8}\times E_{8}\) characters. So it seems that one can formulate a consistent ansatz of the form (3.56) with generic flavor chemical potentials.
However, our analysis revealed that this is not the case due to the presence of additional bulk states that do not carry dynamical tensor charge. As these states are decoupled from the LST, we cannot expect them to form representations of the \(E_{8}\times E_{8}\) symmetry. For instance, the single letter index at \((k_{1},k_{2})=(1,1)\), which we compute from the \((1,1)\)-string elliptic genus in (3.49), cannot be written in terms of the \(E_{8}\times E_{8}\) characters, as demonstrated below:
\[f_{(1,1)}=\frac{e^{4\pi im_{0}}}{1-e^{4\pi i(m_{0}\pm\epsilon_{+})}}\bigg{[}\frac{\chi_{0,1/2}(\epsilon_{+})}{q}+\Big{(}2\chi_{1/2,1}(\epsilon_{\pm})-\chi_{1/2,0}(\epsilon_{\pm})(\chi_{1}(m_{0})-1) \tag{3.57}\] \[\qquad\qquad\qquad+\chi_{0,1/2}(\epsilon_{\pm})(\chi_{1}(m_{0})+\chi_{\bf 120}^{(1)}+\chi_{\bf 120}^{(2)}+1)+\chi_{1/2}(m_{0})\chi_{\bf 16}^{(1)}\chi_{\bf 16}^{(2)}\Big{)}+\mathcal{O}(q)\bigg{]}\,,\]
where the notations are the same as those in (3.53), except that \(\chi_{\bf R}^{(i)}\) is now the \(i\)-th \(SO(16)\) character. We believe that this is due to the presence of additional bulk states in the spectrum at this order. For this reason, the ADHM computations above for \(k_{1},k_{2}\geq 1\) do not give elliptic genera that exhibit the symmetry enhancement to \(E_{8}\times E_{8}\). Therefore we are unable to write ansatzes that reproduce the ADHM results in terms of \(E_{8}\) Jacobi forms.
Even though the elliptic genera obtained from the ADHM computation do not exhibit manifest \(E_{8}\times E_{8}\) symmetry, we can still attempt to construct a modular ansatz in terms of \(E_{8}\) Weyl invariant Jacobi forms that accurately reproduces the BPS spectrum of the LST up to extra decoupled string states. There are nine fundamental \(E_{8}\) Jacobi forms \(A_{1,2,3,4,5}\) and \(B_{2,3,4,6}\) given in (A.27), where \(A_{n}\) and \(B_{n}\) have index \(n\) and weights \(4\) and \(6\), respectively. We first write an ansatz for \(\Phi_{(k_{1},k_{2})}\) in (3.56) using the \(E_{8}\) Jacobi forms for \(m_{1,\cdots,8}\) and \(m_{9,\cdots,16}\), together with the Eisenstein series and \(SU(2)\) Jacobi forms for \(\epsilon_{\pm}\) and \(m_{0}\). We then fix the unknown coefficients in the ansatz by using the dynamical BPS spectrum from the ADHM computation. To our surprise, we find that there are two values of \(\lambda\), namely \(1\) and \(2\), that are consistent with both the \(E_{8}\times E_{8}\) symmetry and the spectrum of the LST obtained through the ADHM computation. We check this up to \((2,1)\)-string order and \(q^{5}\) order in the \(q\)-expansion, and report the results from these two choices in Table 3 and 4, respectively, where the coefficients are listed in ascending order with respect to \(\{\epsilon_{+},\epsilon_{-},m_{0},m_{1,\cdots,8},m_{9,\cdots,16}\}\).
At \((k_{1},k_{2})=(1,0)\) and \((2,0)\), there are \(1\) and \(49\) constants, respectively, and they can be fixed by comparing the ansatz with the E-string elliptic genera as done in [76]. At \((k_{1},k_{2})=(1,1)\), the comparison does not yield any constraints due to the presence of decoupled states in the elliptic genus that are not part of the LST. However, the requirement from the GV-invariant form given in (2.3) still imposes additional constraints. This allows us to fix all coefficients for \(\lambda=1\) and \(23\) coefficients among \(37\) for \(\lambda=2\). At \((k_{1},k_{2})=(2,1)\), there are \(130\) coefficients for \(\lambda=1\) and \(831\) coefficients
for \(\lambda=2\), and all of these coefficients are determined by comparison with the LST spectrum computed from the ADHM computation.
It is worth noting that the spectra of the decoupled states, which do not carry dynamical tensor charge, differ between the ADHM result and the results from the two values of \(\lambda\). We also note that 14 coefficients at \((1,1)\)-string order for \(\lambda=2\) are not fixed and they appear in the coefficients in \((2,1)\)-string ansatz, as shown in Table 4. However, these coefficients do not affect the BPS spectrum for states carrying non-zero dynamical tensor charge.
**Blowup equation.** The tree-level and one-loop contributions to the effective prepotential of the \(E_{8}\times E_{8}\) LST are identical to those for the E-string theory, but there are additional contributions from the auxiliary 2-form field. Collecting these contributions yields the full effective prepotential which is given by
\[\begin{split}{\cal E}&=\frac{1}{\epsilon_{1}\epsilon_{2}}\Bigg{(}\frac{\tau}{2}(\phi_{1,0}-\phi_{2,0})^{2}+\Bigg{(}\frac{\epsilon_{1}^{2}+\epsilon_{2}^{2}}{4}-\frac{1}{2}\sum_{i=1}^{8}m_{i}^{2}+\epsilon_{+}^{2}\Bigg{)}(\phi_{1,0}-\phi_{2,0})\Bigg{)}+{\cal E}_{\rm tree}^{(0)}\,,\\ {\cal E}_{\rm tree}^{(0)}&=\frac{1}{\epsilon_{1}\epsilon_{2}}\Bigg{(}\frac{\epsilon_{1}^{2}+\epsilon_{2}^{2}}{2}-\frac{1}{2}\sum_{i=1}^{16}m_{i}^{2}+2\epsilon_{+}^{2}\Bigg{)}\phi_{0,0}\,,\end{split} \tag{3.58}\]
with an auxiliary scalar VEV \(\phi_{0,0}\).
For a blowup equation, we consider a set of consistent magnetic fluxes for the dynamical tensor and the auxiliary 2-form field such that
\[\phi_{1,0}-\phi_{2,0}\ \rightarrow\ \phi_{1,0}-\phi_{2,0}+n_{1,0}\epsilon_{1,2}\,,\quad\phi_{0,0}\ \rightarrow\ \phi_{0,0}+n_{0,0}\epsilon_{1,2}\,, \tag{3.59}\]
where the fluxes are quantized as
\[n_{1}\equiv n_{1,0}+n_{0,0}\in\mathbb{Z}+1/2\,,\quad n_{2}\equiv n_{0,0}\in\mathbb{Z}\,. \tag{3.60}\]
\begin{table}
\begin{tabular}{|c|l|} \hline \((k_{1},k_{2})\) & \multicolumn{2}{|c|}{\(\big{\{}C_{i}^{(k_{1},k_{2})}\big{\}}\)} \\ \hline \hline \((1,0)\) & \(\{1\}\) \\ \hline \((2,0)\) & \(\frac{1}{2^{13}\cdot 36}\,\{4,-3,5,3,10,32,-15,96,32,36,-24,40,-12,-40,-128,0,-5,-4,5,-32,-24,-9,0,15,\) \\ & \(9,-10,64,-5,32,0,3,6,-15,-9,0,-72,15,-96,-12,-3,3,0,-27,18,-45,9,45,108,0\}\) \\ \hline \((1,1)\) & \(\frac{1}{2^{2}\cdot 3}\{-1,1\}\) \\ \hline \((2,1)\) & \(\frac{1}{2^{2}\cdot 3}\{0,-4,8,-4,3,0,-15,0,16,16,-3,-3,10,15,-48,112,3,5,-10,-112,48,-5,-16,-16,\) \\ & \(-12,-24,12,-40,36,24,40,40,-128,0,-36,0,-40,256,-128,0,0,0,5,0,0,-24,0,-5,-5,40,\) \\ & \(-48,5,20,8,4,9,0,-5,0,-9,-10,5,32,-9,0,15,10,-32,64,0,9,0,-15,-96,32,0,0,0,0,-9, 0,\) \\ & \(15,0,-12,6,9,0,-15,-24,-60,3,-6,-15,0,144,-120,-3,0,15,72,0,3,-3,-3,0,0,3,0,0, 0,9,\) \\ & \(18,-9,45,-27,-18,-45,-45,108,0,27,0,45,-216,108,0,0,0,0,0\) \\ \hline \end{tabular}
\end{table}
Table 3: Coefficients \(C_{i}^{(k_{1},k_{2})}\) in the modular ansatz of rank 1 \(E_{8}\times E_{8}\) heterotic LST, written in terms of \(E_{8}\) Jacobi forms and \(\lambda=1\).
\(\{C_{i}^{(k_{1},k_{2})}\}\)
\begin{tabular}{|c|} \hline \(\left(k_{1},k_{2}\right)\) \\ \hline \hline \(\left(1,1\right)\) & \(\frac{1}{2\pi^{2}}\{-1,8957952c_{2},1-8957952c_{2},-8957952c_{5},-16-8957952c_{ 5},8957952c_{7},-8957952c_{7},8957952c_{9},\\ \(\left(1,1\right)\) & \(6-8957952c_{2},8957952c_{12},-3-2\), \(8957952c_{16},-6-8957952c_{16},6-8957952c_{16},8957952c_{18},\\ \(\left(1,1\right)\) & \(6-8957952c_{18},8957952c_{20},-3-8957952c_{20},8957952c_{22},-12-8957952c_{22},8957952c_{ 24},-8957952c_{24},-857952c_{24},12,\\ \(\left(8957952c_{22},-8957952c_{23},-8957952c_{25},18-8957952c_{26},9857952c_{2 9},857952c_{22},-18-8957952c_{31},-18-8957952c_{31},-9,0,9857952c_{35},\\ \(-27\) & \(-8957952c_{32},27\)) \\ \hline \end{tabular}
\begin{tabular}{|c|} \hline \(\left(k_{1},k_{2}\right)\) \\ \hline \hline \(\left(1,1\right)\) & \(\frac{1}{2\pi^{2}}\{-1,8957952c_{2},1-8957952c_{2},-8957952c_{56},-16-8957952c_{ 56},8957952c_{7},-8957952c_{7},8957952c_{9},\\ \(\left(1,1\right)\) & \(6-8957952c_{2},8957952c_{18},32-8957952c_{19},-32,8957952c_{26},-6-8957952c_{26},89 57952c_{18},\\ \(\left(1,1\right)\) & \(6-8957952c_{26},8957952c_{20},-3-8957952c_{20},8957952c_{22},-12-8957952c_{22},89 57952c_{24},-8597952c_{24},12,\\ \(\left(8957952c_{22},-8957952c_{22},18-8957952c_{23},-8957952c_{23},957952c_{25}, -18-8957952c_{23},-18-8957952c_{31},-9,0,9857952c_{35},\\ \(-27\) & \(-8957952c_{32},27\)) \\ \hline \end{tabular}
\begin{tabular}{|c|} \hline \(\left(1,1\right)\) & \(\frac{1}{2\pi^{2}}\{-1,8957952c_{2},1-8957952c_{2},-8957952c_{56},-16-8957952c_{ 56},8957952c_{7},-8957952c_{7},-8957952c_{7},8957952c_{9},\\ \(\left(1,1\right)\) & \(6-8957952c_{2},8957952c_{20},-3-8957952c_{20},8957952c_{22},-12-8957952c_{22},89 57952c_{24},-8957952c_{24},12,\\ \(\left(8957952c_{22},-8957952c_{23},-8957952c_{23},18-8957952c_{23},957952c_{2 5},-18-8957952c_{23},957952c_{24},-18-8957952c_{31},-9,0,9857952c_{35},\\ \(-27\) & \(-8957952c_{31},-957952c_{32},7\)) \\ \hline \(\left(1,1\right)\) & \(6-8957952c_{24},8957952c_{20},-3-8957952c_{20},8957952c_{22},-18-8957952c_{22 3},957952c_{24},-8957952c_{25},-8957952c_{26},8957952c_{24},-8957952c_{24},12,\\ \(\left(8957952c_{22},-8957952c_{22},-8957952c_{23},18-8957952c_{23},957952c_{25}, -18-8957952c_{23},957952c_{24},-18-8957952c_{31},-9,0,9857952c_{35},-27\) \\ \(-27\) & \(-8957952c_{23},27\)) \\ \hline \end{tabular}
\begin{tabular}{|c|} \hline \(\left(1,1\right)\) & \(\frac{1}{2\pi^{2}}\{-1,8957952c_{2},1-8957952c_{2},-8957952c_{2},-16-8957952c_{ 56},-16-8957952c_{26},8957952c_{56},8957952c_{7},-8957952c_{7},8957952c_{26}, -8957952c_{7},8957952c_{6},-8957952c_{7},8957952c_{6},89,7957952c_{8},\\ \(\left(1,1\right)\) & \(6-8957952c_{2},8957952c_{20},-3-8957952c_{20},8957952c_{22},-3-8957952c_{22},89 57952c_{22},-12-8957952c_{22},8957952c_{24},-8957952c_{24},12,\\ \(\left(8957952c_{22},-8957952c_{24},12\right)-8957952c_{22},-8957952c_{225},18-8957952c_{2 25},957952c_{22},-18-8957952c_{23},-18-8957952c_{31},-9,0,9857952c_{331},-9,0,98579 52c_{35},\\ \(-27\) & \(-8957952c_{32},27\)) \\ \hline \(\left(2,1\right)\) & \(\frac{1}{2\pi^{2}}\{-1,8957952c_{2},1-8957952c_{2},-8957952c_{2},-18-8957952c_{ 56},-16-8957952c_{26},-8957952c_{6},8957952c_{7},-8957952c_{26},-8957952c_{27},89 57952c_{35},\\ \(\left(1,1\right)\) & \(6-8957952c_{24},8958559c_{20},-3-8957952c_{20},-3-8957952c_{20},8957952c_{22 5},-12-8957952c_{22},8957952c_{22},-18-8957952c_{22},8957952c_{24},-8957952c_{24}, 12,\\ \(\left(8957952c_{22},-8957952c_{24},12\right)-8957952c_{22},-8957952c_{22},18-8957952c_{22 },957952c_{22},957952c_{223},-18-8957952c_{231},-8957952c_{24},-8957952c_{254}, -8957952c_{25},-8957952c_{26},-8957952c_{27},-8957952c_{26},-8957952c_{27},89 57952c_{28},-957952c_{29},18-8957952c_{218},-8957952c_{218},-8957952c_{24},12, \\ \(\left(8957952c_{22},-8957952c_{24},12\right)-8957952c_{22},-8957952c_{25},
where \(\Lambda=\Lambda(w,\tau,m_{i})\). We checked that the elliptic genera computed from the 2d gauge theory description for the strings satisfy this blowup equation, with \(m_{1}=\cdots=m_{8}\) and \(m_{9}=\cdots=m_{16}\), up to \((k_{1},k_{2})=(2,1)\) order and to second order in the \(q\)-expansion.
The elliptic genera can also be computed by solving the blowup equation with a modular ansatz written in terms of \(E_{8}\) Jacobi forms in the following way. The \((1,0)\)- and \((2,0)\)-string elliptic genera have already been calculated in this way for the 6d E-strings in previous work [50; 53]. At the \((1,1)\)- and \((2,1)\)-string order, the modular ansatzes are constrained by the requirement of the GV-invariant expression in (2.3), which we use to fix several coefficients in the ansatzes. Finally, we solve the blowup equation, which completely fixes all the coefficients in the \((2,1)\)-string modular ansatz for both \(\lambda=1\) and \(2\). These results are in agreement with those presented in Table 3 and 4. We expect that the elliptic genera at higher orders can be calculated using the blowup equation in a similar manner.
#### 3.2.2 \(SO(32)\) picture
The \(SO(32)\) heterotic LST is the worldvolume theory on \(N\) NS5-branes in the \(SO(32)\) heterotic string theory. At low energies, it is described by an \(Sp(N)\) gauge theory with 16 fundamental hypermultiplets. In the F-theory construction [22], this theory is engineered by a rational curve \(\Sigma\) in the base surface with \(\Sigma^{2}=0\). It is also T-dual to the \(E_{8}\times E_{8}\) heterotic LST upon compactification on a circle.
The partition function when \(N=1\) can be written as
\[Z_{\rm GV}^{\rm HO}=Z_{\rm pert}^{\rm HO}\cdot Z_{\rm str}^{\rm HO}=Z_{\rm pert}^{\rm HO}\cdot\sum_{k=0}^{\infty}e^{2\pi ikw}Z_{k}^{\rm HO}\,, \tag{3.63}\]
where \(w\sim 1/g_{\rm YM}^{2}\) is interpreted as the inverse gauge coupling squared and \(k\) is the little string number. The 1-loop contributions coming from the \(Sp(1)\) vector and the fundamental hypermultiplets are
\[\begin{split} Z_{\rm pert}^{\rm HO}={\rm PE}\,\bigg{[}&-\frac{1+p_{1}p_{2}}{(1-p_{1})(1-p_{2})}\big{(}Q^{2}+qQ^{-2}\big{)}\frac{1}{1-q}\\ &+\frac{\sqrt{p_{1}p_{2}}}{(1-p_{1})(1-p_{2})}\big{(}Q+qQ^{-1}\big{)}\sum_{l=1}^{16}\big{(}e^{2\pi im_{l}}+e^{-2\pi im_{l}}\big{)}\frac{1}{1-q}\bigg{]},\end{split} \tag{3.64}\]
where \(m_{l}\) are the chemical potentials for the \(SO(32)\) flavor symmetry and \(Q=e^{2\pi i\phi_{1}}\) is the \(Sp(1)\) fugacity.
**GLSM.** Under S-duality, the \(SO(32)\) LST can be mapped into a system of a D5-brane in type I string theory. In this system, the little strings are \(k\) D1-branes bound to the D5-brane, as shown in Figure 5(a) [65; 94]. The partition function for the little strings can be computed using the 2d gauge theory description for the worldvolume theory on the D1-branes.
The 2d gauge theory is an \({\cal N}=(0,4)\)\(O(k)\) gauge theory with a symmetric hypermultiplet, twisted hypermultiplet, and an antisymmetric Fermi multiplet describing the motion of the D1-branes on O9-plane. In addition, there are \(O(k)\times Sp(1)\) bifundamental matters coming from the D1-D5 strings, and the D1-D9 string modes give rise to Fermi multiplets in bifundamental representation of \(O(k)\times SO(32)\). This theory has an \(SO(4)=SU(2)_{l}\times SU(2)_{r}\) global symmetry which rotates 2345 directions and another \(SO(4)=SU(2)_{R}\times SU(2)_{m_{0}}\) rotation symmetry corresponds to 6789 directions. Essentially, the 2d gauge theory agrees with the ADHM data for \(k\)-instantons in the \(Sp(1)\) gauge theory with 16 fundamentals. The 2d gauge theory description and its matter content are summarized in Figure 5(b) and (c).
The elliptic genera of the 2d gauge theory for \(k\) little strings can be calculated using localization. The computational details will be explained in Appendix B.2. The 1-string elliptic genus is given by
\[Z_{1}^{\rm HO}=-\sum_{I=1}^{4}\frac{\prod_{l=1}^{16}\theta_{I}(m_{l})}{2\eta^{12}\theta_{1}(\epsilon_{1})\theta_{1}(\epsilon_{2})\theta_{1}(\pm m_{0}-\epsilon_{+})}\frac{\theta_{I}(m_{0}\pm\phi_{1})}{\theta_{I}(\epsilon_{+}\pm\phi_{1})}\,, \tag{3.65}\]
where \(m_{0}\) is the chemical potential for \(SU(2)_{m_{0}}\). Also, the explicit expression of the 2-string elliptic genus is presented in Appendix B.2. We note that although the elliptic genera seem to depend on the mass parameter \(m_{0}\), which plays no role in the 6d worldvolume theory, the BPS states carrying \(Sp(1)\) gauge charge are all independent of \(m_{0}\). One can see this by checking that all BPS states captured by the elliptic
Figure 5: (a) Brane configuration, (b) quiver description, and (c) matter content for the rank 1 \(SO(32)\) heterotic LST.
genera depending on \(\phi_{1}\) are independent of \(m_{0}\), which we checked up to 2-string and \(q^{2}\) order in the \(q\)-expansion.
The \(E_{8}\times E_{8}\) and \(SO(32)\) heterotic LSTs are related via T-duality [95; 96; 97; 98; 99]. This implies that the partition functions of these two theories are related to each other up to appropriate reparametrizations of the fugacities. To compare the two elliptic genera, we first turn on Wilson lines for the flavor symmetries along the T-dual circle such that they break the \(E_{8}\times E_{8}\) and \(SO(32)\) symmetries to their common subgroup \(SO(16)\times SO(16)\). In the \(E_{8}\times E_{8}\) picture, the Wilson lines shift some of the chemical potentials for the \(E_{8}\times E_{8}\) symmetry as
\[\tilde{m}_{8}^{\text{\,HE}}=m_{8}^{\text{\,HE}}+\tau^{\text{\,HE}}\,,\quad\tilde{m}_{16}^{\text{\,HE}}=m_{16}^{\text{\,HE}}+\tau^{\text{\,HE}}\,,\quad\tilde{m}_{l}^{\text{\,HE}}=m_{l}^{\text{\,HE}}\;(l\neq 8,16)\,, \tag{3.66}\]
and we also redefine
\[\begin{split}\phi_{1,0}^{\text{\,HE}}-\phi_{2,0}^{\text{\,HE}}&\rightarrow\phi_{1,0}^{\text{\,HE}}-\phi_{2,0}^{\text{\,HE}}+\tilde{m}_{8}^{\text{\,HE}}-\frac{\tau^{\text{\,HE}}}{2}\,,\\ w^{\text{\,HE}}&\to w^{\text{\,HE}}+\tilde{m}_{8}^{\text{\,HE}}+\tilde{m}_{16}^{\text{\,HE}}-\tau^{\text{\,HE}}\,.\end{split} \tag{3.67}\]
To distinguish chemical potentials in two LSTs, we add a superscript 'HE' for the chemical potentials in \(E_{8}\times E_{8}\) LST. We list some leading BPS states in Table 5, where we only show the states carrying nonzero charge for \(\phi_{1,0}-\phi_{2,0}\).
\begin{table}
\begin{tabular}{|c|c||c|c|} \hline \(\mathbf{d}\) & \(\oplus N_{j_{i},j_{r}}^{\mathbf{d}}(j_{l},j_{r})\) & \(\mathbf{d}\) & \(\oplus N_{j_{i},j_{r}}^{\mathbf{d}}(j_{l},j_{r})\) \\ \hline \((1,0,0)\) & \(\mathbf{16}_{1}(0,0)\) & \((1,0,\frac{1}{2})\) & \(\overline{\mathbf{128}}_{1}(0,0)\) \\ \hline \((1,0,1)\) & \(\begin{array}{c}[\mathbf{560}_{1}+\mathbf{16}_{1}](0,0)\oplus\\ \mathbf{16}_{1}(\frac{1}{2},\frac{1}{2})\end{array}\) & \((2,0,0)\) & \((0,\frac{1}{2})\) \\ \hline \((2,0,\frac{1}{2})\) & \(\mathbf{128}_{1}(0,\frac{1}{2})\) & \((2,0,1)\) & \(\begin{array}{c}[\mathbf{1820}_{1}+\mathbf{120}_{1}+2](0,\frac{1}{2})\oplus \\ (\frac{1}{2},0)\oplus[\mathbf{120}_{1}+1](\frac{1}{2},1)\oplus\\ (1,\frac{3}{2})\end{array}\) \\ \hline \((2,1,0)\) & \(\mathbf{16}_{2}(0,0)\) & \((2,1,\frac{1}{2})\) & \([\mathbf{128}_{1}\cdot\mathbf{16}_{2}+\overline{\mathbf{128}}_{2}](0,0)\) \\ \hline \((2,1,1)\) & \(\begin{array}{c}[\mathbf{128}_{1}\cdot\overline{\mathbf{128}}_{2}+(\mathbf{1 820}_{1}+2\cdot\mathbf{120}_{1}+4)\mathbf{16}_{2}+\mathbf{560}_{2}](0,0)\\ \oplus[(\mathbf{120}_{1}+3)\cdot\mathbf{16}_{2}](\frac{1}{2},\frac{1}{2})+ \mathbf{16}_{2}(1,1)\end{array}\) \\ \hline \end{tabular}
\end{table}
Table 5: BPS spectrum of the rank \(1\) \(E_{8}\times E_{8}\) heterotic LST after introducing the Wilson lines, up to \(d_{1}\leq 2\), \(d_{2}\leq 1\) and \(d_{3}\leq 1\). Here, \(\mathbf{d}=(d_{1},d_{2},d_{3})\) labels the BPS states with charge \(d_{1}(\phi_{1,0}^{\text{\,HE}}-\phi_{2,0}^{\text{\,HE}})+d_{2}(w^{\text{\,HE}}-\phi_{1,0}^{\text{\,HE}}+\phi_{2,0}^{\text{\,HE}})+d_{3}\tau^{\text{\,HE}}\) after the redefinition as in (3.67). \(\mathbf{R}_{1,2}\) labels representations of \(SO(16)_{1,2}\) whose chemical potentials are \(\{\tilde{m}_{1}^{\text{\,HE}},\cdots,\tilde{m}_{8}^{\text{\,HE}}\}\) and \(\{\tilde{m}_{9}^{\text{\,HE}},\cdots,\tilde{m}_{16}^{\text{\,HE}}\}\) given in (3.66). The states related by the symmetry \(d_{1}\leftrightarrow d_{2}\) and \(SO(16)_{1}\leftrightarrow SO(16)_{2}\) are omitted in the table. We only show the LST BPS states which have nonzero charge for \(\phi_{1,0}-\phi_{2,0}\).
In the \(SO(32)\) picture, the Wilson lines shift
\[\tilde{m}_{l}^{\rm HO}=m_{l}^{\rm HO}\,\left(1\leq l\leq 8\right),\quad\tilde{m}_{l}^{\rm HO}=m_{l}^{\rm HO}+\frac{\tau^{\rm HO}}{2}\,\left(9\leq l\leq 16\right), \tag{3.68}\]
and we redefine
\[w^{\rm HO}\to w^{\rm HO}+\frac{1}{2}\sum_{l=9}^{16}\tilde{m}_{l}^{\rm HO}-\tau^{\rm HO}\,, \tag{3.69}\]
where we put a superscript 'HO' to denote the \(SO(32)\) chemical potentials. The perturbative partition function in (3.64) now becomes
\[\begin{split} Z_{\rm pert}^{\rm HO}&=\text{PE}\,\bigg{[}-\frac{1+p_{1}p_{2}}{(1-p_{1})(1-p_{2})}\Big{(}e^{4\pi i\phi_{1}^{\rm HO}}+e^{2\pi i(\tau^{\rm HO}-2\phi_{1}^{\rm HO})}\Big{)}\frac{1}{1-e^{2\pi i\tau^{\rm HO}}}\\ &\quad+\frac{\sqrt{p_{1}p_{2}}}{(1-p_{1})(1-p_{2})}\Big{(}e^{2\pi i\phi_{1}^{\rm HO}}+e^{2\pi i(\tau^{\rm HO}-\phi_{1}^{\rm HO})}\Big{)}\sum_{l=1}^{16}\Big{(}e^{2\pi i\tilde{m}_{l}}+e^{-2\pi i\tilde{m}_{l}}\Big{)}\frac{e^{2\pi i\tau^{\rm HO}}}{1-e^{2\pi i\tau^{\rm HO}}}\bigg{]},\end{split} \tag{3.70}\]
where \(r_{l}=0\) for \(1\leq l\leq 8\) and \(r_{l}=1/2\) for \(9\leq l\leq 16\). The first few BPS states of the \(SO(32)\) LST from the elliptic genera are listed in Table 6. Again, we show only the BPS states carrying nonzero charge for \(\phi_{1}^{\rm HO}\).
Now one can verify that the BPS spectra of two LSTs are the same under the exchange of winding number and KK-momentum as \(w^{\rm HE}\leftrightarrow\tau^{\rm HO}/2\) and \(\tau^{\rm HE}/2\leftrightarrow w^{\rm HO}\), as well as the exchange of \(\phi_{1,0}^{\rm HE}-\phi_{2,0}^{\rm HE}\leftrightarrow\phi_{1}^{\rm HO}\) and \(\tilde{m}_{l}^{\rm HE}\leftrightarrow\tilde{m}_{l}^{\rm HO}\). However, due to the presence of extra decoupled states at \(k_{1}=k_{2}\) sectors in the \(E_{8}\times E_{8}\) heterotic LST mentioned above, the spectra at \(k_{1}=k_{2}\) do not match each other.
**Modularity.** Each chiral fermion in Figure 5 contributes to the 2d anomaly polynomial as
\[\lambda_{+}^{\dot{\alpha}A}+\lambda_{+}^{\alpha a}\to k(k-1)\bigg{(}\frac{c_{2}(r)+c_{2}(R)}{2}+\frac{c_{2}(l)+c_{2}(m_{0})}{2}+\frac{p_{1}(T_{2})}{12}\bigg{)}\,, \tag{3.71}\] \[\lambda_{-}^{\alpha A}+\lambda_{-}^{\dot{\alpha}a}\to-k(k+1)\bigg{(}\frac{c_{2}(l)+c_{2}(R)}{2}+\frac{c_{2}(r)+c_{2}(m_{0})}{2}+\frac{p_{1}(T_{2})}{12}\bigg{)}\,,\] \[\psi_{-}^{A}+\psi_{+}^{a}+\Psi_{l}\to 2k\bigg{(}\frac{-c_{2}(R)+c_{2}(m_{0})}{2}\bigg{)}+k\bigg{(}\frac{1}{4}\operatorname{Tr}F_{m}^{2}+\frac{2}{3}p_{1}(T_{2})\bigg{)}\,,\]
where \(F_{m}\) is the 2-form field strength for the \(SO(32)\) global symmetry. The anomaly polynomial is the sum of these contributions. This can also be derived from the anomaly inflow presented in (2.41) as
\[I_{4}=kX_{4,0}=k\bigg{(}-c_{2}(l)-c_{2}(r)-2c_{2}(R)+\frac{1}{2}p_{1}(T_{2})+\frac{1}{4}\operatorname{Tr}F_{m}^{2}\bigg{)}\,. \tag{3.72}\]
Here, \(X_{4,0}\) is the 4-form appearing in the mixed gauge anomalies in the 6d \(Sp(1)\) gauge theory
\[I_{8}^{\rm mixed}=Y_{4}\wedge X_{4,0}=\frac{1}{4}\operatorname{Tr}F_{Sp(1)}^{2} \wedge\left(\frac{1}{2}p_{1}(T_{6})-2c_{2}(R)+\frac{1}{4}\operatorname{Tr}F_{m} ^{2}\right). \tag{3.73}\]
Then, the modular ansatz for the \(k\)-string elliptic genus can be taken as
\[Z_{k}=\frac{1}{\eta^{24k}}\frac{\Phi_{k}(\tau,\epsilon_{\pm},\phi_{1},m_{0},m_{ l})}{\mathcal{D}_{k}^{\rm cm}\cdot\mathcal{D}_{k}^{A_{1}}\cdot\prod_{s=1}^{k} \varphi_{-1,1/2}(\pm sm_{0}-s\epsilon_{+})}\,, \tag{3.74}\]
where \(\Phi_{k}\) is written in terms of the \(SU(2)\) Weyl invariant Jacobi forms for \(\epsilon_{\pm},\phi_{1},m_{0}\) and the \(SO(32)\) Weyl invariant Jacobi forms for \(m_{l=1,\cdots,16}\) given in Appendix A.2. We have found that the 1-string elliptic genus (3.65) can be reproduced by the modular ansatz with coefficients given in Table 7. The coefficients in this ansatz are listed in ascending order with respect to \(\{\epsilon_{+},\epsilon_{-},\phi_{1},m_{0},m_{l=1,\cdots,16}\}\) as defined in footnote 4. We expect that this ansatz is consistent with the elliptic genera from the ADHM computation for any value of \(k\), since the 2d quiver theory possesses the \(SO(32)\) flavor symmetry explicitly.
_Blowup equation._ Since the partition functions of the \(E_{8}\times E_{8}\) LST and the \(SO(32)\) LST are the same, the partition function of the \(SO(32)\) LST should satisfy the same blowup equation as the \(E_{8}\times E_{8}\) LST in (3.62). More precisely, the two partition functions agree, after the identification of the fugacities of the two theories, up to decoupled states which are independent of the dynamical Kahler parameter \(\phi_{1,0}^{\rm HE}-\phi_{2,0}^{\rm HE}\) or \(\phi_{1}^{\rm HO}\). In the blowup equation, all the differences coming from the decoupled states can be absorbed into the prefactor \(\Lambda\). Therefore, the partition function of the \(SO(32)\) LST satisfies the blowup equation in (3.62) with a different prefactor \(\Lambda\) for this theory.
As usual, the blowup equation can be solved iteratively starting from the effective prepotential and the perturbative partition function in (3.70) with a choice of magnetic fluxes on \(\mathbb{P}^{1}\). The effective prepotential of the \(SO(32)\) LST receives tree level contributions from the gauge kinetic term and the counterterm with an auxiliary 2-form field, and 1-loop contributions from the \(Sp(1)\) vector multiplet and the 16 fundamental hypermultiplets. With the parametrization given in (3.68) and (3.69), we compute the 1-loop prepotential as
\[\mathcal{F}=\frac{1}{12}\sum_{n\in\mathbb{Z}}\left(|n\tau\pm 2\phi_{1}|^{3}- \sum_{i=1}^{16}|(n+r_{i})\tau\pm\phi_{1}+m_{i}|^{3}\right)=-\frac{1}{2}\sum_{ i=1}^{8}m_{i}^{2}\phi_{1}\,, \tag{3.75}\]
where \(r_{i}=0\) for \(1\leq i\leq 8\), \(r_{i}=1/2\) for \(9\leq i\leq 16\), and we used zeta function regularization to perform the infinite sums over the KK momenta. The mixed Chern-Simons coefficients can be computed in the same manner. Collecting all the
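As an illustration of this regularization step, the basic building block \(\sum_{n\in\mathbb{Z}}|n\tau+x|^{3}\) in a chamber with \(0<x<\tau\) can be evaluated by pairing the \(n\neq 0\) terms into \((n\tau+x)^{3}+(n\tau-x)^{3}\) with \(n\geq 1\) and replacing \(\sum_{n\geq 1}n^{k}\to\zeta(-k)\). A minimal sympy sketch of this step (purely illustrative, not the full prepotential computation):

```python
import sympy as sp

n, tau, x = sp.symbols('n tau x', positive=True)

# For 0 < x < tau the n != 0 terms of sum_n |n*tau + x|^3 pair up into
# (n*tau + x)^3 + (n*tau - x)^3 with n >= 1, so the absolute values drop.
summand = sp.expand((n*tau + x)**3 + (n*tau - x)**3)

# Zeta-function regularization of the KK sum: sum_{n>=1} n^k -> zeta(-k).
regularized = sum(coeff*sp.zeta(-k)
                  for (k,), coeff in sp.Poly(summand, n).terms())

# Add back the n = 0 term.
print(sp.simplify(x**3 + regularized))   # x**3 - tau*x**2/2 + tau**3/60
```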
\begin{table}
\begin{tabular}{|c|l|} \hline \(k\) & \multicolumn{1}{c|}{\(\left\{C_{i}^{(k)}\right\}\)} \\ \hline \hline \(\frac{1}{23\cdot 33}\left\{0,0,-27648,13824,0,-248832,0,27648,-13824,-13824,-13824,165888,248832,13824,82944,\right.\) \\ \(-165888,-82944,-165888,-55296,-55296,-165888,-276480,-110592,55296,-663552,\) \\ \(\left.276480,165888,165888,110592,663552,663552,55296,165888,-0,-165888,0,-6635 52,-1,1,0,\) \\ \(\left.884736,-884736,0,0,-2,-18,18,-18,-18,-2,2,0,32,-32,0,0,0,-20736,-124416, 0,20736,\right.\) \\ \(\left.214416,124416,-124416,0,82944,-82944,-41472,746496,27648,165888,-124416, -248832,\right.\) \\ \(\left.1492992,-746496,-27648,-193536,165888,124416,-2239488,-1492992,-55296, 41472,165888,\right.\) \\ \(\left.0,2239488,0,-442386,10592,10592,66352,331776,-110592,-663552,0,-3,-9,-9, 3,-3,\right.\) \\ \(\left.0,0,-12,12,-12,12,0,0,124416,-124416,82944,0,0,62208,373248,248832,-82944, -82944,-29044,\right.\) \\ \(\left.20736,124416,746496,1617408,82944,-20736,-145152,-870912,-746496,-41472,-24 8832,\right.\) \\ \(\left.-1191744,-165888,-82944,414720,-207360,497664,850608,-41472,-3981312, \right.\) \\ \(\left.-248832,-955328,248832,41472,398312,0,0,0,0,0,0,0,-9,-27,-27,9,-9,0, 0,-55296,\right.\) \\ \(\left.6912,0,248832,-373243,55296,-9612,-6912,-6920430,-995326,9921,4172,10368 800,331776,\right.\) \\ \(\left.-82944,-110592,96768,-331776,-801792,380180,89856,1534464,304128,331776,-29 0304,\right.\) \\ \(\left.-6912,-41472,-41472,-110592,58068,-186624,-82944,-1492992,41472,2-2,0,10 22976,\right.\) \\ \(\left.-1022976,0,0,4,-4,36,-36,-36,-36,-4,-4,0,-91,91,0,165888,20736,0,-995328, 55296,\right.\) \\ \(\left.3317776,-0,-2736,-3110400,-12416,-55296,-387072,-20736,0,2985984,870912, -110592,0\right.\) \\ \(\left.20736,11974,-746496,0,-1382400,-5255321,-5525312,-558972,663552,525312,559872, 0,0,\right.\) \\ \(\left.-6,-6,-18,-8,-6,0,0,24,-24,-24,331776,116126,165888,62208,-995328,-165888, \right.\) \\ \(\left.-62208,-62208,3359232,-497664,0,0,62208,-3359232,0,0,0,0,0,0,-18,18,-54, 54,-18,\right.\) \\ \(\left.18,0,-1,1,0,-1492992,149292,0,0,-2,2,-18,18,-18,18,-2,2,0,86,-86,-3,-3,9,-,9,3,-3,\right.\) \\ \(\left.0,0,-12,12,-12,12,0,9,-9,27,-27,9,-9,0,0,-27,27\right)\) \\ \hline \end{tabular}
\end{table}
Table 7: Coefficients in the modular ansatz for the rank 1 \(SO(32)\) heterotic LST.
contributions yields the effective prepotential
\[\mathcal{E}=\frac{1}{\epsilon_{1}\epsilon_{2}}\Bigg{(}-\frac{1}{2}\sum_{i=1}^{8}m _{i}^{2}\phi_{1}+\frac{\epsilon_{1}^{2}+\epsilon_{2}^{2}}{4}\phi_{1}+\epsilon_ {+}^{2}\phi_{1}\Bigg{)}+\mathcal{E}_{\rm tree}^{(0)}\,, \tag{111}\]
where
\[\mathcal{E}_{\rm tree}^{(0)}=\frac{1}{\epsilon_{1}\epsilon_{2}}\Bigg{[}w\phi _{1}^{2}+\Bigg{(}\frac{\epsilon_{1}^{2}+\epsilon_{2}^{2}}{2}-\frac{1}{2}\sum_{ i=1}^{16}m_{i}^{2}+2\epsilon_{+}^{2}\Bigg{)}\phi_{0}\Bigg{]}\,, \tag{112}\]
with the auxiliary scalar VEV \(\phi_{0}\equiv\phi_{0,0}\). Indeed, under the reparametrization \(w\to\tau/2\) and \(\phi_{1}\to\phi_{1,0}-\phi_{2,0}\), this coincides with the effective prepotential of the \(E_{8}\times E_{8}\) LST in (103), as expected from the T-duality.
Then we turn on the magnetic fluxes on the blowup background given by
\[n_{1}=n_{1}^{\prime}+n_{0}^{\prime}\in\mathbb{Z}\,,\ n_{2}=n_{0}^ {\prime}\in\mathbb{Z}\,,\ B_{m_{i}}=\left\{\begin{aligned} & 1/2&(0 \leq i\leq 8)\\ &-1/2&(9\leq i\leq 16)\end{aligned}\right.\,,\ B_{\tau}=B_{w}=0\,, \tag{113}\]
where \(n_{0}^{\prime},n_{1}^{\prime}\) denote the fluxes for \(\phi_{0}\) and those for \(\phi_{1}\) respectively.
Using these ingredients, it is now possible to construct the blowup equation for the \(SO(32)\) LST. To compute the elliptic genera for the little strings, we first expand the blowup equation in terms of the Kahler parameter \(e^{2\pi iw}\) for the string number and substitute the modular ansatz into the \(k\)-string elliptic genera that appear at each order in the expansion. The coefficients in the modular ansatz are then determined by solving the blowup equation. We have carried out this calculation for \(k=1\) and reproduced the result in Table 7. We expect that the higher order elliptic genera can also be computed in this manner.
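Schematically, this last step amounts to matching series expansions order by order and solving the resulting equations for the unknown coefficients. A toy sympy sketch of the matching step, with made-up series standing in for the actual Jacobi-form expansions:

```python
import sympy as sp

q = sp.symbols('q')
c0, c1, c2 = sp.symbols('c0 c1 c2')

# Toy stand-ins: the "ansatz" is a linear combination of known q-series
# (here the leading expansions of E4, E6 and the discriminant Delta),
# and the constraint is that it reproduces a target expansion.
basis = [1 + 240*q + 2160*q**2,    # E4
         1 - 504*q - 16632*q**2,   # E6
         q - 24*q**2]              # Delta
target = 3 - 19*q - 12432*q**2     # series the ansatz must reproduce

ansatz = c0*basis[0] + c1*basis[1] + c2*basis[2]
diff = sp.expand(ansatz - target)

# One equation per power of q, solved for the unknown coefficients.
eqs = [diff.coeff(q, k) for k in range(3)]
print(sp.solve(eqs, [c0, c1, c2]))   # {c0: 2, c1: 1, c2: 5}
```

In the actual computation the basis elements are the Weyl invariant Jacobi form monomials entering the numerator of the ansatz and the equations come from the expansion of the blowup equation, but the structure of the matching step is the same.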
### \(SU(3)+1{\bf sym}+1\boldsymbol{\Lambda}^{2}\)
As a last example, we consider the \(\mathcal{N}=(1,0)\) LST whose low energy theory is given by an \(SU(N)\) gauge theory with a symmetric hypermultiplet and an antisymmetric hypermultiplet, as introduced in [22]. This theory can be realized by \(N\) D6-branes stretched between two half NS5-branes in the type IIA string theory on an interval \(S^{1}/\mathbb{Z}_{2}\) with an O8\({}^{-}\)- and an O8\({}^{+}\)-plane at each end, which is called the O8\({}^{\pm}\) background [100; 101], as depicted in Figure 6(b). Here, the half NS5-branes are located at each orientifold plane.
The index part of the partition function is factorized as
\[Z_{\rm GV}=Z_{\rm pert}\cdot Z_{\rm str}=Z_{\rm pert}\cdot\sum_{k=0}^{\infty} e^{2\pi ikw}Z_{k}\,, \tag{114}\]
where \(w=1/g_{\rm YM}^{2}\), and \(Z_{k}\) is the elliptic genus of \(k\)-strings. The 1-loop contribution \(Z_{\rm pert}\) from the \(SU(N)\) vector multiplet and the hypermultiplets is given by
\[Z_{\rm pert} =\text{PE}\left[\,-\frac{1+p_{1}p_{2}}{(1-p_{1})(1-p_{2})(1-q)}\sum_{\rho\in\mathbf{R}^{+}}\left(e^{2\pi i\rho\cdot\phi}+qe^{-2\pi i\rho\cdot\phi}\right)\right. \tag{3.80}\] \[\qquad+\left.\frac{\sqrt{p_{1}p_{2}}}{(1-p_{1})(1-p_{2})}\sum_{n\in\mathbb{Z}}\Big{(}\sum_{w\in\mathbf{sym}}e^{2\pi i|n\tau+w\cdot\phi+m_{1}|}+\sum_{w\in\mathbf{\Lambda}^{2}}e^{2\pi i|n\tau+w\cdot\phi+m_{2}|}\Big{)}\right]\]
from (2.23) and (2.24), where \(\mathbf{R}^{+}\) is the set of positive roots of \(SU(N)\), and \(\mathbf{sym}\) and \(\mathbf{\Lambda}^{2}\) are the weight vectors of the symmetric and antisymmetric representations, respectively.
_GLSM._ In this theory, the little strings are \(SU(N)\) instanton strings realized by \(k\) D2-branes on top of the D6-branes. By examining the brane configuration, it is possible to deduce the 2d \(\mathcal{N}=(0,4)\) gauge theory description, which has a \(U(k)\) gauge symmetry and matter content as summarized in Figure 6(c). The vector multiplet, adjoint hypermultiplet, and hypermultiplets in the bifundamental representation of \(U(k)\times U(N)\) agree with the ADHM data for the \(SU(N)\) instanton moduli space, while the remaining fields charged under \(U(1)_{S}\) and \(U(1)_{A}\) arise from zero modes of
the 6d symmetric and antisymmetric hypermultiplets, respectively, at \(k\)-instantons [102]. Note that the gauge anomaly of the 2d theory is cancelled as
\[-4\times k+4\times k+2N\times\frac{1}{2}+2\times\frac{k-2}{2}-2 \times\frac{k+2}{2}-N\times\frac{1}{2} \tag{113}\] \[\qquad+2\times\frac{k+2}{2}-2\times\frac{k-2}{2}-N\times\frac{1}{ 2}=0\,,\]
where each term comes from the charged chiral fermions given in Figure 6(c).
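As a quick symbolic check, the nine contributions in this cancellation condition indeed sum to zero for arbitrary \(k\) and \(N\); a short sympy verification (our own illustration):

```python
import sympy as sp

k, N = sp.symbols('k N')

# The chiral-fermion contributions to the U(k) gauge anomaly, term by term,
# in the order they appear in the cancellation condition above.
terms = [-4*k, 4*k, 2*N*sp.Rational(1, 2),
         2*(k - 2)/2, -2*(k + 2)/2, -N*sp.Rational(1, 2),
         2*(k + 2)/2, -2*(k - 2)/2, -N*sp.Rational(1, 2)]

print(sp.simplify(sum(terms)))   # 0
```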
There are mixed anomalies between the gauge and global \(U(1)\) symmetries. Let \(T_{U(1)}\), \(S\), \(A\), \(G\) be the generators of \(U(1)\subset U(k)\), \(U(1)_{S}\), \(U(1)_{A}\) and \(U(1)_{G}\subset U(N)\), respectively. Then the mixed anomalies are
\[\operatorname{Tr}\gamma_{3}T_{U(1)}S=-4-N\,,\quad\operatorname{Tr}\gamma_{3}T_ {U(1)}A=4-N\,,\quad\operatorname{Tr}\gamma_{3}T_{U(1)}G=-4N\,. \tag{114}\]
Thus, the anomaly free \(U(1)\) global symmetry in the 2d gauge theory is the subgroup of \(U(1)_{S}\times U(1)_{A}\times U(1)_{G}\) generated by \(2S+2A-G\). There is a decoupled \(U(1)\) symmetry generated by \(T_{U(1)}-2S-2A+G\) which acts trivially on the 2d fields.
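The statement that \(2S+2A-G\) is anomaly free follows directly from these mixed anomalies; a minimal sympy check (illustrative only):

```python
import sympy as sp

N = sp.symbols('N')

# Mixed anomalies Tr gamma_3 T_{U(1)} X for X = S, A, G quoted above.
anomaly = {'S': -4 - N, 'A': 4 - N, 'G': -4*N}

# The combination 2S + 2A - G has vanishing mixed anomaly with T_{U(1)}.
print(sp.simplify(2*anomaly['S'] + 2*anomaly['A'] - anomaly['G']))   # 0
```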
We compute the elliptic genera of the little strings using the 2d gauge theory description and the localization technique. The 1-string elliptic genus when \(N=3\) is
\[Z_{1} =-\sum_{j=1}^{3}\frac{\theta_{1}(2a_{j}+m_{1}-\epsilon_{+})\theta _{1}(2a_{j}+m_{1}-\epsilon_{+}-\epsilon_{1,2})}{\theta_{1}(\epsilon_{1,2}) \theta_{1}(2a_{j}+m_{2}-3\epsilon_{+})}\prod_{k\neq j}^{3}\frac{\theta_{1}(a_ {jk}+m_{1,2}-\epsilon_{+})}{\theta_{1}(a_{jk})\theta_{1}(2\epsilon_{+}-a_{jk})}\] \[\quad+\sum_{I=1}^{4}\frac{\theta_{1}(m_{1}-m_{2}+\epsilon_{1,2})} {\theta_{1}(\epsilon_{1,2})}\prod_{j=1}^{3}\frac{\theta_{I}(a_{j}+m_{1}-\frac {m_{2}}{2}+\frac{\epsilon_{+}}{2})}{\theta_{I}(a_{j}-\frac{3\epsilon_{+}-m_{2 }}{2})}\,, \tag{115}\]
where \(a_{1},a_{2},a_{3}\) are the chemical potentials for \(U(3)\), and \(m_{1}\) and \(m_{2}\) are the \(U(1)_{S}\) and \(U(1)_{A}\) chemical potentials, respectively. Here we use a shorthand notation, \(a_{jk}=a_{j}-a_{k}\). We present the computational details and the 2-string elliptic genus in Appendix B.3.
We have checked that, as expected, the leading order of the elliptic genera in the \(q\)-expansion correctly reproduces the BPS spectrum of the 5d \(SU(3)+1\mathbf{sym}+1\mathbf{\Lambda}^{2}\) theory [50], where \(m_{1}\) and \(m_{2}\) are identified as the mass parameters of the symmetric and antisymmetric hypermultiplets in the 5d SCFT. However, when considering the BPS states in the 6d LST, the chemical potentials appearing in the elliptic genera are further constrained by the mixed anomalies given in (114), and for \(N=3\), the chemical potential for the anomaly-free \(U(1)\) is determined by the condition
\[-7m_{1}+m_{2}-4\sum_{i=1}^{3}a_{i}=0\,. \tag{116}\]
We can also set \(\sum_{i}a_{i}=0\) using the fact that the \(U(1)\) symmetry generated by \(T_{U(1)}-2S-2A+G\) decouples from the 2d CFT. By imposing these conditions, we
can rewrite the elliptic genera such that they depend only on the chemical potential \(m_{1}-m_{2}\) and the \(SU(3)\) chemical potentials \(a_{i}\) with \(\sum_{i}a_{i}=0\). This is consistent with the fact that the 6d LST has only one anomaly-free \(U(1)\) symmetry, as we will show below.
_Modularity._ The modular property of the elliptic genus can be read off from the anomaly polynomial of the 2d theory. Each chiral fermion in Figure 6(c) contributes to the 2d anomaly polynomial as
\[\begin{split}\lambda_{+}^{\dot{\alpha}A}+\lambda_{-}^{\alpha A}& \to 2k^{2}\bigg{(}\frac{c_{2}(r)+c_{2}(R)}{2}-\frac{c_{2}(l)+c_{2}(R)}{2} \bigg{)}\,,\\ \Psi_{+}^{\alpha}+\tilde{\Phi}_{-}^{\dot{\alpha}}& \to k(k+1)\bigg{(}\frac{c_{2}(l)-c_{2}(r)}{2}+\frac{1}{2}F_{1}^{2}- \frac{1}{2}F_{2}^{2}\bigg{)}\,,\\ \Phi_{-}^{\dot{\alpha}}+\tilde{\Psi}_{+}^{\alpha}& \to k(k-1)\bigg{(}\frac{c_{2}(l)-c_{2}(r)}{2}-\frac{1}{2}F_{1}^{2}+ \frac{1}{2}F_{2}^{2}\bigg{)}\,,\\ \psi_{-}^{A}+\psi_{+}+\tilde{\psi}_{+}&\to Nk \bigg{(}-c_{2}(R)+\frac{1}{2}F_{1}^{2}+\frac{1}{2}F_{2}^{2}\bigg{)}\,,\end{split} \tag{3.85}\]
where \(F_{1}\) and \(F_{2}\) are the field strengths for \(U(1)_{S}\) and \(U(1)_{A}\), respectively. Thus the full anomaly polynomial of the 2d theory for \(k\)-strings is given by
\[I_{4}=k\bigg{(}-Nc_{2}(R)+\frac{N+2}{2}F_{1}^{2}+\frac{N-2}{2}F_{2}^{2}\bigg{)}\,. \tag{3.86}\]
The same result can be deduced using the anomaly inflow from the 6d LST in the presence of \(k\)-strings. The 1-loop anomalies from the chiral fields in the 6d \(SU(N)\) gauge theory contain the mixed gauge anomalies
\[\begin{split} I_{8}&\supset\frac{1}{4}\operatorname{ Tr}F_{SU(N)}^{2}\wedge\bigg{(}-Nc_{2}(R)+\frac{N+2}{2}F_{1}^{2}+\frac{N-2}{2}F_{2}^{2} \bigg{)}\\ &\quad+\frac{1}{6}\operatorname{tr}F_{SU(N)}^{3}\wedge((N+4)F_{1 }+(N-4)F_{2})\,,\end{split} \tag{3.87}\]
where 'tr' is the trace in the fundamental representation. To obtain this, we used the following relations,
\[\operatorname{tr}_{\mathbf{sym}}F^{3}=(N+4)\operatorname{tr}F^{3}\,,\quad \operatorname{tr}_{\mathbf{\Lambda}^{2}}F^{3}=(N-4)\operatorname{tr}F^{3}, \tag{3.88}\]
for the \(SU(N)\) representations. The gauge anomaly in the first line is cancelled by adding the counterterm as (2.31). The second line is the ABJ anomaly and it imposes a constraint on the flavor symmetries as
\[F_{2}=-\frac{N+4}{N-4}F_{1}\,. \tag{3.89}\]
Thus, there is only one anomaly-free global symmetry, given by \(U(1)\subset U(1)_{S}\times U(1)_{A}\). Then, the anomaly inflow from the 6d LST on the \(k\)-string background leads to the same anomaly polynomial in (3.86) for the worldsheet CFT.
Now we make a modular ansatz for the elliptic genus of \(k\)-strings in the \(SU(3)\) LST based on the anomaly polynomial. The elliptic genus \(Z_{k}\) has a modular anomaly \(\int I_{4}=-3k\epsilon_{+}^{2}\). Thus the modular ansatz we propose is
\[Z_{k}=\frac{\Phi_{k}(\tau,\epsilon_{\pm},\phi_{1},\phi_{2})}{ \mathcal{D}_{k}^{\text{cm}}\cdot\mathcal{D}_{k}^{A_{2}}}\,. \tag{3.90}\]
The \(SU(3)\) chemical potentials \(\phi_{1,2}\) are related to \(a_{1,2,3}\) in the elliptic genus given in (3.83) by
\[a_{1}=\phi_{1}\,,\quad a_{2}=-\phi_{1}+\phi_{2}\,,\quad a_{3}=- \phi_{2}\,. \tag{3.91}\]
We turn off the \(U(1)\) flavor chemical potential because \(U(1)\) has a trivial Weyl group and does not fit into the standard theory of Weyl invariant Jacobi forms.
At 1-string order, the ansatz has 514 unknown coefficients \(C_{i}^{(k)}\), and we check that this ansatz with the coefficients in Table 8, which are listed in ascending order with respect to \(\{\epsilon_{+},\epsilon_{-},\phi_{1,2}\}\), reproduces the 1-string elliptic genus obtained from the ADHM construction in (3.83).
_Blowup equation._ Lastly, let us consider the blowup equation for the \(SU(3)+1\mathbf{sym}+1\mathbf{\Lambda}^{2}\) LST. We first compute the effective prepotential. The 1-loop prepotential
\begin{table}
\begin{tabular}{|l|l|} \hline \(k\) & \(\frac{1}{204\pi^{2}}\{(0,-256,-512,-960,-1024,-512,-10752,-12288,-64512,-36864, -90112,-32768,64,32,125,192,16,256,\\ -1024,-224,1536,-3840,-1536,-1152,-13824,5376,-36864,-12288,-163840,1536,-327 68,-29184,-98304,208896,\\ 36864,02118,481920,-130772,-1,-8,240,-2496,91276,-1728,-4006,12288,-16340,-2 3084,-3912,-30720,-384,-12288,\\ -18432,12288,-101376,36864,-43008,-7378,1842,-70148,-5428,-5248,-18340,-157284, -196608,0589284,\\ 524288,-131072,-4186,-18840,960,-21504,-4992,-2764,-7296,3276,-86016,3106, 365635,-73728,-9216,-196606,-672,\\ -196068,-22114,19660,-673728,-9491,-68812,-9120,14857,-98304,-226144,-719648,-672,\\ -1929,9216,644,-12288,1228,-16384,13312,0,0,-16384,-0,237268,048,09384, 512,-2240,8,9192,32,384,\\ 672,-768,-768,-6912,-6144,4608,1536,12288,-40372,-30760,-7608,-19044,-10444,-22128,3864,-3918,6608,\\ -4,-5676,-756,-756,-756,-756,-756,-756,-756,-756,-756,-756,-756,-756,-756,-756,-75
from the \(SU(3)\) vector and hypermultiplets is
\[6\mathcal{F} =\frac{1}{2}\sum_{n\in\mathbb{Z}}\left(\sum_{e\in\mathbf{R}}\left|n \tau+e\cdot\phi\right|^{3}-\sum_{w\in\mathbf{sym}}\left|n\tau+w\cdot\phi+m_{1} \right|^{3}-\sum_{w\in\mathbf{A}^{2}}\left|n\tau+w\cdot\phi+m_{2}\right|^{3}\right)\] \[=\left(8\phi_{1}^{3}-3\phi_{1}^{2}\phi_{2}-3\phi_{1}\phi_{2}^{2}+8 \phi_{2}^{3}\right)-\frac{1}{2}\big{(}(\phi_{2}+m_{2})^{3}\!+\!(\phi_{2}-\phi_{ 1}-m_{2})^{3}\!+\!(\phi_{1}-m_{2})^{3}\big{)}\] \[\quad\quad-\frac{1}{2}\big{(}(2\phi_{1}+m_{1})^{3}+(\phi_{2}+m_{1 })^{3}+(-2\phi_{1}+2\phi_{2}+m_{1})^{3}+(-\phi_{1}+\phi_{2}-m_{1})^{3}\] \[\qquad\quad+(\phi_{1}-m_{1})^{3}+(2\phi_{2}-m_{1})^{3}\big{)}\,, \tag{3.92}\]
in the chamber \(\phi_{2}\geq\phi_{1}>0\). In the last expression, we keep only the terms that depend on the dynamical Kahler parameters \(\phi_{i}\). We also evaluate the perturbative partition function (3.80) in this chamber. With the tree level contributions and the contributions from the mixed Chern-Simons terms, the full effective prepotential is given by
\[\mathcal{E}=\frac{1}{\epsilon_{1}\epsilon_{2}}\bigg{(}\mathcal{F}-\frac{ \epsilon_{1}^{2}+\epsilon_{2}^{2}}{48}(4\phi_{1}-4\phi_{2})+\epsilon_{+}^{2}( \phi_{1}+\phi_{2})\bigg{)}+\mathcal{E}_{\rm tree}^{(0)}\,, \tag{3.93}\]
where
\[\mathcal{E}_{\rm tree}^{(0)}=\frac{1}{\epsilon_{1}\epsilon_{2}}\bigg{[}w(\phi _{1}^{2}-\phi_{1}\phi_{2}+\phi_{2}^{2})+\phi_{0}\bigg{(}3\epsilon_{+}^{2}- \frac{5}{2}m_{1}^{2}-\frac{1}{2}m_{2}^{2}\bigg{)}\bigg{]}\,, \tag{3.94}\]
with \(w\sim 1/g_{\rm YM}^{2}\) and an auxiliary scalar VEV \(\phi_{0}\equiv\phi_{0,0}\). Also, because of the 6d mixed anomaly-free condition given in (3.89), we impose the condition
\[m_{2}=7m_{1} \tag{3.95}\]
in the effective prepotential (3.93). This is compatible with the 2d mixed anomaly-free condition (3.84).
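Indeed, specializing (3.89) to \(N=3\) gives \(m_{2}=7m_{1}\), which agrees with (3.84) once \(\sum_{i}a_{i}=0\) is imposed; a one-line check (illustrative):

```python
import sympy as sp

N, m1 = sp.symbols('N m1')

# 6d ABJ-anomaly condition F2 = -(N+4)/(N-4) F1, specialized to N = 3
# and phrased for the chemical potentials m1, m2.
m2 = (-(N + 4)/(N - 4)).subs(N, 3)*m1
print(m2)                         # 7*m1

# Consistency with the 2d condition -7*m1 + m2 - 4*sum(a_i) = 0 at sum(a_i) = 0.
print(sp.simplify(-7*m1 + m2))    # 0
```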
The blowup equation can be constructed with a set of magnetic fluxes
\[n_{0}\in\mathbb{Z}\,,\quad n_{1}\in\mathbb{Z}+2/3\,,\quad n_{2}\in\mathbb{Z} +1/3\,,\quad B_{m_{1}}=1/6\,,\quad B_{\tau}=B_{w}=0\,. \tag{3.96}\]
We propose that the blowup equation of the form given in (2.18) with these inputs is satisfied for the partition function of the \(SU(3)\) LST. We have checked that the elliptic genera computed using the 2d gauge theory satisfy this blowup equation order by order in the expansion with respect to the fugacities \(e^{2\pi iw}\), \(q=e^{2\pi i\tau}\), \(t=e^{2\pi i(2\phi_{1}-\phi_{2})}\), and \(u=e^{2\pi i(-\phi_{1}+2\phi_{2})}\), up to 2-strings, \(q^{1}\), \(t^{1}\) and \(u^{1}\) orders.
We also attempted to solve the blowup equation with the help of the modular ansatz in (3.90). By utilizing the blowup equation and the modular ansatz, we were able to find a BPS spectrum up to \(q^{1}\), \(t^{1}\) and \(u^{1}\) order which matches the ADHM computation given in (3.83). This fixes 419 unknown coefficients in the modular ansatz, which are compatible with those listed in Table 8. We expect that higher order computations of the blowup equation will fix the remaining unknowns in the modular ansatz.
## Conclusion
In this paper, we have proposed the blowup equations for six-dimensional little string theories (LSTs), and demonstrated how our proposal works in some cases. In order to formulate the blowup equations, we have found that we need to introduce an auxiliary 2-form field to cancel the mixed gauge-global anomalies and also take into account the summation over its magnetic fluxes on the blown-up \(\mathbb{P}^{1}\) as well as the fluxes for the dynamical tensor and gauge symmetries. Although the flux sum for the auxiliary 2-form field in the blowup equation is divergent, which is essentially because the auxiliary 2-form field has no quadratic kinetic term and is thus non-dynamical, we have found that the blowup equation still makes sense as a Laurent series expansion in terms of Kahler parameters, and we can even use it to determine the BPS spectra of strings in the LSTs with the help of the modular ansatz. As concrete examples, we have computed the elliptic genera of strings in the \(\hat{A}_{1}\) type LSTs in IIA/IIB string theories, the LSTs in \(E_{8}\times E_{8}\) and \(SO(32)\) heterotic string theories, and a rank-2 LST with \(SU(3)\) gauge symmetry and \(1{\bf sym}+1{\boldsymbol{\Lambda}}^{2}\) hypermultiplets. We then checked that these elliptic genera satisfy the blowup equations, and conversely, that the unknown coefficients of their modular ansatz can be fixed by solving the blowup equations.
There are some interesting extensions of the results in this paper. First, it would be quite interesting to generalize the blowup formalism to supergravity theories. The blowup equations for little string theories may suggest the possibility of this generalization because the elliptic genera of the string worldsheet theories in some supergravity theories, such as 9d/10d heterotic string theories, are related to the elliptic genera of LSTs through RG flows triggered by Higgsings. A key difference between supergravity theories and LSTs or SCFTs is that all symmetries in supergravity theories are gauged. As a result, we need to turn on dynamical magnetic fluxes for all the symmetries in the theory, and the \(\Lambda\) factor in the blowup equation can only depend on the \(\Omega\)-deformation parameters. We leave this generalization as a future work.
Another extension of the current work is the consideration of twisted compactifications of little string theories. The blowup formalism for 5d Kaluza-Klein theories resulting from 6d SCFTs compactified on a circle with automorphism twists has been previously explored in [50]. It is straightforward to extend this approach to derive the blowup equations for twisted compactifications of LSTs by simply replacing the intersection form \(\Omega^{\alpha\beta}\) of the tensor nodes and the Killing forms \(K_{ij}\) for the gauge algebras in the blowup equation for untwisted LSTs with their twisted counterparts. One potential use of this formulation is to confirm T-dualities between LSTs including twists along the T-dual circle. For example, the \(SU(3)\) gauge theory with \(1{\bf sym}+1{\boldsymbol{\Lambda}}^{2}\) we discussed in section 3.3 is expected to be T-dual to another little string theory with a twist [103], which is due to the presence of the symmetric hypermultiplet. The blowup equations for twisted LSTs may provide a more rigorous method for
identifying and verifying such dualities.
As another generalization, one can also study little string theories with supersymmetric defects. Various BPS defects in superconformal field theories have been widely studied. For instance, the partition functions of 5d/6d field theories in the presence of the codimension 4 defects were investigated in [55] in the context of the blowup formalism. It should be straightforward to extend this approach to the study of LSTs coupled to codimension 4 defects, offering a concrete method for analyzing the dynamics of these defects within the LSTs.
Recently, a systematic method for calculating the partition functions of LSTs engineered by NS5-branes on D- and E-type singularities using the topological vertex formalism was proposed in [104]. The resulting partition functions for D-type LSTs were found to be consistent with those obtained using the elliptic genus computation in [64], while the partition functions for E-type LSTs represent new results. It would be valuable to verify these proposed partition functions using the blowup equations.
###### Acknowledgements.
We are grateful to Sung-Soo Kim and Kimyeong Lee for valuable discussions. HK, MK and YS thank APCTP for its hospitality during the completion of this work. HK also thanks the Simons Center for Geometry and Physics, Stony Brook University for the hospitality and partial support during the final stage of this work at the workshops "2022 Simons Summer Workshop" and "Geometry of (S)QFT". The research of HK, MK and YS is supported by Samsung Science and Technology Foundation under Project Number SSTF-BA2002-05 and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2018R1D1A1B07042934). Some of the computations that were conducted using Mathematica were carried out on the computer _sushiki_ at Yukawa Institute for Theoretical Physics in Kyoto University.
## Appendix A Elliptic functions
In this appendix, we summarize definitions and properties of the modular forms and Jacobi forms used in this paper.
### Modular forms
Let \(\mathcal{H}=\{z\in\mathbb{C}\mid\Im z>0\}\) be the upper half plane of the complex plane, \(\tau\in\mathcal{H}\) be the complex structure of the torus and \(q=e^{2\pi i\tau}\). A _modular form of weight_\(k\) is a function \(f:\mathcal{H}\to\mathbb{C}\) satisfying
\[f\bigg{(}\frac{a\tau+b}{c\tau+d}\bigg{)}=(c\tau+d)^{k}f(\tau)\,,\quad\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{SL}(2,\mathbb{Z})\,. \tag{100}\]
An example of the modular form is the Eisenstein series defined by
\[E_{2k}(\tau)=\frac{1}{2\zeta(2k)}\sum_{(m,n)\neq(0,0)}\frac{1}{(m+n \tau)^{2k}}=1+\frac{(2\pi i)^{2k}}{\zeta(2k)(2k-1)!}\sum_{n=1}^{\infty}\sigma_{2 k-1}(n)q^{n}\,, \tag{110}\]
where \(\zeta(s)\) is the Riemann zeta function and \(\sigma_{k}(n)=\sum_{d|n}d^{k}\) is the divisor function. \(E_{2k}(\tau)\) with \(k>1\) are the holomorphic modular forms of weight \(2k\), while \(E_{2}(\tau)\) is only quasi-modular:
\[E_{2}\bigg{(}\frac{a\tau+b}{c\tau+d}\bigg{)}=(c\tau+d)^{2}E_{2}( \tau)-\frac{6i}{\pi}c(c\tau+d)\,. \tag{111}\]
Two Eisenstein series \(E_{4}(\tau)\) and \(E_{6}(\tau)\) generate the ring of holomorphic modular forms \(\mathcal{M}_{*}(\mathrm{SL}(2,\mathbb{Z}))=\bigoplus_{k\geq 0}\mathcal{M}_{2k}( \mathrm{SL}(2,\mathbb{Z}))\), where \(\mathcal{M}_{2k}(\mathrm{SL}(2,\mathbb{Z}))\) is the space of weight \(2k\) modular forms. In other words, \(\mathcal{M}_{2k}(\mathrm{SL}(2,\mathbb{Z}))\) can be written as
\[\mathcal{M}_{2k}(\mathrm{SL}(2,\mathbb{Z}))=\bigoplus_{4a+6b=2k} \mathbb{C}E_{4}(\tau)^{a}E_{6}(\tau)^{b}\,. \tag{112}\]
As a function related to the Eisenstein series, we define the Dedekind eta function as
\[\eta(\tau)=q^{1/24}\prod_{n=1}^{\infty}(1-q^{n})\,. \tag{113}\]
Its 24th power \(\Delta(\tau)=\eta(\tau)^{24}=(E_{4}(\tau)^{3}-E_{6}(\tau)^{2})/1728\) is a weight 12 modular form called the _modular discriminant_, and \(\eta(\tau)\) itself has the following modular transformation properties:
\[\eta(\tau+1)=e^{\pi i/12}\eta(\tau)\,,\quad\eta(-1/\tau)=\sqrt{- i\tau}\eta(\tau)\,. \tag{114}\]
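The identity \(\Delta(\tau)=\eta(\tau)^{24}=(E_{4}^{3}-E_{6}^{2})/1728\) is easy to verify order by order in \(q\); a short Python sketch using the divisor-sum expansions of \(E_{4}\) and \(E_{6}\) and the product form of \(\eta^{24}\) (illustrative only):

```python
import sympy as sp

q = sp.symbols('q')
N = 8  # compare q-expansions through q^(N-1)

sigma = lambda k, n: sum(d**k for d in range(1, n + 1) if n % d == 0)

E4 = 1 + 240*sum(sigma(3, n)*q**n for n in range(1, N))
E6 = 1 - 504*sum(sigma(5, n)*q**n for n in range(1, N))
eta24 = sp.expand(q*sp.prod([(1 - q**n)**24 for n in range(1, N)]))

delta = sp.expand((E4**3 - E6**2)/1728)
print([delta.coeff(q, n) for n in range(N)])   # 0, 1, -24, 252, -1472, ...
print(all(delta.coeff(q, n) == eta24.coeff(q, n) for n in range(N)))   # True
```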
### Jacobi forms
There is a generalization of the modular forms including additional fugacities. A function \(\varphi_{k,m}:\mathcal{H}\times\mathbb{C}\to\mathbb{C}\) is called a _Jacobi form_[105] if it has two transformation properties
\[\varphi_{k,m}\bigg{(}\frac{a\tau+b}{c\tau+d},\frac{z}{c\tau+d} \bigg{)} =(c\tau+d)^{k}e^{\frac{2\pi imcz^{2}}{c\tau+d}}\varphi_{k,m}(\tau, z)\ \ \text{for}\ \ \binom{a\ b}{c\ d}\in\mathrm{SL}(2,\mathbb{Z})\,, \tag{115}\] \[\varphi_{k,m}(\tau,z+\lambda\tau+\mu) =e^{-2\pi im(\lambda^{2}\tau+2\lambda z)}\varphi_{k,m}(\tau,z)\ \ \ \ \text{for}\ \ \lambda,\mu\in\mathbb{Z}\,, \tag{116}\]
and a Fourier expansion of the form
\[\varphi_{k,m}(\tau,z)=\sum_{n,r}c(n,r)q^{n}e^{2\pi irz}\,, \tag{117}\]
where \(k\in\mathbb{Z}\) is called the _weight_ and \(m\in\mathbb{Z}_{\geq 0}\) is called the _index_ or _level_ of the Jacobi form. When \(m=0\), \(\varphi_{k,m}\) is independent of \(z\) and reduces to a modular form of weight \(k\). \(\varphi_{k,m}\) is called a holomorphic Jacobi form if \(c(n,r)=0\) unless \(4mn\geq r^{2}\), a cusp Jacobi form if \(c(n,r)=0\) unless \(4mn>r^{2}\), and a weak Jacobi form if \(c(n,r)=0\) unless \(n\geq 0\).
Let \(J_{k,m}\) be the space of weak Jacobi forms of weight \(k\) and level \(m\). The ring of weak Jacobi forms \(J_{*,*}=\bigoplus_{k,m}J_{k,m}\) is freely generated over the ring of modular forms \(\mathcal{M}_{*}(\mathrm{SL}(2,\mathbb{Z}))\), with generators
\[\varphi_{-2,1}(\tau,z)=-\frac{\theta_{1}(\tau,z)^{2}}{\eta(\tau)^{6}}\,,\quad \varphi_{0,1}(\tau,z)=4\sum_{i=2}^{4}\frac{\theta_{i}(\tau,z)^{2}}{\theta_{i }(\tau,0)^{2}}\,, \tag{111}\]
where \(\theta_{i}(\tau,x)\) are Jacobi theta functions defined by
\[\begin{split}\theta_{1}(\tau,x)&=-i\sum_{n\in \mathbb{Z}}(-1)^{n}q^{\frac{1}{2}(n+1/2)^{2}}y^{n+1/2}\,,&\theta_ {2}(\tau,x)=\sum_{n\in\mathbb{Z}}q^{\frac{1}{2}(n+1/2)^{2}}y^{n+1/2}\,,\\ \theta_{3}(\tau,x)&=\sum_{n\in\mathbb{Z}}q^{\frac{n ^{2}}{2}}y^{n}\,,&\theta_{4}(\tau,x)=\sum_{n\in\mathbb{Z}}(-1)^{n }q^{\frac{n^{2}}{2}}y^{n}\,,\end{split} \tag{112}\]
for \(y=e^{2\pi ix}\). In other words, any weak Jacobi form \(\varphi_{k,m}\) can be written as
\[J_{k,m}\ni\varphi_{k,m}(\tau,z)=\sum_{\begin{subarray}{c}4a_{1}+6a_{2}-2a_{ 3}=k\\ a_{3}+a_{4}=m,a_{i}\in\mathbb{Z}_{\geq 0}\end{subarray}}C_{a_{i}}E_{4}(\tau)^{a_{ 1}}E_{6}(\tau)^{a_{2}}\varphi_{-2,1}(\tau,z)^{a_{3}}\varphi_{0,1}(\tau,z)^{a_ {4}} \tag{113}\]
for some \(C_{a_{i}}\in\mathbb{C}\). We also frequently use
\[\varphi_{-1,1/2}(\tau,z)=i\frac{\theta_{1}(\tau,z)}{\eta(\tau)^{3}}\,, \tag{114}\]
which satisfies \(\varphi_{-1,1/2}(\tau,z)^{2}=\varphi_{-2,1}(\tau,z)\).
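In practice, the decomposition above turns the construction of a weak Jacobi form of given weight and index into a finite problem: one only needs to enumerate the monomials \(E_{4}^{a_{1}}E_{6}^{a_{2}}\varphi_{-2,1}^{a_{3}}\varphi_{0,1}^{a_{4}}\) with \(4a_{1}+6a_{2}-2a_{3}=k\) and \(a_{3}+a_{4}=m\). A small enumeration sketch (the sample weight and index are chosen only for illustration):

```python
def weak_jacobi_monomials(k, m):
    """Exponents (a1, a2, a3, a4) of E4^a1 E6^a2 phi_{-2,1}^a3 phi_{0,1}^a4
    spanning J_{k,m}: solutions of 4*a1 + 6*a2 - 2*a3 = k, a3 + a4 = m."""
    monomials = []
    for a3 in range(m + 1):
        for a2 in range(max(k + 2*a3, 0)//6 + 1):
            rest = k + 2*a3 - 6*a2
            if rest >= 0 and rest % 4 == 0:
                monomials.append((rest//4, a2, a3, m - a3))
    return monomials

# e.g. weight 0 and index 3:
print(weak_jacobi_monomials(0, 3))
# [(0, 0, 0, 3), (1, 0, 2, 1), (0, 1, 3, 0)]  ->  3 free coefficients
```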
The notion of weak Jacobi forms is further generalized to Weyl invariant Jacobi forms [87]. Let \(\mathfrak{g}\) be a Lie algebra of rank \(l\), \(\mathfrak{h}_{\mathbb{C}}\cong\mathbb{C}^{l}\) be the complexification of the Cartan subalgebra, \(W\) be its Weyl group, \(Q^{\vee}\) be the coroot lattice, and \(P\) be the weight lattice. Denote by \(\langle\cdot,\cdot\rangle\) the Killing form on \(\mathfrak{h}_{\mathbb{C}}\) normalized such that the shortest coroot has norm squared \(2\). A _Weyl invariant Jacobi form of weight \(k\) and index \(m\)_ is a function \(\varphi_{k,m}:\mathcal{H}\times\mathfrak{h}_{\mathbb{C}}\to\mathbb{C}\) satisfying the following conditions.
1. Weyl invariance: for \(w\in W\), \[\varphi_{k,m}(\tau,wz)=\varphi_{k,m}(\tau,z)\,.\] (115)
2. Modularity: for \((\begin{smallmatrix}a&b\\ c&d\end{smallmatrix})\in\mathrm{SL}(2,\mathbb{Z})\), \[\varphi_{k,m}\biggl{(}\frac{a\tau+b}{c\tau+d},\frac{z}{c\tau+d}\biggr{)}=(c\tau+d)^{k}\exp\biggl{(}\frac{\pi imc}{c\tau+d}\langle z,z\rangle\biggr{)}\varphi_{k,m}(\tau,z)\,.\] (116)
3. Quasi-periodicity: for \(\lambda,\mu\in Q^{\vee}\), \[\varphi_{k,m}(\tau,z+\lambda\tau+\mu)=\exp(-\pi im[\langle\lambda,\lambda\rangle\tau+2\langle\lambda,z\rangle])\varphi_{k,m}(\tau,z)\,.\] (111)
4. Fourier expansion: \[\varphi_{k,m}(\tau,z)=\sum_{n=0}^{\infty}\sum_{\ell\in P}c(n,\ell)q^{n}e^{2\pi i \langle\ell,z\rangle}\,.\] (112)
The weak Jacobi forms defined above correspond to the \(\mathfrak{g}=A_{1}\) case.
Let \(J_{k,m}(\mathfrak{g})\) be the space of the \(\mathfrak{g}\) Weyl invariant Jacobi forms with weight \(k\) and index \(m\). Then, for a simple Lie algebra except for \(E_{8}\), the bigraded ring,
\[J_{*,*}(\mathfrak{g})=\bigoplus_{k,m\in\mathbb{Z}}J_{k,m}(\mathfrak{g}) \tag{113}\]
is freely generated by \(l+1\) fundamental Weyl invariant Jacobi forms over the ring of modular forms \(\mathcal{M}_{*}(\mathrm{SL}(2,\mathbb{Z}))\). Wirthmüller's theorem [87] provides the weights and indices of the fundamental Weyl invariant Jacobi forms of the simple Lie algebras except for \(E_{8}\), as we list in Table 9. Although the theorem does not give the explicit form of the Jacobi forms, generators of the Weyl invariant Jacobi forms for each Lie algebra have been studied in many works [106; 107; 108; 109; 110; 111]. The \(E_{8}\) case is exceptional for Wirthmüller's theorem, but its Weyl invariant Jacobi forms have also been studied recently [112; 113; 88; 114]. See also [115; 65] for a review in the physics literature. Here, we give a construction of the Weyl invariant Jacobi forms used in this paper.
Let us consider \(\mathfrak{g}=A_{l}\). The weight \(-k\) Jacobi form \(\varphi_{k}^{A_{l}}\in J_{-k,1}(A_{l})\) is given by
\[\varphi_{k}^{A_{l}}=\left.\mathcal{Z}^{l+1-k}\prod_{j=1}^{l+1}\frac{i\theta_{ 1}(x_{j})}{\eta^{3}}\right|_{\sum x_{j}=0},\quad(k=0,2,3,\cdots,l+1) \tag{114}\]
\begin{table}
\begin{tabular}{c|c} \(\mathfrak{g}\) & \((-k,m)\) \\ \hline \(A_{l}\) & \((0,1),(j,1)\) for \(2\leq j\leq l+1\) \\ \(B_{l}\) & \((2j,1)\) for \(0\leq j\leq l\) \\ \(C_{l}\) & \((0,1),(2,1),(4,1),(2j,2)\) for \(3\leq j\leq l\) \\ \(D_{l}\) & \((0,1),(2,1),(4,1),(l,1),(2j,2)\) for \(3\leq j\leq l-1\) \\ \(E_{6}\) & \((0,1),(2,1),(5,1),(6,2),(8,2),(9,2),(12,3)\) \\ \(E_{7}\) & \((0,1),(2,1),(6,2),(8,2),(10,2),(12,3),(14,3),(18,4)\) \\ \(F_{4}\) & \((0,1),(2,1),(6,2),(8,2),(12,3)\) \\ \(G_{2}\) & \((0,1),(2,1),(6,2)\) \\ \end{tabular}
\end{table}
Table 9: Weights and indices for the fundamental Weyl invariant Jacobi forms
where
\[\mathcal{Z}=\frac{1}{2\pi i}\Bigg{(}\sum_{j=1}^{l+1}\frac{\partial}{ \partial x_{j}}+\frac{\pi^{2}}{3}E_{2}(\tau)\sum_{j=1}^{l+1}x_{j}\Bigg{)}\,. \tag{114}\]
The orthogonal basis \(x_{j}\) is related to the Dynkin basis \(\phi_{i}\) by \(x_{1}=\phi_{1}\), \(x_{j}=-\phi_{j-1}+\phi_{j}\) for \(2\leq j\leq l\) and \(x_{l+1}=-\phi_{l}\). In particular, we use
\[\varphi_{3}^{A_{2}} =(\chi_{\mathbf{3}}-\chi_{\overline{\mathbf{3}}})+(\chi_{ \overline{\mathbf{6}}}-\chi_{\mathbf{6}}+7\chi_{\mathbf{3}}-7\chi_{\overline{ \mathbf{3}}})q+\mathcal{O}(q^{2}), \tag{115}\] \[\varphi_{2}^{A_{2}} =\frac{1}{2}\big{[}(6-\chi_{\mathbf{3}}-\chi_{\overline{\mathbf{ 3}}})+(42+6\chi_{\mathbf{8}}-\chi_{\mathbf{6}}-\chi_{\overline{\mathbf{6}}}-13 \chi_{\mathbf{3}}-13\chi_{\overline{\mathbf{3}}})q+\mathcal{O}(q^{2})\big{]},\] \[\varphi_{0}^{A_{2}} =\frac{1}{4}\big{[}(18+\chi_{\mathbf{3}}+\chi_{\overline{\mathbf{ 3}}})+(342+18\chi_{\mathbf{8}}+\chi_{\mathbf{6}}+\chi_{\overline{\mathbf{6}}} -83\chi_{\mathbf{3}}-83\chi_{\overline{\mathbf{3}}})q+\mathcal{O}(q^{2}) \big{]},\]
for \(\mathfrak{g}=A_{2}\) in section 3.3 to write the modular ansatz for the \(SU(3)+1\mathbf{sym}+1\mathbf{\Lambda}^{2}\) LST, where \(\chi_{\mathbf{R}}\) denotes the character of \(SU(3)\) in the representation \(\mathbf{R}\).5
Footnote 5: Note that \(\varphi_{0}^{A_{2}}\) in our paper is \(-6\varphi_{0}\) defined in Appendix B of [77].
Next, to study the \(D_{l}\) Jacobi forms, we first consider the \(B_{l}\) Jacobi forms. The generators of \(B_{l}\) Jacobi forms \(\varphi_{2j}^{B_{l}}\in J_{-2j,1}\) can be computed from the generating function
\[\prod_{j=1}^{l}\frac{i\theta_{1}(v-x_{j})}{\eta^{3}}\frac{i\theta_{1}(v+x_{j})}{\eta^{3}}=\left(i\frac{\theta_{1}(v)}{\eta^{3}}\right)^{2l}\sum_{j=0}^{l}\frac{\wp^{(2j-2)}(v)}{(2j-1)!}\varphi_{2j}^{B_{l}}(x_{1},\cdots,x_{l})\,, \tag{116}\]
where the \(j=0\) term in the summation is understood as \(\varphi_{0}^{B_{l}}(x_{1},\cdots,x_{l})\), and \(\wp\) is the Weierstrass \(\wp\) function defined as
\[\wp(z)=\frac{\theta_{3}(0)^{2}\theta_{2}(0)^{2}}{4}\frac{\theta_{4}(z)^{2}}{ \theta_{1}(z)^{2}}-\frac{1}{12}\big{(}\theta_{3}(0)^{4}+\theta_{2}(0)^{4}\big{)}. \tag{117}\]
Then the \(l-3\) generators of the \(D_{l}\) Jacobi forms with index \(2\) are identified with the \(B_{l}\) Jacobi forms:
\[\varphi_{-k,2}^{D_{l}}=\varphi_{k}^{B_{l}}\quad(k=6,8,\cdots,2l-2)\,, \tag{118}\]
where \(\varphi_{-k,2}^{D_{l}}\in J_{-k,2}(D_{l})\) and \(\varphi_{k}^{B_{l}}\in J_{-k,1}(B_{l})\). The index \(1\) generators are
\[\varphi_{-l,1}^{D_{l}} =\prod_{j=1}^{l}\frac{\theta_{1}(x_{j})}{\eta^{3}}\,,\quad\varphi_{-4,1}^{D_{l}}=\frac{1}{\eta^{12}}\Bigg{(}\frac{\prod_{j=1}^{l}\theta_{3}(x_{j})}{\theta_{3}(0)^{l-4}}-\frac{\prod_{j=1}^{l}\theta_{4}(x_{j})}{\theta_{4}(0)^{l-4}}-\frac{\prod_{j=1}^{l}\theta_{2}(x_{j})}{\theta_{2}(0)^{l-4}}\Bigg{)},\] \[\varphi_{-2,1}^{D_{l}} =\frac{\theta_{3}(0)^{4}+\theta_{4}(0)^{4}}{\eta^{12}}\Bigg{(}\frac{\prod_{j=1}^{l}\theta_{3}(x_{j})}{\theta_{3}(0)^{l-4}}-\frac{\prod_{j=1}^{l}\theta_{4}(x_{j})}{\theta_{4}(0)^{l-4}}+\frac{2\prod_{j=1}^{l}\theta_{2}(x_{j})}{\theta_{2}(0)^{l-4}}\Bigg{)}\] \[\quad-\frac{3\theta_{2}(0)^{4}}{\eta^{12}}\Bigg{(}\frac{\prod_{j=1}^{l}\theta_{3}(x_{j})}{\theta_{3}(0)^{l-4}}+\frac{\prod_{j=1}^{l}\theta_{4}(x_{j})}{\theta_{4}(0)^{l-4}}\Bigg{)}\,,\] \[\varphi_{0,1}^{D_{l}} =\frac{1}{\eta^{12}}\Bigg{(}\frac{\prod_{j=1}^{l}\theta_{3}(x_{j})}{\theta_{3}(0)^{l-12}}-\frac{\prod_{j=1}^{l}\theta_{4}(x_{j})}{\theta_{4}(0)^{l-12}}-\frac{\prod_{j=1}^{l}\theta_{2}(x_{j})}{\theta_{2}(0)^{l-12}}\Bigg{)}\,, \tag{119}\]
where \(\varphi^{D_{l}}_{-k,1}\in J_{-k,1}(D_{l})\). These level 1 Jacobi forms are used to construct the 1-string elliptic genus of the \(SO(32)\) heterotic LST in subsection 3.2.2.
Lastly, we review the \(E_{8}\) Jacobi forms. The bigraded ring \(J_{*,*}(E_{8})\) of the \(E_{8}\) Weyl invariant Jacobi forms is contained in a polynomial algebra over \(\mathcal{M}_{*}(\mathrm{SL}(2,\mathbb{Z}))\) generated by nine functions [88]:
\[J_{*,*}(E_{8})\subsetneq\mathcal{M}_{*}(\mathrm{SL}(2,\mathbb{Z}))[A_{1},A_{2},A_{3},A_{4},A_{5},B_{2},B_{3},B_{4},B_{6}]\,, \tag{111}\]
where [112]
\[A_{1} =\Theta_{E_{8}}(\tau,x)=\frac{1}{2}\sum_{k=1}^{4}\prod_{j=1}^{8} \theta_{k}(\tau,x_{j})\,,\quad A_{4}=\Theta_{E_{8}}(\tau,2x)\,, \tag{112}\] \[A_{n} =\frac{n^{3}}{n^{3}+1}\Bigg{(}\Theta_{E_{8}}(n\tau,nx)+\frac{1}{ n^{4}}\sum_{k=0}^{n-1}\Theta_{E_{8}}(\tfrac{\tau+k}{n},x)\Bigg{)}\quad(n=2,3,5)\,,\] \[B_{2} =\frac{32}{5}\bigg{(}e_{1}(\tau)\Theta_{E_{8}}(2\tau,2x)+\frac{1 }{2^{4}}e_{3}(\tau)\Theta_{E_{8}}(\tfrac{\tau}{2},x)+\frac{1}{2^{4}}\Theta_{E _{8}}(\tfrac{\tau+1}{2},x)\bigg{)}\,,\] \[B_{3} =\frac{81}{80}\Bigg{(}h(\tau)^{2}\Theta_{E_{8}}(3\tau,3x)-\frac{ 1}{3^{5}}\sum_{k=0}^{2}h(\tfrac{\tau+k}{3})^{2}\Theta_{E_{8}}(\tfrac{\tau+k}{3 },x)\Bigg{)}\,,\] \[B_{4} =\frac{16}{15}\bigg{(}\theta_{4}(2\tau,0)^{4}\Theta_{E_{8}}(4 \tau,4x)-\frac{1}{2^{4}}\theta_{4}(2\tau,0)^{4}\Theta_{E_{8}}(\tau+\tfrac{1}{ 2},2x)\] \[\qquad\qquad-\frac{1}{2^{10}}\sum_{k=0}^{3}\theta_{2}(\tfrac{ \tau+k}{2},0)^{4}\Theta_{E_{8}}(\tfrac{\tau+k}{4},x)\bigg{)}\,,\] \[B_{6} =\frac{9}{10}\bigg{(}h(\tau)^{2}\Theta_{E_{8}}(6\tau,6x)+\frac{1} {2^{4}}\sum_{k=0}^{1}h(\tau+k)^{2}\Theta_{E_{8}}(\tfrac{3\tau+3k}{2},3x)\] \[\qquad\qquad-\frac{1}{3^{5}}\sum_{k=0}^{2}h(\tfrac{\tau+k}{3})^{ 2}\Theta_{E_{8}}(\tfrac{2\tau+2k}{3},2x)-\frac{1}{2^{4}\cdot 3^{5}}\sum_{k=0}^{5}h( \tfrac{\tau+k}{3})^{2}\Theta_{E_{8}}(\tfrac{\tau+k}{6},x)\bigg{)}\,.\]
Here,
\[e_{1}(\tau) =\frac{1}{12}\big{(}\theta_{3}(\tau,0)^{4}+\theta_{4}(\tau,0)^{4}\big{)}\,,\qquad e_{2}(\tau)=\frac{1}{12}\big{(}\theta_{2}(\tau,0)^{4}-\theta_{4}(\tau,0)^{4}\big{)}\,, \tag{113}\] \[e_{3}(\tau) =\frac{1}{12}\big{(}-\theta_{2}(\tau,0)^{4}-\theta_{3}(\tau,0)^{4}\big{)}\,,\qquad h(\tau)=\theta_{3}(2\tau,0)\theta_{3}(6\tau,0)+\theta_{2}(2\tau,0)\theta_{2}(6\tau,0)\,,\]
\(A_{n}\) and \(B_{n}\) have index \(n\) and weight 4 and 6, respectively, and are normalized such that \(A_{n}(\tau,0)=E_{4}(\tau)\) and \(B_{n}(\tau,0)=E_{6}(\tau)\). They are used to construct the modular ansatz of the \(E_{8}\times E_{8}\) LST in subsection 3.2.1.
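The normalization \(A_{1}(\tau,0)=\Theta_{E_{8}}(\tau,0)=E_{4}(\tau)\) can be checked directly from the theta-constant expression, since \(\theta_{1}(\tau,0)=0\) and hence \(\Theta_{E_{8}}(\tau,0)=\frac{1}{2}\big{(}\theta_{2}(\tau,0)^{8}+\theta_{3}(\tau,0)^{8}+\theta_{4}(\tau,0)^{8}\big{)}\); a short numerical sketch in the conventions of this appendix (illustrative only):

```python
import sympy as sp

Q = sp.symbols('Q')     # Q stands for q^(1/8)
NMAX, ORDER = 6, 5      # theta-sum cutoff and number of q-orders compared

# Theta constants theta_{2,3,4}(tau, 0) as series in Q = q^(1/8),
# in the conventions of this appendix.
theta2 = sum(Q**(4*n*n + 4*n + 1) for n in range(-NMAX, NMAX + 1))
theta3 = sum(Q**(4*n*n) for n in range(-NMAX, NMAX + 1))
theta4 = sum((-1)**(n % 2)*Q**(4*n*n) for n in range(-NMAX, NMAX + 1))

# Theta_E8(tau,0) = (theta2^8 + theta3^8 + theta4^8)/2  since theta1(tau,0) = 0
ThetaE8 = sp.expand((theta2**8 + theta3**8 + theta4**8)/2)

sigma3 = lambda n: sum(d**3 for d in range(1, n + 1) if n % d == 0)
E4 = [1] + [240*sigma3(n) for n in range(1, ORDER)]

# The coefficient of q^n is the coefficient of Q^(8n).
print([ThetaE8.coeff(Q, 8*n) for n in range(ORDER)])        # [1, 240, 2160, 6720, 17520]
print([ThetaE8.coeff(Q, 8*n) for n in range(ORDER)] == E4)  # True
```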
## Appendix B Derivation of elliptic genera
In this appendix, we present the details of the elliptic genus computations for the \(E_{8}\times E_{8}\) heterotic LST, the \(SO(32)\) heterotic LST and the \(SU(3)+1\mathbf{sym}+1\mathbf{\Lambda}^{2}\) LST using the 2d ADHM constructions for the moduli spaces of (instanton) strings.
### Elliptic genus of \(E_{8}\times E_{8}\) heterotic LST
We can evaluate the elliptic genera of the rank \(1\) \(E_{8}\times E_{8}\) heterotic LST from the 2d \(\mathcal{N}=(0,4)\) gauge theory description given in Figure 4. The elliptic genus is given by the integration of the 1-loop determinants of the supermultiplets in the 2d gauge theory over the flat connections of the \(O(k_{1})\times O(k_{2})\) gauge group. Note that we also have to sum over disconnected sectors of the flat connections corresponding to the disconnected components of the orthogonal gauge group. For an \(O(k)\) group with \(k\geq 3\), there are at most \(\lfloor k/2\rfloor\) complex moduli \(u_{I}\) and in total eight disconnected sectors of flat connections, while \(O(2)\) has seven sectors consisting of one continuous complex modulus and six discrete holonomies, and \(O(1)\) has four discrete sectors [93]. In total, the \((k_{1},k_{2})\)-string elliptic genus is given by
\[Z_{(k_{1},k_{2})}=\sum_{I_{1},I_{2}}\frac{1}{|W^{(I_{1})}|\cdot|W^{(I_{2})}|} \frac{1}{(2\pi i)^{r_{1}+r_{2}}}\oint Z_{\rm 1-loop}\,,\] (B.1)
where \(I_{1}\) and \(I_{2}\) represent the disconnected sectors of \(O(k_{1})\) and \(O(k_{2})\) flat connections, \(W^{(I_{1,2})}\) are the corresponding Weyl group factors and \(r_{1,2}\) are the numbers of continuous complex moduli. The integration contour is chosen by the Jeffrey-Kirwan residue (JK-residue for short) prescription as discussed in [75; 89]. The 1-loop determinant \(Z_{\rm 1-loop}\) is the collection of the following 1-loop determinants
\[Z^{(j)}_{\rm vec} =\left(\prod_{I=1}^{r_{j}}\frac{2\pi\eta^{2}du_{I}}{i}\frac{i \theta_{1}(2\epsilon_{+})}{\eta}\right)\!\left(\prod_{e\in\mathbf{R}_{j}}\frac {i\theta_{1}(e\cdot u)}{\eta}\frac{i\theta_{1}(2\epsilon_{+}+e\cdot u)}{\eta} \right),\] (B.2) \[Z^{(j)}_{\rm sym,hyp} =\prod_{w\in\mathbf{sym}_{j}}\frac{(i\eta)^{2}}{\theta_{1}( \epsilon_{1,2}+w\cdot u)}\,,\quad Z^{(j)}_{\rm fund,Fermi}=\prod_{w\in\mathbf{ fund}_{j}}\prod_{l=l_{j}}^{l_{j}+7}\frac{i\theta_{1}(m_{l}+w\cdot u)}{\eta}\,,\] \[Z_{\rm bifund} =\prod_{w\in\mathbf{bifund}}\frac{\theta_{1}(\pm m_{0}+\epsilon_ {-}+w\cdot u)}{\theta_{1}(\pm m_{0}-\epsilon_{+}+w\cdot u)}\,,\]
for \(j=1,2\), where \(\mathbf{R}_{j}\), \(\mathbf{sym}_{j}\) and \(\mathbf{fund}_{j}\) denote the root system, symmetric and fundamental representations of \(SO(k_{j})\), respectively, \(\mathbf{bifund}\) is the bifundamental representation of \(SO(k_{1})\times SO(k_{2})\) and \((l_{1},l_{2})=(1,9)\). The details of the contour integral for the \(O(k)\) gauge group are explained in [93], and we will use some of their results.
_(1,0)-string._ The \(O(1)\) gauge group consists of four discrete flat connections labelled by \(u^{I}=0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2}\). For each sector, the 1-loop determinant is
\[Z^{I}_{(1,0)}=\frac{(i\eta)^{2}}{\theta_{1}(\epsilon_{1}+2u^{I})\theta_{1}(\epsilon_{2}+2u^{I})}\prod_{l=1}^{8}\frac{i\theta_{1}(m_{l}+u^{I})}{\eta}\,.\] (B.3)
Thus, the \((1,0)\)-string elliptic genus is
\[Z_{(1,0)}=-\frac{1}{2}\sum_{I=1}^{4}\frac{\prod_{l=1}^{8}\theta_{I}(m_{l})}{ \eta^{6}\theta_{1}(\epsilon_{1})\theta_{1}(\epsilon_{2})}\,,\] (B.4)
where \(1/2\) is the Weyl group factor.
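The appearance of \(\theta_{I}(m_{l})\) in (B.4) follows from the half-period shifts of \(\theta_{1}\): the four holonomies \(u^{I}=0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2}\) turn \(\theta_{1}(m_{l}+u^{I})\) in (B.3) into \(\theta_{1},\theta_{2},\theta_{3},\theta_{4}\) of \(m_{l}\), up to simple prefactors. A small numerical check of two of these shifts, in the theta-function conventions of Appendix A.2 (illustrative only):

```python
import cmath

def theta(i, tau, z, N=10):
    """Jacobi theta_i(tau, z), i = 1..4, as truncated sums in the
    conventions of Appendix A.2 (q = e^{2 pi i tau}, y = e^{2 pi i z})."""
    q = cmath.exp(2j*cmath.pi*tau)
    y = cmath.exp(2j*cmath.pi*z)
    s = 0
    for n in range(-N, N + 1):
        if i in (1, 2):
            term = q**((n + 0.5)**2/2)*y**(n + 0.5)
            s += (-1)**n*term if i == 1 else term
        else:
            term = q**(n**2/2)*y**n
            s += (-1)**n*term if i == 4 else term
    return -1j*s if i == 1 else s

tau, z = 0.31 + 1.07j, 0.17 - 0.05j
q = cmath.exp(2j*cmath.pi*tau)

# theta_1(z + 1/2) = theta_2(z)
print(abs(theta(1, tau, z + 0.5) - theta(2, tau, z)))
# theta_1(z + tau/2) = i q^(-1/8) e^(-i pi z) theta_4(z)
print(abs(theta(1, tau, z + tau/2)
          - 1j*q**(-1/8)*cmath.exp(-1j*cmath.pi*z)*theta(4, tau, z)))
# both differences are at machine-precision level (~1e-16)
```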
_(2,0)-string._ The \(O(2)\) gauge group has one continuous flat connection and six discrete flat connections. The contribution from the continuous sector is
\[Z^{(0)}_{(2,0)}=\frac{1}{2\pi i}\oint\frac{2\pi\eta^{2}du}{i}\frac{i\theta_{1}(2 \epsilon_{+})}{\eta}\frac{(i\eta)^{6}}{\theta_{1}(\epsilon_{1,2})\theta_{1}( \epsilon_{1,2}\pm 2u)}\prod_{l=1}^{8}\frac{i\theta_{1}(m_{l}\pm u)}{\eta}\,.\] (B.5)
The JK-residue comes from \(u=-\frac{\epsilon_{1,2}}{2}+u^{I}\), where \(u^{I}=0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2}\). In total, we compute6
Footnote 6: The overall sign is chosen by requiring the GV-invariant structure (2.3).
\[Z^{(0)}_{(2,0)}=\frac{1}{2\eta^{12}\theta_{1}(\epsilon_{1})\theta_{1}( \epsilon_{2})}\sum_{I=1}^{4}\left(\frac{\prod_{l=1}^{8}\theta_{I}(m_{l}\pm \frac{\epsilon_{1}}{2})}{\theta_{1}(2\epsilon_{1})\theta_{1}(\epsilon_{2}- \epsilon_{1})}+\frac{\prod_{l=1}^{8}\theta_{I}(m_{l}\pm\frac{\epsilon_{2}}{2} )}{\theta_{1}(2\epsilon_{2})\theta_{1}(\epsilon_{1}-\epsilon_{2})}\right).\] (B.6)
The six discrete sectors are
\[Z^{(I,J)}_{(2,0)} =\frac{i\theta_{1}(u^{I}+u^{J})}{\eta}\frac{i\theta_{1}(2\epsilon _{+}+u^{I}+u^{J})}{\eta}\] \[\quad\cdot\frac{(i\eta)^{6}}{\theta_{1}(\epsilon_{1,2}+2u^{I}) \theta_{1}(\epsilon_{1,2}+2u^{J})\theta_{1}(\epsilon_{1,2}+u^{I}+u^{J})}\prod _{l=1}^{8}\frac{i\theta_{1}(m_{l}+u^{I,J})}{\eta},\] (B.7)
where \((I,J)=(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)\) labels the six sectors of flat connections for \((u^{1},u^{2},u^{3},u^{4})=(0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2})\). These sectors can be rewritten as
\[Z^{(I,J)}_{(2,0)}=\frac{\theta_{\sigma(I,J)}(0)\theta_{\sigma(I,J)}(2 \epsilon_{+})\prod_{l=1}^{8}\theta_{I}(m_{l})\theta_{J}(m_{l})}{\eta^{12} \theta_{1}(\epsilon_{1,2})^{2}\theta_{\sigma(I,J)}(\epsilon_{1})\theta_{ \sigma(I,J)}(\epsilon_{2})}\,,\] (B.8)
where
\[\begin{split}&\sigma(I,J)=\sigma(J,I)\,,\quad\sigma(I,I)=0\,, \quad\sigma(1,I)=I\,,\\ &\sigma(2,3)=4\,,\quad\quad\quad\quad\sigma(2,4)=3\,,\quad \sigma(3,4)=2\,.\end{split}\] (B.9)
After dividing it by the Weyl group factor, the \((2,0)\)-string elliptic genus is given by
\[Z_{(2,0)}=\frac{1}{2}Z^{(0)}_{(2,0)}+\frac{1}{4}\sum_{I=1}^{4}\sum_{J=I+1}^{4 }Z^{(I,J)}_{(2,0)}\,.\] (B.10)
_(1,1)-string._ Let \(u_{1}\) and \(u_{2}\) label the flat connections for the two \(O(1)\) gauge groups. As we explained in the (1,0)-string case above, there are four distinct flat connections labelled by \(u^{I}=0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2}\) for each \(O(1)\) gauge group. The 1-loop determinant is
\[\begin{split} Z^{(I,J)}_{(1,1)}&=\frac{(i\eta)^{4}}{\theta_{1}(\epsilon_{1}+2u^{I,J})\theta_{1}(\epsilon_{2}+2u^{I,J})}\frac{\theta_{1}(\pm m+\epsilon_{-}+u^{I}+u^{J})}{\theta_{1}(\pm m-\epsilon_{+}+u^{I}+u^{J})}\\ &\quad\cdot\left(\prod_{l=1}^{8}\frac{i\theta_{1}(m_{l}+u^{I})}{\eta}\right)\!\left(\prod_{l=9}^{16}\frac{i\theta_{1}(m_{l}+u^{J})}{\eta}\right),\end{split}\] (B.11)
where \(I,J=1,2,3,4\) correspond to \(u^{I,J}=0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2}\), respectively. By dividing it by the Weyl group factor, the \((1,1)\)-string elliptic genus is
\[Z_{(1,1)}=\frac{1}{4}\sum_{I,J=1}^{4}\frac{\prod_{l=1}^{8}\theta_{I}(m_{l})\cdot\prod_{l=9}^{16}\theta_{J}(m_{l})\,\theta_{\sigma(I,J)}(\pm m_{0}+\epsilon_{-})}{\eta^{12}\theta_{1}(\epsilon_{1})^{2}\theta_{1}(\epsilon_{2})^{2}\,\theta_{\sigma(I,J)}(\pm m_{0}-\epsilon_{+})}\,.\]
_(2,1)-string._ To compute the \((2,1)\)-string elliptic genus, we need to consider all combinations of \(O(2)\) and \(O(1)\) flat connections. First, from the continuous sector of \(O(2)\) and four discrete sectors of \(O(1)\), we have
\[Z_{(2,1)}^{(0)} =\frac{1}{2\pi i}\oint\frac{2\pi\eta^{2}du_{1}}{i}\frac{i\theta_{ 1}(2\epsilon_{+})}{\eta}\frac{(i\eta)^{6}}{\theta_{1}(\epsilon_{1,2})\theta_ {1}(\epsilon_{1,2}\pm 2u_{1})}\frac{(i\eta)^{2}}{\theta_{1}(\epsilon_{1}+2u^{J}) \theta_{1}(\epsilon_{2}+2u^{J})}\] (B.13) \[\quad\cdot\frac{\theta_{1}(\pm m+\epsilon_{-}+u_{1}+u^{J})\theta _{1}(\pm m+\epsilon_{-}-u_{1}+u^{J})}{\theta_{1}(\pm m-\epsilon_{+}+u_{1}+u^ {J})\theta_{1}(\pm m-\epsilon_{+}-u_{1}+u^{J})}\] \[\quad\cdot\left(\prod_{l=1}^{8}\frac{i\theta_{1}(m_{l}\pm u_{1}) }{\eta}\right)\!\left(\prod_{l=9}^{16}\frac{i\theta_{1}(m_{l}+u^{J})}{\eta} \right),\]
where \(u^{J}=0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2}\) labels the \(O(1)\) flat connections. Then \(Z_{(2,1)}^{(0)}\) is given by the sum of the following two JK-residues:
* \(u_{1}=-\frac{\epsilon_{1,2}}{2}+u^{I}\) for \(u^{I}=0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2}\) \[\sum_{I,J=1}^{4}\frac{-\prod_{l=1}^{8}\theta_{I}(m_{l}\pm\frac{ \epsilon_{1}}{2})\cdot\prod_{l=9}^{16}\theta_{J}(m_{l})}{2\eta^{18}\theta_{1 }(\epsilon_{1,2})^{2}\theta_{1}(2\epsilon_{1})\theta_{1}(\epsilon_{2}- \epsilon_{1})}\frac{\theta_{\sigma(I,J)}(\pm m_{0}+\epsilon_{1}-\frac{ \epsilon_{2}}{2})}{\theta_{\sigma(I,J)}(\pm m_{0}-\epsilon_{1}-\frac{ \epsilon_{2}}{2})}+(\epsilon_{1}\leftrightarrow\epsilon_{2})\]
* \(u_{1}=\pm m+\epsilon_{+}-u^{J}\) \[\sum_{I=1}^{4}\frac{-\prod_{l=1}^{8}\theta_{I}(m_{l}\pm(m_{0}+ \epsilon_{+}))\cdot\prod_{l=9}^{16}\theta_{I}(m_{l})}{\eta^{18}\theta_{1}( \epsilon_{1,2})\theta_{1}(2m_{0})\theta_{1}(2m_{0}+2\epsilon_{+})\theta_{1}(2 m_{0}+2\epsilon_{+}+\epsilon_{1,2})}+(m_{0}\rightarrow-m_{0})\]
Next, there are combinations of six discrete sectors for \(O(2)\) and four discrete sectors for \(O(1)\). If we denote \((u^{I},u^{J})=(0,\frac{1}{2}),(0,\frac{\tau+1}{2}),(0,\frac{\tau}{2}),(\frac{ 1}{2},\frac{\tau+1}{2}),(\frac{1}{2},\frac{\tau}{2}),(\frac{\tau+1}{2},\frac{ \tau}{2})\) as the \(O(2)\) discrete flat connections and \(u^{K}=0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2}\) as the \(O(1)\) flat connections, the 1-loop determinant is
\[Z_{(2,1)}^{(I,J,K)} =\frac{i\theta_{1}(u^{I}+u^{J})}{\eta}\frac{i\theta_{1}(2\epsilon _{+}+u^{I}+u^{J})}{\eta}\frac{(i\eta)^{6}}{\theta_{1}(\epsilon_{1,2}+2u^{I,J}) \theta_{1}(\epsilon_{1,2}+u^{I}+u^{J})}\] \[\quad\cdot\frac{(i\eta)^{2}}{\theta_{1}(\epsilon_{1,2}+2u^{K})} \frac{\theta_{1}(\pm m+\epsilon_{-}+u^{I}+u^{K})\theta_{1}(\pm m+\epsilon_{-}+u ^{J}+u^{K})}{\theta_{1}(\pm m-\epsilon_{+}+u^{I}+u^{K})\theta_{1}(\pm m- \epsilon_{+}+u^{J}+u^{K})}\] \[\quad\cdot\left(\prod_{l=1}^{8}\frac{i\theta_{1}(m_{l}+u^{I,J})} {\eta}\right)\!\left(\prod_{l=9}^{16}\frac{i\theta_{1}(m_{l}+u^{K})}{\eta} \right).\] (B.16)
Then we get
\[Z_{(2,1)}^{(I,J,K)} =-\frac{\theta_{\sigma(I,J)}(0)\theta_{\sigma(I,J)}(2\epsilon_{+})}{ \eta^{18}\theta_{1}(\epsilon_{1,2})^{3}\theta_{\sigma(I,J)}(\epsilon_{1,2})} \frac{\theta_{\sigma(I,K)}(\pm m_{0}+\epsilon_{-})\theta_{\sigma(J,K)}(\pm m_{0 }+\epsilon_{-})}{\theta_{\sigma(I,K)}(\pm m_{0}-\epsilon_{+})\theta_{\sigma(J,K)}(\pm m_{0}-\epsilon_{+})}\] \[\quad\cdot\prod_{l=1}^{8}\theta_{I}(m_{l})\theta_{J}(m_{l})\cdot \prod_{l=9}^{16}\theta_{K}(m_{l})\,. \tag{113}\]
By dividing it by the Weyl group factor, the \((2,1)\)-string elliptic genus can be written as
\[Z_{(2,1)}=\frac{1}{4}Z_{(2,1)}^{(0)}+\frac{1}{8}\sum_{K=1}^{4} \sum_{I<J}^{4}Z_{(2,1)}^{(I,J,K)}\,. \tag{114}\]
### Elliptic genus of \(SO(32)\) heterotic LST
In this appendix, we compute the elliptic genus of the rank 1 \(SO(32)\) heterotic LST based on the 2d gauge theory description given in Figure 5. The 2d theory has an orthogonal gauge group, so the \(k\)-string elliptic genus can be written as
\[Z_{k} =\sum_{K}\frac{1}{|W^{(K)}|}\frac{1}{(2\pi i)^{r}}\oint\left( \prod_{l=1}^{r}\frac{2\pi\eta^{2}du_{I}}{i}\frac{i\theta_{1}(2\epsilon_{+})}{ \eta}\right)\!\left(\prod_{e\in\mathbf{R}}\frac{i\theta_{1}(e\cdot u)}{\eta} \frac{i\theta_{1}(2\epsilon_{+}+e\cdot u)}{\eta}\right)\] \[\left(\prod_{\rho\in\mathbf{sym}}\frac{(i\eta)^{2}}{\theta_{1}( \epsilon_{1,2}+\rho(u))}\frac{(i\eta)^{2}}{\theta_{1}(\pm m_{0}-\epsilon_{+}+ \rho(u))}\right)\!\left(\prod_{\rho\in\mathbf{anti}}\frac{i^{2}\theta_{1}(\pm m _{0}+\epsilon_{-}+\rho(u))}{\eta^{2}}\right)\] \[\left(\prod_{\rho\in\mathbf{bifund}}\frac{\theta_{1}(m_{0}+\rho( a,u))}{\theta_{1}(\epsilon_{+}+\rho(a,u))}\right)\!\left(\prod_{\rho\in\mathbf{ fund}}\prod_{l=1}^{16}\frac{i\theta_{1}(m_{l}+\rho(u))}{\eta}\right), \tag{115}\]
for a number \(r\) of continuous complex moduli \(u_{I}\), as explained in [93] and briefly reviewed in the previous subsection. Here, \(K\) denotes the disconnected sectors of \(O(k)\) flat connections, \(W^{(K)}\) is the corresponding Weyl group, \(\mathbf{R}\), \(\mathbf{sym}\), \(\mathbf{anti}\) and \(\mathbf{fund}\) are the \(SO(k)\) root system, symmetric, antisymmetric (i.e., adjoint) and fundamental representations, respectively, and \(\mathbf{bifund}\) is the bifundamental representation of \(SO(k)\times Sp(1)\).
_1-string._ There are 4 discrete flat connections in the \(O(1)\) gauge group, labelled by \(u^{I}=0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2}\). The 1-string elliptic genus, obtained by summing over the contributions from these flat connections, is
\[Z_{1}=-\sum_{I=1}^{4}\frac{\theta_{I}(m_{0}\pm a)\prod_{l=1}^{16} \theta_{I}(m_{l})}{2\eta^{12}\theta_{1}(\epsilon_{1})\theta_{1}(\epsilon_{2}) \theta_{1}(\pm m_{0}-\epsilon_{+})\theta_{I}(\epsilon_{+}\pm a)}\,, \tag{116}\]
where the \(1/2\) factor comes from the Weyl group.
_2-string._ There are one continuous sector and six discrete sectors of the \(O(2)\) flat connections. The contribution from the continuous sector is
\[Z_{2}^{(0)} =\oint\frac{2\pi\eta^{2}du}{i}\frac{i\theta_{1}(2\epsilon_{+})}{ \eta}\frac{(i\eta)^{6}}{\theta_{1}(\epsilon_{1,2})\theta_{1}(\epsilon_{1,2}\pm 2u)}\] \[\quad\cdot\frac{(i\eta)^{6}}{\theta_{1}(\pm m_{0}-\epsilon_{+}) \theta_{1}(\pm m_{0}-\epsilon_{+}+2u)\theta_{1}(\pm m_{0}-\epsilon_{+}-2u)} \frac{i^{2}\theta_{1}(\pm m_{0}+\epsilon_{-})}{\eta^{2}}\] \[\quad\cdot\frac{\theta_{1}(m_{0}\pm a+u)\theta_{1}(m_{0}\pm a-u)} {\theta_{1}(\epsilon_{+}\pm a+u)\theta_{1}(\epsilon_{+}\pm a-u)}\prod_{l=1}^{1 6}\frac{i^{2}\theta_{1}(m_{l}\pm u)}{\eta^{2}}\,. \tag{111}\]
The integral can be evaluated by summing over the following JK-residues:
* \(\epsilon_{1,2}+2u=0,1,\tau,\tau+1\) \[Z_{2}^{(1)}=\sum_{J=1}^{4}\frac{\prod_{l=1}^{16}\theta_{J}(m_{l }\pm\frac{\epsilon_{1}}{2})}{2\eta^{24}\theta_{1}(\epsilon_{1,2})\theta_{1}( 2\epsilon_{1})\theta_{1}(\epsilon_{2}-\epsilon_{1})\theta_{1}(\pm m_{0}- \epsilon_{+})\theta_{1}(\pm m_{0}-\epsilon_{+}-\epsilon_{1})}\] \[\qquad\qquad\qquad\cdot\frac{\theta_{J}(m_{0}+\frac{\epsilon_{1}} {2}\pm a)\theta_{J}(m_{0}-\frac{\epsilon_{1}}{2}\pm a)}{\theta_{J}(\epsilon_{ +}+\frac{\epsilon_{1}}{2}\pm a)\theta_{J}(\epsilon_{+}-\frac{\epsilon_{1}}{2} \pm a)}+(\epsilon_{1}\leftrightarrow\epsilon_{2})\] (112)
* \(\pm m_{0}-\epsilon_{+}+2u=0,1,\tau,\tau+1\) \[Z_{2}^{(2)}=-\sum_{J=1}^{4}\frac{\prod_{l=1}^{16}\theta_{J}(m_{l }\pm\frac{m_{0}+\epsilon_{+}}{2})}{2\eta^{24}\theta_{1}(\epsilon_{1,2}) \theta_{1}(2m_{0})\theta_{1}(2m_{0}+2\epsilon_{+})\theta_{1}(\epsilon_{+}\pm m _{0})\theta_{1}(m_{0}+\epsilon_{+}+\epsilon_{1,2})}\] \[\qquad\qquad\cdot\frac{\theta_{J}(\frac{3m_{0}}{2}+\frac{ \epsilon_{+}}{2}\pm a)}{\theta_{J}(\frac{m_{0}}{2}+\frac{3}{2}\epsilon_{+}\pm a )}+(m_{0}\rightarrow-m_{0})\] (113)
* \(\epsilon_{+}\pm a+u=0\) \[Z_{2}^{(3)}=\frac{\prod_{l=1}^{16}\theta_{1}(m_{l}\pm(\epsilon_{+} +a))}{\eta^{24}\theta_{1}(\epsilon_{1,2})\theta_{1}(2a)\theta_{1}(\epsilon_{1, 2}+2a)\theta_{1}(2\epsilon_{+}+2a)\theta_{1}(2\epsilon_{+}+\epsilon_{1,2}+2a)}\] \[\qquad\qquad\cdot\frac{\theta_{1}(\pm m_{0}+\epsilon_{-})}{ \theta_{1}(\pm m_{0}-3\epsilon_{+}-2a)}+(a\rightarrow-a)\,.\] (114)
The contributions coming from the discrete sectors are
\[Z_{2}^{(I,J)} =\frac{i\theta_{1}(u^{I}+u^{J})}{\eta}\frac{i\theta_{1}(2\epsilon_{+}+u^{I}+u^{J})}{\eta}\frac{(i\eta)^{6}}{\theta_{1}(\epsilon_{1,2}+2u^{I,J})\theta_{1}(\epsilon_{1,2}+u^{I}+u^{J})}\] \[\quad\cdot\frac{(i\eta)^{6}}{\theta_{1}(\pm m_{0}-\epsilon_{+}+2u^{I,J})\theta_{1}(\pm m_{0}-\epsilon_{+}+u^{I}+u^{J})}\frac{i^{2}\theta_{1}(\pm m_{0}+\epsilon_{-}+u^{I}+u^{J})}{\eta^{2}}\] \[\quad\cdot\frac{\theta_{1}(m_{0}\pm a+u^{I,J})}{\theta_{1}(\epsilon_{+}\pm a+u^{I,J})}\prod_{l=1}^{16}\frac{i^{2}\theta_{1}(m_{l}+u^{I,J})}{\eta^{2}}\,, \tag{115}\]
where \((I,J)=(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)\) corresponds to six flat connections \((u^{I},u^{J})=(0,\frac{1}{2}),(0,\frac{\tau+1}{2}),(0,\frac{\tau}{2}),(\frac{1}{ 2},\frac{\tau+1}{2}),(\frac{1}{2},\frac{\tau}{2}),(\frac{\tau+1}{2},\frac{\tau }{2})\). This can be written as
\[\begin{split} Z_{2}^{(I,J)}&=\frac{\theta_{\sigma( I,J)}(0)\theta_{\sigma(I,J)}(2\epsilon_{+})\theta_{\sigma(I,J)}(\pm m_{0}+ \epsilon_{-})\theta_{I}(m_{0}\pm a)\theta_{J}(m_{0}\pm a)}{\eta^{24}\theta_{ 1}(\epsilon_{1,2})^{2}\theta_{\sigma(I,J)}(\epsilon_{1,2})\theta_{1}(\pm m_{0 }-\epsilon_{+})^{2}\theta_{\sigma(I,J)}(\pm m_{0}-\epsilon_{+})}\\ &\qquad\cdot\frac{\prod_{l=1}^{16}\theta_{I}(m_{l})\theta_{J}(m_ {l})}{\theta_{I}(\epsilon_{+}\pm a)\theta_{J}(\epsilon_{+}\pm a)}\,,\end{split} \tag{103}\]
where \(\sigma(I,J)\) is defined in (B.9). In total, the 2-string elliptic genus is
\[Z_{2}=\frac{1}{2}\sum_{I=1}^{3}Z_{2}^{(I)}+\frac{1}{4}\sum_{I<J}^{4}Z_{2}^{(I,J)}\,, \tag{104}\]
where \(1/2\) and \(1/4\) are Weyl group factors.
### Elliptic genus of \(SU(3)+1\mathbf{sym}+1\mathbf{\Lambda}^{2}\)
From the 2d gauge theory description given in Figure 6, the elliptic genus of \(k\)-strings can be written as
\[Z_{k} =\frac{1}{k!}\frac{1}{(2\pi i)^{k}}\oint\left(\prod_{I=1}^{k} \frac{2\pi\eta^{2}du_{I}}{i}\frac{i\theta_{1}(2\epsilon_{+})}{\eta}\right) \!\left(\prod_{I\neq J}\frac{i\theta_{1}(u_{IJ})}{\eta}\frac{i\theta_{1}(2 \epsilon_{+}+u_{IJ})}{\eta}\right)\] \[\left(\prod_{I,J}\frac{(i\eta)^{2}}{\theta_{1}(\epsilon_{1,2}+u_{ IJ})}\right)\!\left(\prod_{I}\prod_{j=1}^{N}\frac{(i\eta)^{2}}{\theta_{1}( \epsilon_{+}\pm(u_{I}-a_{j}))}\right)\] \[\left(\prod_{I\leq J}\frac{(i\eta)^{2}}{\theta_{1}(-\epsilon_{+} \pm(u_{I}+u_{J}+m_{2}))}\right)\!\left(\prod_{I<J}\frac{i^{2}\theta_{1}(- \epsilon_{-}\pm(u_{I}+u_{J}+m_{2}))}{\eta^{2}}\right)\] \[\left(\prod_{I}\prod_{j=1}^{N}\frac{i\theta_{1}(u_{I}+a_{j}+m_{2} )}{\eta}\right)\!\left(\prod_{I<J}\frac{(i\eta)^{2}}{\theta_{1}(-\epsilon_{+} \pm(u_{I}+u_{J}+m_{1}))}\right)\] \[\left(\prod_{I\leq J}\frac{i^{2}\theta_{1}(-\epsilon_{-}\pm(u_{I} +u_{J}+m_{1}))}{\eta^{2}}\right)\!\left(\prod_{I}\prod_{j=1}^{N}\frac{i\theta _{1}(u_{I}+a_{j}+m_{1})}{\eta}\right), \tag{105}\]
where \(u_{IJ}=u_{I}-u_{J}\), \(a_{1,2,3}\) are the \(U(3)\) chemical potentials, and \(m_{1}\) and \(m_{2}\) are the \(U(1)_{S}\) and \(U(1)_{A}\) chemical potentials. Here we focus on the \(N=3\) case, which gives the elliptic genera of strings in the \(SU(3)+1\mathbf{sym}+1\mathbf{\Lambda}^{2}\) LST.
_1-string._ The relevant poles are \(\epsilon_{+}+u_{1}-a_{j}=0\) and \(-\epsilon_{+}+2u_{1}+m_{2}=0\), and the contributions from these poles are
* \(u_{1}=-\epsilon_{+}+a_{j}\) \[-\sum_{j=1}^{3}\frac{\theta_{1}(2a_{j}+m_{1}-\epsilon_{+})\theta_{1}(2a_{j}+m_{1 }-\epsilon_{+}-\epsilon_{1,2})}{\theta_{1}(\epsilon_{1,2})\theta_{1}(2a_{j}+ m_{2}-3\epsilon_{+})}\prod_{k\neq j}^{3}\frac{\theta_{1}(a_{j}+a_{k}+m_{1,2}- \epsilon_{+})}{\theta_{1}(a_{jk})\theta_{1}(2\epsilon_{+}-a_{jk})}\] (106)
* \(u_{1}=\frac{\epsilon_{+}-m_{2}}{2}+x\), where \(x=0,\frac{1}{2},\frac{\tau+1}{2},\frac{\tau}{2}\) \[\sum_{I=1}^{4}\frac{\theta_{1}(m_{1}-m_{2}+\epsilon_{1,2})}{2\theta_{1}(\epsilon_ {1,2})}\prod_{j=1}^{3}\frac{\theta_{I}(a_{j}+m_{1}-\frac{m_{2}}{2}+\frac{ \epsilon_{+}}{2})}{\theta_{I}(a_{j}-\frac{3\epsilon_{+}-m_{2}}{2})}\] (B.30)
The 1-string elliptic genus \(Z_{1}\) is given by the summation of these two contributions.
2-stringLet \(x_{1},x_{2}\) be \(0,\frac{1}{2},\frac{\tau}{2},\frac{\tau+1}{2}\). The 2-string elliptic genus is given by summation of the following contributions from the JK-residues:
* \(\epsilon_{+}+u_{1}-a_{j}=0\), \(-\epsilon_{+}+u_{1}+u_{2}+m_{1}=0\) \[\sum_{j=1}^{3}\frac{\theta_{1}(2\epsilon_{+})\theta_{1}(m_{1}-m_{2}- \epsilon_{1,2})\theta_{1}(2a_{j}+m_{1}-\epsilon_{+})}{2\theta_{1}(\epsilon_{ 1,2})\theta_{1}(m_{1}-m_{2})\theta_{1}(2a_{j}+m_{2}-3\epsilon_{+})}\] \[\qquad\cdot\frac{\theta_{1}(2a_{j}+m_{1}-3\epsilon_{+})\theta_{1} (2a_{j}+m_{1}-5\epsilon_{+})}{\theta_{1}(2a_{j}+2m_{1}-m_{2}-3\epsilon_{+}) \theta_{1}(2a_{j}+2m_{1}-m_{2}-5\epsilon_{+})}\] (B.31) \[\qquad\cdot\prod_{k\neq j}^{3}\frac{\theta_{1}(a_{jk}+m_{1}-m_{2}-2 \epsilon_{+})\theta_{1}(a_{j}+a_{k}+m_{2}-\epsilon_{+})}{\theta_{1}(a_{jk}) \theta_{1}(a_{j}+a_{k}+m_{1}-3\epsilon_{+})}\]
* \(\epsilon_{+}+u_{1}-a_{j}=0\), \(-\epsilon_{+}+u_{1}+u_{2}+m_{2}=0\) \[-\sum_{j=1}^{3}\frac{\theta_{1}(2\epsilon_{+})\theta_{1}(m_{1}-m_{2}+ \epsilon_{1,2})\theta_{1}(2a_{j}+m_{1}-\epsilon_{+})}{2\theta_{1}(\epsilon_{ 1,2})\theta_{1}(m_{1}-m_{2})\theta_{1}(2a_{j}+m_{2}-3\epsilon_{+})}\] \[\qquad\cdot\frac{\theta_{1}(2a_{j}+m_{1}-\epsilon_{+}-\epsilon_{ 1,2})\theta_{1}(2a_{j}-m_{1}+2m_{2}-3\epsilon_{+}-\epsilon_{1,2})}{\theta_{1} (2a_{j}+m_{2}-\epsilon_{+}-\epsilon_{1,2})\theta_{1}(2a_{j}+m_{2}-3\epsilon_{+ }-\epsilon_{1,2})}\] (B.32) \[\qquad\cdot\prod_{k\neq j}^{3}\frac{\theta_{1}(a_{jk}-m_{1}+m_{2}-2 \epsilon_{+})\theta_{1}(a_{j}+a_{k}+m_{1}-\epsilon_{+})}{\theta_{1}(a_{jk}) \theta_{1}(a_{j}+a_{k}+m_{2}-3\epsilon_{+})}\]
* \(-\epsilon_{+}+2u_{1}+m_{2}=x_{1}\), \(-\epsilon_{+}+u_{1}+u_{2}+m_{1}=0\) \[\sum_{I=1}^{4}\frac{\theta_{1}(m_{1}-m_{2})\theta_{1}(m_{1}-m_{2}- \epsilon_{1,2})\theta_{1}(m_{1}-m_{2}+2\epsilon_{+})}{4\theta_{1}(\epsilon_{ 1,2})\theta_{1}(2m_{1}-2m_{2})\theta_{1}(2m_{1}-2m_{2}-2\epsilon_{+})}\] (B.33) \[\qquad\cdot\prod_{j=1}^{3}\frac{\theta_{I}(a_{j}+\frac{m_{2}}{2}+ \frac{\epsilon_{+}}{2})\theta_{I}(a_{j}-m_{1}+\frac{3m_{2}}{2}+\frac{\epsilon _{+}}{2})}{\theta_{I}(a_{j}+\frac{m_{2}}{2}-\frac{3\epsilon_{+}}{2})}\theta_{ I}(a_{j}+m_{1}-\frac{m_{2}}{2}-\frac{3\epsilon_{+}}{2})\]
* \(\epsilon_{+}+u_{1}-a_{j}=0\), \(\epsilon_{+}+u_{2}-a_{k}=0\) (\(j\neq k\)) \[\sum_{j\neq k}^{3}\frac{\theta_{1}(2a_{j,k}+m_{1}-\epsilon_{+}) \prod_{i=1}^{2}\theta_{1}(2a_{j,k}+m_{1}-\epsilon_{+}-\epsilon_{i})}{2\theta_ {1}(\epsilon_{1,2})^{2}\theta_{1}(a_{jk}+\epsilon_{1,2})\theta_{1}(a_{jk}- \epsilon_{1,2})\theta_{1}(2a_{j,k}+m_{2}-3\epsilon_{+})}\] \[\qquad\cdot\frac{\prod_{i=1}^{2}\theta_{1}(a_{j}+a_{k}+m_{i}- \epsilon_{+})\theta_{1}(a_{j}+a_{k}+m_{i}-\epsilon_{+}-\epsilon_{1,2})}{\theta_ {1}(a_{j}+a_{k}+m_{1,2}-3\epsilon_{+})}\] (B.34) \[\qquad\cdot\prod_{l\neq j,k}^{3}\frac{\theta_{1}(a_{j}+a_{l}+m_{1,2}- \epsilon_{+})\theta_{1}(a_{k}+a_{l}+m_{1,2}-\epsilon_{+})}{\theta_{1}(a_{jl}) \theta_{1}(a_{kl})\theta_{1}(a_{jl}-2\epsilon_{+})\theta_{1}(a_{kl}-2\epsilon_{ +})}\]
* \(\epsilon_{+}+u_{1}-a_{j}=0\), \(-\epsilon_{+}+2u_{2}+m_{2}=x_{2}\) and \(-\epsilon_{+}+2u_{1}+m_{2}=x_{1}\), \(\epsilon_{+}+u_{2}-a_{j}=0\) \[-2\sum_{I=1}^{4}\sum_{j=1}^{3}\frac{\theta_{1}(m_{1}-m_{2}+ \epsilon_{1,2})\theta_{1}(2a_{1}+m_{1}-\epsilon_{+})\theta_{1}(2a_{j}+m_{1}- \epsilon_{+}-\epsilon_{1,2})}{4\theta_{1}(\epsilon_{1,2})^{2}\theta_{1}(2a_{j}+ m_{2}-3\epsilon_{+})\theta_{I}(a_{j}+m_{1}-\frac{m_{2}}{2}-\frac{3\epsilon_{+}}{2})}\] \[\cdot\frac{\theta_{I}(a_{j}+m_{1}-\frac{m_{2}}{2}+\frac{\epsilon_ {+}}{2}-\epsilon_{1,2})\theta_{I}(a_{j}+\frac{m_{2}}{2}-\frac{7\epsilon_{+}}{2 })}{\theta_{I}(a_{i}+\frac{m_{2}}{2}-\frac{3\epsilon_{+}}{2}-\epsilon_{1,2})}\] (B.35) \[\cdot\prod_{k\neq j}^{3}\frac{\theta_{1}(a_{j}+a_{k}+m_{1,2}- \epsilon_{+})\theta_{I}(a_{k}+m_{1}-\frac{m_{2}}{2}+\frac{\epsilon_{+}}{2})}{ \theta_{1}(a_{jk})\theta_{1}(a_{jk}-2\epsilon_{+})\theta_{I}(a_{k}+\frac{m_{2} }{2}-\frac{3\epsilon_{+}}{2})}\]
* \(-\epsilon_{+}+2u_{1}+m_{2}=x_{1}\), \(-\epsilon_{+}+2u_{2}+m_{2}=x_{2}\) \[\sum_{I,J=1}^{4}\frac{\theta_{\sigma(I,J)}(0)\theta_{\sigma(I,J)}(2 \epsilon_{+})\theta_{1}(m_{1}-m_{2}+\epsilon_{1,2})^{2}\theta_{\sigma(I,J)}(m _{1}-m_{2}+\epsilon_{1,2})}{8\theta_{1}(\epsilon_{1,2})^{2}\theta_{\sigma(I,J) }(\epsilon_{1,2})\theta_{\sigma(I,J)}(m_{1}-m_{2})\theta_{\sigma(I,J)}(m_{1}-m _{2}+2\epsilon_{+})}\] (B.36) \[\cdot\prod_{j=1}^{3}\frac{\theta_{I}(a_{j}+m_{1}-\frac{m_{2}}{2}+ \frac{\epsilon_{+}}{2})\theta_{J}(a_{j}+m_{1}-\frac{m_{2}}{2}+\frac{\epsilon_ {+}}{2})}{\theta_{I}(a_{k}+\frac{m_{2}}{2}-\frac{3\epsilon_{+}}{2})}\]
* \(\epsilon_{+}+u_{1}-a_{j}=0\), \(\epsilon_{1,2}-u_{1}+u_{2}=0\) and \(\epsilon_{1,2}+u_{1}-u_{2}=0\), \(\epsilon_{+}+u_{2}-a_{j}=0\) \[2\sum_{j=1}^{3}\frac{\theta_{1}(2a_{j}+m_{1}-\epsilon_{+}) \theta_{1}(2a_{j}+m_{1}-3\epsilon_{+})\theta_{1}(2a_{j}+m_{1}-3\epsilon_{+}+ \epsilon_{1,2})}{2\theta_{1}(\epsilon_{1,2})\theta_{1}(2\epsilon_{1})\theta_{ 1}(\epsilon_{2}-\epsilon_{1})\theta_{1}(2a_{j}+m_{2}-3\epsilon_{+}-\epsilon_{ 1})}\] \[\cdot\frac{\theta_{1}(2a_{j}+m_{1}-\epsilon_{+}-2\epsilon_{1}) \theta_{1}(2a_{j}+m_{1}-\epsilon_{+}-3\epsilon_{1})}{\theta_{1}(2a_{j}+m_{2}- 3\epsilon_{+}-2\epsilon_{1})}\] (B.37) \[\cdot\prod_{k\neq j}^{3}\frac{\theta_{1}(a_{j}+a_{k}+m_{1,2}- \epsilon_{+})\theta_{1}(a_{j}+a_{k}+m_{1,2}-\epsilon_{+}-\epsilon_{1})}{ \theta_{1}(a_{jk})\theta_{1}(a_{jk}-\epsilon_{1})\theta_{1}(a_{jk}-2\epsilon_ {+})\theta_{1}(a_{jk}-2\epsilon_{+}-\epsilon_{1})}+(\epsilon_{1}\leftrightarrow \epsilon_{2})\]
* \(\epsilon_{1,2}+u_{1}-u_{2}=x_{1}\), \(-\epsilon_{+}+u_{1}+u_{2}+m_{2}=0\) \[\sum_{I=1}^{4}\frac{\theta_{1}(m_{1}-m_{2}+\epsilon_{1,2}) \theta_{1}(m_{1}-m_{2}+2\epsilon_{1})\theta_{1}(m_{1}-m_{2}-\epsilon_{1}+ \epsilon_{2})}{4\theta_{1}(\epsilon_{1,2})\theta_{1}(2\epsilon_{1})\theta_{1}( \epsilon_{2}-\epsilon_{1})}\] (B.38) \[\cdot\prod_{j=1}^{3}\frac{\theta_{I}(a_{j}+m_{1}-\frac{m_{2}}{2}+ \frac{\epsilon_{+}}{2}\pm\frac{\epsilon_{1}}{2})}{\theta_{I}(a_{j}+\frac{m_{2} }{2}-\frac{3\epsilon_{+}}{2}\pm\frac{\epsilon_{1}}{2})}+(\epsilon_{1} \leftrightarrow\epsilon_{2})\]
* \(-\epsilon_{+}-2u_{2}-m_{2}=x_{1}\), \(-\epsilon_{+}+u_{1}+u_{2}+m_{1}=0\) \[\sum_{I=1}^{4}\frac{\theta_{1}(m_{1}-m_{2}-\epsilon_{1,2})\theta_{1}(m_{1}-m _{2}-2\epsilon_{+})\theta_{1}(m_{1}-m_{2}-4\epsilon_{+})}{4\theta_{1}(\epsilon_ {1,2})\theta_{1}(2m_{1}-2m_{2}-2\epsilon_{+})\theta_{1}(2m_{1}-2m_{2}-4 \epsilon_{+})}\] (B.39) \[\cdot\prod_{j=1}^{3}\frac{\theta_{I}(a_{j}-m_{1}+\frac{3m_{2}}{2}+ \frac{3\epsilon_{+}}{2})}{\theta_{I}(a_{j}+m_{1}-\frac{m_{2}}{2}-\frac{5 \epsilon_{+}}{2})}\] |
2308.08260 | Classical information and collapse in Wigner's friend setups | The famous Wigner's friend experiment considers an observer -- the friend --
and a superobserver -- Wigner -- who treats the friend as a quantum system and
her interaction with other quantum systems as unitary dynamics. This is at odds
with the friend describing this interaction via collapse dynamics, if she
interacts with the quantum system in a way that she would consider a
measurement. These different descriptions constitute the Wigner's friend
paradox. Extended Wigner's friend experiments combine the original thought
experiment with non-locality setups. This allows for deriving local
friendliness inequalities, similar to Bell's theorem, which can be violated for
certain extended Wigner's friend scenarios. A Wigner's friend paradox and the
violation of local friendliness inequalities require that no classical record
exists, which reveals the result the friend observed during her measurement.
Otherwise Wigner agrees with his friend's description and no local friendliness
inequality can be violated. In this article, I introduce classical
communication between Wigner and his friend and discuss its effects on the
simple as well as extended Wigner's friend experiments. By controlling the
properties of a (quasi) classical communication channel between Wigner and the
friend one can regulate how much outcome information about the friend's
measurement is revealed. This gives a smooth transition between the paradoxical
description and the possibility of violating local friendliness inequalities,
on the one hand, and the effectively collapsed case, on the other hand. | Veronika Baumann | 2023-08-16T09:54:18Z | http://arxiv.org/abs/2308.08260v1 | # Classical information and collapse in Wigner's friend setups
###### Abstract
The famous Wigner's friend experiment considers an observer - the friend- and a superobserver - Wigner- who treats the friend as a quantum system and her interaction with other quantum systems as unitary dynamics. This is at odds with the friend describing this interaction via collapse dynamics, if she interacts with the quantum system in a way that she would consider a measurement. These different descriptions constitute the Wigner's friend paradox. Extended Wigner's friend experiments combine the original thought experiment with non-locality setups. This allows for deriving local friendliness inequalities, similar to Bell's theorem, which can be violated for certain extended Wigner's friend scenarios. A Wigner's friend paradox and the violation of local friendliness inequalities require that no classical record exists, which reveals the result the friend observed during her measurement. Otherwise Wigner agrees with his friend's description and no local friendliness inequality can be violated. In this article, I introduce classical communication between Wigner and his friend and discuss its effects on the simple as well as extended Wigner's friend experiments. By controlling the properties of a (quasi) classical communication channel between Wigner and the friend one can regulate how much outcome information about the friend's measurement is revealed. This gives a smooth transition between the paradoxical description and the possibility of violating local friendliness inequalities, on the one hand, and the effectively collapsed case, on the other hand.
## I Introduction
The Wigner's friend thought experiment was originally proposed in [1] to reason about the applicability of the two dynamics featured by quantum mechanics. On the one hand sufficiently isolated quantum systems evolve unitarily. On the other hand, quantum systems upon measurement undergo a seemingly instantaneous collapse to the eigenstate corresponding to the observed outcome. These two different dynamics and the lack of a clear prescription of when to use one or the other is one of the main aspects of the quantum measurement problem [2; 3; 4].
The original thought experiment features an observer -called Wigner's friend- who measures a quantum system, as well as a so-called superobserver -Wigner- who performs a joint measurement on the quantum system S and the friend F. Provided that the joint system S+F is sufficiently isolated, Wigner describes the friend's interaction with the system via unitary dynamics. To Wigner's friend, however, this interaction constitutes a measurement and she uses the collapse postulate after observing an outcome. These disagreeing descriptions of one and the same situation are called the Wigner's friend paradox, see Fig. 1. Let the source emit a qubit in the state
Figure 1: Simple Wigner’s friend experiment: The source emits a quantum state \(\left|\phi\right\rangle_{S}\), which is measured by the friend in the computational basis \(\left\{\left|0\right\rangle_{S},\left|1\right\rangle_{S}\right\}\). The result observed by the friend is stored in some memory register \(\left|\cdot\right\rangle_{F}\) and she ascribes the respective product \(\left|i\right\rangle_{S}\left|i\right\rangle_{F}\), with \(i\in\left\{0,1\right\}\), to herself and the system she measured. Wigner performs a measurement on both the system and the friend’s memory register, which according to his unitary description is in state \(\left|\Phi\right\rangle_{SF}\) that is, in general, a superposition of \(\left|0\right\rangle_{S}\left|\mathbf{0}\right\rangle_{F}\) and \(\left|1\right\rangle_{S}\left|\mathbf{1}\right\rangle_{F}\). The different state assignments to the joint system \(S+F\) will lead to different probability assignments for Wigner’s measurement.
\(\alpha\left|0\right\rangle_{S}+\beta\left|1\right\rangle_{S}\), which is measured by the friend in the computational basis \(\left\{\left|0\right\rangle,\left|1\right\rangle\right\}\). The result she observes, \(f\in\left\{\mathbf{0},\mathbf{1}\right\}\) is stored in her memory register \(\left|\cdot\right\rangle_{F}\), which is supposed to correspond to the friend having a perception of the outcome \(f\). Wigner then performs a measurement on the qubit and the friend's memory given by the states \(\left|1\right\rangle_{SF}=a\left|0,\mathbf{0}\right\rangle_{SF}+b\left|1, \mathbf{1}\right\rangle_{SF}\) and \(\left|2\right\rangle_{SF}=b^{*}\left|0,\mathbf{0}\right\rangle_{SF}-a^{*} \left|1,\mathbf{1}\right\rangle_{SF}\) and their orthogonal complement. Due to their different descriptions of the friend's measurement Wigner and his friend will assign different probabilities to the outcomes of Wigner's measurement. According to the friend after her measurement the qubit and her memory are either in state \(\left|0,\mathbf{0}\right\rangle_{SF}\), which happens with probability \(p(0)=\left|\alpha\right|^{2}\), or in state \(\left|1,\mathbf{1}\right\rangle_{SF}\), which happens with probability \(p(1)=\left|\beta\right|^{2}\). Hence, the friend will assign the following probability to Wigner's results
\[p^{F}(w)=p^{dps}(w)=\left|\alpha\right|^{2}\left|\left\langle w|0,\mathbf{0} \right\rangle\right|^{2}+\left|\beta\right|^{2}\left|\left\langle w|1,\mathbf{ 1}\right\rangle\right|^{2},\] (I.1)
where \(\left|w\right\rangle\) is either \(\left|1\right\rangle_{SF}\) or \(\left|2\right\rangle_{SF}\). Wigner, however, assigns the state \(\left|\Phi\right\rangle_{SF}=\alpha\left|0,\mathbf{0}\right\rangle_{SF}+\beta\left|1,\mathbf{1}\right\rangle_{SF}\) to the qubit and his friend's memory and, hence, probabilities
\[p^{W}(w)=p^{uni}(w)=\left|\left\langle w|\Phi\right\rangle_{SF}\right|^{2}.\] (I.2)
More concretely, we obtain the following predictions for Wigner's measurement according to Wigner and his friend in the simple Wigner's friend experiment
\[p^{W}(w):\quad\begin{array}{c|c}1&2\\ \hline\left(\alpha a^{*}+\beta b^{*}\right)^{2}&(\beta a-\alpha b)^{2}\end{array}\qquad\qquad p^{F}(w):\quad\begin{array}{c|c}1&2\\ \hline|\alpha|^{2}|a|^{2}+|\beta|^{2}|b|^{2}&|\alpha|^{2}|b|^{2}+|\beta|^{2}|a|^{2}\end{array} \tag{I.3}\]
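As a quick numerical illustration of these predictions (not part of the original argument), Eqs. (I.1) and (I.2) can be evaluated directly; the amplitudes used below are arbitrary illustrative choices.

```python
import numpy as np

# Arbitrary illustrative amplitudes (all real here for simplicity)
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
a, b = np.sqrt(0.6), np.sqrt(0.4)

# Work in the two-dimensional subspace spanned by |0,0>_{SF} and |1,1>_{SF}
s00 = np.array([1.0, 0.0], dtype=complex)
s11 = np.array([0.0, 1.0], dtype=complex)
w_states = {"1": a * s00 + b * s11,                      # |1>_{SF}
            "2": np.conj(b) * s00 - np.conj(a) * s11}    # |2>_{SF}
Phi = alpha * s00 + beta * s11                           # Wigner's state assignment

for label, w in w_states.items():
    p_friend = abs(alpha) ** 2 * abs(np.vdot(w, s00)) ** 2 + abs(beta) ** 2 * abs(np.vdot(w, s11)) ** 2  # Eq. (I.1)
    p_wigner = abs(np.vdot(w, Phi)) ** 2                                                                  # Eq. (I.2)
    print(f"w = {label}:  friend (collapse) {p_friend:.4f}   Wigner (unitary) {p_wigner:.4f}")
```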
Wigner originally argued in favor of the friend's description (and probability assignment) claiming that at the level of an observer, at the latest, unitary quantum theory must break down. Since then, however, the idea of observers in superposition has become accepted [5], which raises the question of whether the disagreement between Wigner and his friend can be experimentally verified. As already discussed in [6, 7, 8], the unitary description of the friend's measurement is incompatible with the existence of a record revealing which result the friend observed before Wigner performs his measurement. Such a record destroys any coherences Wigner could reveal in his measurement. However, there are persistent records, which do not contain any outcome information of the friend's measurement and can, therefore, be present without destroying the coherence of an
observer in superposition [5; 8]. In special cases the disagreement between Wigner and his friend becomes manifest in terms of contradicting records that both Wigner and his friend can access after the thought experiment has been performed. Note that, in these cases there is no persistent record of which result the friend observed at her measurement, since Wigner's measurement will, in general, alter the friend's perception of her measurement result, i.e. the result stored in the internal memory register [9].
The newest versions of the thought experiment combine multiple Wigner's friend scenarios with various non-locality proofs [10; 11; 12; 13; 14; 15; 16; 17]. Some of these extended Wigner's friend experiments rely on conflicting probability assignments of observers and superobservers similar to the simple Wigner's friend experiment above. This led to closer investigations of how agents in these settings can make predictions and consistently reason about each other [18; 19]. Other extended Wigner's friend scenarios concern the joint probabilities of the results of observers and superobservers, similar to Bell's theorem. More concretely, a set of assumptions called local friendliness - namely, that the superobservers' and observers' results are both "objective facts", locality, freedom of choice, and universality of quantum theory, meaning observers can exist in superpositions of different observation states - cannot all hold in extended Wigner's friend setups. In the simplest extended Wigner's friend scenario, as depicted in Fig. 2, a bipartite entangled state is shared between a Wigner's friend setup and one additional observer - Bob. The violation of a CHSH-like inequality between Wigner and Bob asserts that the local friendliness assumptions cannot hold simultaneously.
Consider the setup in Fig. 2, where Wigner randomly chooses between the measurement that reveals which result his friend observed in her measurement \(W_{z}=\left|0,\mathbf{0}\right\rangle\!\left\langle 0,\mathbf{0}\right|_{2F}-\left|1,\mathbf{1}\right\rangle\!\left\langle 1,\mathbf{1}\right|_{2F}\) and one projecting on the states \(\left|\Phi^{\pm}\right\rangle_{2F}=1/\sqrt{2}(\left|0,\mathbf{0}\right\rangle_{2F}\pm\left|1,\mathbf{1}\right\rangle_{2F})\), i.e. \(W_{x}=\left|0,\mathbf{0}\right\rangle\!\left\langle 1,\mathbf{1}\right|_{2F}+\left|1,\mathbf{1}\right\rangle\!\left\langle 0,\mathbf{0}\right|_{2F}\). Bob, on the other hand, performs measurements \(B_{z}=1/\sqrt{2}(\left|0\right\rangle\!\left\langle 0\right|+\left|0\right\rangle\!\left\langle 1\right|+\left|1\right\rangle\!\left\langle 0\right|-\left|1\right\rangle\!\left\langle 1\right|)\) and \(B_{x}=1/\sqrt{2}(\left|0\right\rangle\!\left\langle 0\right|-\left|0\right\rangle\!\left\langle 1\right|-\left|1\right\rangle\!\left\langle 0\right|-\left|1\right\rangle\!\left\langle 1\right|)\) on qubit 1. One local friendliness inequality for this scenario is given by the following CHSH-expression
\[\left\langle B_{z}\otimes W_{z}\right\rangle+\left\langle B_{x}\otimes W_{z} \right\rangle-\left\langle B_{z}\otimes W_{x}\right\rangle+\left\langle B_{x} \otimes W_{x}\right\rangle\leq 2.\] (I.4)
Other inequalities can be obtained by making an alternative choice for which expectation value
is subtracted on the left hand side. If the source emits the singlet state \(\left|\psi^{-}\right\rangle_{12}=1/\sqrt{2}(\left|0,0\right\rangle_{12}-\left|1,1 \right\rangle_{12})\) and the friend measures qubit 2 in the computational basis, we obtain the overall state
\[\left|\Psi\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle_{1}\left|0, \mathbf{0}\right\rangle_{2F}-\left|1\right\rangle_{1}\left|1,\mathbf{1} \right\rangle_{2F}\right)\] (I.5)
on which Wigner and Bob perform their measurements. This gives the following violation of the inequality in Eq. (I.4)
\[\left\langle B_{z}\otimes W_{z}\right\rangle+\left\langle B_{x}\otimes W_{z} \right\rangle-\left\langle B_{z}\otimes W_{x}\right\rangle+\left\langle B_{x} \otimes W_{x}\right\rangle=4\cdot\frac{1}{\sqrt{2}}=2\sqrt{2}>2.\] (I.6)
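This violation is easy to verify numerically. The sketch below restricts qubit 2 and the friend's memory to the subspace spanned by \(|0,\mathbf{0}\rangle\) and \(|1,\mathbf{1}\rangle\), where \(W_{z}\) and \(W_{x}\) act as Pauli operators; this two-qubit reduction is an assumption made purely for the illustration.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# On span{|0,0>, |1,1>} of qubit 2 + memory, W_z and W_x act as sigma_z and sigma_x,
# while Bob's settings are (sigma_z +/- sigma_x)/sqrt(2).
Wz, Wx = sz, sx
Bz, Bx = (sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)

# |Psi> of Eq. (I.5): (|0>_1 |0,0>_2F - |1>_1 |1,1>_2F)/sqrt(2)
psi = (np.kron([1.0, 0.0], [1.0, 0.0]) - np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)
psi = psi.astype(complex)

ev = lambda B, W: np.vdot(psi, np.kron(B, W) @ psi).real
chsh = ev(Bz, Wz) + ev(Bx, Wz) - ev(Bz, Wx) + ev(Bx, Wx)
print(chsh, 2 * np.sqrt(2))   # both are approximately 2.828
```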
The violations of local friendliness inequalities have been confirmed experimentally in proof of principle experiments [14; 20], where an additional qubit played the role of Wigner's friend. Such experiments arguably do not constitute genuine Wigner's friend experiments, since the interaction between two qubits does not satisfy many basic characteristics of an observation [21]. In response to that, an extended Wigner's friend experiment involving a human level AI on a quantum computer playing the role of the friend has been proposed [22]. Such a friend would satisfy most qualitative features of an observer while operating fully unitarily by construction.
The rest of this article is structured as follows. In the main Section II I consider communication between Wigner and his friend first by incorporating record systems that are not subject to
Figure 2: Simplest extended Wigner’s friend experiment: A bipartite state \(\left|\Psi\right\rangle_{12}\) is emitted by the source. One subsystem is measured by the friend – F – in a simple Wigner’s friend setup. Together with the subsystem she interacted with the friend is then measured by Wigner – W. The other subsystem is measured by an additional observer Bob – B.
Wigner's measurement in II.1. Second, I introduce a (quasi) classical communication channel into the simple Wigner's friend experiment in II.2. This allows one to recover probabilities in agreement with collapse dynamics as well as with unitary dynamics, and anything in between, depending on the properties of the communication channel. In Section II.3 I then consider the records as well as the communication channel in an extended Wigner's friend setup and discuss their implications for the violation of the local friendliness inequality presented above. Finally, the conclusions are summarized in Section III.
## II Classical Information and Collapse
As discussed in [5; 7; 23] unitary dynamics is incompatible with simultaneously preserving a classical record of a measurement result. For Wigner's friend experiments this means that there cannot be a record revealing the friend's observed outcome. In general, records or messages that are not subject to Wigner's measurement influence the probabilities for Wigner's results according to a unitary description of the setup.
### Effective collapse
If the friend produces a classical record revealing her observed result, then Wigner's description of the setup also gives rise to the probabilities induced by collapse dynamics. Consider a record Hilbert space \(\mathcal{H}_{R}\) with a fixed basis \(\{\ket{r_{i}}\}\), which encodes the (quasi) classical messages Wigner receives from his friend. For example, in the simplest Wigner's friend experiment in Fig. 1 these messages could correspond to \(\ket{r_{0}}=\ket{\text{``I saw 0.''}}\) and \(\ket{r_{1}}=\ket{\text{``I saw 1.''}}\). A unitary description of the setup then leads to the following overall state
\[\ket{\Psi^{r}}=\alpha\ket{0,\mathbf{0}}_{SF}\ket{r_{0}}_{R}+\beta\ket{1, \mathbf{1}}_{SF}\ket{r_{1}}_{R},\] (II.1)
upon which Wigner performs his measurement. The probabilities for Wigner's measurement result are then given by
\[p^{W}(w)=\text{Tr}\left(\ket{w}\bra{w}_{SF}\ket{\Psi^{r}}\bra{\Psi^{r}}\right),\] (II.2)
which are equal to the ones assigned by the friend, who uses collapse dynamics
\[p^{W}(w)=p^{F}(w):\qquad\begin{array}{c|c}1&2\\ \hline|\alpha|^{2}|a|^{2}+|\beta|^{2}|b|^{2}&|\alpha|^{2}|b|^{2}+|\beta|^{2}|a|^{2}\end{array} \tag{II.3}\]
Note that, tracing out the record system can be understood as Wigner ignoring the record of the friend's result. Yet the existence of such a message alone, even if Wigner does not know what it reads, effectively collapses the state of the system and the friend. When taking into account which result the friend observed we need to consider conditional probabilities. In the collapse description employed by the friend, this means either using the state \(|0,\mathbf{0}\rangle\) or \(|1,\mathbf{1}\rangle\) when calculating \(p^{F}(w|f)=|\,\langle w|f,\mathbf{f}\rangle|^{2}\). In Wigner's unitary description we condition on the record by evaluating probabilities
\[p^{W}(w,j)=\mathrm{Tr}\{|w\rangle\langle w|_{SF}\otimes|r_{j}\rangle\langle r _{j}|_{R}|\Psi^{r}\rangle\langle\Psi^{r}|\}=p(j)\cdot p^{W}(w|j)\] (II.4)
where \(p(j)=\mathrm{Tr}(\mathds{1}\otimes|r_{j}\rangle\langle r_{j}|_{R}|\Psi^{r} \rangle\langle\Psi^{r}|)\) is the probability of message \(r_{j}\) and we define \(p(w|j)=0\) if \(p(j)=0\). This gives
\[p^{W}(w|0)=p^{F}(w|f=0):\quad\begin{array}{c|c}1&2\\ \hline|a|^{2}&|b|^{2}\end{array}\qquad\qquad p^{W}(w|1)=p^{F}(w|f=1):\quad\begin{array}{c|c}1&2\\ \hline|b|^{2}&|a|^{2}\end{array} \tag{II.5}\]
if the record reveals which result the friend observed. Conversely, if the record space is only one dimensional, for example \(|r^{\prime}\rangle=|``\mathrm{I}\) saw a definite outcome"\(\rangle\), the record factors out and
\[|\Psi^{r^{\prime}}\rangle=\left(\alpha\,|0,\mathbf{0}\rangle_{SF}+\beta\,|1,\mathbf{1}\rangle_{SF}\right)|r^{\prime}\rangle_{R}\,, \tag{II.6}\]
which preserves the disagreement between Wigner and his friend. Conditioning on the record is obsolete in this case and the probabilities \(p^{W}(w)=\mathrm{Tr}\left(|w\rangle\langle w|_{SF}|\Psi^{r^{\prime}}\rangle \langle\Psi^{r^{\prime}}|\right)\neq p^{F}(w)\) are again those in Eq. (I.3). This is due to the fact that such a one dimensional record cannot reveal any outcome information of the friend's two-outcome measurement.
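To make the role of the record concrete, the following sketch (with arbitrary illustrative amplitudes, not taken from the text) computes Wigner's outcome probabilities from \(|\Psi^{r}\rangle\) with the record traced out: a which-outcome record reproduces the collapse statistics of Eq. (II.3), while a trivial, one-dimensional record as in Eq. (II.6) leaves the unitary probabilities of Eq. (I.3) unchanged.

```python
import numpy as np

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)   # illustrative amplitudes
a, b = np.sqrt(0.6), np.sqrt(0.4)

s00 = np.array([1.0, 0.0], dtype=complex)  # |0,0>_{SF}
s11 = np.array([0.0, 1.0], dtype=complex)  # |1,1>_{SF}
w1 = a * s00 + b * s11
w2 = np.conj(b) * s00 - np.conj(a) * s11

def wigner_probs(r0, r1):
    """p^W(w) from |Psi^r> = alpha |0,0>|r0> + beta |1,1>|r1>, with the record traced out."""
    r0, r1 = np.asarray(r0, dtype=complex), np.asarray(r1, dtype=complex)
    psi = alpha * np.kron(s00, r0) + beta * np.kron(s11, r1)
    amps = psi.reshape(2, len(r0))                 # rows: SF component, columns: record basis state
    rho_SF = sum(np.outer(amps[:, k], amps[:, k].conj()) for k in range(len(r0)))
    return [np.vdot(w, rho_SF @ w).real for w in (w1, w2)]

print("which-outcome record:", wigner_probs([1, 0], [0, 1]))   # collapse statistics, Eq. (II.3)
print("uninformative record:", wigner_probs([1], [1]))         # unitary statistics, Eq. (I.3)
```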
### Partial collapse
We now consider a more general scenario, where instead of directly exchanging messages, there is a (quasi) classical communication channel [24] between Wigner and his friend, see Fig. 3. Such a channel measures the incoming state and prepares a corresponding outcome in some fixed basis \(\{\ket{n}\}\) of the record Hilbert space \(\mathcal{H}_{R}\)
\[\mathcal{C}[\sigma]:=\sum_{m}\bra{m}\sigma\ket{m}|m\rangle\langle m|.\] (II.7)
In contrast to Section II.1, the messages \(\ket{m}\) sent to Wigner can now be encoded in a different basis of \(\mathcal{H}_{R}\) than the records \(\ket{r}\) the friend produces. This allows these messages to only partially reveal which outcome the friend observed. The state Wigner performs his measurement on is now
\[\rho_{SFR}= (\mathds{1}\otimes\mathcal{C})|\Psi^{r}\rangle\langle\Psi^{r}|= \sum_{m}|\alpha|^{2}|\bra{m}r_{0}\rangle\,|^{2}\rho_{SF}^{00}\otimes|m\rangle \langle m|_{R}+|\beta|^{2}|\,\langle m|r_{1}\rangle\,|^{2}\rho_{SF}^{11} \otimes|m\rangle\langle m|\] \[+\alpha\beta^{*}\,\langle m|r_{0}\rangle\,\langle r_{1}|m \rangle\,\rho_{SF}^{01}\otimes|m\rangle\langle m|+\alpha^{*}\beta\,\langle m |r_{1}\rangle\,\langle r_{0}|m\rangle\,\rho_{SF}^{10}\otimes|m\rangle\langle m|,\] (II.8)
Figure 3: Communication channel \(\mathcal{C}\) between Wigner and his friend: The friend can encode a message via a fixed basis \(\{\ket{r}\}\) of some record Hilbert space \(\mathcal{H}_{R}\). The channel \(\mathcal{C}\) takes these record states as input and produces a classical message, which can in principle be encoded in another basis \(\{\ket{m}\}\) of the record Hilbert space. This freedom to choose different bases for the incoming and the outgoing messages allows for controlling how much outcome information the friend can send to Wigner via this channel.
where
\[\rho_{SF}^{00} =|a|^{2}|1\rangle\langle 1|_{SF}+|b|^{2}|2\rangle\langle 2|_{SF}+a^{ *}b^{*}\left|1\rangle\!\langle 2\right|_{SF}+ab\left|2\rangle\!\langle 1\right|_{ SF},\] \[\rho_{SF}^{11} =|b|^{2}|1\rangle\langle 1|_{SF}+|a|^{2}|2\rangle\langle 2|_{SF}-a^{ *}b^{*}\left|1\rangle\!\langle 2\right|_{SF}-ab\left|2\rangle\!\langle 1\right|_{ SF},\] \[\rho_{SF}^{01} =a^{*}b(|1\rangle\langle 1|_{SF}-|2\rangle\langle 2|_{SF})-a^{ *}a^{*}\left|1\rangle\!\langle 2\right|_{SF}+bb\left|2\rangle\!\langle 1\right|_{ SF},\] \[\rho_{SF}^{10} =b^{*}a(|1\rangle\langle 1|_{SF}-|2\rangle\langle 2|_{SF})+b^{ *}b^{*}\left|1\rangle\!\langle 2\right|_{SF}-aa\left|2\rangle\!\langle 1\right|_{ SF},\]
with \(\left|1\right\rangle_{SF}=a\left|0,\mathbf{0}\right\rangle_{SF}+b\left|1,\mathbf{1}\right\rangle_{SF}\), \(\left|2\right\rangle_{SF}=b^{*}\left|0,\mathbf{0}\right\rangle_{SF}-a^{*}\left|1,\mathbf{1}\right\rangle_{SF}\) being the eigenstates corresponding to Wigner's measurement results "1" and "2" respectively. Analogously to Eq. (II.4) in Section II.1, we consider the conditional probabilities of Wigner's result \(w\) given message \(n\), i.e. \(p(n)p(w|n)=\mathrm{Tr}(|w\rangle\langle w|_{SF}\otimes|n\rangle\langle n|_{R}\rho_{SFR})\), obtaining the following joint probabilities
\[\begin{array}{c|c|c}p(w=1,n)&p(w=2,n)\\ \hline|\alpha|^{2}|a|^{2}|\left\langle n|r_{0}\right\rangle|^{2}+|\beta|^{2}|b |^{2}|\left\langle n|r_{1}\right\rangle|^{2}&|\beta|^{2}|a|^{2}|\left\langle n |r_{0}\right\rangle|^{2}+|\alpha|^{2}|b|^{2}|\left\langle n|r_{1}\right\rangle| ^{2}\\ +\alpha\beta^{*}a^{*}b\left\langle n|r_{0}\right\rangle\left\langle r_{1}|n \right\rangle+\alpha^{*}\beta ab^{*}\left\langle n|r_{1}\right\rangle\left\langle r _{0}|n\right\rangle&-\alpha\beta^{*}a^{*}b\left\langle n|r_{0}\right\rangle \left\langle r_{1}|n\right\rangle-\alpha^{*}\beta ab^{*}\left\langle n|r_{1} \right\rangle\left\langle r_{0}|n\right\rangle.\end{array}\] (II.9)
The overlaps \(\langle n|r_{i}\rangle\) indicate how much outcome information Wigner can obtain from the channel output message \(n\). If the basis \(\{|n\rangle\}\) is the same as \(\{|r_{i}\rangle\}\), i.e.\(\langle n|r_{i}\rangle=\delta_{ni}\), the message perfectly reveals which result the friend observed and \(p(n=0)=|\alpha|^{2}\), \(p(n=1)=|\beta|^{2}\). In this case we recover the collapse behavior in Eq.(II.5), as discussed in Section II.1. However, if the two bases are mutually unbiased, for example \(\langle 0|r_{i}\rangle=\langle 1|r_{0}\rangle=1/\sqrt{2}=-\left\langle 1|r_{1}\right\rangle\), the records reveal no outcome information about the friend's measurement and \(p(n)=1/2\) for both \(n\). In this case, we obtain probabilities in accordance with a unitary description for each of the records
\[p(w|0):\quad\begin{array}{c|c}1&2\\ \hline(\alpha a^{*}+\beta b^{*})^{2}&(\beta a-\alpha b)^{2}\end{array}\qquad\qquad p(w|1):\quad\begin{array}{c|c}1&2\\ \hline(\alpha a^{*}-\beta b^{*})^{2}&(\beta a+\alpha b)^{2}\end{array} \tag{II.10}\]
Note that, due to the relative phase between the expressions for the two messages, one cannot simply add them if the resemblance to the unitary description without records is to be preserved. Evaluating \(p(w|0)+p(w|1)\) corresponds to tracing out the record system and, as already mentioned
in Section II.1, gives the collapse probabilities in Eq. (II.3). In general, we can express the messages Wigner receives in the basis of the records generated by the friend as follows
\[|0\rangle =\cos\theta\left|r_{0}\right\rangle+e^{i\phi}\sin\theta\left|r_{1}\right\rangle\] (II.11) \[|1\rangle =e^{-i\phi}\sin\theta\left|r_{0}\right\rangle-\cos\theta\left|r_{1 }\right\rangle,\] (II.12)
where one can think of \(\theta,\phi\) as variable parameters of the communication channel \(\mathcal{C}\). Hence, the joint probabilities for Wigner's measurement result and message \(n\) are given by
\[\begin{split}p(w,0):&\quad\begin{array}{c|c}1&2\\ \hline|\alpha|^{2}|a|^{2}\cos^{2}\theta+|\beta|^{2}|b|^{2}\sin^{2}\theta+\mathrm{Re}\!\left(\alpha\beta^{*}a^{*}b\,e^{i\phi}\right)\sin 2\theta&|\alpha|^{2}|b|^{2}\cos^{2}\theta+|\beta|^{2}|a|^{2}\sin^{2}\theta-\mathrm{Re}\!\left(\alpha\beta^{*}a^{*}b\,e^{i\phi}\right)\sin 2\theta\end{array}\\ p(w,1):&\quad\begin{array}{c|c}1&2\\ \hline|\alpha|^{2}|a|^{2}\sin^{2}\theta+|\beta|^{2}|b|^{2}\cos^{2}\theta-\mathrm{Re}\!\left(\alpha\beta^{*}a^{*}b\,e^{i\phi}\right)\sin 2\theta&|\alpha|^{2}|b|^{2}\sin^{2}\theta+|\beta|^{2}|a|^{2}\cos^{2}\theta+\mathrm{Re}\!\left(\alpha\beta^{*}a^{*}b\,e^{i\phi}\right)\sin 2\theta\end{array}\end{split} \tag{II.13}\]
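These joint probabilities interpolate between the collapse statistics of Eq. (II.5) and the unitary statistics of Eq. (II.10) as the channel parameters are varied. The following sketch (again with arbitrary illustrative amplitudes) evaluates the corresponding conditional probabilities \(p(w|n)\) directly from Eqs. (II.9) and (II.11)-(II.12).

```python
import numpy as np

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)   # illustrative amplitudes
a, b = np.sqrt(0.6), np.sqrt(0.4)

def conditional_probs(theta, phi):
    """p(w|n) for w in {1,2} and n in {0,1}, built from Eqs. (II.9) and (II.11)-(II.12)."""
    # message states |0>, |1> written in the record basis {|r0>, |r1>}
    msgs = [np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)]),
            np.array([np.exp(-1j * phi) * np.sin(theta), -np.cos(theta)])]
    table = {}
    for n, m in enumerate(msgs):
        o0, o1 = np.conj(m[0]), np.conj(m[1])          # <n|r0> and <n|r1>
        interf = 2 * (alpha * np.conj(beta) * np.conj(a) * b * o0 * np.conj(o1)).real
        p1 = abs(alpha * a * o0) ** 2 + abs(beta * b * o1) ** 2 + interf   # p(w=1, n)
        p2 = abs(alpha * b * o0) ** 2 + abs(beta * a * o1) ** 2 - interf   # p(w=2, n)
        table[n] = (p1 / (p1 + p2), p2 / (p1 + p2))
    return table

print(conditional_probs(theta=0.0, phi=0.0))           # collapse limit, Eq. (II.5)
print(conditional_probs(theta=np.pi / 4, phi=0.0))     # unbiased messages, Eq. (II.10)
```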
### Extended Wigner's friend setup

A classical record revealing which result the friend observed also leads to an effective collapse in the extended Wigner's friend setup depicted in Fig. 2. This, in turn, prevents the violation of the local friendliness inequality presented in Section I. More concretely, using state
\[\left|\Psi^{r}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle_{1} \left|0,\mathbf{0}\right\rangle_{2F}\left|r_{0}\right\rangle_{R}-\left|1 \right\rangle_{1}\left|1,\mathbf{1}\right\rangle_{2F}\left|r_{1}\right\rangle_ {R}\right)\] (II.15)
instead of the one in Eq. (I.5) leads to the following expression for the CHSH-like local friendliness inequality
\[\langle B_{z}\otimes W_{z}\rangle+\langle B_{x}\otimes W_{z}\rangle-\langle B_{z }\otimes W_{x}\rangle+\langle B_{x}\otimes W_{x}\rangle=\frac{1}{\sqrt{2}}+ \frac{1}{\sqrt{2}}+0+0=\sqrt{2}<2,\] (II.16)
which means that none of the local friendliness assumptions needs to be rejected. This is due to the fact that the presence of the records, revealing which result the friend observed, effectively collapses the state in Eq. (II.15) to
\[\mathrm{Tr}_{R}(|\Psi^{r}\rangle\langle\Psi^{r}|)=\frac{1}{2}\left(|0\rangle \langle 0|_{1}|0,\mathbf{0}\rangle\langle 0,\mathbf{0}|_{2F}+|1\rangle \langle 1|_{1}|1,\mathbf{1}\rangle\langle 1,\mathbf{1}|_{2F}\right),\] (II.17)
which means that any expectation value containing Wigner's \(W_{x}\)-measurement vanishes. Note that, this is also true when we condition on the friend's observed result, meaning that we use either \(|0\rangle\langle 0|_{1}|0,\mathbf{0}\rangle\langle 0,\mathbf{0}|_{2F}\) or \(|1\rangle\langle 1|_{1}|1,\mathbf{1}\rangle\langle 1,\mathbf{1}|_{2F}\) as the effective state.
We now, again, consider a general communication channel \(\mathcal{C}\) between the friend and Wigner, as depicted in Fig. 3, also for this extended setup. Starting from the state in Eq. (II.15) we now let the channel act on the register space and obtain the state
\[\rho^{\prime}_{12FR} =(\mathds{1}\otimes\mathcal{C})\left(|\Psi^{r}\rangle\langle\Psi ^{r}|\right)\] (II.18) \[=\frac{1}{2}\Big{(}|0\rangle\langle 0|_{1}\otimes|0,\mathbf{0} \rangle\langle 0,\mathbf{0}|_{2F}\otimes\mathcal{C}(|r_{0}\rangle\langle r_{0}|)-|0 \rangle\!\langle 1|_{1}\otimes|0,\mathbf{0}\rangle\!\langle 1,\mathbf{1}|_{2F} \,\mathcal{C}(|r_{0}\rangle\!\langle r_{1}|)\] \[\quad-|1\rangle\!\langle 0|_{1}\otimes|1,\mathbf{1}\rangle\! \langle 0,\mathbf{0}|_{2F}\,\mathcal{C}(|r_{1}\rangle\!\langle r_{0}|)+|1 \rangle\langle 1|_{1}\otimes|1,\mathbf{1}\rangle\langle 1,\mathbf{1}|_{2F} \otimes\mathcal{C}(|r_{1}\rangle\langle r_{1}|)\Big{)},\]
upon which Wigner and Bob perform their measurements. The action of the classical channel on the records, again, gives terms of the form
\[\sum_{m}\left\langle m|r_{i}\right\rangle\langle r_{j}|m\rangle\left|m\right\rangle \langle m|,\] (II.19)
which, if there is no conditioning on the message lead to the effective collapse discussed above regardless of the properties of the channel, i.e. parameters \(\phi,\theta\), since
\[\mathrm{Tr}\left(\sum_{m}\left\langle m|r_{i}\right\rangle\langle r_{j}|m \rangle\left|m\right\rangle\langle m|\right)=\sum_{m}\left\langle m|r_{i} \right\rangle\langle r_{j}|m\rangle=\langle r_{j}|r_{i}\rangle=\delta_{ij}.\] (II.20)
Similar to the probabilities for Wigner's outcome in the simple Wigner's friend setup, we can now define the expectation values for the measurements of Bob and Wigner, conditioned on the message \(n\) put out by the classical channel \(\mathcal{C}\) as follows
\[\langle B\otimes W\rangle^{|n}:=\begin{cases}\frac{1}{p(n)}\operatorname{Tr} \left(B\otimes W\otimes|n\rangle\langle n|\cdot\rho\right)&\text{for }p(n)>0\\ 0&\text{for }p(n)=0,\end{cases}\] (II.21)
where the probability for the messages is now given by \(p(n)=\operatorname{Tr}\left(\mathds{1}\otimes|n\rangle\langle n|\cdot\rho^{ \prime}_{12FR}\right)=1/2(\cos^{2}(\theta)+\sin^{2}(\theta))=1/2\). Plugging the state in Eq. (II.18) into this expression, then gives
\[\begin{split}\langle B\otimes W\rangle^{|n}=\frac{1}{2p(n)}\Big{(}&\langle 0|B|0\rangle\,\langle 0,\mathbf{0}|W|0,\mathbf{0}\rangle\,|\langle n|r_{0}\rangle|^{2}+\langle 1|B|1\rangle\,\langle 1,\mathbf{1}|W|1,\mathbf{1}\rangle\,|\langle n|r_{1}\rangle|^{2}\\ &-\langle 1|B|0\rangle\,\langle 1,\mathbf{1}|W|0,\mathbf{0}\rangle\,\langle n|r_{0}\rangle\,\langle r_{1}|n\rangle-\langle 0|B|1\rangle\,\langle 0,\mathbf{0}|W|1,\mathbf{1}\rangle\,\langle n|r_{1}\rangle\,\langle r_{0}|n\rangle\Big{)}.\end{split} \tag{II.22}\]
For the settings of Bob and Wigner presented in Section I we obtain the conditional expectation values
\[\begin{split}\langle B_{z}\otimes W_{z}\rangle^{|n}&=\frac{1}{\sqrt{2}}=\langle B_{x}\otimes W_{z}\rangle^{|n},\\ \langle B_{z}\otimes W_{x}\rangle^{|n}&=-\frac{1}{\sqrt{2}}(\langle n|r_{0}\rangle\,\langle r_{1}|n\rangle+\langle n|r_{1}\rangle\,\langle r_{0}|n\rangle)=-\langle B_{x}\otimes W_{x}\rangle^{|n}.\end{split}\]
Hence, when conditioned on the message \(n\) the local friendliness inequality in Eq. (II.16) becomes
\[\langle B_{z}\otimes W_{z}\rangle^{|0}+\langle B_{x}\otimes W_{z} \rangle^{|0}-\langle B_{z}\otimes W_{x}\rangle^{|0}+\langle B_{x}\otimes W_{x }\rangle^{|0}=\sqrt{2}+\sqrt{2}\cdot\cos(\phi)\sin(2\theta),\] (II.23) \[\langle B_{z}\otimes W_{z}\rangle^{|1}+\langle B_{x}\otimes W_{z }\rangle^{|1}-\langle B_{z}\otimes W_{x}\rangle^{|1}+\langle B_{x}\otimes W_{x }\rangle^{|1}=\sqrt{2}-\sqrt{2}\cdot\cos(\phi)\sin(2\theta).\] (II.24)
where the term \(\cos(\phi)\sin(2\theta)\) is determined by the properties of the communication channel. If the messages perfectly reveal which result the friend observed, i.e. \(\theta=l\cdot\pi/2\) (for any \(\phi\)), the channel dependent term, which corresponds to the expectation values \(\langle B\otimes W_{x}\rangle^{|n}\), vanishes and we obtain the expression in Eq. (II.16) for both messages. If the messages reveal no outcome information about the friend's measurement and the channel phase is trivial, i.e. \(\theta=(2l+1)\cdot\pi/4\) and \(\phi=k\cdot\pi\), conditioning on one of the two messages gives the maximal violation of \(2\sqrt{2}\). Since the channel dependent term varies smoothly over the interval \([-1,1]\), one can obtain all values from \(0\) to \(2\sqrt{2}\) for the CHSH-like local friendliness inequality by controlling the channel parameters \(\phi\) and \(\theta\). Note that, due to the
different signs for the two messages, whenever the conditional expectation values for one message violate local friendliness, the conditional expectation values for the other message do not violate the inequality, compare Fig. 5. This is similar to the conditional probabilities \(p(w|n)\) for the simple Wigner's friend experiment discussed in Section II.2. There, due to the different signs in \(p(w|0)\) and \(p(w|1)\), these probabilities can exactly reproduce those according to unitary dynamics only for one of the two messages.
Figure 5: CHSH-values for the extended Wigner’s friend setup with communication: The conditional expectation values \(\langle B\otimes W\rangle^{|n}\) for the extended Wigner’s friend experiment in Fig. 2 give CHSH-values that depend on the communication channel parameters \(\phi\) and \(\theta\). For \(\phi=0\) and \(\theta\in[0,\pi]\), the expression given by conditioning on message “0” is shown in blue, the one corresponding to message “1” is depicted in green. There are values of \(\theta\) where neither of the conditional CHSH-values lies above the local friendliness threshold of 2, meaning no violation occurs. Whenever the CHSH-like local friendliness inequality is violated for one of the messages, conditioning on the other message satisfies the local friendliness inequality.
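The curves in Figure 5 follow directly from Eqs. (II.23)-(II.24); the short sketch below evaluates both conditional CHSH expressions on an arbitrary grid of \(\theta\) values for \(\phi=0\) and confirms that the two messages never violate the bound simultaneously.

```python
import numpy as np

def conditional_chsh(theta, phi=0.0):
    """The two conditional CHSH values of Eqs. (II.23)-(II.24)."""
    term = np.cos(phi) * np.sin(2 * theta)
    return np.sqrt(2) * (1 + term), np.sqrt(2) * (1 - term)

thetas = np.linspace(0.0, np.pi, 2001)
chsh0, chsh1 = conditional_chsh(thetas)

print("maximum over theta:", chsh0.max())                                          # 2*sqrt(2) at theta = pi/4
print("any simultaneous violation of both?", np.any((chsh0 > 2) & (chsh1 > 2)))    # False, as in Fig. 5
```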
## III Conclusions
We considered the possibility of communication between Wigner and his friend in both the simple Wigner's friend scenario and the simplest extended Wigner's friend setup that allows for the violation of local friendliness inequalities. As we showed explicitly, a classical message revealing which result the friend observed during her measurement effectively collapses the state of the friend and the system she measured, also in the unitary description employed by Wigner. For the simple Wigner's friend experiment, this means that the probabilities for Wigner's measurement result according to the unitary description of the setup are the same as those corresponding to collapse dynamics. For the extended Wigner's friend setup, such records and the corresponding effective collapse prevent the violation of local friendliness inequalities.
We further considered the more general scenario of a (quasi) classical communication channel between Wigner and his friend. In that case, the records the friend produces are the input to the channel while the messages Wigner receives are the output of that channel. Provided that the friend's records encode which outcome she observed, how much of that which-outcome information is revealed by the output now depends on the properties of the channel. Controlling the channel parameters then allows for gradually changing between collapse and unitary behavior for Wigner's friend experiments. In the case of the simple Wigner's friend experiment this means that the probabilities based on the unitary description of the setup, associated with Wigner, will gradually approach those assigned by the friend based on the collapse description. For the extended Wigner's friend scenario, the channel properties determine whether and to what extent local friendliness inequalities can be violated.
Both the recovery of probabilities corresponding to unitary dynamics without records and the maximum violation of local friendliness inequalities not only occur when the messages Wigner receives contain no information about which outcome the friend observed, but also require conditioning on the messages. Simply ignoring the messages put out by the channel always leads to a full effective collapse. This can be understood as showing that Wigner's unitary description does not just signify his ignorance about which result his friend observed. |
2304.10922 | The evolution problem for the 1D nonlocal Fisher-KPP equation with a top
hat kernel. Part 1. The Cauchy problem on the real line | We study the Cauchy problem on the real line for the nonlocal Fisher-KPP
equation in one spatial dimension, \[ u_t = D u_{xx} + u(1-\phi*u), \] where
$\phi*u$ is a spatial convolution with the top hat kernel, $\phi(y) \equiv
H\left(\frac{1}{4}-y^2\right)$.
After showing that the problem is globally well-posed, we demonstrate that
positive, spatially-periodic solutions bifurcate from the spatially-uniform
steady state solution $u=1$ as the diffusivity, $D$, decreases through
$\Delta_1 \approx 0.00297$. We explicitly construct these spatially-periodic
solutions as uniformly-valid asymptotic approximations for $D \ll 1$, over one
wavelength, via the method of matched asymptotic expansions. These consist, at
leading order, of regularly-spaced, compactly-supported regions with width of
$O(1)$ where $u=O(1)$, separated by regions where $u$ is exponentially small at
leading order as $D \to 0^+$.
From numerical solutions, we find that for $D \geq \Delta_1$, permanent form
travelling waves, with minimum wavespeed, $2 \sqrt{D}$, are generated, whilst
for $0 < D < \Delta_1$, the wavefronts generated separate the regions where
$u=0$ from a region where a steady periodic solution is created. The structure
of these transitional travelling waves is examined in some detail. | D. J. Needham, J. Billingham, N. M. Ladas, J. C. Meyer | 2023-04-21T12:44:34Z | http://arxiv.org/abs/2304.10922v3 | The evolution problem for the 1D nonlocal Fisher-KPP equation with a top hat kernel. Part 1. The Cauchy problem on the real line
###### Abstract.
We study the Cauchy problem on the real line for the nonlocal Fisher-KPP equation in one spatial dimension,
\[u_{t}=Du_{xx}+u(1-\phi*u),\]
where \(\phi*u\) is a spatial convolution with the top hat kernel, \(\phi(y)\equiv H\left(\frac{1}{4}-y^{2}\right)\).
After showing that the problem is globally well-posed, we demonstrate that positive, spatially-periodic solutions bifurcate from the spatially-uniform steady state solution \(u=1\) as the diffusivity, \(D\), decreases through \(\Delta_{1}\approx 0.00297\). We explicitly construct these spatially-periodic solutions as uniformly-valid asymptotic approximations for \(D\ll 1\), over one wavelength, via the method of matched asymptotic expansions. These consist, at leading order, of regularly-spaced, compactly-supported regions with width of \(O(1)\) where \(u=O(1)\), separated by regions where \(u\) is exponentially small at leading order as \(D\to 0^{+}\).
From numerical solutions, we find that for \(D\geq\Delta_{1}\), permanent form travelling waves, with minimum wavespeed, \(2\sqrt{D}\), are generated, whilst for \(0<D<\Delta_{1}\), the wavefronts generated separate the regions where \(u=0\) from a region where a steady periodic solution is created. The structure of these transitional travelling waves is examined in some detail.
2020 Mathematics Subject Classification: 35K57, 35B40, 35G20, 35C07, 65M06
Key words and phrases: nonlocal partial differential equations, Fisher-KPP equation, numerical solutions to IBVP, permanent form travelling wave solutions, bifurcation to periodic steady states
## 1. Introduction
Nonlocal reaction-diffusion equations arise in many different scientific areas (see, for example, [5, 6, 7, 8, 14, 19] and [13, 20]). Many of these applications are biomedical, including tumour modelling and models for evolution and speciation (see [2] for an extensive review). The most studied of these equations is the nonlocal Fisher-KPP equation
\[\frac{\partial u}{\partial t}=D\frac{\partial^{2}u}{\partial x^{2}}+u\left\{1- \int_{-\infty}^{\infty}\phi\left(x-y\right)u\left(y,t\right)\,dy\right\}. \tag{1.1}\]
Throughout, for each \(T>0\), we write \(D_{T}=\mathbb{R}\times(0,T]\), and we write \(D_{\infty}=\mathbb{R}\times(0,\infty)\)
with closure \(\overline{D}_{\infty}\). The Cauchy problem we consider is that concerned with classical solutions \(u:\overline{D}_{T}\to\mathbb{R}\) to the semilinear, nonlocal, evolution problem,
\[u_{t}=Du_{xx}+u(1-J(u,\phi)),\quad\text{on }D_{T}; \tag{1.5}\] \[u(x,0)=Ag(x):=u_{0}(x),\quad\forall\,x\in\mathbb{R} \tag{1.6}\] \[u(x,t)\to 0\text{ as }|x|\to\infty\text{ uniformly for }t\in[0,T]. \tag{1.7}\]
Here \(A>0\), \(g\in C^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\) and non-negative, \(\left\|g\right\|_{\infty}=1\) whilst \(\text{supp}(g)\subseteq[-x_{0},x_{0}]\) (\(x_{0}>0\)). With \(A\), \(x_{0}\) and \(g\) prescribed, a solution to (1.5)-(1.7) must have
\[u\in C(\overline{D}_{T})\cap L^{\infty}(\overline{D}_{T})\cap C^{2,1}(D_{T}). \tag{1.8}\]
In (1.5), \(J:(C(\overline{D}_{T})\cap L^{\infty}(\overline{D}_{T}))\times(PC(\mathbb{R}) \cap L^{\infty}(\mathbb{R})\cap L^{1}(\mathbb{R}))\to C(\overline{D}_{T})\), is such that, with \((v,\psi)\in(C(\overline{D}_{T})\cap L^{\infty}(\overline{D}_{T}))\times(PC( \mathbb{R})\cap L^{\infty}(\mathbb{R})\cap L^{1}(\mathbb{R}))\), then for each \((x,t)\in\overline{D}_{T}\),
\[J(v,\psi)(x,t)=\int_{-\infty}^{\infty}\psi(x-y)v(y,t)dy. \tag{1.9}\]
In addition, in (1.5), \(\phi\in PC(\mathbb{R})\cap L^{\infty}(\mathbb{R})\cap L^{1}(\mathbb{R})\) is a specified _nonlocal kernel_ which, in general, should satisfy the following conditions
(K1) \[\phi(\lambda)\geq 0\quad\forall\lambda\in\mathbb{R}\] (K2) \[\phi(\lambda)\to 0\quad\text{as }|\lambda|\to\infty\] (K3) \[\int_{-\infty}^{\infty}\phi(\lambda)d\lambda=\left\|\phi\right\| _{1}=1\] (K4) \[\phi(\lambda)=\phi(-\lambda)\quad\forall\lambda\in\mathbb{R}.\]
Finally, \(D>0\) is the constant diffusivity, and this measures the ratio of the square of the diffusive length scale (based on the reaction time scale) to the square of the nonlocal length scale.
Throughout this paper, we will consider the situation when the nonlocal kernel \(\phi:\mathbb{R}\to\mathbb{R}\) has the simple structure,
\[\phi(y)=\begin{cases}1,&-\frac{1}{2}\leq y\leq\frac{1}{2}\\ 0,&\text{elsewhere},\end{cases} \tag{1.10}\]
after which, for \((x,t)\in\overline{D}_{T}\),
\[J(u,\phi)(x,t)=\int_{x-\frac{1}{2}}^{x+\frac{1}{2}}u(y,t)dy:=I(u)(x,t). \tag{1.11}\]
The main focus of the paper will be both the qualitative and quantitative study of the Cauchy problem (1.5)-(1.8) with (1.10) and (1.11). Of particular interest will be the large-\(t\) structure of the solution. For brevity, we will refer to this Cauchy problem as (IBVP) for the rest of the paper. With this objective in mind, the paper is structured in the following way. In Section 2 we consider the fundamental questions of uniqueness and global existence for (IBVP), together with some very general basic bounds on the solution. In Section 3 we will present some illustrative numerical solutions to (IBVP), which will enable us to formulate a number of structural conjectures concerning the evolution of the solutions to (IBVP) as \(t\to\infty\). The substance of the paper will then be directed towards an elucidation of these conjectures. With this in mind, Section 4 examines the temporal stability of the equilibrium solutions \(u=0\) and \(u=1\) to the nonlocal PDE (1.5) with (1.9), and how this may relate to (IBVP). The discussion in Section 4 then leads us to Section 5, in which we consider the bifurcation to periodic steady states from the equilibrium state \(u=1\). In Section 6 and Section 7 we consider the existence of permanent form travelling wave solutions to the nonlocal PDE (1.5) with (1.9). Of particular interest are three specific classes of such travelling waves, namely: periodic travelling waves; transitional travelling waves from \(u=1\) (rear) to \(u=0\) (ahead); and, transitional travelling waves from \(u\) being spatially periodic (rear) and \(u=0\) (ahead). Finally, in Section 8, we bring all of the subsequent results together, with emphasis on their bearing on (IBVP).
## 2. General Theory For (IBVP)
It is first useful to observe that with \(u:\overline{D}_{T}\to\mathbb{R}\) being a solution to (IBVP), then the regularity in (1.8) readily establishes that, with (1.11),
\[I(u)\in C(\overline{D}_{T})\cap C^{2,1}(D_{T}), \tag{2.1}\]
with partial derivatives given by, on using (1.5), for each \((x,t)\in D_{T}\),
\[I(u)_{x}(x,t)=u\left(x+\frac{1}{2},t\right)-u\left(x-\frac{1}{2},t\right)\] \[I(u)_{xx}(x,t)=u_{x}\left(x+\frac{1}{2},t\right)-u_{x}\left(x- \frac{1}{2},t\right)\]
\[I(u)_{t}(x,t)=D\left(u_{x}\left(x+\frac{1}{2},t\right)-u_{x}\left(x-\frac{1}{2},t \right)\right)+I(u)(x,t)-\int_{x-\frac{1}{2}}^{x+\frac{1}{2}}u(s,t)I(u)(s,t)ds. \tag{2.2}\]
We now establish that (IBVP) is _a priori_ bounded on \(\overline{D}_{T}\) (for any \(T>0\)), with bounds independent of \(T\). First, it is a direct consequence of the maximum principle on \(\overline{D}_{T}\) (see, for example, [15]), via (1.5)-(1.8), with (1.11), that any solution \(u:\overline{D}_{T}\to\mathbb{R}\) to (IBVP) must have
\[u(x,t)\geq 0\quad\forall\left(x,t\right)\in\overline{D}_{T}. \tag{2.3}\]
The strong maximum principle then refines this, when \(A>0\), to establish that
\[u(x,t)>0\quad\forall\left(x,t\right)\in D_{T}. \tag{2.4}\]
Next we introduce the function \(v:\overline{D}_{\infty}\to\mathbb{R}\) given by
\[v(x,t)=(t+t_{0})^{-\frac{1}{2}}e^{(t+t_{0})}e^{-x^{2}/4D(t+t_{0})}\quad \forall\left(x,t\right)\in\overline{D}_{\infty}, \tag{2.5}\]
with the constant \(t_{0}>0\) chosen so that
\[v(x,0)\geq Ag(x)\quad\forall\,x\in\mathbb{R}, \tag{2.6}\]
which can always be achieved (\(t_{0}\) will, in general, depend upon \(A\) and the details of \(g\)). It then follows, via (2.3) and the Comparison theorem (see, for example, [15]) (using the parabolic operator \(N[w]:=w_{t}-Dw_{xx}-w\)), that,
\[u(x,t)\leq v(x,t)\leq(t+t_{0})^{-\frac{1}{2}}e^{(t+t_{0})}\quad\forall\left(x,t\right)\in\overline{D}_{T}. \tag{2.7}\]
We now have
**Proposition 2.1**.: _(IBVP) has a global solution \(u:\overline{D}_{\infty}\to\mathbb{R}\) and this solution is unique._
Proof.: This follows from the a priori bounds (2.3) and (2.7), which allow for an application of Theorem 9.2.10 in [19].
From now on, \(u:\overline{D}_{\infty}\to\mathbb{R}\) will represent the global and unique solution to (IBVP). We note here that it is established in [12], that \(u:\overline{D}_{\infty}\to\mathbb{R}\) is uniformly bounded, with,
\[\left\|u\right\|_{\infty}\leq\frac{1}{\sqrt{\pi}}\max\left\{\frac{1}{2}A,2 \right\}e^{1/D}\sum_{n=0}^{\infty}e^{\frac{-n^{2}}{16}}. \tag{2.8}\]
In what follows we denote by \(\mathcal{A}\) the set of admissible initial data \(u_{0}:\mathbb{R}\to\mathbb{R}\), as introduced for (IBVP) in Section 1. We then have the following continuous dependence result.
**Proposition 2.2**.: _For each \(T>0\), \(u_{0}\in\mathcal{A}\) and \(\varepsilon>0\) there exists \(\delta>0\), such that for each \(v_{0}\in\mathcal{A}\) with \(\left\|v_{0}-u_{0}\right\|_{\infty}<\delta\) then \(\left\|v-u\right\|_{\infty}<\varepsilon\) on \(\overline{D}_{T}\), with \(u:\overline{D}_{\infty}\to\mathbb{R}\) and \(v:\overline{D}_{\infty}\to\mathbb{R}\) being the unique global solutions to (IBVP) with initial data \(u_{0}\in\mathcal{A}\) and \(v_{0}\in\mathcal{A}\) respectively._
Proof.: This is proved in direct analogy to [16, Theorems 6.7 and 6.8] using the integral representations of \(u\) and \(v\), utilising Duhamel's principle and Gronwall's inequality, owing to the fact that \((u,J(u,\phi))\mapsto u(1-J(u,\phi))\) is locally Lipschitz continuous with respect to both \(u\) and \(J(u,\phi)\). The details are omitted for brevity.
As a consequence of the two previous propositions, we have.
**Proposition 2.3**.: _(IBVP) is well-posed (in the classical Hadamard sense) with respect to initial data \(u_{0}\in\mathcal{A}\) and with respect to \(\left\|\cdot\right\|_{\infty}\) on \(\overline{D}_{T}\)._
We finally observe from (2.4), (2.5) and (2.7) that should a travelling wave structure develop when \(|x|\sim S(t)\) as \(t\to\infty\), then,
\[0<S(t)\leq 2\sqrt{D}t-\frac{1}{2}\sqrt{D}\log t+O(1), \tag{2.9}\]
as \(t\to\infty\).
## 3. Numerical Solutions to (IBVP)
In this section we develop a numerical scheme to approximate solutions to (IBVP), and use this to investigate the qualitative and quantitative structure of solutions to (IBVP), with particular attention to the structure of the solution as \(t\to\infty\). For each of the numerical solutions, we take \(x_{0}=\frac{1}{2}\) and \(g:\mathbb{R}\to\mathbb{R}\) as
\[g(x)=\begin{cases}(1-2x)^{2}(1+2x)^{2},&|x|\leq\frac{1}{2}\\ 0,&|x|>\frac{1}{2},\end{cases} \tag{3.1}\]
We discretise \(u\) on a uniform spatial grid of \(N\) points, truncated to \(0\leq x\leq L\), approximating the second derivative using central finite differences. The convolution term is evaluated using the trapezium rule, in other words assuming a linear variation of \(u\) between grid points, dealing carefully with cases where the edge of the support of the top hat kernel lies between grid points. We also take into account the symmetry of the solution about \(x=0\) and assume that \(u=0\)
for \(x>L\). In the simulations discussed below, we take \(L=10\) and \(N=1000\). Timestepping is done using the midpoint method (second order accurate), with time step chosen adaptively so that the maximum change in \(u\) at each step is below \(10^{-4}\).
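A minimal Python sketch of this scheme is given below; it is not the authors' code. For simplicity it works on the full interval \([-L,L]\) with a fixed time step, assumes the kernel edges fall exactly on grid points, and omits the symmetry reduction and adaptive stepping described above; all parameter values are illustrative.

```python
import numpy as np

D = 0.01                                     # illustrative diffusivity
L, N = 10.0, 2001                            # full interval [-L, L]; the original works on [0, L] + symmetry
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
u = np.where(np.abs(x) <= 0.5, (1 - 2 * x) ** 2 * (1 + 2 * x) ** 2, 0.0)   # A g(x) of Eq. (3.1), A = 1

half = int(round(0.5 / dx))                  # number of grid intervals in the kernel half-width

def rhs(u):
    # I(u)(x) = int_{x-1/2}^{x+1/2} u dy via the trapezium rule (kernel edges on grid points here)
    up = np.pad(u, half)                     # u = 0 outside the truncated domain
    weights = np.ones(2 * half + 1)
    weights[0] = weights[-1] = 0.5
    I = dx * np.convolve(up, weights, mode="valid")
    # central finite differences for u_xx, with u = 0 held at the ends of the domain
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    return D * uxx + u * (1.0 - I)

dt = 2.0e-3                                  # fixed step; the original uses an adaptive midpoint step
for _ in range(5000):                        # integrate up to t = 10
    k1 = rhs(u)
    u = u + dt * rhs(u + 0.5 * dt * k1)      # explicit midpoint (second-order) step

print("front position (largest x with u >= 1/2):", x[u >= 0.5].max())
```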
Figure 3.1 shows the solution of (IBVP) when \(t=9/(2\sqrt{D})\), which is just before the wavefront reaches the edge of the truncated domain, with \(A=1\). For all values of \(A\) that we investigated, and indeed for all localised initial inputs of \(u\) that we tried, the solution was qualitatively similar to those shown in Figure 3.1. In each case, a wavefront propagates in the positive \(x\)-direction. Also shown is the corresponding minimum speed travelling wave solution (see section 7 for details), calculated numerically using the same finite difference method to set up the discretised equations and 'fsolve' in Matlab to solve them. The equilibrium state \(u=1\) is temporally unstable for \(D<\Delta_{1}\approx 0.00297\) and temporally stable for \(D\) larger than this critical value (see section 4 below). For \(D>\Delta_{1}\) the minimum speed travelling wave solution emerges, which leaves the equilibrium state \(u=1\) in its wake. For \(0<D<\Delta_{1}\), however, a non-propagating, stationary, spatially periodic state, with wavelength close to \(0.7\), is left in the wake of the wavefront. This stationary, spatially periodic state is constructed from the propagating local wavefront via a temporally periodic "egg laying" process (movies of the numerical solutions can be found here). The temporal period of this process is simply the ratio of the spatial wavelength of the stationary, spatially periodic state left behind the local wavefront to the local wavefront propagation speed; with the local wavefront propagation speed taken as \(2\sqrt{D}\), and the wavelength as \(0.7\), this gives the temporal period as \(0.7/(2\sqrt{D})\). This temporal periodicity is also reflected in a weak periodic variation of the local wavefront propagation speed about an average of approximately \(2\sqrt{D}\), the minimum wavespeed, which is clearly illustrated in Figure 3.2 (see sections 4 and 7 regarding the specific notion of minimum wavespeed in the present context), with the temporal period comparing well with the prediction above. For the solutions shown in Figure 3.2 with \(D>0.001\), the oscillation behind the wavefront decays rapidly enough that it does not affect the position of the wavefront (defined to be the largest value of \(x\) at which \(u=\frac{1}{2}\)), whereas for \(D=0.001\) the decay of the oscillation is slow enough that there is a weak effect.
Figure 3.1. The numerical solution of (IBVP) for various values of \(D\) with \(A=1\) (solid black line), along with the minimum speed travelling wave (broken blue line).
Figure 3.2. The numerically-calculated position of the wavefront for various values of \(D\). The broken line has slope \(2\sqrt{D}\), the minimum wavespeed.
In relation to the selection of the wavelength of the stationary, spatially periodic state which emerges in the wake of the local wavefront, we shall see below that this is close to the most unstable wavelength of the uniform steady state \(u=1\). Although the process by which the periodic state is generated, particularly for small \(D\), is not a simple linear perturbation of \(u=1\), this wavelength (around \(0.7\), as shown in Figure 4.1) is a reasonable predictor of the wavelength of the fully-nonlinear state that emerges.
## 4. Equilibrium States
The nonlocal PDE (1.5), with (1.11), has two equilibrium states. The _unreacted_ state with
\[u(x,t)=0\quad\forall\,(x,t)\in\overline{D}_{\infty}, \tag{4.1}\]
and the _fully reacted_ state
\[u(x,t)=1\quad\forall\,(x,t)\in\overline{D}_{\infty}. \tag{4.2}\]
We have seen in Section 3 that these equilibrium states play a key role in the large-\(t\) structure of the solution to (IBVP). Of particular significance is the temporal stability of these equilibrium states. In this section we examine the linearised temporal stability of each of these equilibrium states. To this end, we formulate a linearised initial value problem. We write,
\[u(x,t)=u_{e}+\delta\overline{u}(x,t),\quad(x,t)\in\overline{D}_{\infty}, \tag{4.3}\]
with \(\delta\ll 1\) and \(u_{e}=1\) when considering the fully reacted state or \(u_{e}=0\) when considering the unreacted state. On substituting from (4.3) into (1.5), and neglecting terms of \(O(\delta^{2})\) as \(\delta\to 0\), we obtain a linear evolution equation for \(\overline{u}\), namely
\[\overline{u}_{t}=D\overline{u}_{xx}+L(\overline{u}),\quad\text{on }D_{\infty}, \tag{4.4}\]
where
\[L(\overline{u})=\begin{cases}\overline{u};&u_{e}=0\\ -\int_{x-\frac{1}{2}}^{x+\frac{1}{2}}\overline{u}(y,t)dy;&u_{e}=1\end{cases} \tag{4.5}\]
after using (1.11). The linearised initial value problem is then composed of (4.4), with associated initial and far field conditions,
\[\overline{u}(x,0)=\overline{g}(x)\quad\forall\,x\in\mathbb{R} \tag{4.6}\] \[\overline{u}(x,t)\to 0\text{ as }|x|\to\infty\text{ uniformly on }\overline{D}_{T}\text{ (each }T>0). \tag{4.7}\]
Here \(\overline{g}\in C^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\) is non-negative, with \(\left\|\overline{g}\right\|_{\infty}=1\), whilst \(\text{supp}(\overline{g})\subseteq[-x_{0},x_{0}]\) (\(x_{0}>0\)). This problem will be referred to as \(\left(\text{LIVP}\right)_{0}\) when \(u_{e}=0\), and as \(\left(\text{LIVP}\right)_{1}\) when \(u_{e}=1\). We now consider \(\left(\text{LIVP}\right)_{0}\) and \(\left(\text{LIVP}\right)_{1}\) in turn.
### Analysis of \(\boldsymbol{\left(\text{LIVP}\right)_{0}}\)
We first seek elementary solutions to (4.4) and (4.5) in the form,
\[\overline{u}(x,t)=e^{ikx-wt},\quad(x,t)\in\overline{D}_{\infty} \tag{4.8}\]
with \(k\in\mathbb{R}\) and \(w\in\mathbb{C}\). On substitution from (4.8) in (4.4) and (4.5) we obtain the dispersion relation
\[w=w_{0}(k)=Dk^{2}-1\quad\forall\,k\in\mathbb{R} \tag{4.9}\]
and so, as expected, \(\left(\text{LIVP}\right)_{0}\) is _nondispersive_\((w_{0}(k)\in\mathbb{R}\)\(\forall\)\(k\in\mathbb{R})\), and moreover, for each \(D>0\),
\[w_{0}(k)<0\quad\forall\,k\in\left(\frac{-1}{\sqrt{D}},\frac{1}{\sqrt{D}}\right) \tag{4.10}\]
with
\[w_{0}(0)=-1=\inf_{k\in\mathbb{R}}w_{0}(k). \tag{4.11}\]
We immediately conclude that the equilibrium state \(u_{e}=0\) is temporally unstable, according to the linearised theory, at each \(D>0\) (this conclusion readily extends to the fully nonlinear and nonlocal PDE (1.5) with (1.11), via an application of the parabolic comparison theorem to the operator \(N(w):=w_{t}-Dw_{xx}-w\) on \(D_{T}\); for brevity we omit the details).
The Fourier integral theorem allows us to write down the solution to \(\left(\text{LIVP}\right)_{0}\) as
\[\overline{u}(x,t)=\int_{-\infty}^{\infty}\widehat{g}(k)e^{-(Dk^{2}-1)t}e^{ikx }dk \tag{4.12}\]
for \((x,t)\in\overline{D}_{\infty}\) with \(\widehat{g}\in C^{\omega}(\mathbb{C})\) given by
\[\widehat{g}(k)=\frac{1}{2\pi}\int_{-x_{0}}^{x_{0}}\overline{g}(s)e^{-isk}ds \quad\forall\,k\in\mathbb{C}, \tag{4.13}\]
being the complex Fourier transform of \(\overline{g}\in C^{1}(\mathbb{R})\) (\(C^{\omega}(\mathbb{C})\) represents the set of entire complex analytic functions). We observe (after re-writing (4.12) via the Fourier convolution theorem) the positivity of \(\overline{u}:\overline{D}_{\infty}\to\mathbb{R}\). Moreover, an application of the method of steepest descent establishes the asymptotic form,
\[\overline{u}(x,t)\sim\frac{\sqrt{\pi}}{\sqrt{D}t^{\frac{1}{2}}}\widehat{g}(0)e ^{t}e^{-x^{2}/4Dt} \tag{4.14}\]
as \(t\to\infty\) uniformly for \(x\in\mathbb{R}\), with
\[\widehat{g}(0)=\frac{1}{2\pi}\int_{-x_{0}}^{x_{0}}\overline{g}(s)ds\quad(>0).\]
We observe from (4.14) that, as \(t\to\infty\), there are two symmetric 'wavefronts' where
\[|x|\sim 2\sqrt{D}t\text{ as }t\to\infty, \tag{4.15}\]
and behind the wavefronts \(\overline{u}\) is growing exponentially in \(t\), whilst ahead of the wavefronts \(\overline{u}\) is decaying exponentially in \(t\). This structure is consistent with (2.9), and the numerical solutions to (IBVP) in section 3.
### Analysis of \(\boldsymbol{\left(\text{LIVP}\right)_{1}}\)
We seek elementary solutions to (4.4) and (4.5) in the form of (4.8), with again \(k\in\mathbb{R}\) and \(w\in\mathbb{C}\). This now leads directly to the dispersion relation
\[w=w_{1}(k)=Dk^{2}+\frac{2}{k}\sin\frac{1}{2}k\quad\forall\,k\in\mathbb{R} \tag{4.16}\]
and we observe that \(w_{1}(k)\) is an even function of \(k\). Moreover \(w_{1}(k)\in\mathbb{R}\) for all \(k\in\mathbb{R}\) and so \(\left(\text{LIVP}\right)_{1}\) is _nondispersive_. In further analysing (4.16), it is convenient to introduce the function \(\Delta:\mathbb{R}^{+}\to\mathbb{R}\) such that
\[\Delta(X)=-\frac{2}{X^{3}}\sin\frac{1}{2}X\quad\forall\,X\in\mathbb{R}^{+}. \tag{4.17}\]
The zeros of \(\Delta(X)\) are at
\[X=2n\pi,\quad n\in\mathbb{N}, \tag{4.18}\]
whilst the turning points are at
\[X=\delta_{n},\quad n\in\mathbb{N}, \tag{4.19}\]
with \(2n\pi<\delta_{n}<2(n+1)\pi\), and
\[\delta_{n}\sim(2n+1)\pi\text{ as }n\to\infty. \tag{4.20}\]
Furthermore, \(\delta_{n}\) is a local maximum when \(n\) is odd and a local minimum when \(n\) is even. At each local maximum point we write,
\[\Delta_{r}=\Delta(\delta_{2r-1}),\quad r=1,2,\dots. \tag{4.21}\]
We observe that,
\[\Delta_{r}>0,\,\Delta_{r+1}<\Delta_{r},\quad r=1,2,\dots \tag{4.22}\]
and
\[\Delta_{r}\sim\frac{2}{\pi^{3}(4r-1)^{3}}\text{ as }r\to\infty \tag{4.23}\]
whilst a straightforward numerical calculation gives \(\Delta_{1}\approx 0.00297\).
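As a simple check, \(\delta_{1}\) and \(\Delta_{1}\) can be computed directly from (4.17) and (4.19); the following is a minimal sketch in Python (using SciPy), with the bracketing interval taken from (4.18):

```python
import numpy as np
from scipy.optimize import brentq

Delta = lambda X: -2.0 * np.sin(0.5 * X) / X**3

# delta_1 is the first local maximum of Delta, i.e. the root of
# 6 sin(X/2) = X cos(X/2) (equivalently tan(X/2) = X/6) in (2*pi, 4*pi)
delta1 = brentq(lambda X: 6.0*np.sin(0.5*X) - X*np.cos(0.5*X), 2.0*np.pi, 4.0*np.pi)
print(delta1, Delta(delta1))   # approximately 8.16 and 0.00297
```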
We can now readily interpret the dispersion relation (4.16). We first observe that for
\[D>\Delta_{1} \tag{4.24}\]
then \(w_{1}(k)>0\) for all \(k\in\mathbb{R}\) and so the equilibrium state \(u_{e}=1\) is temporally asymptotically stable. However, for
\[0<D<\Delta_{1} \tag{4.25}\]
then
\[\inf_{k\geq 0}w_{1}(k)=w_{1}(k_{m})=k_{m}^{2}(D-\Delta(k_{m}))<0, \tag{4.26}\]
and so the equilibrium state \(u_{e}=1\) is now temporally unstable. We note that \(k=k_{m}\) is the smallest positive root of the transcendental equation,
\[2Dk^{3}-2\sin\frac{1}{2}k+k\cos\frac{1}{2}k=0 \tag{4.27}\]
and
\[k_{m}\to\begin{cases}\delta_{1}\text{ as }&D\to\Delta_{1}^{-}\\ k_{0}\text{ as }&D\to 0^{+}\end{cases} \tag{4.28}\]
where \(k=k_{0}\) is the smallest positive root of the equation
\[\tan\frac{1}{2}k=\frac{1}{2}k \tag{4.29}\]
so that \(2\pi<k_{0}<3\pi\), and
\[w_{1}(k_{m})\to\frac{2}{k_{0}}\sin\frac{1}{2}k_{0}\text{ as }D\to 0^{+}. \tag{4.30}\]
We note that in both cases,
\[w_{1}(k)\sim Dk^{2}\text{ as }|k|\to\infty\]
and so \(\left(\text{LIVP}\right)_{1}\) is well-posed. Figure 4.1 shows the wavelength of the most unstable mode as a function of \(D\).
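The curve in Figure 4.1 can be reproduced by solving (4.27) for \(k_{m}\) at each \(D\in(0,\Delta_{1})\); a minimal sketch (the upper end of the bracketing interval is \(k_{0}\) from (4.29)):

```python
import numpy as np
from scipy.optimize import brentq

# k_0: smallest positive root of tan(k/2) = k/2, written as sin(y) - y cos(y) = 0 with y = k/2
k0 = 2.0 * brentq(lambda y: np.sin(y) - y*np.cos(y), np.pi, 1.5*np.pi)

def k_m(D):
    """Smallest positive root of (4.27): the most unstable wavenumber for 0 < D < Delta_1."""
    return brentq(lambda k: 2.0*D*k**3 - 2.0*np.sin(0.5*k) + k*np.cos(0.5*k), 0.1, k0)

for D in (1e-4, 1e-3, 2.5e-3):
    print(D, 2.0*np.pi/k_m(D))   # wavelengths between 2*pi/k0 ~ 0.70 and 2*pi/delta_1 ~ 0.77
```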
The solution to \(\left(\text{LIVP}\right)_{1}\) is readily obtained as
\[\overline{u}(x,t)=\int_{-\infty}^{\infty}\widehat{g}(k)e^{-w_{1}(k)t}e^{ikx}dk \tag{4.31}\]
for \((x,t)\in\overline{D}_{\infty}\), with \(\widehat{g}\) given in (4.13). We obtain from (4.31), via Laplace's method, that,
\[\overline{u}(x,t)\sim\frac{\sqrt{\pi}2^{3/2}R(k_{m})}{(w_{1}^{\prime\prime}(k _{m}))^{\frac{1}{2}}t^{\frac{1}{2}}}e^{-w_{1}(k_{m})t}\cos(k_{m}x+\phi(k_{m})) \tag{4.32}\]
for \(|x|=O(1)\) as \(t\to\infty\). Here \(R(k_{m})=|\widehat{g}(k_{m})|\) and \(\phi(k_{m})=\arg(\widehat{g}(k_{m}))\). It should be noted that further regions in the large-\(t\) structure of \(\overline{u}\) are required when \(|x|=O(t)\) as \(t\to\infty\), which gives the transition into the far field for \(|x|\gg O(t)\). We observe, from (4.32), that when \(0<D<\Delta_{1}\), the solution to \(\left(\text{LIVP}\right)_{1}\) evolves into an exponentially growing harmonic periodic state with spatial wave number \(k_{m}\), which depends upon \(D\), and temporal exponential growth rate \(|w_{1}(k_{m})|\). This periodic state is stationary, and evolves when \(|x|=O(1)\) as \(t\to\infty\), behind the transition into the far field when \(|x|=O(t)\) as \(t\to\infty\).
Figure 4.1. The wavelength, \(2\pi/k_{m}\), of the most unstable linear mode of instability of \(u=1\), for \(D\) up to \(\Delta_{1}\approx 0.00297\), calculated from the position of the minimum of \(w_{1}(k)\) given by (4.16).
In relation to (IBVP), the above analyses indicate that when \(t\) is large, two symmetric permanent form wavefronts propagate to left and right, with asymptotic propagation speeds of \(\pm 2\sqrt{D}\). However, _in the region to the rear of the two wavefronts at_ \(|x|\sim 2\sqrt{D}t\), the large-\(t\) structure of \(\left(\text{LIVP}\right)_{1}\) indicates that, as \(t\to\infty\) in (IBVP), and specifically when \(|x|=O(1)\) as \(t\to\infty\), _there are two possible steady spatial structures which emerge as a consequence of wavefront passage_, depending upon \(D\). With \(u:\overline{D}_{\infty}\to\mathbb{R}\) being the solution to (IBVP), these possibilities are:
* (P1) when \(D>\Delta_{1}\), then \(u(x,t)\to 1\) as \(t\to\infty\), uniformly with \(|x|=O(1)\)
* (P2) when \(0<D<\Delta_{1}\), then \(u(x,t)\to P(x)\) as \(t\to\infty\), uniformly with \(|x|=O(1)\). Here \(P:\mathbb{R}\to\mathbb{R}\) is a steady periodic solution to equation (1.5) with (1.11), which is positive, has \(1\in\mathrm{Im}(P)\) and has wavelength close to \(2\pi/k_{m}(D)\).
We recall that both (P1) and (P2) are supported by the numerical solutions to (IBVP) presented in section 3.
As a consequence of the theory in this section, and in particular to further enable the development of theory to support and extend the conjecture (P2), the next natural key step is to investigate the existence and structural nature of the positive periodic steady solutions to the nonlocal equation (1.5) with (1.11), which are conjectured to emerge in the large-\(t\) development of the solution to (IBVP). The starting point for this study is to investigate the emergence of periodic steady solutions via steady state bifurcations from the equilibrium solution \(u_{e}=1\). Thereafter, the structure of these fully nonlinear and nonlocal periodic solutions is developed in detail in significant asymptotic limits. In particular, a fully developed theory is obtained in the small \(D\) limit, when the nonlocal length scale is much larger than the diffusion length scale, leading to spatially periodic states consisting of separated, periodically distributed, localised humps, characterised by nonlocal effects which are regulated by weak diffusion.
## 5. Positive Periodic Steady States
We begin by considering the steady state form of the nonlocal equation (1.5) with (1.11), and in particular we seek to identify steady state bifurcations from the equilibrium state \(u_{e}=1\), which give rise to periodic steady states. Any steady state to the nonlocal equation (1.5) with (1.11), in the present context, is a function \(F:\mathbb{R}\to\mathbb{R}\), with \(F\in C^{2}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\), and such that,
\[DF_{xx}+F\left(1-\int_{x-\frac{1}{2}}^{x+\frac{1}{2}}F(y)dy\right)=0,\quad x\in \mathbb{R}. \tag{5.1}\]
We restrict attention to considering the existence of positive periodic steady states, which oscillate about the equilibrium state \(u_{e}=1\). Specifically, we will fix \(D>0\), and consider the bifurcation to periodic steady states from \(u_{e}=1\), with fundamental wavelength \(\lambda\), as the bifurcation parameter. As a preliminary we first observe, via bootstrapping in (5.1), that when \(F\in C^{2}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\) is a steady state, then in fact, \(F\in C^{\infty}(\mathbb{R})\). Moreover (see for example [11]) this can be improved to
\[F\in C^{\omega}(\mathbb{R})\cap L^{\infty}(\mathbb{R}). \tag{5.2}\]
Now, let \(F=F_{p}(x,\lambda,D)\) be a positive, periodic, steady state at diffusivity \(D\) and with fundamental wavelength \(\lambda>0\). We define
\[\alpha\equiv\max_{x\in[0,\lambda)}F_{p}(x,\lambda,D)-\min_{x\in[0,\lambda)}F_{p}(x,\lambda,D), \tag{5.3}\]
the peak-to-trough magnitude of this periodic steady state. In general, we anticipate that \(\alpha=\alpha(\lambda,D)\). For fixed \(D>0\), we now suppose that a bifurcation to periodic steady states occurs from \(u_{e}=1\) as \(\lambda\) passes through \(\lambda=\lambda_{b}\) (\(>0\)). To determine the possible values of \(\lambda_{b}\), we consider those values \(\lambda=\lambda_{b}\) when the linearized form of (5.1) has solution
\[F(x)=1+\alpha\cos\biggl{(}\frac{2\pi}{\lambda_{b}}x+\phi\biggr{)},\quad x\in \mathbb{R}, \tag{5.4}\]
as \(\alpha\to 0^{+}\), with \(\phi\) being a constant phase. On substitution from (5.4) into (5.1), and allowing \(\alpha\to 0^{+}\), a non-trivial solution requires that \(\lambda_{b}\) satisfies the transcendental equation, with fixed \(D>0\),
\[D=-\frac{\lambda_{b}^{3}}{4\pi^{3}}\sin\frac{\pi}{\lambda_{b}}=\Delta\left( \frac{2\pi}{\lambda_{b}}\right) \tag{5.5}\]
with \(\Delta(\cdot)\) as introduced in (4.17). We can now interpret the root structure of (5.5), using (4.18)-(4.24). In particular, for each \(r=1,2,\dots\), then, with
\[\Delta_{r+1}<D<\Delta_{r}, \tag{5.6}\]
equation (5.5) has exactly \(2r\) positive roots, which we label as \(\lambda_{i}^{\pm}(D)\) for \(\ i=1,2,\dots,r\), with
\[\frac{1}{2i}<\lambda_{i}^{-}(D)<\frac{2\pi}{\delta_{2i-1}}<\lambda_{i}^{+}(D)< \frac{1}{(2i-1)} \tag{5.7}\]
for each \(i=1,\dots,r\). We note that, for fixed \(k\in\mathbb{N}\), then \(\lambda_{k}^{\pm}(D)\) exist and are continuous for \(D\in[0,\Delta_{k})\). Moreover, the following properties are readily established:
\[\lambda_{k}^{\pm}(D)\to\frac{2\pi}{\delta_{2k-1}}\ \text{as}\ D\to\Delta_{k}^{-}, \tag{5.8}\] \[\lambda_{k}^{+(-)}(D)\ \text{is monotone decreasing (increasing) with}\ D\in[0,\Delta_{k}), \tag{5.9}\] \[\lambda_{k}^{+}(D)\to\frac{1}{(2k-1)}^{-}\ \text{as}\ D\to 0^{+}, \tag{5.10}\] \[\lambda_{k}^{-}(D)\to\frac{1}{2k}^{+}\ \text{as}\ D\to 0^{+}. \tag{5.11}\]
Now, fix \(r\in\mathbb{N}\), and fix \(\Delta_{r+1}<D<\Delta_{r}\). At this fixed \(D\), there are thus \(2r\) local bifurcation points to periodic steady states, namely at critical wavelengths
\[\lambda=\lambda_{k}^{\pm}(D),\quad k=1,2,\dots,r. \tag{5.12}\]
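These critical wavelengths are easily computed from (5.5); the following is a minimal illustrative sketch for \(\lambda_{k}^{\pm}(D)\):

```python
import numpy as np
from scipy.optimize import brentq

def bifurcation_wavelengths(D, k=1):
    """Sketch: the two roots lambda_k^-(D) < lambda_k^+(D) of (5.5) in the k-th tongue,
    valid for 0 < D < Delta_k."""
    n = 2*k - 1
    # turning point delta_{2k-1}, bracketed in (2*n*pi, 2*(n+1)*pi)
    delta = brentq(lambda X: 6.0*np.sin(0.5*X) - X*np.cos(0.5*X), 2*n*np.pi, 2*(n+1)*np.pi)
    f = lambda lam: -lam**3 * np.sin(np.pi/lam) / (4.0*np.pi**3) - D
    lam_minus = brentq(f, 1.0/(2*k), 2.0*np.pi/delta)
    lam_plus = brentq(f, 2.0*np.pi/delta, 1.0/(2*k - 1))
    return lam_minus, lam_plus

print(bifurcation_wavelengths(1e-3))   # roots either side of 2*pi/delta_1, approximately 0.77
```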
Further lengthy, but straightforward calculations establish that each bifurcation is a steady state pitchfork bifurcation from \(u_{e}=1\), being subcritical at \(\lambda=\lambda_{k}^{+}(D)\) and supercritical at \(\lambda=\lambda_{k}^{-}(D)\ (k=1,2,\dots,r)\). Additional investigation reveals that, for each \(k\), the two bifurcated branches at \(\lambda_{k}^{\pm}(D)\) are connected in the \((\lambda,\alpha)\) bifurcation diagram. We can represent this curve as \(\alpha=\alpha(\lambda;D)\) for \(\lambda_{k}^{-}(D)\leq\lambda\leq\lambda_{k}^{+}(D)\) (for each \(k=1,2,\dots,r\)). Close to the bifurcation points \(\lambda=\lambda_{k}^{\pm}(D)\) a weakly nonlinear theory is readily developed (details are omitted for brevity) to establish that,
\[\alpha(\lambda,D)=C_{k}^{\pm}(D)\left|\lambda-\lambda_{k}^{\pm}(D)\right|^{ \frac{1}{2}}+O(\left|\lambda-\lambda_{k}^{\pm}(D)\right|) \tag{5.13}\]
as \(\lambda\to(\lambda_{k}^{+}(D))^{-}\) or as \(\lambda\to(\lambda_{k}^{-}(D))^{+}\) respectively. Here \(C_{k}^{\pm}(D)\) is a positive constant. The bifurcated periodic steady states have the form
\[u=F_{p}(x,\lambda,D)=1+\alpha(\lambda,D)\cos\!\left(\frac{2\pi}{\lambda}x+ \phi\right)+O(\alpha^{2}(\lambda,D)) \tag{5.14}\]
as \(\lambda\to\lambda_{k}^{\pm}(D)\) respectively, with \(\alpha(\lambda,D)\) as in (5.13), and \(\phi\in[0,2\pi)\) being an arbitrary phase (representing translational invariance of (5.1) in \(x\)). In addition, numerical investigation confirms that, with \(D\) fixed, then \(\alpha(\lambda,D)\) has a single stationary point for \(\lambda\in[\lambda_{k}^{-}(D),\lambda_{k}^{+}(D)]\), which is a maximum point.
The numerically-calculated \((\lambda,\alpha)\) bifurcation diagram is given in Figure 5.1 for various values of \(D\). It is now straightforward to construct the bifurcation locus in the \((\lambda,D)\) plane. This is given directly by the intersection of the curve
\[D=-\frac{\lambda^{3}}{4\pi^{3}}\sin\frac{\pi}{\lambda}\]
with the positive quadrant of the \((\lambda,D)\) plane. This intersection is in the form of a countably infinite sequence of 'tongue like' curves based on the \(\lambda-\)axis, and with monotone decreasing 'height'. The \(i^{th}\) 'tongue' has base points at \(\lambda=1/(2i-1)\) and \(\lambda=1/2i\), with 'height' \(D=\Delta_{i}\). We label the set of interior points of the \(i^{th}\) 'tongue' as \(\Omega_{i}\), and its boundary as \(\partial\Omega_{i}=\overline{\Omega_{i}}\setminus\Omega_{i}\). We write
\[\Omega=\bigcup_{i=1}^{\infty}\Omega_{i},\quad\partial\Omega=\bigcup_{i=1}^{ \infty}\partial\Omega_{i}. \tag{5.15}\]
At each point \((\lambda,D)\in\Omega\) there is a unique (up to translation in \(x\)) periodic steady state, with fundamental wavelength \(\lambda\) and amplitude \(\alpha(\lambda,D)\). Since equation (5.1) is invariant under the transformation \(x\mapsto-x\), we may use this translational freedom to select, at each \((\lambda,D)\in\Omega\), a representative periodic steady state \(u=F_{p}(x,\lambda,D)\) which is an _even function_ of \(x\in\mathbb{R}\), so that
\[F_{p}(-x,\lambda,D)=F_{p}(x,\lambda,D)\quad\forall x\in\mathbb{R} \tag{5.16}\] \[F_{p}^{\prime}(0,\lambda,D)=0 \tag{5.17}\]
and
\[F_{p}^{\prime}\left(\pm\frac{1}{2}\lambda,\lambda,D\right)=0. \tag{5.18}\]
A plot of \(\Omega\) in the \((\lambda,D)\) plane is shown in Figure 5.2. The numerically-calculated maximum value of \(u=F_{p}(x,\lambda,D)\), which we denote by \(u_{max}\), is shown in Figure 5.3 for the first four tongues of \(\Omega\). This is a more suitable measure of amplitude than \(\alpha\) for the purposes of this semi-logarithmic plot, since \(\alpha\) is zero at the edges of the region where the nonlinear solutions exist.
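To indicate how such solutions can be computed, the following is a minimal sketch (not the scheme used for the figures; the grid sizes, sample counts and initial amplitude are illustrative): exploiting the evenness (5.16)-(5.18), we discretise (5.1) on the half period \([0,\lambda/2]\), evaluate the nonlocal term by folding the even periodic extension back into the half period, and solve the resulting algebraic system with a Newton-type solver, starting from a perturbation of \(u=1\).

```python
import numpy as np
from scipy.optimize import fsolve

def periodic_steady_state(lam, D, N=200, M=400):
    """Sketch: even periodic steady state F_p of (5.1), represented on [0, lam/2]."""
    x = np.linspace(0.0, 0.5*lam, N)
    h = x[1] - x[0]

    def fold(y):
        # map y onto [0, lam/2] using the periodicity and evenness of F_p
        y = np.mod(y, lam)
        return np.where(y > 0.5*lam, lam - y, y)

    def residual(F):
        res = np.empty(N)
        s = np.linspace(-0.5, 0.5, M)            # quadrature points for the top hat kernel
        for i, xi in enumerate(x):
            if 0 < i < N - 1:
                I = np.trapz(np.interp(fold(xi + s), x, F), s)
                res[i] = D*(F[i+1] - 2.0*F[i] + F[i-1])/h**2 + F[i]*(1.0 - I)
        res[0] = F[1] - F[0]                     # F'(0) = 0
        res[N-1] = F[N-1] - F[N-2]               # F'(lam/2) = 0
        return res

    F0 = 1.0 + 0.5*np.cos(2.0*np.pi*x/lam)       # perturbation of u = 1, to select the nontrivial branch
    return x, fsolve(residual, F0)
```

Depending on \((\lambda,D)\) and the size of the initial perturbation, such an iteration may instead return the trivial state \(u=1\), in which case a larger initial amplitude, or continuation in \(\lambda\) or \(D\), is helpful.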
We now examine the structure of the periodic steady states with \((\lambda,D)\in\overline{\Omega}\). Firstly, on the bifurcation locus, with \((\lambda,D)\in\partial\Omega\cap(\mathbb{R}^{+})^{2}\) we have
\[F_{p}(x,\lambda,D)=1\quad\forall x\in\mathbb{R}, \tag{5.19}\]
Figure 5.2. The region \(\Omega\) lies below these curves, or tongues.
whilst it follows from (5.13) and (5.14) that
\[F_{p}(x,\lambda,D)=1+O(|\lambda-\lambda_{0}|^{\frac{1}{2}}+|D-D_{0}|^{\frac{1}{2} })\text{ as }(\lambda,D)\to(\lambda_{0},D_{0}) \tag{5.20}\]
uniformly for \(x\in\mathbb{R}\), with \((\lambda_{0},D_{0})\in\partial\Omega\cap(\mathbb{R}^{+})^{2}\) and \((\lambda,D)\in\Omega\). In terms of regularity, it follows from (5.2) and (5.1) (see for example, [11]) that
\[F_{p}\in C^{\omega,0,0}(\mathbb{R}\times(\overline{\Omega}\cap(\mathbb{R}^{+}) ^{2})). \tag{5.21}\]
We can now establish the following bounds:
* (B1) For any \((\lambda,D)\in\Omega\), then, \[\inf_{x\in\mathbb{R}}F_{p}(x,\lambda,D)>0\]
Proof.: Set \((\lambda^{*},D^{*})\in\Omega\), then \((\lambda^{*},D^{*})\in\Omega_{i}\) for some \(i\in\mathbb{N}\). The point \((\lambda^{*},D^{*})\) can then be connected, in \(\Omega_{i}\), to a point \((\lambda_{0},D_{0})\in\partial\Omega_{i}\cap(\mathbb{R}^{+})^{2}\) via a straight line segment \(L\).
Figure 5.3. Semilogarithmic plots of \(u_{max}\) in the first four tongues of \(\Omega\).
Now, suppose that
\[\inf_{x\in\mathbb{R}}F_{p}(x,\lambda^{*},D^{*})\leq 0. \tag{5.22}\]
We recall from (5.19) that
\[\inf_{x\in\mathbb{R}}F_{p}(x,\lambda_{0},D_{0})=1. \tag{5.23}\]
Thus, from (5.21), it follows that there exists \((\lambda_{1},D_{1})\in L\cap\Omega_{i}\), such that
\[\inf_{x\in\mathbb{R}}F_{p}(x,\lambda_{1},D_{1})=0. \tag{5.24}\]
As a consequence (since \(F_{p}\) is periodic in \(x\in\mathbb{R}\), together with (5.21)) there exists \(x_{1}\in\mathbb{R}\) such that
\[F_{p}(x_{1},\lambda_{1},D_{1})=0. \tag{5.25}\]
It then follows, via (5.24), (5.25), (5.21) and (5.1), that
\[F_{p}^{(n)}(x_{1},\lambda_{1},D_{1})=0\quad\forall\,n\in\mathbb{N}. \tag{5.26}\]
It is then a consequence of (5.26) with (5.21) that,
\[F_{p}(x,\lambda_{1},D_{1})=0\quad\forall\,x\in\mathbb{R}.\]
However, \((\lambda_{1},D_{1})\in\Omega_{i}\), and so \(F_{p}(x,\lambda_{1},D_{1})\) is a _non-trivial_ periodic function in \(x\in\mathbb{R}\), with fundamental wavelength \(\lambda_{1}\), and so we arrive at a contradiction. The result follows.
We next have
* (B2) For any \((\lambda,D)\in\Omega\), then, \[\sup_{x\in\mathbb{R}}F_{p}(x,\lambda,D)>1\]
Proof.: Set \((\lambda,D)\in\Omega\), then \((\lambda,D)\in\Omega_{i}\) for some \(i\in\mathbb{N}\). Suppose that
\[\sup_{x\in\mathbb{R}}F_{p}(x,\lambda,D)\leq 1. \tag{5.27}\]
Then since \(F_{p}(x,\lambda,D)\) is a non-trivial periodic function in \(x\in\mathbb{R}\), with fundamental wavelength \(\lambda\), it follows that there exists \(x^{*}\in\mathbb{R}\) such that
\[0<F_{p}(x^{*},\lambda,D)=\inf_{x\in\mathbb{R}}F_{p}(x,\lambda,D)<1, \tag{5.28}\]
with the left hand inequality following from (B1). From (5.27) and (5.28) it follows that,
\[\int_{x^{*}-\frac{1}{2}}^{x^{*}+\frac{1}{2}}F_{p}(y,\lambda,D)dy<1. \tag{5.29}\]
Also, from (5.1), we have
\[F_{p}^{\prime\prime}(x^{*},\lambda,D)=-F_{p}(x^{*},\lambda,D)\left(1-\int_{x^{ *}-\frac{1}{2}}^{x^{*}+\frac{1}{2}}F_{p}(y,\lambda,D)dy\right)<0, \tag{5.30}\]
via (5.28) and (5.29). However, via (5.28) and (5.21), \(x=x^{*}\) is a local minimum point for \(F_{p}(x,\lambda,D)\), and so
\[F_{p}^{\prime\prime}(x^{*},\lambda,D)\geq 0\]
which contradicts (5.30). Thus,
\[\sup_{x\in\mathbb{R}}F_{p}(x,\lambda,D)>1,\]
as required.
Correspondingly, we now have
* (B3) For any \((\lambda,D)\in\Omega\), then, \[\inf_{x\in\mathbb{R}}F_{p}(x,\lambda,D)<1\]
Proof.: This follows, with the obvious adjustments, from that of (B2).
To complete this section, we next consider in detail the structure of the periodic steady states in the \((\lambda,D)\) plane as \(D\to 0^{+}\). We begin by examining this limit for the periodic steady states in the principal tongue \(\Omega_{1}\), and then give the corresponding results for \(\Omega_{i}\), \(i=2,3,4,\ldots\).
### Asymptotic structure as \(D\to 0^{+}\) with \((\lambda,D)\in\Omega_{1}\)
We consider the asymptotic structure of \(F_{p}(x,\lambda,D)\) with \((\lambda,D)\in\Omega_{1}\) as \(D\to 0^{+}\). Due to the evenness of \(F_{p}\) in \(x\), we need only consider this structure for \(x\in\left[0,\frac{1}{2}\lambda\right]\), with \(\lambda\in\left(\frac{1}{2},1\right)\) fixed as \(D\to 0^{+}\). We recall that, \(F_{p}(x,\lambda,D)>0\) for all \(x\in\left[0,\frac{1}{2}\lambda\right]\), whilst evenness requires
\[F_{p}^{\prime}(0,\lambda,D)=F_{p}^{\prime}\left(\frac{1}{2}\lambda,\lambda,D \right)=0. \tag{5.31}\]
An examination of (5.1), together with a consideration of numerical solutions, dictates that, for \(\lambda\in\left(\frac{1}{2},1\right)\) fixed, the asymptotic structure of \(F_{p}(x,\lambda,D)\), as \(D\to 0^{+}\), develops into a three region structure as follows:
_Region \(I\)_ (support region)
\[x\in\left[0,S-O(D^{\gamma})\right)\text{, }F_{p}=O(1)^{+}\text{ as }D\to 0^{+}\]
with \(0<S<\frac{1}{2}\lambda\) and \(\gamma>0\) to be determined. In general, \(S\) will depend upon both \(\lambda\) and \(D\), with \(S=O(1)^{+}\) as \(D\to 0^{+}\), and \(\lambda\in\left(\frac{1}{2},1\right)\). With this in mind, we expand \(S(\lambda,D)\) as,
\[S(\lambda,D)=\bar{S}(\lambda)+D^{\gamma}S_{1}(\lambda)+o(D^{\gamma}),\text{ as }D\to 0^{+} \tag{5.32}\]
with \(\lambda\in\left(\frac{1}{2},1\right)\), which allows for weak displacements in the location of region II below.
_Region II_ (transition region)
\[x\in\left(S-O(D^{\gamma}),S+O(D^{\gamma})\right),\ F_{p}=O(D^{\delta})^{+}\text { as }D\to 0^{+}\]
with \(\delta>0\) to be determined.
_Region III_ (exponential region)
\[x\in\left(S+O(D^{\gamma}),\frac{1}{2}\lambda\right],\ F_{p}=O(E(D))^{+}\text{ as }D\to 0^{+}\]
with \(E(D)\) indicating terms exponentially small in \(D\) as \(D\to 0^{+}\).
The above structure requires, for the change in structure across region II, that the span of region III (which is the leading order separation of consecutive support regions) must have the same length as the half span of the nonlocal term, which is \(\frac{1}{2}\), and so,
\[\bar{S}+\frac{1}{2}=\lambda-\bar{S}\]
which gives,
\[\bar{S}=\frac{1}{2}\left(\lambda-\frac{1}{2}\right)=\bar{S}(\lambda) \tag{5.33}\]
and we note from this that
\[0<\bar{S}(\lambda)<\frac{1}{4}\]
for \(\lambda\in\left(\frac{1}{2},1\right)\), with,
\[\bar{S}(\lambda)-\frac{1}{2}<-\bar{S}(\lambda)\quad\forall\,\lambda\in\left( \frac{1}{2},1\right). \tag{5.34}\]
With (5.32)-(5.34), then equation (5.1) requires that
\[\int_{x-\frac{1}{2}}^{x+\frac{1}{2}}F_{p}(y,\lambda,D)dy=\int_{-\frac{1}{2}}^{ \frac{1}{2}}F_{p}(y,\lambda,D)dy+O(E(D))\text{ as }D\to 0^{+}, \tag{5.35}\]
for \(x\in\left[0,\bar{S}(\lambda)-O(D^{\gamma})\right)\), whilst
\[\int_{-\frac{1}{2}}^{\frac{1}{2}}F_{p}(y,\lambda,D)dy=1+\alpha_{1}D+\beta_{1}D^{ r}+o(D^{r}) \tag{5.36}\]
as \(D\to 0^{+}\), with the constants \(r>1\), \(\alpha_{1}\) and \(\beta_{1}\) to be determined.
We begin in region I and expand in the form
\[F_{p}(x,\lambda,D)=F_{0}(x,\lambda)+D^{m}F_{1}(x,\lambda)+o(D^{m})\text{ as }D\to 0^{+}, \tag{5.37}\]
with \(x\in[0,S(\lambda)-O(D^{\gamma}))\) and \(m>0\) to be determined. We substitute into (5.1), to obtain, at leading order as \(D\to 0^{+}\),
\[F_{0}^{\prime\prime}-\alpha_{1}F_{0}=0,\quad 0<x<\bar{S}(\lambda) \tag{5.38}\] \[F_{0}^{\prime}(0,\lambda)=F_{0}(\bar{S}(\lambda),\lambda)=0 \tag{5.39}\] \[F_{0}(x,\lambda)>0,\quad 0\leq x<\bar{S}(\lambda) \tag{5.40}\]
with the second condition in (5.39) required by asymptotic matching (via Van Dyke's principle, [18]) between region I and region II. Finally, (5.36) requires
\[\int_{0}^{S(\lambda)}F_{0}(y,\lambda)dy=\frac{1}{2}, \tag{5.41}\]
using the evenness of \(F_{p}(x,\lambda,D)\), and its asymptotic form in regions II and III. Now, the problem (5.38)-(5.40) is a classical Sturm-Liouville eigenvalue problem, with eigenvalue \(-\alpha_{1}\), whilst condition (5.40) requires that \(-\alpha_{1}\) is the _lowest_ eigenvalue. This may be treated directly, to obtain,
\[\alpha_{1}=-\frac{\pi^{2}}{\left(\lambda-\frac{1}{2}\right)^{2}} \tag{5.42}\]
with
\[F_{0}(x,\lambda)=B\cos\frac{\pi x}{\left(\lambda-\frac{1}{2}\right)},\quad 0 \leq x\leq\bar{S}(\lambda) \tag{5.43}\]
and \(B>0\) an arbitrary constant. It remains to apply condition (5.41), which gives,
\[B=\frac{\pi}{(2\lambda-1)}, \tag{5.44}\]
and so,
\[F_{0}(x,\lambda)=\frac{\pi}{(2\lambda-1)}\cos\frac{\pi x}{\left(\lambda-\frac {1}{2}\right)},\quad 0\leq x\leq\bar{S}(\lambda) \tag{5.45}\]
and \(\bar{S}(\lambda)=\frac{1}{2}\left(\lambda-\frac{1}{2}\right)\). Via (5.35), (5.36) and (5.42), we also have
\[\int_{x-\frac{1}{2}}^{x+\frac{1}{2}}F_{p}(y,\lambda,D)dy=2\int_{0}^{\frac{1}{2} }F_{p}(y,\lambda,D)dy=1-\frac{\pi^{2}}{\left(\lambda-\frac{1}{2}\right)^{2}}D+ \beta_{1}D^{r}+o(D^{r}) \tag{5.46}\]
as \(D\to 0^{+}\), with \(x\in[0,S(\lambda)-O(D^{\gamma}))\). This completes the leading order structure in region I. We now move on to consider region II, the transition region. In region II we introduce the scaled coordinate \(X=O(1)\) as \(D\to 0^{+}\), given by,
\[x=S(\lambda,D)+D^{\gamma}X \tag{5.47}\]
It then follows from the expansion in region I, on moving into region II, via (5.45), (5.37) with (5.41), that \(F_{p}=O(D^{\gamma})\) as \(D\to 0^{+}\) in region II. Thus, we expand in the form
\[F_{p}(X,\lambda,D)=D^{\gamma}\bar{F}_{0}(X,\lambda)+o(D^{\gamma})\text{ as }D\to 0^{+} \tag{5.48}\]
with \(X=O(1)\). We remark here that the term at \(O(D^{\gamma})\) in \(S(\lambda,D)\) could be removed by a constant shift in the coordinate \(X\). However, it is convenient for later matching purposes to leave this shift in \(S\); this also leads to a parameter free equation at leading order, with all parameters shifted into the matching condition. We next consider the form of the nonlocal term in (5.1) in region II, which can be written as,
\[\int_{x-\frac{1}{2}}^{x+\frac{1}{2}}F_{p}(y,\lambda,D)dy=\int_{S(\lambda,D)- \frac{1}{2}+D^{\gamma}X}^{S(\lambda,D)+\frac{1}{2}+D^{\gamma}X}F_{p}(y,\lambda,D)dy \tag{5.49}\]
Now, using the periodicity and evenness of \(F_{p}\), and its structure in each of regions I-III, we have firstly that,
\[\int_{S(\lambda,D)-\frac{1}{2}+D^{\gamma}X}^{S(\lambda,D)-\frac{1}{2}}F_{p}(y,\lambda,D)dy=O(E(D)D^{\gamma}) \tag{5.50}\]
as \(D\to 0^{+}\) with \(X=O(1)\), whilst,
\[\int_{S(\lambda,D)-\frac{1}{2}}^{S(\lambda,D)+\frac{1}{2}-O(1)}F_{p}(y, \lambda,D)dy=1-\frac{\pi^{2}}{\left(\lambda-\frac{1}{2}\right)^{2}}D+\beta_{1} D^{r}+o(D^{r}) \tag{5.51}\]
as \(D\to 0^{+}\). The final term in (5.49) may now be written as
\[\int_{S(\lambda,D)+\frac{1}{2}-O(1)}^{S(\lambda,D)+\frac{1}{2}+D^{\gamma}X}F_{p}(y,\lambda,D)dy=D^{\gamma}\int_{Y=-O(D^{-\gamma})}^{Y=X}F_{p}\left(S(\lambda,D)+\frac{1}{2}+D^{\gamma}Y,\lambda,D\right)dY\]
\[=D^{\gamma}\int_{Y=-O(D^{-\gamma})}^{Y=X}F_{p}(\lambda-S(\lambda,D)+D^{\gamma}Y,\lambda,D)dY=D^{\gamma}\int_{Y=-O(D^{-\gamma})}^{Y=X}F_{p}(-S(\lambda,D)+D^{\gamma}Y,\lambda,D)dY\]
\[=D^{\gamma}\int_{Y=-O(D^{-\gamma})}^{Y=X}F_{p}(S(\lambda,D)-D^{\gamma}Y,\lambda,D)dY\sim D^{2\gamma}\int_{Y=-\infty}^{Y=X}\bar{F}_{0}(-Y,\lambda)dY=D^{2\gamma}\int_{z=-X}^{z=\infty}\bar{F}_{0}(z,\lambda)dz \tag{5.52}\]
as \(D\to 0^{+}\) with \(X=O(1)\). Using (5.50)-(5.52) in (5.49), we finally arrive at
\[\int_{S(\lambda,D)-\frac{1}{2}+D^{\gamma}X}^{S(\lambda,D)+\frac{1}{2}+D^{ \gamma}X}F_{p}(y,\lambda,D)dy=1-\frac{\pi^{2}}{\left(\lambda-\frac{1}{2} \right)^{2}}D+\beta_{1}D^{r}+D^{2\gamma}\int_{z=-X}^{z=\infty}\bar{F}_{0}(z, \lambda)dz+o(D^{2\gamma},D^{r},E(D)) \tag{5.53}\]
as \(D\to 0^{+}\) with \(X=O(1)\). We now substitute from (5.47), (5.48) and (5.53) into equation (5.1). To obtain a non-trivial balance at leading order requires us to choose
\[\gamma=\frac{1}{4}. \tag{5.54}\]
At leading order, equation (5.1) then becomes,
\[\Psi_{\bar{X}\bar{X}}-\Psi\int_{w=-\bar{X}}^{w=\infty}\Psi(w)dw=0,\quad\bar{X }\in\mathbb{R}, \tag{5.55}\]
which is both nonlinear and nonlocal. In (5.55) we have introduced the simple scalings
\[\bar{F}_{0}=(2\lambda-1)^{-\frac{3}{2}}\Psi,\ X=(2\lambda-1)^{\frac{1}{2}}\bar {X}. \tag{5.56}\]
Equation (5.55) is now completed by matching conditions between region II and region I (as \(\bar{X}\to-\infty\)) and region III (as \(\bar{X}\to\infty\)). The matching process is straightforward (following Van Dyke's asymptotic matching principle, [18]), and leads to the two boundary conditions,
\[\Psi(\bar{X})\to 0\ \text{as}\ \bar{X}\to\infty, \tag{5.57}\] \[\Psi(\bar{X})\sim-2\pi^{2}(\bar{X}+l)\ \text{as}\ \bar{X}\to-\infty. \tag{5.58}\]
Here
\[S_{1}(\lambda)=(2\lambda-1)^{\frac{1}{2}}l. \tag{5.59}\]
It is also required that,
\[\Psi(\bar{X})>0\quad\forall\,\bar{X}\in\mathbb{R}. \tag{5.60}\]
We observe that the problem (5.55), (5.57), (5.58) and (5.60) is a nonlinear, nonlocal eigenvalue problem, with real eigenvalue \(l\). It is also free of parameters. A numerical investigation of this problem establishes that a solution exists if and only if \(l=l^{*}\), and the solution is unique. Here
\[l^{*}\approx-\frac{3.493}{2\pi^{2}}, \tag{5.61}\]
and a numerically-determined graph of \(\Psi(\bar{X})\) against \(\bar{X}\), calculated using a simple finite difference method and the trapezium rule, is shown in Figure 5.4. It is also straightforward to establish that
\[\Psi(\bar{X})=-2\pi^{2}(\bar{X}+l^{*})+\Psi_{-\infty}\bar{X}^{-2}e^{-\frac{1}{2}\pi\bar{X}^{2}}(1+o(1)) \tag{5.62}\]
as \(\bar{X}\to-\infty\), whilst,
\[\Psi(\bar{X})=\Psi_{\infty}e^{-\frac{1}{2}\pi\bar{X}^{2}}(1+o(1)) \tag{5.63}\]
as \(\bar{X}\to+\infty\). Here \(\Psi_{\infty}\) (\(>0\)) and \(\Psi_{-\infty}\) are parameter free, globally determined constants. In principle, these can be determined from the numerical solution shown in Figure 5.4, but in
practice, since these corrections are exponentially-small, we are not able to resolve them with sufficient accuracy.
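For completeness, the following is one way to set this calculation up; it is only a sketch (the truncation values, grid and initial guess are illustrative, and convergence of the iteration to the solution described above is not guaranteed). The domain is truncated, \(l\) is treated as an additional unknown, and the system is closed with the linear behaviour (5.58) (value and slope) at the left end and decay at the right end.

```python
import numpy as np
from scipy.optimize import fsolve

XL, XR, N = 20.0, 10.0, 600
X = np.linspace(-XL, XR, N)
h = X[1] - X[0]

def residual(q):
    Psi, l = q[:-1], q[-1]
    # cumulative trapezium integral I(x) = integral of Psi from -XL to x
    I = np.concatenate(([0.0], np.cumsum(0.5*(Psi[1:] + Psi[:-1])*h)))
    # integral of Psi from -X to infinity ~ I(XR) - I(-X); np.interp clamps to I(XR)
    # when -X > XR, which is the correct (negligible) limit there
    nonlocal_term = I[-1] - np.interp(-X, X, I)
    r = np.empty(N + 1)
    r[1:N-1] = (Psi[2:] - 2.0*Psi[1:-1] + Psi[:-2])/h**2 - Psi[1:-1]*nonlocal_term[1:-1]
    r[0] = Psi[0] + 2.0*np.pi**2*(X[0] + l)       # Psi ~ -2 pi^2 (X + l) as X -> -infinity
    r[N-1] = Psi[-1]                              # Psi -> 0 as X -> +infinity
    r[N] = (Psi[1] - Psi[0])/h + 2.0*np.pi**2     # slope of the linear behaviour
    return r

l0 = -3.493/(2.0*np.pi**2)
Psi0 = np.maximum(-2.0*np.pi**2*(X + l0), 0.0) + np.exp(-0.5*np.pi*X**2)
sol = fsolve(residual, np.concatenate((Psi0, [l0])))
print(sol[-1])    # to be compared with l* ~ -3.493/(2 pi^2) from (5.61), if the iteration converges
```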
We now return to region I. To allow matching with region II we now require
\[m=\frac{1}{4}, \tag{5.64}\]
whilst the most structured balance in equation (5.1) dictates that
\[r=m+1=\frac{5}{4}. \tag{5.65}\]
The problem for \(F_{1}\) is then,
\[F_{1}^{\prime\prime}+\frac{\pi^{2}}{\left(\lambda-\frac{1}{2} \right)^{2}}F_{1} =\beta_{1}F_{0}(x,\lambda)\] \[=\frac{\beta_{1}\pi}{(2\lambda-1)}\cos\frac{\pi x}{\left( \lambda-\frac{1}{2}\right)},\quad 0<x<\frac{1}{2}\left(\lambda-\frac{1}{2} \right), \tag{5.66}\]
\[F_{1}^{\prime}(0,\lambda)=F_{1}\left(\frac{1}{2}\left(\lambda-\frac{1}{2}\right),\lambda\right)=0, \tag{5.67}\] \[\int_{0}^{\frac{1}{2}\left(\lambda-\frac{1}{2}\right)}F_{1}(y,\lambda)dy=0. \tag{5.68}\]
The second of conditions (5.67) arises from matching with region II (with both expansions taken up to \(O(D^{\frac{1}{4}})\) in regions I and II). A solution to (5.66) and (5.67) exists if and only if
\[\beta_{1}=0, \tag{5.69}\]
after which
\[F_{1}(x,\lambda)=C\cos\frac{\pi x}{\left(\lambda-\frac{1}{2}\right)} \tag{5.70}\]
with \(C\) an arbitrary real constant. However, substitution from (5.70) into (5.68), then requires \(C=0\), and so,
\[F_{1}(x,\lambda)=0\quad\forall\,x\in\left[0,\frac{1}{2}\left(\lambda-\frac{1} {2}\right)\right]. \tag{5.71}\]
Thus, in region I we have
\[F_{p}(x,\lambda,D)=\frac{\pi}{(2\lambda-1)}\cos\frac{\pi x}{\left(\lambda- \frac{1}{2}\right)}+o(D^{\frac{1}{4}}) \tag{5.72}\]
as \(D\to 0^{+}\) with \(x\in\left[0,\frac{1}{2}\left(\lambda-\frac{1}{2}\right)\right)\).
We finally move into the exponential region, region III. In this region \(F_{p}\) is exponentially small in \(D\) as \(D\to 0^{+}\), and develops as a WKB expansion. For brevity we omit details, but obtain
\[F_{p}(x,\lambda,D)=A^{*}(\lambda)e^{-\Phi_{0}(\lambda)D^{-\frac{1}{2}}}\cosh \biggl{(}\frac{\Phi(x)}{D^{\frac{1}{2}}}\biggr{)}(1+o(1)) \tag{5.73}\]
as \(D\to 0^{+}\), with \(x\in\left(\frac{1}{2}(\lambda-\frac{1}{2})+O(D^{\frac{1}{4}}),\frac{1}{2} \lambda\right]\). Here,
\[\Phi(x)=\frac{1}{\sqrt{2}}\int_{x}^{\frac{1}{2}\lambda}\left(1-\sin\biggl{(} \frac{\pi w}{\left(\lambda-\frac{1}{2}\right)}\biggr{)}\right)^{\frac{1}{2}}dw \quad\forall\,x\in\left[\frac{1}{2}\left(\lambda-\frac{1}{2}\right),\frac{1}{ 2}\lambda\right) \tag{5.74}\]
and
\[\Phi_{0}(\lambda)=\Phi\left(\frac{1}{2}\left(\lambda-\frac{1}{2}\right)\right). \tag{5.75}\]
The constant \(A^{*}(\lambda)\) is independent of \(D\) and is related to the constant \(\Psi_{\infty}\) and \(\lambda\), via matching expansion (5.73) as \(x\to\frac{1}{2}\left(\lambda-\frac{1}{2}\right)^{-}\) with expansion (5.48) as \(X\to\infty\), which gives,
\[A^{*}(\lambda)=\frac{2\Psi_{\infty}}{(2\lambda-1)^{\frac{3}{2}}} \tag{5.76}\]
with details omitted for brevity. This completes the asymptotic structure as \(D\to 0^{+}\), with fixed \(\lambda\in\left(\frac{1}{2},1\right)\).
We make the following key observations. First, up to terms exponentially small in D, as \(D\to 0^{+}\),
\[\operatorname{supp}(F_{p}(x,\lambda,D))=[-S(\lambda,D),S(\lambda,D)] \tag{5.77}\]
with
\[S(\lambda,D)=\frac{1}{2}\left(\lambda-\frac{1}{2}\right)-\frac{3.493}{\sqrt{2 }\pi^{2}}\left(\lambda-\frac{1}{2}\right)^{\frac{1}{2}}D^{\frac{1}{4}}+o(D^{ \frac{1}{4}})\,\text{ as }D\to 0^{+}, \tag{5.78}\]
whilst,
\[\alpha(\lambda,D)=\frac{\pi}{(2\lambda-1)}+o(D^{\frac{1}{4}})\,\text{ as }D\to 0^{+} \tag{5.79}\]
with the maximum of \(F_{p}(x,\lambda,D)\) attained at \(x=0\). In addition,
\[\int_{-S(\lambda,D)}^{+S(\lambda,D)}F_{p}(y,\lambda,D)dy=1-\frac{\pi^{2}}{ \left(\lambda-\frac{1}{2}\right)^{2}}D+o(D^{\frac{5}{4}}) \tag{5.80}\]
as \(D\to 0^{+}\). For comparison, we present in Figure 5.5, a numerically determined representation of \(F_{p}\), with \(\lambda=0.75\) and various values of \(D\). This is in excellent agreement with the asymptotic form developed above.
Finally, we observe from (5.77)-(5.80) in particular, that the asymptotic structure developed above, for \(F_{p}(x,\lambda,D)\) as \(D\to 0^{+}\), with fixed \(\lambda\in\left(\frac{1}{2},1\right)\), becomes nonuniform as \(\lambda\to\frac{1}{2}\), and, specifically, when \(\lambda=\frac{1}{2}+O(D^{\frac{1}{2}})\) as \(D\to 0^{+}\). For the present, we continue by considering the asymptotic structure of \(F_{p}(x,\lambda,D)\) as \(D\to 0^{+}\) in each of the remaining tongues \(\Omega_{i}\), \(i=2,3,\ldots\).
### Asymptotic structure as \(D\to 0^{+}\) with \((\lambda,D)\in\Omega_{i}\), \(i=2,3,\ldots\)
We now consider the asymptotic form of \(F_{p}(x,\lambda,D)\) with \((\lambda,D)\in\Omega_{i}\) (\(i\geq 2\)) as \(D\to 0^{+}\). We, again, need only consider the structure for \(x\in\left[0,\frac{1}{2}\lambda\right]\), with now \(\lambda\in\left(\frac{1}{2i},\frac{1}{2i-1}\right)\) fixed, as \(D\to 0^{+}\). The structure is again as in the case of \(\Omega_{1}\), and for brevity, we restrict attention to the key support region, labelled as region I. The analysis follows exactly that of the previous subsection, and we therefore present only the salient features. In this case we obtain,
\[\bar{S}(\lambda)=\frac{1}{2}i\left(\lambda-\frac{1}{2i}\right), \tag{5.81}\] \[F_{0}(x,\lambda)=\frac{\pi}{(2i-1)(2i\lambda-1)}\cos\frac{\pi x}{i\left(\lambda-\frac{1}{2i}\right)}\quad\forall\,x\in[0,\bar{S}(\lambda)], \tag{5.82}\] \[(2i-1)\int_{-S(\lambda,D)}^{S(\lambda,D)}F_{p}(y,\lambda,D)dy=1-\frac{\pi^{2}}{i^{2}\left(\lambda-\frac{1}{2i}\right)^{2}}D+o(D) \tag{5.83}\]
as \(D\to 0^{+}\), whilst,
\[\alpha(\lambda,D)=\frac{\pi}{(2i-1)(2i\lambda-1)}+o(1)\text{ as }D\to 0^{+} \tag{5.84}\]
with \(\lambda\in\left(\frac{1}{2i},\frac{1}{2i-1}\right)\) fixed. We observe that, as before, this structure becomes nonuniform when \(\lambda=\frac{1}{2i}+o(1)\) as \(D\to 0^{+}\).
Figure 5.5. The solution \(F_{p}\), calculated numerically for \(\lambda=\frac{3}{4}\) and \(D=10^{-3}\), \(10^{-4}\), \(10^{-5}\), \(10^{-6}\), \(10^{-7}\) and \(10^{-8}\). The broken line shows the leading order outer solution given by (5.72).
### Asymptotic structure as \(D\to 0^{+}\) with \(\lambda=\frac{1}{2}+O(D^{\frac{1}{2}})\)
We return to \((\lambda,D)\in\Omega_{1}\), and now develop the structure to \(F_{p}(x,\lambda,D)\) with \(\lambda=\frac{1}{2}+O(D^{\frac{1}{2}})\) and \(x\in\left[0,\frac{1}{2}\lambda\right]\) as \(D\to 0^{+}\). We write
\[\lambda=\frac{1}{2}+D^{\frac{1}{2}}\bar{\lambda} \tag{5.85}\]
with \(\bar{\lambda}=O(1)^{+}\) as \(D\to 0^{+}\). An examination of (5.77)-(5.79), with (5.85), then requires,
\[x=O(D^{\frac{1}{2}})^{+}\text{ and }F_{p}=O(D^{-\frac{1}{2}})^{+} \tag{5.86}\]
as \(D\to 0^{+}\) in the support region, with
\[F_{p}=O(E(D)), \tag{5.87}\]
when
\[x\in\left(O(D^{\frac{1}{2}}),\frac{1}{4}+\frac{1}{2}\bar{\lambda}D^{\frac{1}{2 }}\right] \tag{5.88}\]
in the exponential region. Following (5.86), in the support region we introduce
\[\hat{X}=\frac{x}{D^{\frac{1}{2}}}=O(1)^{+} \tag{5.89}\]
as \(D\to 0^{+}\), and expand in the form
\[F_{p}(\hat{X},\bar{\lambda},D)=v(\hat{X},\bar{\lambda})D^{-\frac{1}{2}}+o(D^{ -\frac{1}{2}}) \tag{5.90}\]
as \(D\to 0^{+}\) with \(\hat{X}=O(1)^{+}\). After a careful consideration of the nonlocal term, substitution from (5.85), (5.89) and (5.90) into (5.1) gives, at leading order, the nonlocal, nonlinear equation,
\[v_{\hat{X}\hat{X}}+v\left(1-2\int_{-\infty}^{\infty}v(s,\bar{\lambda})ds+\int_{\hat{X}-\bar{\lambda}}^{\hat{X}+\bar{\lambda}}v(s,\bar{\lambda})ds\right)=0;\quad\hat{X}>0 \tag{5.91}\]
which must be solved subject to the conditions,
\[v_{\hat{X}}(0,\bar{\lambda})=0, \tag{5.92}\]
\[v(\hat{X},\bar{\lambda})\to 0\text{ as }\hat{X}\to\infty, \tag{5.93}\] \[v(\hat{X},\bar{\lambda})>0\quad\forall\,\hat{X}\geq 0, \tag{5.94}\] \[v(-\hat{X},\bar{\lambda})=v(\hat{X},\bar{\lambda})\quad\forall\,\hat{X}\geq 0. \tag{5.95}\]
Here, the boundary condition (5.93) accommodates asymptotic matching to the exponential region.
The problem (5.91)-(5.95) has been considered numerically, which has demonstrated that a unique solution exists for each \(\bar{\lambda}>0\). We observe that this solution has,
\[v(\hat{X},\bar{\lambda})\sim v_{\infty}(\bar{\lambda})e^{-\sigma_{\infty}( \bar{\lambda})\hat{X}}\text{ as }\hat{X}\to\infty \tag{5.96}\]
with
\[\sigma_{\infty}(\bar{\lambda})=\left(4\int_{0}^{\infty}v(s,\bar{\lambda})ds-1\right)^{\frac{1}{2}}>0 \tag{5.97}\]
and \(v_{\infty}(\bar{\lambda})>0\) a globally dependent constant. Figure 5.6 shows the numerical solution of (5.91) to (5.95) for various values of \(\bar{\lambda}\). The solution becomes larger and narrower as \(\bar{\lambda}\) increases, but then becomes smaller and fatter as \(\bar{\lambda}\) increases past about five (see also Figure 5.8). Also shown is a comparison with the asymptotic solutions for \(\bar{\lambda}\) both large and small, with which there is excellent agreement. Figure 5.7 shows the numerically calculated periodic solution of the full problem for \(D=10^{-3}\) and \(\lambda=\frac{1}{2}+10\sqrt{D}\), along with the corresponding asymptotic solution for \(\bar{\lambda}=10\) and the solution (5.72). The agreement is excellent, and we can also see that the \(\lambda=O(1)\) asymptotic solution, (5.72) is not a good approximation to the solution if \(\lambda-\frac{1}{2}\) is of \(O(\sqrt{D})\), as expected. The maximum of \(v\) is at \(\hat{X}=0\) and a graph of \(v(0,\bar{\lambda})\) against \(\bar{\lambda}\) is shown in Figure 5.8, which shows that there is a single maximum at \(\bar{\lambda}\approx 5.8\), with, as required,
\[\begin{split}& v(0,\bar{\lambda})\sim\frac{\pi}{2\bar{\lambda}} \text{ as }\bar{\lambda}\to\infty,\\ & v(0,\bar{\lambda})\to 0^{+}\text{ as }\bar{\lambda}\to 0^{+} \end{split} \tag{5.98}\]
We note that,
\[\alpha(\bar{\lambda},D)=D^{-\frac{1}{2}}v(0,\bar{\lambda})+o(D^{-\frac{1}{2}}) \tag{5.99}\]
as \(D\to 0^{+}\). The amplitude given by \(\alpha(\bar{\lambda},D)\) is indistinguishable from the amplitude given by the numerical solution of the full problem, shown in Figure 5.1, for \(D\) less than about \(10^{-4}\).
Figure 5.6. The numerical solution (bold curves) of (5.91) to (5.95) for \(\bar{\lambda}=0.5\), \(1\), \(10\), \(50\) and \(100\). The broken line is the asymptotic solution for \(\bar{\lambda}\ll 1\) for \(\bar{\lambda}=0.5\) and \(1\), given by (5.110), whilst the dash-dotted line is the asymptotic solution for \(\bar{\lambda}\gg 1\) for \(\bar{\lambda}=50\) and \(100\), which comes from rescaling (5.72).
Figure 5.7. The numerically calculated periodic solution of the full problem for \(D=10^{-3}\) and \(\lambda=\frac{1}{2}+10\sqrt{D}\), along with the corresponding asymptotic solution for \(\bar{\lambda}=10\) (broken line) and the solution (5.72), (dash-dotted line).
To complete this subsection, we examine the problem (5.91)-(5.95) in the limit \(\bar{\lambda}\to 0^{+}\). A balancing of terms in (5.91) leads us to write,
\[\tilde{X}=\bar{\lambda}\hat{X}=O(1)^{+}\text{ as }\bar{\lambda}\to 0^{+}, \tag{5.100}\]
and expand in the form
\[v(\tilde{X},\bar{\lambda})=\bar{\lambda}\tilde{v}(\tilde{X})+o(\bar{\lambda}) \text{ as }\bar{\lambda}\to 0^{+}. \tag{5.101}\]
Using (5.100) and (5.101), the nonlocal term in (5.91) becomes
\[\int_{\hat{X}-\bar{\lambda}}^{\hat{X}+\bar{\lambda}}v(s,\bar{\lambda})ds=\int_{\hat{X}-\bar{\lambda}}^{\hat{X}+\bar{\lambda}}(\bar{\lambda}\tilde{v}(\bar{\lambda}s)+o(\bar{\lambda}))ds=\int_{\tilde{X}-\bar{\lambda}^{2}}^{\tilde{X}+\bar{\lambda}^{2}}(\tilde{v}(w)+o(1))dw=2\bar{\lambda}^{2}\tilde{v}(\tilde{X})+o(\bar{\lambda}^{2}) \tag{5.102}\]
as \(\bar{\lambda}\to 0^{+}\). The most structured balance at leading order then requires,
\[\int_{-\infty}^{\infty}v(\tilde{X},\bar{\lambda})d\tilde{X}=\frac{1}{2}\bar{ \lambda}+\tilde{I}\bar{\lambda}^{3}+o(\bar{\lambda}^{3}) \tag{5.103}\]
as \(\bar{\lambda}\to 0^{+}\) with \(\tilde{I}\) a constant to be determined. On substitution from (5.100)-(5.103) into (5.91)-(5.95), we arrive at the leading order problem,
\[\tilde{v}_{\tilde{X}\tilde{X}}-2\tilde{v}(\tilde{I}-\tilde{v})=0,\,\tilde{X}>0, \tag{5.104}\]
\[\tilde{v}_{\tilde{X}}(0)=0, \tag{5.105}\] \[\tilde{v}(\tilde{X})\to 0\text{ as }\tilde{X}\to\infty, \tag{5.106}\] \[\tilde{v}(\tilde{X})>0\quad\forall\,\tilde{X}\geq 0, \tag{5.107}\] \[\int_{0}^{\infty}\tilde{v}(w)dw=\frac{1}{4}. \tag{5.108}\]
Figure 5.8. The maximum value, \(v(0,\bar{\lambda})\), of the numerically-calculated solution for (5.91) to (5.95). The broken lines show the predicted asymptotic behaviour as \(\bar{\lambda}\to 0\) and \(\bar{\lambda}\to\infty\).
This problem is a nonlinear eigenvalue problem, with eigenvalue \(\tilde{I}\in\mathbb{R}\), which can be solved directly. There is a single eigenvalue given by
\[\tilde{I}=\frac{1}{72} \tag{5.109}\]
with the associated eigenfunction uniquely determined as
\[\tilde{v}(\tilde{X})=\frac{1}{48}\operatorname{sech}^{2}\left(\frac{1}{12} \tilde{X}\right)\quad\forall\ \tilde{X}\geq 0. \tag{5.110}\]
Thus, (5.101) and (5.110) give
\[v(0,\bar{\lambda})\sim\frac{1}{48}\bar{\lambda}\ \text{ as }\bar{\lambda}\to 0^{+} \tag{5.111}\]
which is shown in Figure 5.8, and (5.108) and (5.109) lead to
\[\int_{-\infty}^{\infty}v(\hat{X},\bar{\lambda})d\hat{X}=\frac{1}{2}+\frac{1}{ 72}\bar{\lambda}^{2}+o(\bar{\lambda}^{2}) \tag{5.112}\]
as \(\bar{\lambda}\to 0^{+}\). Equations (5.110)-(5.112) are in excellent agreement with the numerical solution to problem (5.91)-(5.95) when \(\bar{\lambda}\) is small.
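The closed form (5.109), (5.110), together with the normalisation (5.108), can also be verified symbolically; a minimal sketch:

```python
import sympy as sp

Xt = sp.symbols('Xt', real=True)
I_tilde = sp.Rational(1, 72)
v = sp.Rational(1, 48) * sp.sech(Xt / 12)**2

# (5.104): v'' - 2 v (I_tilde - v) should vanish identically
print(sp.simplify((sp.diff(v, Xt, 2) - 2*v*(I_tilde - v)).rewrite(sp.exp)))   # -> 0
# (5.108): the integral of v over (0, infinity) should equal 1/4
print(sp.integrate(v, (Xt, 0, sp.oo)))                                        # -> 1/4
```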
A final remark in this section relates to the temporal stability of the periodic steady states identified with \((\lambda,D)\in\Omega\). In this respect, we have performed a limited stability analysis of \(u=F_{p}(x,\lambda,D)\) with \((\lambda,D)\in\Omega\) when \(D\) is small by considering the exponentially-small part of the solution. Without presenting details, this analysis reveals that each such periodic steady state is locally, temporally, asymptotically stable to perturbations in the region between the \(O(1)\) parts of the solution, where we would expect any instability to occur. In relation to (IBVP), we observe that the results of this section support the possibility (P2), for the case when \(0<D<\Delta_{1}\). We readily verify that the wavelength,
\[\lambda=\lambda_{m}(D)=\frac{2\pi}{k_{m}(D)}\in\Omega_{1} \tag{5.113}\]
for each \(D\in(0,\Delta_{1}),\) and so we have, in (P2),
\[P(x)=F_{p}(x,\lambda_{m}(D),D),\ x\in\mathbb{R}, \tag{5.114}\]
for each \(D\in(0,\Delta_{1}).\) A final point to note is that the detailed analysis of the existence and structure of the family of steady periodic solutions in the first tongue \(\Omega_{1}\) in the \((\lambda,D)\) plane given in this section, not only provides substance to the conjecture (P2) in the present context, but also plays a crucial role in studying the corresponding Cauchy problem on a closed finite spatial interval (with either Dirichlet or Neumann endpoint conditions), which we present in the second of this series of papers.
We now move on to consider bifurcations to periodic travelling waves from the equilibrium solution \(u_{e}=1\). This enables us to investigate in more depth the structure of the solution to (IBVP) when \(t\) is large, now in the region of the propagating wavefronts identified in section 4. In particular, the next section first rules out the possibility of _nondecaying, permanent form periodic waves travelling with, and to the rear of, the principal wavefronts_.
## 6. Positive Periodic Travelling Waves
We seek to identify the bifurcation to spatially periodic travelling wave solutions from the equilibrium state \(u_{e}=1\). At fixed \(D>0\), a periodic travelling wave solution, with propagation speed \(v=v_{b}\neq 0\), will bifurcate from \(u_{e}=1\) at wavelength \(\lambda_{b}>0\) if and only if \((\lambda_{b},v_{b})\) satisfies the complex equation,
\[4\pi^{2}D-2i\pi v_{b}\lambda_{b}+\frac{\lambda_{b}^{3}}{\pi}\sin\frac{\pi}{ \lambda_{b}}=0 \tag{6.1}\]
which is readily obtained from equation (1.5) with (1.11), when written in the travelling wave coordinate \(z=x-vt\), and linearized about \(u_{e}=1\). It is immediate that (6.1) has _no solutions_ which have \(v_{b}\neq 0\) and \(\lambda_{b}>0\). We can therefore conclude that there are no bifurcations to periodic travelling waves from the equilibrium state \(u_{e}=1\).
We are now able to use this result in moving on to consider transitional permanent form travelling waves, which accommodate the transition from the equilibrium state \(u=0\) ahead of the wavefront to the equilibrium state \(u=1\) or to a spatially periodic permanent form travelling wave, to the rear of the wavefront.
## 7. Transitional Travelling Waves
A transitional travelling wave (or wavefront), in the present context, is a travelling wave of permanent form, propagating with constant propagation speed \(v>0\). The permanent wave
form is nonnegative, and ahead of the wavefront the wave form approaches the equilibrium state \(u_{e}=0\), whilst to the rear of the wavefront, the wave form either approaches the equilibrium state \(u_{e}=1\) or approaches a non-trivial, positive periodic travelling wave (with speed \(v>0\)). We anticipate that such structures will play a key role in the large-\(t\) evolution of the solution to (IBVP), when \(|x|=O(t)\) as \(t\to\infty\). This motivates the present consideration of transitional travelling waves. Following Section 6, we can immediately eliminate the existence of transitional travelling waves of the second type described above, and we therefore concentrate on the existence of transitional travelling waves of the first type, which we will henceforth refer to as (TPTW) solutions to equation (1.5) with (1.11). For a different treatment of permanent form transitional solutions to (1.1), that lead up to the concept of generalised travelling waves, the reader is referred to [1] and the references therein.
We introduce the travelling coordinate \(z=x-vt\) with \(v>0\) being the constant propagation speed. We represent a (TPTW) solution with propagation speed \(v>0\) as \(u=u_{T}(z,v)\), with \(u_{T}(\cdot,v)\in C^{2}(\mathbb{R})\) satisfying,
\[Du_{T}^{\prime\prime}+vu_{T}^{\prime}+u_{T}\left(1-\int_{z-\frac{1}{2}}^{z+\frac{1}{2}}u_{T}(s,v)ds\right)=0,\quad z\in\mathbb{R} \tag{7.1}\] \[u_{T}(z,v)\geq 0\quad\forall\ z\in\mathbb{R}, \tag{7.2}\] \[u_{T}(z,v)\to\left\{\begin{array}{ll}1&\text{as}\quad z\to-\infty\\ 0&\text{as}\quad z\to+\infty.\end{array}\right. \tag{7.3}\]
The problem (7.1)-(7.3) will henceforth be referred to as (BVP). We begin with some qualitative results.
* (T1) Let \(u_{T}(\cdot,v):\mathbb{R}\to\mathbb{R}\) be a (TPTW), then \(u_{T}(\cdot,v)\in C^{\omega}(\mathbb{R})\).
Proof.: This follows by induction, following repeated differentiation through (7.1), with the condition that \(u_{T}(\cdot,v)\in C^{2}(\mathbb{R})\).
* (T2) Let \(u_{T}(\cdot,v):\mathbb{R}\to\mathbb{R}\) be a (TPTW), then \(u_{T}(z,v)>0\) for all \(z\in\mathbb{R}\).
Proof.: Suppose there is \(z_{0}\in\mathbb{R}\) such that \(u_{T}(z_{0},v)=0\), then via (7.2) \(u_{T}^{\prime}(z_{0},v)=0\). Thus, via induction on (7.1), \(u_{T}^{(n)}(z_{0},v)=0\) for each \(n=2,3,\dots\). Since \(u_{T}(\cdot,v)\in C^{\omega}(\mathbb{R})\), via (T1), it then follows that \(u_{T}(z,v)=0\) for all \(z\in\mathbb{R}\), contradicting (7.3). The result follows.
* (T3) The existence of a (TPTW) requires \(v\geq 2\sqrt{D}\).
Proof.: Let \(u_{T}(\cdot,v):\mathbb{R}\rightarrow\mathbb{R}\) be a (TPTW) with propagation speed \(v>0\). Via (7.1)-(7.3) and the linearization theorem (see for example [11]), we must have,
\[Du_{T}^{\prime\prime}+vu_{T}^{\prime}+u_{T}=0,\quad z\gg 1, \tag{7.4}\] \[u_{T}(z,v)\to 0\,\,\,\text{as}\,\,z\rightarrow\infty \tag{7.5}\] \[u_{T}(z,v)>0\,\,\text{for}\,\,z\gg 1 \tag{7.6}\]
with the strict inequality in (7.6) following from (T2). As a consequence of (7.4) and (7.5) there exist constants \(A_{\infty}\) and \(B_{\infty}\), not both zero, such that,
\[u_{T}(z,v)\sim A_{\infty}e^{\lambda_{+}(v)z}+B_{\infty}e^{\lambda_{-}(v)z}\ \text{ as }z\to\infty, \tag{7.7}\]
with
\[\lambda_{\pm}(v)=-\frac{1}{2D}\left(v\pm\left(v^{2}-4D\right)^{1/2}\right), \tag{7.8}\]
and the obvious modification when \(v=2\sqrt{D}.\) It then follows from (7.7) and (7.8) with (7.6), that the propagation speed \(v\geq 2\sqrt{D},\) as required.
In the remainder of this section we suppose that \(u_{T}:\mathbb{R}\rightarrow\mathbb{R}\) is a (TPTW) which has the minimum possible propagation speed
\[v=2\sqrt{D}, \tag{7.9}\]
which we refer to as \(u_{T}(z)\). Our intention is now to examine in detail the form of \(u_{T}(z)\) as \(z\rightarrow\ -\infty\), using the linearization theorem (see, for example, [11]). First we write,
\[u_{T}(z)=1+\overline{u}(z), \tag{7.10}\]
after which \(\overline{u}(z)\) must satisfy
\[D\overline{u}^{\prime\prime}+2\sqrt{D}\overline{u}^{\prime}-\int_{z-\frac{1}{2}}^{z+\frac{1}{2}}\overline{u}(s)ds=0,\quad(-z)\gg 1, \tag{7.11}\] \[\overline{u}(z)\to 0\,\,\,\text{as}\,\,z\rightarrow-\infty. \tag{7.12}\]
The form of \(\overline{u}(z)\) as \(z\rightarrow-\infty\) is determined by examining elementary solutions in the form
\[\overline{u}(z)=e^{\sigma z} \tag{7.13}\]
with \(\sigma\in\mathbb{C}\) to be determined, and \(\mathrm{Re}(\sigma)>0\), to satisfy condition (7.12). On substitution from (7.13) into (7.11) we find that \(\sigma\in\mathbb{C}\) allows (7.13) to solve (7.11) and (7.12) if and only if
\[D\sigma^{2}+2\sqrt{D}\sigma-\frac{2}{\sigma}\sinh\frac{1}{2}\sigma=0,\text{ with }\mathrm{Re}(\sigma)>0. \tag{7.14}\]
For each fixed \(D>0\), the transcendental equation (7.14) has a countably infinite number of roots in the complex plane, which we label as
\[\sigma_{n}(D)\in\mathbb{C}\text{ for each }n\in\mathbb{Z}\setminus\{0\} \tag{7.15}\]
and, in addition
\[\sigma_{n}(\cdot)\in PC^{1}(\overline{\mathbb{R}}^{+})\cap C(\overline{ \mathbb{R}}^{+}). \tag{7.16}\]
with,
\[\mathrm{Re}(\sigma_{\pm(2r-1)}(D))>0,\,r\in\mathbb{N},\] \[\mathrm{Re}(\sigma_{\pm 2r}(D))<0,\,r\in\mathbb{N}. \tag{7.17}\]
We next observe that,
\[\sigma_{n}(D)=2n\pi i-(-1)^{n}8n^{2}\pi^{2}\sqrt{D}+O(D)\,\text{ as }D\to 0^{+} \tag{7.18}\]
with \(n\in\mathbb{Z}\setminus\{0\}\). Also,
\[\begin{split}&\sigma_{\pm(2r-1)}(D)=2\log\frac{1}{2}D\pm 4(r-1 )\pi i+o(1),\\ &\sigma_{\pm 2r}(D)=-2\log\frac{1}{2}D\pm 4(r-1)\pi i+o(1),\end{split} \tag{7.19}\]
as \(D\to\infty\), with \(r\in\mathbb{N}\setminus\{1\}\). The full locus of \(\sigma_{n}(D)\) is sketched in Figure 7.1 for various values of \(n\). We observe from (7.14) that, in fact,
\[\sigma_{-n}(D)=\bar{\sigma}_{n}(D) \tag{7.20}\]
for \(n\in\mathbb{Z}\setminus\{\pm 1,\pm 2\}\).
We now consider the cases \(n=\pm 1,\pm 2\). We concentrate on \(\sigma_{\pm 1}(D)\). For \(0<D<D_{+}\), we have that \(\sigma_{-1}(D)=\bar{\sigma}_{1}(D)\) and \(\mathrm{Im}(\sigma_{1}(D))>0\). At \(D=D_{+}\), \(\sigma_{-1}(D_{+})=\sigma_{1}(D_{+})=\sigma_{+}\in\mathbb{R}^{+}\).
For \(D>D_{+}\), \(0<\sigma_{1}(D)<\sigma_{+}<\sigma_{-1}(D)\), with
\[\sigma_{1}(D) \sim\frac{(\sqrt{2}-1)}{\sqrt{D}},\] \[\sigma_{-1}(D) \sim 2\log\frac{1}{2}D, \tag{7.21}\]
as \(D\to\infty\), as can be seen in Figure 7.1. In the above, \((\sigma_{+},D_{+})\in(\mathbb{R}^{+})^{2}\) is the solution to the transcendental equations
\[D\sigma^{3}+2\sqrt{D}\sigma^{2}-2\sinh\frac{1}{2}\sigma=0,\] \[3D\sigma^{2}+4\sqrt{D}\sigma-\cosh\frac{1}{2}\sigma=0, \tag{7.22}\]
which gives
\[(\sigma_{+},D_{+})=(4.437,2.824\times 10^{-2}). \tag{7.23}\]
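The values quoted in (7.23) can be reproduced by solving the pair of transcendental equations (7.22) directly; a minimal sketch:

```python
import numpy as np
from scipy.optimize import fsolve

def eqs(p):
    s, D = p
    rD = np.sqrt(D)
    return [D*s**3 + 2.0*rD*s**2 - 2.0*np.sinh(0.5*s),
            3.0*D*s**2 + 4.0*rD*s - np.cosh(0.5*s)]

sigma_plus, D_plus = fsolve(eqs, [4.5, 0.03])   # initial guess close to the root
print(sigma_plus, D_plus)                        # approximately 4.437 and 2.824e-2
```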
We are now able to interpret these results in relation to a (TPTW) with minimum propagation speed \(v=2\sqrt{D}\). In particular, translation invariance in \(z\) can be fixed so that
\[u_{T}(z)\sim ze^{-\frac{1}{\sqrt{D}}z}\,\text{ as }z\to\infty \tag{7.24}\]
via (7.7) and (7.8), whilst, for \(0<D<D_{+}\), there are global constants \(A_{\infty}\neq 0\) and \(\phi_{\infty}\in[0,2\pi)\) such that,
\[u_{T}(z)\sim 1-A_{\infty}e^{a(D)z}\cos(b(D)z+\phi_{\infty})\,\text{ as }z\to-\infty \tag{7.25}\]
with
\[a(D)=\text{Re}(\sigma_{1}(D)),\quad b(D)=\text{Im}(\sigma_{1}(D)) \tag{7.26}\]
which are both positive. In particular,
\[a(D)\sim\begin{cases}8\pi^{2}\sqrt{D}&\text{ as }D\to 0^{+},\\ \sigma_{+}-O\left((D_{+}-D)\right)&\text{ as }D\to D_{+}^{-}\end{cases} \tag{7.27}\]
\[b(D)\sim\begin{cases}2\pi+O(D)&\text{ as }D\to 0^{+},\\ O((D_{+}-D)^{\frac{1}{2}})&\text{ as }D\to D_{+}^{-}.\end{cases} \tag{7.28}\]
However, for \(D>D_{+}\), there is a global constant \(A_{\infty}^{\prime}\neq 0\) such that
\[u_{T}(z)\sim 1-A_{\infty}^{\prime}e^{\sigma_{1}(D)z}\,\text{ as }z\to-\infty \tag{7.29}\]
recalling that \(0<\sigma_{1}(D)<\sigma_{+}\) and that \(\sigma_{1}(D)\) is decreasing for \(D\in(D_{+},\infty)\), with
\[\sigma_{1}(D)\sim\begin{cases}\sigma_{+}-O((D-D_{+})^{\frac{1}{2}})&\text{ as }D\to D_{+}^{+}\\ \frac{(\sqrt{2}-1)}{\sqrt{D}}&\text{ as }D\to\infty.\end{cases} \tag{7.30}\]
Principally, we see that a (TPTW) with minimum speed is monotone as \(z\to-\infty\) when \(D\in(D_{+},\infty)\), but has decaying oscillations as \(z\to-\infty\) when \(D\in(0,D_{+})\). Numerical solutions of (IBVP) with \(v=2\sqrt{D}\) and \(D>0\) are shown in Figure 3.1. They are consistent with the existence of a (TPTW) at the minimum propagation speed \(v=2\sqrt{D}\) (made unique by fixing the translation invariance with \(u_{T}(0)=\frac{1}{2}\)), and indicate that this is the travelling wave generated in the initial value problem (1.5) to (1.7) for \(D>\Delta_{1}\). For \(D<\Delta_{1}\), as discussed in Section 3, the minimum speed (TPTW) still provides, at moderate values of \(D\), a good approximation to the solution of (IBVP) at the wavefront, which moves at speed \(2\sqrt{D}\) ahead of a stationary periodic state.
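For readers wishing to reproduce the qualitative features of these numerical solutions, the following is a minimal explicit finite-difference sketch for (IBVP), assuming the nonlocal Fisher-KPP form \(u_{t}=Du_{xx}+u(1-\phi*u)\) with the top hat kernel; the domain, grid, time step, final time, and initial condition are illustrative only and are not those used to produce Figure 3.1:

```python
import numpy as np

# Explicit scheme for u_t = D u_xx + u (1 - K*u), where (K*u)(x) is the
# integral of u over (x - 1/2, x + 1/2) (top hat kernel of unit width).
D = 0.002                                    # below Delta_1, so a periodic state forms
L, dx = 100.0, 0.05
x = np.arange(-L, L + dx, dx)
u = np.where(np.abs(x) < 1.0, 1.0, 0.0)      # localised initial data with finite support

m = int(round(1.0 / dx))                     # grid points spanning the kernel support
kernel = np.ones(m) * dx                     # discrete top hat with unit mass

dt = 0.2 * dx**2 / D                         # explicit diffusion stability restriction
T, t = 200.0, 0.0
while t < T:
    conv = np.convolve(u, kernel, mode="same")               # approximates K*u
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # second difference
    u = u + dt * (D * lap + u * (1.0 - conv))
    u[0] = u[-1] = 0.0                                       # far-field values pinned to zero
    t += dt
# u now approximates the solution at time T: two wavefronts diverging at speed
# about 2*sqrt(D), with a periodic (D < Delta_1) or uniform (D > Delta_1) state between.
```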
## 8. Conclusions
In this paper we have studied various aspects of the Cauchy problem for the nonlocal Fisher-KPP equation with a top hat kernel. We showed that the problem is globally well-posed and investigated the positive periodic steady state solutions that bifurcate from the uniform steady state \(u=1\) at dimensionless diffusivity \(D=\Delta_{1}\approx 0.00297\). These have wavelength \(\lambda\) and exist in a sequence of 'tongues' in \((\lambda,D)\) parameter space. As \(D\to 0^{+}\), each of these periodic solutions has finite amplitude and wavelength, with \(\lambda<1\), and we constructed the asymptotic solution in detail for solutions in the tongue with largest wavelengths.
We also investigated permanent form travelling wave solutions, which are known to exist for all wavespeeds \(v\) with \(v\geq 2\sqrt{D}\), [3]. Numerical solutions suggest that, from localised initial conditions with finite support, a pair of diverging travelling wavefronts with minimum wavespeed, \(2\sqrt{D}\), is generated, and that for \(D\geq\Delta_{1}\), the solution asymptotes to the stable uniform state, \(u=1\), between the wavefronts. For \(0<D<\Delta_{1}\), a periodic static steady state, with wavelength around \(0.7\), is generated between the wavefronts. The wavelength selection mechanism is not clear, but it is notable that \(0.7\) is approximately the linearly most unstable wavelength of the uniform steady state \(u=1\), and that this instability is not dispersive.
In the next paper in this series we will report on the bifurcation structure for the problem set on a finite domain with various boundary conditions. Oscillatory solutions are also generated, with a structure similar to those discussed in the present paper, but with nontrivial dependence on the form of the boundary conditions.
Many other related open questions remain. For example, the structure of the solution for small \(D\) (regular humps with finite width) is in contrast to the types of solution found in [4] for kernels with \(\phi(y)>0\) for all \(y\in\mathbb{R}\), for which the width of the humps, or spikes, in the minimum speed travelling wave solution tends to zero and height to infinity as \(D\to 0\). Similar results were found numerically in [17] for an M-shaped kernel. This behaviour remains to be explained.
Another interesting extension is to the Cauchy problem in higher dimensions, for example with the two dimensional kernel
\[\phi(r)=\left\{\begin{array}{ll}\frac{4}{\pi}&\mbox{for $r<\frac{1}{2}$,}\\ 0&\mbox{otherwise,}\end{array}\right.\]
in polar coordinates \((r,\theta)\). The numerical solution of initial value problems in higher spatial dimensions is significantly more challenging than in one dimension. We would expect that
spatially-localised solutions would be generated for small enough \(D\), and that they would be amenable to asymptotic analysis.
|
2307.09923 | Large Language Models can accomplish Business Process Management Tasks | Business Process Management (BPM) aims to improve organizational activities
and their outcomes by managing the underlying processes. To achieve this, it is
often necessary to consider information from various sources, including
unstructured textual documents. Therefore, researchers have developed several
BPM-specific solutions that extract information from textual documents using
Natural Language Processing techniques. These solutions are specific to their
respective tasks and cannot accomplish multiple process-related problems as a
general-purpose instrument. However, in light of the recent emergence of Large
Language Models (LLMs) with remarkable reasoning capabilities, such a
general-purpose instrument with multiple applications now appears attainable.
In this paper, we illustrate how LLMs can accomplish text-related BPM tasks by
applying a specific LLM to three exemplary tasks: mining imperative process
models from textual descriptions, mining declarative process models from
textual descriptions, and assessing the suitability of process tasks from
textual descriptions for robotic process automation. We show that, without
extensive configuration or prompt engineering, LLMs perform comparably to or
better than existing solutions and discuss implications for future BPM research
as well as practical usage. | Michael Grohs, Luka Abb, Nourhan Elsayed, Jana-Rebecca Rehse | 2023-07-19T11:54:46Z | http://arxiv.org/abs/2307.09923v1 | # Large Language Models can accomplish Business Process Management Tasks
###### Abstract
Business Process Management (BPM) aims to improve organizational activities and their outcomes by managing the underlying processes. To achieve this, it is often necessary to consider information from various sources, including unstructured textual documents. Therefore, researchers have developed several BPM-specific solutions that extract information from textual documents using Natural Language Processing techniques. These solutions are specific to their respective tasks and cannot accomplish multiple process-related problems as a general-purpose instrument. However, in light of the recent emergence of Large Language Models (LLMs) with remarkable reasoning capabilities, such a general-purpose instrument with multiple applications now appears attainable. In this paper, we illustrate how LLMs can accomplish text-related BPM tasks by applying a specific LLM to three exemplary tasks: mining imperative process models from textual descriptions, mining declarative process models from textual descriptions, and assessing the suitability of process tasks from textual descriptions for robotic process automation. We show that, without extensive configuration or prompt engineering, LLMs perform comparably to or better than existing solutions and discuss implications for future BPM research as well as practical usage.
Keywords:Business Process Management Natural Language Processing Large Language Models ChatGPT
## 1 Introduction
The objective of Business Process Management (BPM) is to understand and supervise the execution of work within an organization. This ensures consistent outcomes and allows for the identification of improvement opportunities [6]. To accomplish this, BPM researchers and practitioners make use of diverse sources of information pertaining to business processes. These sources range from well-structured process models and event logs to unstructured textual documents [18]. In the past decade, BPM researchers have increasingly turned to Natural Language Processing (NLP) techniques to automatically extract process-related information from the abundant textual data found in real-world organizations.
Many existing approaches utilize textual data for a wide range of BPM tasks. Examples of such tasks include the mining of imperative or declarative process models from textual process descriptions [8; 19], process redesign for classifying end-user feedback [11], identifying suitable tasks for robotic process automation (RPA) in textual process descriptions [10], assessing process complexity based on textual data [16], or extracting semantic process information from natural language [13]. Although a few approaches also incorporate machine learning methods, the majority rely on extensive rule sets.
Each existing approach is designed for a specific purpose, meaning that it can only be applied to one specific task. A versatile general-purpose model that comprehends process-related text and seamlessly integrates it into various BPM tasks does not yet exist. However, the recent emergence of pre-trained Large Language Models (LLMs), which have demonstrated remarkable reasoning abilities across diverse domains and tasks [17], offers promising prospects for developing such a system. Already, multiple research groups are actively exploring the potential of these models in the BPM field, for example by analyzing which opportunities and challenges LLMs pose for the individual stages of the BPM lifecycle [20], how LLM input should be phrased such that the output supports BPM [5], or whether conversational process modeling is possible [9].
These recent publications and pre-prints mostly illustrate the potential and difficulties of LLMs on a high level, but they do not showcase concrete applications. In this paper, we take a more application-oriented approach by investigating whether an LLM can accomplish three text-related BPM tasks: (1) mining imperative process models from textual descriptions, (2) mining declarative process models from textual descriptions, and (3) assessing the suitability of process tasks for RPA from textual descriptions. We selected these tasks because they are practically relevant and have previously been addressed in research. We evaluate how well the LLM can perform these tasks by benchmarking them against existing approaches that were specifically developed for the respective task. Based on the results, we discuss implications for future research in the field of BPM and illustrate how LLMs can support practitioners in their daily work.
The paper is structured as follows: In Sect. 2, we introduce the general solution approach that we followed for all three tasks. The task-specific applications and results are described in Sect. 4, Sect. 3, and Sect. 5, respectively. Section 6 discusses the future usage of LLMs in practice as well as implications for future research, before we conclude the paper in Sect. 7.
Figure 1: Overview of our Approach
## 2 Approach
In this paper, we illustrate how LLMs can be utilized for three BPM tasks that require textual documents as input. For all tasks, we follow the same approach, illustrated in Fig. 1. We start by assembling a prompt with the following parts:
1. A general description of the BPM task that is to be accomplished.
2. A specification of a particular output format that the LLM should adhere to. This ensures that the generated text output has a certain level of consistency and that results are sufficiently standardized so that they can be further processed by, for example, parsing algorithms.
3. The natural language text that we want to abstract information from, e.g., a textual process description
4. Optionally, if suitable for a given task, a few input-output pairs as examples
The complete prompt is then entered into the current state-of-the-art instruction-following LLM, ChatGPT with GPT4 backend [12] (henceforth referred to as GPT4). The textual output of GPT4 (i.e., the response to the prompt) is then evaluated with respect to its utility in solving the respective task and benchmarked against an existing approach. None of the prompt parts were specifically engineered; they were only included such that the output actually solves the task. The prompts were not optimized with respect to any metric.
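To make the procedure concrete, the following minimal sketch shows how such a prompt could be assembled and sent to the model, assuming the OpenAI Python SDK is available; the model name, file name, and prompt texts are illustrative placeholders rather than the exact prompts used in our experiments:

```python
from openai import OpenAI

client = OpenAI()  # assumes the API key is set in the environment

# The four prompt parts described above; all texts here are placeholders.
task_description = "Create a BPMN process model for the process described below."
output_format = ("Use the notation: activities as words, arcs as '->', gateways as "
                 "XOR/AND, and one 'actor: [activity, ...]' line per actor.")
with open("process_description.txt") as f:   # illustrative input document
    text = f.read()
examples = ""  # optional input-output pairs, omitted here

prompt = "\n\n".join(p for p in [task_description, output_format, text, examples] if p)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)   # raw text output, parsed further downstream
```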
In all applications, we provide the model with several prompts in order to check input robustness (i.e., how prompts from different authors influence the results) and output robustness (i.e., how the results change for different tries of the same prompt). By this, we aim to analyze whether GPT4 is able to accomplish specific tasks sufficiently well to be used by a diverse group of people and whether the results remain consistent despite the inherent randomness of the model output. For each task, we start with an 'original' prompt written by one of the authors of this paper and enter this prompt three times (Tries 1 to 3). Two more prompts are then created by two other authors, who are given a general description of the task to be accomplished and the exact output format that they should specify, but who have not seen the original prompt. Finally, where appropriate, we also enter the original prompt without examples to evaluate the effect those have on the result. Each prompt is entered in a separate conversation window in the GPT4 web interface so that the model cannot draw on previous prompts as context.
All prompts, responses, and detailed evaluation results are available online1.
Footnote 1: [https://gitlab.uni-mannheim.de/jpmac/llms-in-bpm](https://gitlab.uni-mannheim.de/jpmac/llms-in-bpm)
## 3 Mining BPMN process models from natural language descriptions
### Motivation
Process models are the predominant tool for representing organizational activities and are often the starting point for process analyses [19]. Constructing
such models requires knowledge of the process and proficiency in the creation of formal models [8]. However, the actors with process knowledge are commonly not experienced process modelers [8]. Therefore, modeling procedures can be very time-consuming and error-prone [15]. This holds true even though detailed textual descriptions of process requirements are often available in the form of policies, guidelines, or e-mail conversations, which can be considered relevant sources of information [8]. Approaches that extract process models from natural language can speed up modeling and also enable managers to frequently update their process models without requiring extensive modeling experience.
A rule-based approach to extract Business Process Model and Notation (BPMN) process models from textual process descriptions was first proposed in [8]. This remains the only generally-applicable, end-to-end technique able to produce a full imperative process model from text input, though several other publications with a more narrow scope or a focus on mining partial models exist (see [2] for a short review). There are also papers that investigate the ability of LLMs to extract process entities and relations from textual descriptions [3, 9]. Though their approaches have some similarities to ours, neither ends up producing an actual process model from the text.
### Evaluation
Following Fig. 1, we ask GPT4 to create a BPMN model for a process described in text. At the time of writing, the web interface version of GPT4 has a token output limit that prevents it from generating sequences at the length that would be required to generate BPMN models as XML files. We, therefore, prompt it to produce a model in a pre-specified intermediary notation as an output format that includes the main elements of BPMN and is straightforward to parse into a proper model representation. The template we provided in the prompt represents task nodes as natural language words, arcs between model elements as arrows (\(->\)), and exclusive and parallel gateways as XOR and AND, respectively. We also specify that outgoing arcs of exclusive gateways can be labeled to represent decision criteria, e.g., XOR (Proposal accepted) \(->\) Task1. Finally, we ask the model to provide an actor-to-activity mapping that can be used to construct lanes, in the format actor: [activity1,...]. Other elements (e.g., messages) are not included. We also do not provide example pairs of text and corresponding full or partial models to the LLM to avoid bias towards a certain modeling style.
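To illustrate why this intermediary notation is straightforward to parse, the following minimal sketch (our own illustration, not part of the prompt) converts response lines of the assumed forms `A -> B`, `XOR (condition) -> B`, and `actor: [activity1, activity2]` into an edge list and lane assignments; gateway conditions are treated as part of the node label for simplicity:

```python
import re

def parse_intermediary(lines):
    """Parse the intermediary notation into sequence-flow edges and lanes."""
    edges, lanes = [], {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        lane_match = re.match(r"^(.+?):\s*\[(.+)\]$", line)
        if lane_match:  # actor-to-activity mapping, e.g. "Clerk: [Check order, ...]"
            actor, activities = lane_match.groups()
            lanes[actor.strip()] = [a.strip() for a in activities.split(",")]
            continue
        # sequence flows, e.g. "Check order -> XOR (order accepted) -> Ship goods"
        nodes = [n.strip() for n in line.split("->")]
        edges += list(zip(nodes, nodes[1:]))
    return edges, lanes

example = [
    "Check order -> XOR (order accepted) -> Ship goods",
    "Clerk: [Check order, Ship goods]",
]
print(parse_intermediary(example))
```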
Figure 2 shows an example of a textual description of a computer repair process, an excerpt of the response that GPT4 gave when presented with this description, and a visualization of the derived BPMN model. The generated model accurately represents the process described in the text. It could, however, be made slightly simpler by combining the two separate _Test System Functionality_ activities and the subsequent exclusive gateways into one each.
For our evaluation, we use six process descriptions from [7] (1.1 - 1.4, 2.1, and 2.2). We selected these with the goal of applying our technique to a mix of short and simple as well as longer and more sophisticated textual descriptions. As ground truth, we use the annotations provided for these descriptions in the
PET dataset [1]. Specifically, we evaluate the output of the LLM with regard to how many of the relations described in the textual description are correctly identified (i.e., recall). Note that this allows us to simultaneously evaluate how many entities (task names and actors) are correctly identified since a relation that involves an unknown entity will be counted as not identified. We do not evaluate the models with regard to how many superfluous entities or relations they produce (i.e., precision) as that would raise several conceptual questions that require answers (e.g., how to treat a task that is correctly identified but in the wrong position), which would go beyond the intended scope of this paper.
We further restrict our evaluation to _flow_ and _actor performer_ relations, i.e., those that are present in the intermediary notation we provide in the prompts. Since the ground truth annotation applies only to the textual descriptions, we manually establish a mapping between the entities identified in the dataset and the ones produced by GPT4. In some cases, the relations produced by the LLM do not exactly match the ground truth (e.g., Write Report and Send Report are combined to Write and Send Report). For these, we follow the same approach as [3], i.e., we evaluate them on a case-by-case basis and count them as correct if they are semantically correct. As a benchmark, we use the process models produced by [7], applying the same evaluation criteria as described above.
The results of our evaluation are shown in Tab. 1, subdivided by the evaluation of output robustness (OR) and input robustness (IR). Overall, regarding the proportion of relations (and entities) that are correctly extracted from the textual process description, the models generated by GPT4 are comparable to
Figure 2: Example of a textual process description (top left), an excerpt of the generated LLM response (top right, from Prompt 1 Try 1), and visualization of the corresponding BPMN diagram (bottom).
the ones produced by [8]. Note that the absolute numbers reported should be interpreted with caution, because the PET ground truth is very fine-grained and we weight all relation types equally, so that, for example, a single missing exclusive gateway (with the gateway itself, two decision criteria on the outgoing arcs, and two subsequent activities) would be counted as five non-identified relations. Consequently, a recall value of 0.5 should not be understood to indicate that the model only includes half of the relevant process behavior described in the text. Furthermore, the models generated by GPT4 are very precise in the sense that they tend to include a minimal (often insufficient) set of tasks, whereas the rule-based approach of [8] tends to produce models with several superfluous activities (e.g., Begin Process following a start event). Since our evaluation does not include a notion of false positive relations, it could be argued that we somewhat underestimate the quality of the LLM output relative to the benchmark.
Overall, an LLM-based text-to-BPMN technique produces reasonably good results. The model also produces consistent answers in the same intermediary notation when provided with the exact same description of the target template, so parsing its output into XML is possible. With prompt fine-tuning, and especially with subsequent prompting that asks the model to fix common issues, it appears feasible to create a reliable text-to-BPMN pipeline based on an LLM.
## 4 Mining declarative process models from natural language descriptions
### Motivation
Not all business processes can be adequately captured by imperative modeling notations such as BPMN. For instance, knowledge-intensive processes have execution orders that cannot always be fully specified in advance [19]. These are better modeled using declarative process models, i.e., a set of formal constraints that do not rely on an explicit definition of allowed behavior [4]. They provide a flexible way of modeling processes, especially suitable in complex settings [4].
An approach that extracts declarative process models from natural language has been proposed in [19]. It uses the common declarative modeling language _Declare_, which is based on constraint templates grounded in Linear Temporal Logic (LTL) [4]. By applying rule-based NLP techniques to sentences, the
|  |  | Text 1.1 | Text 1.2 | Text 1.3 | Text 1.4 | Text 2.1 | Text 2.2 | Overall |
|---|---|---|---|---|---|---|---|---|
| OR | Prompt 1 Try 1 | 0.42 | **0.58** | 0.46 | 0.50 | 0.57 | 0.45 | 0.50 |
| OR | Prompt 1 Try 2 | **0.54** | **0.58** | 0.38 | **0.70** | **0.61** | 0.42 | 0.54 |
| OR | Prompt 1 Try 3 | **0.54** | **0.58** | 0.50 | 0.60 | 0.53 | 0.53 | 0.54 |
| IR | Other Author (1) | **0.54** | 0.47 | 0.54 | 0.50 | 0.47 | 0.34 | 0.48 |
| IR | Other Author (2) | 0.46 | 0.42 | 0.35 | 0.47 | 0.43 | 0.39 | 0.42 |
|  | Benchmark [8] | **0.54** | 0.47 | **0.58** | 0.55 | 0.55 | **0.66** | **0.56** |

Table 1: Recall for the Text-to-BPMN Task
approach in [19] generates declarative constraints for five LTL templates: precedence, response, succession, initialization (init), and end. _Precedence(A, B)_ (formally NOT(B) U A) means that activity B should only occur after activity A. _Response(A,B)_ (formally A -> B) means that B must follow whenever A occurs. _Succession(A,B)_ is the combination of _Precedence(A,B)_ and _Response(A,B)_. _Init(A)_ (formally START -> A) prescribes that all process instances must start with A, and _End(A)_ (formally END -> A) indicates that they must end with A.
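To make these template semantics concrete, the following minimal sketch (our own illustration, not the parsing used by [19]) checks the five templates against a finite trace given as a list of activity labels:

```python
def init(trace, a):
    return len(trace) > 0 and trace[0] == a

def end(trace, a):
    return len(trace) > 0 and trace[-1] == a

def response(trace, a, b):
    # whenever a occurs, b must occur at some later position
    return all(b in trace[i + 1:] for i, e in enumerate(trace) if e == a)

def precedence(trace, a, b):
    # b may only occur after a has already occurred (informal reading of NOT(b) U a)
    first_a = trace.index(a) if a in trace else len(trace)
    return all(i > first_a for i, e in enumerate(trace) if e == b)

def succession(trace, a, b):
    return precedence(trace, a, b) and response(trace, a, b)

trace = ["create claim", "approve claim"]
print(precedence(trace, "create claim", "approve claim"))  # True
print(response(trace, "approve claim", "create claim"))    # False
```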
### Evaluation
In our experiment, we recreate the set-up from [19], applying GPT4 on the same five LTL templates and 104 test sentences. Following Fig. 1, we create a prompt that asks GPT4 to create LTL formulas in the form of precedence, response, succession, init, and end. For each template, we provide the output format and an example. As a result, the LLM outputs one or more discovered constraints in the format prescribed by the prompt, as shown in the exemplary excerpt of the output in Tab. 2. This output can then be compiled and translated into declarative modeling languages like _Declare_.
In addition to the three identical prompts for output robustness, we use two other formulations by different authors and, as we use examples in the original prompt, also one prompt without examples for input robustness. Table 3 displays precision (Prec.), recall (Rec.), and F1-score (F1) as used by [19] for each of the five LTL templates of the six different prompts compared to the benchmark.2 We only consider syntactically correct classifications as true positives.
Footnote 2: The corresponding confusion matrices can be found in our repository.
Except for the response template, GPT4 outperforms the benchmark and has a high precision value of close to 1. Further, we see that precision does not vary significantly with respect to output robustness for all LTL templates. With respect to recall, we see lower values for precedence. This is because many precedence constraints are misclassified as a response, which also explains the lower precision for the response template. For succession and end, we see a high variation in the recall. This is because only a few constraints of these types occur among the 104 sentences, meaning that a few misclassifications have a high impact. With respect to input robustness, the evaluation metrics are worse if no examples for the LTL templates
| Sentence Input | GPT4 Output | LTL Template |
|---|---|---|
| A claim should be created before it can be approved. | NOT(approve claim) U create claim | Precedence |
| The process begins with the booking of the ticket. | START -> book ticket | Init |
| Every provided laundry service must be billed. | provide laundry service -> bill laundry service | Response |

Table 2: Exemplary Output of GPT4 for the Text-to-LTL Task
are provided. This is especially visible for the precedence template. In contrast to that, different formulations from other authors do not have a significant impact on the metrics. Rather, stability across different prompts is visible.
The F1-score shows that all prompts with examples for the LTL templates yield scores equal to or higher than the benchmark. This illustrates that GPT4 outperforms the specialized approach from [19] if it is provided with proper examples. This is an important finding, as it indicates that prompts yield different results based on their fit to the task. For tasks like this one, with short input texts and a few classification targets, we therefore recommend that the prompt include examples. It should be noted that other prompts, for example with additional information or repeated instructions, could yield even better results. Further, to be fully usable, the output of GPT4 has to be parsed into a declarative modeling language such as _Declare_. This is possible in an automatic manner, given the consistent output format across all 104 sentences.
## 5 Assessing RPA suitability of process tasks from natural language descriptions
### Motivation
RPA is a technology that aims to automate routine and repetitive tasks in business environments. To do so, software robots that work on the user interface of software systems are developed to perform these tasks in the same way a human actor would, thus increasing operational efficiency [14].

Various types of process information can be used to identify tasks that are suitable for RPA. This includes textual process descriptions, which are commonly used to document processes [18]. The approach proposed in [10] identifies suitable tasks for RPA by measuring the degree of automation of process tasks, using supervised machine learning techniques on the textual descriptions of business processes. From this textual data, the approach classifies the process tasks into manual, automated, or user tasks. Manual tasks are performed by a human actor without any use of an information system, user tasks consist of humans interacting with an information system, and automated tasks are performed automatically on an information system without any human involvement. Tasks classified as user tasks are suitable RPA candidates, as they can be automated by replicating human interactions by means of RPA agents. This is more efficient than a manual analysis, which takes considerable time and effort, especially when a large number of such documents or processes must be analyzed [10].
### Evaluation
Following the approach from Fig. 1, GPT4 is used to replicate the experiment of [10]. The task is described in the prompt by asking the LLM to classify process tasks into one of three classes: manual, user, or automated task. Possible features that might affect the task classifications (e.g., verb feature, object feature, resource type (human/non-human), and IT domain) are included in the task description. The output format as well as an example of task classifications for a given process description is also provided. We use the same dataset as [10], consisting of 33 textual process descriptions obtained from [7]. These descriptions contain 424 process tasks to be classified. See Tab. 4 for an example of an input process description and the output generated by GPT4.
We entered three identical prompts, used two further prompts written by different authors (each containing an example), and one prompt without an example. Table 5 displays precision (Prec.), recall (Rec.), and F1-score (F1) for each of the six prompts compared to the benchmark from [10]. For the overall results, we apply the same micro-averaging approach as the benchmark, i.e., the number of tasks belonging to a class is used to weight the respective precision and recall values.
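For reference, the class-weighted overall scores reported in Table 5 can be computed as in the following minimal sketch, assuming scikit-learn is available; the label lists are illustrative placeholders:

```python
from sklearn.metrics import precision_recall_fscore_support

# Illustrative gold and predicted labels for a handful of process tasks.
y_true = ["user", "manual", "user", "automated", "manual"]
y_pred = ["user", "manual", "manual", "automated", "user"]

# 'weighted' averages the per-class scores using the class supports,
# i.e., the number of tasks belonging to each class.
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
print(round(prec, 2), round(rec, 2), round(f1, 2))
```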
| Task Input | GPT4 Output |
|---|---|
| register a claim performed by claims officer | User task |
| examine a claim performed by claims officer | Manual task |
| write a settlement recommendation performed by claims officer | Manual task |
| send the claim back to the claims performed by claims officer | User task |

Table 4: Exemplary Output of GPT4 for the RPA Classification Task
For user tasks, GPT4 outperforms the benchmark in 4 out of 6 prompts. For automated tasks, precision is below the benchmark because GPT4 classified many tasks as automated although they are not; however, recall for this class exceeds the benchmark in almost all prompts. The F1-scores are similar to the benchmark for all classes except the user class, where GPT4 scores higher. Overall, as indicated by the F1-score, GPT4 performs similarly to the benchmark for all six prompts. We also observed the performance of GPT4 deteriorating over time, which we suspect is caused by its limited context window combined with the large number of tasks to be classified (424). In such cases, reminding the LLM of the task description between inputs could yield better results.
## 6 Discussion
After illustrating that out-of-the-box GPT4 performs similarly or even better than specialized approaches for our three exemplary tasks, we now want to discuss the usage of LLMs in practice and provide guidelines for users.
**Prompt Recommendations.** In our experiments, we found that including certain contents in the prompt increases the performance of GPT4. For example, the expected output should be clearly defined alongside the description of the task. Further, for the text-to-LTL task, examples led to better results. We can therefore recommend specifying
|  |  |  | Manual | User | Automated | Overall |
|---|---|---|---|---|---|---|
| OR | Prompt 1 Try 1 | Prec. | 0.68 | **0.88** | 0.15 | 0.79 |
|  |  | Rec. | 0.83 | 0.6 | 0.75 | 0.67 |
|  |  | F1 | 0.75 | 0.74 | 0.45 | 0.73 |
| OR | Prompt 1 Try 2 | Prec. | 0.65 | 0.84 | 0.73 | 0.78 |
|  |  | Rec. | 0.69 | 0.83 | 0.69 | 0.78 |
|  |  | F1 | 0.67 | **0.84** | **0.71** | 0.78 |
| OR | Prompt 1 Try 3 | Prec. | 0.84 | 0.84 | 0.32 | **0.82** |
|  |  | Rec. | 0.65 | 0.83 | **0.93** | 0.78 |
|  |  | F1 | 0.75 | **0.84** | 0.63 | 0.8 |
| IR | No Examples | Prec. | **0.85** | 0.77 | 0.34 | 0.78 |
|  |  | Rec. | 0.42 | **0.88** | 0.88 | 0.74 |
|  |  | F1 | 0.64 | 0.83 | 0.61 | 0.76 |
| IR | Other Author (1) | Prec. | 0.44 | 0.71 | 0.29 | 0.61 |
|  |  | Rec. | 0.42 | 0.74 | 0.13 | 0.62 |
|  |  | F1 | 0.43 | 0.73 | 0.21 | 0.62 |
| IR | Other Author (2) | Prec. | 0.5 | 0.87 | 0.36 | 0.74 |
|  |  | Rec. | 0.82 | 0.55 | 0.8 | 0.64 |
|  |  | F1 | 0.66 | 0.71 | 0.58 | 0.69 |
|  | Benchmark [10] | Prec. | 0.81 | 0.8 | **0.92** | 0.81 |
|  |  | Rec. | **0.9** | 0.7 | 0.52 | **0.8** |
|  |  | F1 | **0.85** | 0.75 | 0.66 | **0.81** |

Table 5: Precision, Recall, and F1-Score for the RPA Task
the output format and trying examples where feasible. In general, different prompts should be used and compared to maximize the benefits of using GPT4.
**Non-deterministic output.** In order to produce more natural-sounding text, generative LLMs typically have a _temperature_ parameter that adds some variability to the output. Because of this, responses given by GPT4 may change even if the input remains constant. At the same time, if the input is varied slightly (e.g., by phrasing the same instruction in a different way), the model may make significant alterations to its response. In our experiments, we attempted to account for this by establishing a certain level of input and output consistency. We found that, although results are overall relatively consistent, there is still considerable variation in how well each response reflects individual aspects of the provided text, for example, whether a particular task has been correctly identified and categorized. We, therefore, argue that future research into the behavior of LLMs and their reaction to different inputs is needed. In particular, the non-deterministic nature of LLM output has implications for evaluation design: in our opinion, a basic sensitivity analysis as applied in this paper is always required in order to perform a meaningful evaluation of performance.
**File Generation.** When used in practice, as illustrated with the three tasks, GPT4 does not generate files but rather text. Therefore, in order to use it for the first two exemplary tasks, a further translation into formalized languages was necessary. This can be done via a compiler that generates _Declare_ constraints or BPMN models based on the output. Nevertheless, this poses a limitation of current LLMs, especially considering output variability. It should be noted that this limitation is specific to present-day LLMs such as GPT4, which are not capable of file generation, and may be overcome by future iterations of the models.
## 7 Conclusion
In this paper, we developed and applied an approach that utilizes the LLM GPT4 for diverse BPM tasks. The approach itself is simple and leverages the capabilities of GPT4 by instructing it to accomplish the task at hand. We selected three BPM tasks to illustrate that GPT4 is indeed able to accomplish them: mining imperative process models from textual descriptions, mining declarative process models from textual descriptions, and assessing RPA suitability of process tasks from textual descriptions. For all three tasks, GPT4 performs similarly to or better than the benchmark, i.e., a specific application developed for the respective task. We analyzed the input and output robustness of the approach and found that the output is relatively insensitive both to repeated executions of the same prompt and to prompts formulated by different authors. Further, we found that some prompts should include examples to help the LLM. Future research could assess whether LLMs are also applicable to other tasks from different phases of the BPM lifecycle. All in all, this paper illustrates and evaluates three practical applications of GPT4 and provides implications for future research and usage. |
2305.19391 | Deep Clustering with Incomplete Noisy Pairwise Annotations: A Geometric
Regularization Approach | The recent integration of deep learning and pairwise similarity
annotation-based constrained clustering -- i.e., $\textit{deep constrained
clustering}$ (DCC) -- has proven effective for incorporating weak supervision
into massive data clustering: Less than 1% of pair similarity annotations can
often substantially enhance the clustering accuracy. However, beyond empirical
successes, there is a lack of understanding of DCC. In addition, many DCC
paradigms are sensitive to annotation noise, but performance-guaranteed noisy
DCC methods have been largely elusive. This work first takes a deep look into a
recently emerged logistic loss function of DCC, and characterizes its
theoretical properties. Our result shows that the logistic DCC loss ensures the
identifiability of data membership under reasonable conditions, which may shed
light on its effectiveness in practice. Building upon this understanding, a new
loss function based on geometric factor analysis is proposed to fend against
noisy annotations. It is shown that even under $\textit{unknown}$ annotation
confusions, the data membership can still be $\textit{provably}$ identified
under our proposed learning criterion. The proposed approach is tested over
multiple datasets to validate our claims. | Tri Nguyen, Shahana Ibrahim, Xiao Fu | 2023-05-30T20:06:03Z | http://arxiv.org/abs/2305.19391v1 | # Deep Clustering with Incomplete Noisy Pairwise Annotations: A Geometric Regularization Approach
###### Abstract
The recent integration of deep learning and pairwise similarity annotation-based constrained clustering--i.e., _deep constrained clustering_ (DCC)--has proven effective for incorporating weak supervision into massive data clustering: Less than 1% of pair similarity annotations can often substantially enhance the clustering accuracy. However, beyond empirical successes, there is a lack of understanding of DCC. In addition, many DCC paradigms are sensitive to annotation noise, but performance-guaranteed noisy DCC methods have been largely elusive. This work first takes a deep look into a recently emerged logistic loss function of DCC, and characterizes its theoretical properties. Our result shows that the logistic DCC loss ensures the identifiability of data membership under reasonable conditions, which may shed light on its effectiveness in practice. Building upon this understanding, a new loss function based on geometric factor analysis is proposed to fend against noisy annotations. It is shown that even under _unknown_ annotation confusions, the data membership can still be _provably_ identified under our proposed learning criterion. The proposed approach is tested over multiple datasets to validate our claims.
First, it is unclear under what conditions the loss functions constructed for DCC could succeed or fail in finding the ground-truth cluster membership of the data entities. However, understanding the _identifiability_ of the membership is critical for designing principled and robust DCC systems. The interplay of key aspects, e.g., neural network complexity and the generalization ability of the learned feature extractor, in the context of DCC is also of great interest--yet no pertinent study exists. Second, most of the existing (D)CC methods (implicitly) assumed that annotations are accurate; see, e.g., (Basu et al., 2004; Wagstaff et al., 2001; Li et al., 2009; Zhang et al., 2019; Ren et al., 2019; Hsu et al., 2018). Hence, many of these methods may not be robust to annotation noise. This is particularly detrimental to DCC methods as large over-parameterized neural models easily overfit (Du et al., 2019). Some works took noisy annotations into consideration (e.g., (Luo et al., 2018; Manduchi et al., 2021)), but offered no guarantees of recovering the cluster membership.
Contributions.In this work, we take a deeper look at the DCC problem from a membership-identifiability analysis viewpoint. Our contribution is twofold:
First, we re-examine a recently emerged effective loss function of DCC, namely, the logistic loss-based DCC criterion, which was proposed by (Hsu et al., 2018; Zhang et al., 2021). We show that, if the pairwise annotations are generated following a model that is reminiscent of the mixed-membership stochastic blockmodel (MMSB) (Airoldi et al., 2008; Huang and Fu, 2019), then, the logistic loss can provably recover (i) the data entities' cluster membership and (ii) the nonlinear function that maps the data features to the membership indicator for both seen and unseen data--and thus generalization of the learned neural network is guaranteed.
Second, using our understanding, we propose a noisy annotation-robust version of logistic loss for DCC. We explicitly model the annotators' confusion as a probability transition matrix, which is inspired by classic noisy label analysis such as the Dawid-Skene model (Dawid and Skene, 1979; Ghosh et al., 2011; Zhang et al., 2014). We propose a geometric factor analysis (Fu et al., 2018, 2019) based learning criterion to _provably_ ensure the identifiability of the ground-truth cluster membership, in the presence of annotation confusion.
We test our method over a series of DCC tasks and observe that the proposed approach significantly improves the performance over existing paradigms, especially when annotation noise exists. Our finding shows the significance of identifiability in DCC, echoing observations made in similar semi-supervised/unsupervised problems, e.g., (Arora et al., 2013; Kumar et al., 2013; Anandkumar et al., 2014; Zhang et al., 2014). We also evaluate the algorithms using real data collected through the Amazon Mechanical Turk (AMT) platform. The code is published at github.com/ductri/VolMaxDCC.
Notation.We use \(x\), \(\mathbf{x}\) and \(\mathbf{X}\) to denote scalar, vector and matrix, respectively; both \([\mathbf{x}]_{k},x_{k}\) refer to the \(k\)th element of vector \(\mathbf{x}\); \(X_{ij}\) (and \([\mathbf{X}]_{i,j}\)) is the element in the \(i\)th row and \(j\)th column of \(\mathbf{X}\); \(\mathbf{I}_{K}\) denotes the identity matrix of size \(K\); \(\mathbf{0}\) and \(\mathbf{1}\) are all-zero and all-one matrices with proper sizes; \(\langle\mathbf{x},\mathbf{y}\rangle\) and \(\langle\mathbf{X},\mathbf{Y}\rangle\) denote dot products between two vectors and two matrices, respectively; \(\left\|\mathbf{x}\right\|\) denotes the \(\ell_{2}\)-norm; \(\left\|\mathbf{X}\right\|_{\mathrm{F}}\) and \(\left\|\mathbf{X}\right\|_{2}\) denote the Frobenius norm and spectral norm of \(\mathbf{X}\), respectively; \(\sigma_{\max}(\mathbf{X}),\sigma_{\min}(\mathbf{X})\), and \(\sigma_{i}(\mathbf{X})\) represent the largest, the smallest, and the \(i\)th singular value of matrix \(\mathbf{X}\), respectively; \([N]\) is the set of natural numbers from 1 to \(N\), i.e., \([N]=\{1,\ldots,N\}\); \([N]\times[N]\) denotes the set of all possible pairs of \((i,j)\) where \(i\in[N],j\in[N]\); \(\mathrm{cone}(\mathbf{X})\) to denote conic hull of the column vectors of \(\mathbf{X}\), i.e., \(\mathrm{cone}(\mathbf{X})=\{\mathbf{y}\mid\mathbf{y}=\mathbf{X}\mathbf{\theta},\mathbf{\theta}\geq\mathbf{0}\}\).
## 2 Background
Problem Setting.We consider the CC setting as follows: There are \(N\) data samples \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\). Each sample belongs to one or multiple of \(K\) clusters in total. The association of \(\mathbf{x}_{n}\) with the clusters is represented by a vector \(\mathbf{m}_{n}\in\mathbb{R}^{K}\). The element \([\mathbf{m}_{n}]_{k}\) represents the probability that sample \(n\) belongs to cluster \(k\). Note that in hard clustering, \([\mathbf{m}_{n}]_{k}\in\{0,1\}\) for all \(k\in[K]\), which means the clusters have no overlaps. In more general cases, we have
\[\mathbf{1}^{\top}\mathbf{m}_{n}=1,\ \mathbf{m}_{n}\geq\mathbf{0}. \tag{1}\]
A collection of \(M\) pairwise annotations are available, which are denoted by \((i_{1},j_{1},y_{1}),\ldots,(i_{M},j_{M},y_{M})\). Here,
\[y_{m}=\left\{\begin{aligned} & 1,&\mathbf{x}_{i_{m}},\mathbf{x}_{j_{m}} \text{ are ``similar''},\\ & 0,&\text{otherwise},\end{aligned}\right. \tag{2}\]
where the similarity of the memberships of \(\mathbf{x}_{i_{m}}\) and \(\mathbf{x}_{j_{m}}\) is typically judged by an annotator. Note that there are in total \(N(N-1)/2\) such data pairs, and we often have
\[M\ll N(N-1)/2;\]
that is, only a small portion of the data pairs are annotated. The objective of pairwise annotation-based CC is to find the cluster membership vector \(\mathbf{m}_{n}\) of each \(\mathbf{x}_{n}\) using the data and the \(M\) annotations.
Early CC Methods.The task of clustering with pairwise constraints dates back to the early 2000s, when (Wagstaff and Cardie, 2000; Wagstaff et al., 2001) used the pairwise annotations to impose extra constraints on the classic K-means iterations. The work (Basu et al., 2004) considered using pairwise annotation-induced "soft" constraints
(or, regularization terms) to modify K-means; see similar ideas in (Bilenko et al., 2004). Instead of modifying K-means, another line of approaches proposed to work with pairwise constraints under the spectral clustering framework. The idea is to incorporate the pairwise annotation-based constraints into the construction of the graph affinity matrix; see, e.g., (Kulis et al., 2005; Lu and Carreira-Perpinan, 2008; Li et al., 2009; Cucuringu et al., 2016).
DCC Developments.In recent years, deep neural network-based feature extractors were proposed to combine with CC--leading to the _deep constrained clustering_ (DCC) paradigms, e.g., (Manduchi et al., 2021; Luo et al., 2018; Zhang et al., 2019, 2021). In DCC, a deep neural network (DNN) \(\mathbf{f_{\theta}}(\cdot)\) is used to link the data vector \(\mathbf{x}_{n}\) with its membership, i.e.,
\[\mathbf{m}_{n}=\mathbf{f_{\theta}}(\mathbf{x}_{n}).\]
The use of DNN helps nonlinearly transform the data to spaces that are "friendly" to clustering (Yang et al., 2017). Learning \(\mathbf{f_{\theta}}(\cdot)\) also allows the neural network to generalize to unseen data. By (1),
\[\mathbf{m}_{i_{m}}^{\top}\mathbf{m}_{j_{m}}\in[0,1]\]
can be considered as the probability that \(\mathbf{x}_{i_{m}}\) and \(\mathbf{x}_{j_{m}}\) belong to the same cluster, i.e., \(y_{m}\sim\mathsf{Bernoulli}(\mathbf{m}_{i_{m}}^{\top}\mathbf{m}_{j_{m}})\). From this perspective, recent works have proposed an extension of logistic regression to incorporate pairwise annotations (Hsu et al., 2018; Zhang et al., 2019, 2021):
\[\mathsf{Loss}_{\mathrm{cc}}(\mathbf{\theta}) =\frac{1}{M}\sum_{m=1}^{M}\Big{(}y_{m}\log\frac{1}{\mathbf{f_{\theta }}(\mathbf{x}_{i_{m}})^{\top}\mathbf{f_{\theta}}(\mathbf{x}_{j_{m}})}+ \tag{3}\] \[(1-y_{m})\log\frac{1}{1-\mathbf{f_{\theta}}(\mathbf{x}_{i_{m}})^{\top} \mathbf{f_{\theta}}(\mathbf{x}_{j_{m}})}\Big{)}.\]
In the literature, the \(\mathsf{Loss}_{\mathrm{cc}}\) term is sometimes used with other loss functions; e.g., (Zhang et al., 2019, 2021) used an overall loss function consisting of two terms:
\[\text{minimize}\ \mathsf{Loss}_{\mathrm{recon}}+\lambda\mathsf{Loss}_{\mathrm{cc}}, \tag{4}\]
where \(\lambda\geq 0\) and the reconstruction loss \(\mathsf{Loss}_{\mathrm{recon}}\) was realized using an autoencoder. Such combination is advocated for practical reasons, e.g., utilizing all available data. Nonetheless, as we will show, \(\mathsf{Loss}_{\mathrm{cc}}\) itself suffices to offer strong guarantees under reasonable conditions.
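For concreteness, a minimal sketch of how the loss in (3) can be computed is given below. It assumes PyTorch and a callable `f_theta` that maps a batch of features to softmax membership vectors; the function name and the clamping constant are illustrative choices, not part of the released implementation.

```python
import torch

def pairwise_logistic_loss(f_theta, x, pairs, y, eps=1e-6):
    """Pairwise logistic CC loss of Eq. (3).

    f_theta : callable mapping an (N, d) tensor to (N, K) softmax memberships
    x       : (N, d) tensor of data features
    pairs   : (M, 2) long tensor of annotated index pairs (i_m, j_m)
    y       : (M,) float tensor of 0/1 pairwise annotations
    """
    m = f_theta(x)                                    # (N, K) membership vectors
    p = (m[pairs[:, 0]] * m[pairs[:, 1]]).sum(dim=1)  # <m_i, m_j>, lies in [0, 1]
    p = p.clamp(eps, 1.0 - eps)                       # avoid log(0)
    return -(y * torch.log(p) + (1.0 - y) * torch.log(1.0 - p)).mean()
```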
Challenges.Although there has been abundant empirical evidence demonstrating the effectiveness of CC and DCC, theoretical understanding has largely lagged behind. This is particularly obvious for the DCC case, where aspects such as the identifiability of \(\mathbf{m}_{n}\) and the generalization ability of the learned \(\mathbf{f_{\theta}}(\cdot)\) are of great interest--yet no theoretical support exists, to our best knowledge.
Another challenge lies in noise robustness. Although it has been widely observed that noisy annotations could greatly impact the performance of CC and DCC (Liu et al., 2017; Covoes et al., 2013; Pellege and Baras, 2007; Manduchi et al., 2021; Luo et al., 2018; Chang et al., 2017; Zhang et al., 2021; Zhu et al., 2015), effective solutions--especially performance-guaranteed ones--for handling this problem have been largely lacking. Many works did test their algorithms with noisy labels (see, e.g., (Cucuringu et al., 2016; Wang et al., 2014)), but no special care was taken to alleviate the impact of such noisy labels. Some heuristics--such as pre-processing the data (Yi et al., 2012), modeling annotation uncertainties (Manduchi et al., 2021), and introducing concepts reflecting human behaviors (Luo et al., 2018; Chang et al., 2017) and annotators' accuracy (Luo et al., 2018; Chang et al., 2017)--were also proposed in the literature. However, performance guarantees have been elusive.
## 3 DCC Loss Revisited: Identifiability and Generalization
In this section, we take a deeper look into the DCC loss in (3) and understand its theoretical properties. Such understanding will allow us to design a performance-guaranteed new DCC loss in the presence of noisy pairwise annotations.
### A Generative Model of Annotations
To better present the results, in this section, we use the superscript "\(\natural\)" to denote all the ground-truth terms; e.g., \(\mathbf{f}^{\natural}(\cdot)\) denotes the ground-truth nonlinear mapping from data to membership and \(\mathbf{m}_{n}^{\natural}=\mathbf{f}^{\natural}(\mathbf{x}_{n})\) denotes the ground-truth membership vector of sample \(n\). We propose to employ the following generative model of \(y_{m}\): Given \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\sim\mathcal{P}_{\mathcal{X}},\mathbf{x}_{n}\in \mathcal{X}\),
\[i,j\text{ are sampled over }[N]\times[N]\;; \tag{5a}\] \[\mathbf{m}_{i}^{\natural}=\mathbf{f}^{\natural}(\mathbf{x}_{i})\text{ and }\mathbf{m}_{j}^{\natural}=\mathbf{f}^{\natural}(\mathbf{x}_{j});\] (5b) \[y_{i,j}\sim\mathsf{Bernoulli}(\langle\mathbf{m}_{i}^{\natural},\mathbf{m}_{j}^{\natural}\rangle). \tag{5c}\]
Note that the logistic loss in (3) is the maximum likelihood estimator (MLE) of the parameters in the generative model. The model is reminiscent of the classic generative models of logistic regression and network analysis, particularly, MMSB (Airoldi et al., 2008). In MMSB, the nonlinear mapping from the data features to the membership vectors were not considered. Incorporating the nonlinear mapping \(\mathbf{f}^{\natural}\) follows the ideas from supervised learning, where the relationship between the data features and the data class is often modeled as the following conditional probability represented by \(\mathbf{f}^{\natural}\)(Shalev-Shwartz and Ben-David, 2014):
\[[\mathbf{m}_{n}^{\natural}]_{k}=\mathsf{Pr}(y=k|\mathbf{x}_{n})=[\mathbf{f}^{\natural}(\mathbf{ x}_{n})]_{k}.\]
Once \(\mathbf{f}^{\natural}\) is learned, it can be used as a multi-class classifier. This perspective was also mentioned in (Hsu et al., 2018; Zhang et al., 2019, 2021)--for algorithm design purpose. However, membership identifiability was not addressed.
In this section, we will show that the logistic loss (3) ensures identifying the membership vectors \(\mathbf{m}_{n}\) under the generative model in (5). It also ensures that \(\mathbf{f}_{\mathbf{\theta}}\approx\mathbf{f}^{\natural}\), under reasonable conditions. These findings may shed some light onto the effectiveness and good generalization performance of DCC using (3) in the literature.
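As an illustration of the generative model in (5), the following sketch simulates pairwise annotations from ground-truth memberships; the Dirichlet mixing, the sample sizes, and the random seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 1000, 10, 10_000

# Ground-truth memberships (rows on the simplex; the text stacks them as columns of M^natural).
memberships = rng.dirichlet(alpha=np.full(K, 0.1), size=N)    # (N, K)

# Sample annotated pairs uniformly from [N] x [N] and draw y ~ Bernoulli(<m_i, m_j>).
i_idx = rng.integers(0, N, size=M)
j_idx = rng.integers(0, N, size=M)
probs = np.einsum("mk,mk->m", memberships[i_idx], memberships[j_idx])
y = rng.binomial(1, probs)
```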
### Performance Analysis
Finite-Sample Identifiability and Generalization.We first show that \(\mathsf{Loss}_{cc}\) is a sound criterion for identifying \(\mathbf{m}_{n}^{\natural}\) and \(\mathbf{f}^{\natural}(\cdot)\) by itself. Specifically, let us denote
\[\mathbf{\theta}^{\star}=\arg\min_{\mathbf{\theta}}\mathsf{Loss}_{\text{cc}}(\mathbf{ \theta}) \tag{6}\]
and \(\mathbf{f}^{\star}=\mathbf{f}_{\mathbf{\theta}^{\star}}\). Here, \(\mathbf{f}_{\mathbf{\theta}}\) is represented by a deep neural network. To be more specific, we consider \(\mathbf{f}_{\mathbf{\theta}}\) belonging to a function class \(\mathcal{F}\) defined by
\[\mathcal{F}\triangleq\left\{\texttt{softmax}(\texttt{net}(\mathbf{x};\mathbf{\theta }))\mid\forall\mathbf{x}\in\mathcal{X}\right\},\]
where \(\texttt{net}(\cdot;\mathbf{\theta})\) is a neural network that maps \(\mathbf{x}\) to \(\mathbb{R}^{K}\), and \([\texttt{softmax}(\mathbf{x})]_{k}\triangleq\exp(x_{k})/\sum_{\ell=1}^{K}\exp(x_{\ell})\) is imposed onto the output layer of the neural network, which is used to reflect the constraints in (1).
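A member of this function class can be realized, for example, by a small fully connected network whose output layer is a softmax; the layer widths below are illustrative (the experiments in Section 6 use two hidden layers with 512 ReLU units).

```python
import torch.nn as nn

class MembershipNet(nn.Module):
    """softmax(net(x; theta)): maps a feature vector to a membership vector on the simplex."""

    def __init__(self, in_dim=512, hidden=512, K=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, K),
        )

    def forward(self, x):
        return self.net(x).softmax(dim=-1)  # nonnegative entries summing to one, cf. (1)
```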
We will show that \(\mathbf{f}^{\star}\approx\mathbf{f}^{\natural}\) and \(\mathbf{m}_{n}^{\star}=\mathbf{f}^{\star}(\mathbf{x}_{n})\approx\mathbf{m}_{n}^{\natural}\) under reasonable conditions. To proceed, let us invoke the following assumptions:
**Assumption 3.1** (Anchor Sample Condition (ASC)).: Let \(\mathbf{m}_{n}^{\natural}=\mathbf{f}^{\natural}(\mathbf{x}_{n})\), \(\mathbf{M}^{\natural}=[\mathbf{m}_{1}^{\natural},\dots,\mathbf{m}_{N}^{\natural}]\). \(\mathbf{M}^{\natural}\) satisfies ASC if there exists a set \(\mathcal{K}\) of \(K\) indices such that \(\mathbf{M}^{\natural}[:,\mathcal{K}]=\mathbf{I}\). Accordingly, define \(\mathbf{V}^{\natural}\triangleq\mathbf{M}^{\natural}[:,\mathcal{K}^{\text{C}}], \mathcal{K}^{c}\triangleq[N]\setminus\mathcal{K}\).
**Assumption 3.2**.: (Function Class) There exist \(0<\nu<1\) and \(\mathbf{\tilde{f}}\in\mathcal{F}\) such that
\[\left\|\mathbf{\tilde{f}}(\mathbf{x})-\mathbf{f}^{\natural}(\mathbf{x})\right\|\leq\nu, \forall\mathbf{x}\in\mathcal{X}. \tag{7}\]
In addition, \(\alpha<\mathbf{f}(\mathbf{x})^{\mathsf{T}}\mathbf{f}(\mathbf{y})<1-\alpha,\ \forall\mathbf{x},\mathbf{y}\in\mathcal{X}, \forall\mathbf{f}\in\mathcal{F}\), for some \(0<\alpha<1\). The complexity measure of the neural network \(\texttt{net}(\mathbf{x};\mathbf{\theta})\) is \(R_{\texttt{NET}}\).
The ASC means that there exist samples that solely belong to a single cluster, which are called the anchor samples. This assumption is reminiscent of the anchor point assumption in the community detection literature (Panov et al., 2017; Mao et al., 2017). Condition (7) takes the approximation error of the employed neural network class \(\mathcal{F}\) into consideration. The assumption on \(\alpha\) is a regularity condition that prevents pathological unbounded cases of the logistic loss from happening. The constant \(R_{\texttt{NET}}\) is proportional to the upper bound of the so-called spectral complexity in (Bartlett et al., 2017). A formal definition of \(R_{\texttt{NET}}\) is given in Lemma A.10. The parameters \(\nu\) and \(R_{\texttt{NET}}\) present a tradeoff: Roughly speaking, if one has a deeper and wider neural network, then \(\nu\) is smaller--but \(R_{\texttt{NET}}\) is bigger.
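As a simple numerical illustration of the ASC, one can check whether a given membership matrix contains (approximate) anchor samples, i.e., columns close to canonical basis vectors; the tolerance below is an arbitrary choice.

```python
import numpy as np

def satisfies_asc(M, tol=1e-3):
    """M is a K x N membership matrix (columns on the simplex).
    Returns True if, for every cluster k, some column is within `tol` of the basis vector e_k."""
    K = M.shape[0]
    return all(
        np.linalg.norm(M - np.eye(K)[:, [k]], axis=0).min() < tol
        for k in range(K)
    )
```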
Define \(\mathbf{P}^{\natural}\in\mathbb{R}^{N\times N}\) such that \(P^{\natural}_{ij}=\left\langle\mathbf{f}^{\natural}(\mathbf{x}_{i}),\mathbf{f}^{\natural}(\mathbf{x}_{j})\right\rangle\), \(\mathbf{P}^{\star}\in\mathbb{R}^{N\times N}\) such that \(P^{\star}_{ij}=\left\langle\mathbf{f}^{\star}(\mathbf{x}_{i}),\mathbf{f}^{\star}(\mathbf{x}_{j})\right\rangle\), and \(\mathbf{S}_{\mathbf{X}}=[\mathbf{x}_{i_{1}},\mathbf{x}_{j_{1}},\dots,\mathbf{x}_{i_{M}},\mathbf{x}_{j_{M}}]\). We first show that the matrix \(\mathbf{P}^{\natural}\) is approximately recoverable via minimizing (3):
**Lemma 3.3**.: _Let \(S=\{(i_{1},j_{1},y_{1}),\dots,(i_{M},j_{M},y_{M})\}\), where \((i_{1},j_{1}),\dots,(i_{M},j_{M})\) are drawn independently and uniformly at random from \([N]\times[N]\). Suppose that \(y_{m}\mid(i_{m},j_{m})\sim\mathsf{Bernoulli}(P^{\natural}_{i_{m},j_{m}})\) following the generative model in (5) and that \(\mathcal{F}\) satisfies Assumption 3.2. Then, with probability at least \(1-\delta\), we have_
\[\frac{1}{N^{2}}\left\|\mathbf{P}^{\star}-\mathbf{P}^{\natural}\right\|_{ \mathrm{F}}^{2}\leq\epsilon(M,\delta)^{2},\]
_where \(\epsilon(M,\delta)^{2}\) is defined as follows:_
\[\epsilon(M,\delta)^{2}\triangleq\frac{64\log(1/\alpha)}{M}+\frac{ 96\sqrt{2}\log M}{\alpha M\log 2}\left\|\mathbf{S}_{\mathbf{X}}\right\|_{\mathrm{F}}\sqrt{R_{\texttt{ NET}}}\] \[+64\log(1/\alpha)\sqrt{\frac{2\log(4/\delta)}{M}}+\frac{16\nu}{ \alpha}. \tag{8}\]
The proof of Lemma 3.3 is relegated to Appendix A. It is not surprising that \(\mathbf{P}^{\natural}\) can be recovered from minimizing the logistic loss, as the problem of recovering \(\mathbf{P}^{\natural}\) can be regarded as a generalized 1-bit matrix completion (MC) problem. Unlike the conventional 1-bit MC frameworks that leveraged the low-rank structure of the complete data (see, e.g., (Davenport et al., 2014)), here we exploit the neural generative model in (5), which is also a low-dimensional model, as long as \(R_{\texttt{NET}}\) is sufficiently small (compared to \(M\) and \(N\)).
Before proceeding to show recovery of \(\mathbf{M}^{\natural}\), let us observe the following fact based on Lemma 3.3: it can be seen in (8) that \(\epsilon(M,M^{-0.5})^{2}\) is decreasing with a rate of \(\mathcal{O}(\sqrt{(\log M)/M})\), hence there exists \(M_{0}\in\mathbb{N}\) independent of \(N\) such that \(\forall M>M_{0}\),
\[\epsilon^{\prime}(M)^{2}\triangleq M^{0.25}\epsilon(M,M^{-0.5})^{2}+M^{-0.25} \leq\frac{1}{8K^{2}}. \tag{9}\]
Using the above fact, we show that
**Theorem 3.4**.: _Under the same assumptions as in Lemma 3.3, further assume that Assumption 3.1 is satisfied. Let \(\mathbf{M}^{\star}=[\mathbf{f}^{\star}(\mathbf{x}_{1}),\dots,\mathbf{f}^{\star}(\mathbf{x}_{N})]\), and consider \(M>M_{0}\), then there exists a permutation matrix \(\mathbf{\Pi}^{\star}\) such that_
\[\frac{1}{NK}\left\|\mathbf{\Pi}^{\star}\mathbf{M}^{\star}-\mathbf{M}^{ \natural}\right\|_{\mathrm{F}}^{2} \leq\frac{4N}{K}\epsilon(M,\delta)^{2}\] \[+\frac{2K}{N}(1+8\sigma_{\max}^{2}(\mathbf{V}^{\natural}))\epsilon^{ \prime}(M),\]
holds with probability at least \(1-\delta-K^{2}/M^{0.25}\), where \(\epsilon^{\prime}(M)\) is defined in (9)._
The proof of Theorem 3.4 is in Appendix B. We should mention that the sample complexity in Theorem 3.4 is based on a worst-case analysis, and thus the bound tends to be pessimistic--it starts to make sense when
\[N\leq\mathcal{O}(K\sqrt{M}),\]
as reflected in the first term on the right hand side. In practice, \(\mathsf{Loss}_{\text{cc}}\) can work fairly well using a much smaller \(M\). Nonetheless, Theorem 3.4 for the first time shows the soundness of using \(\mathsf{Loss}_{\text{cc}}\) in (3), in terms of being able to guarantee the identifiability of \(\mathbf{M}^{\natural}\).
With Theorem 3.4, we can readily show the following:
**Theorem 3.5** (Generalization).: _Under the same conditions as in Theorem 3.4, with probability at least \(1-\delta-K^{2}/M^{1/4}\),_
\[\mathop{\mathbb{E}}_{\mathbf{x}\sim\mathcal{P}_{\mathcal{X}}}\left[\frac{1}{K}\left\|\mathbf{\Pi}^{\star}\mathbf{f}^{\star}(\mathbf{x})-\mathbf{f}^{\natural}(\mathbf{x})\right\|^{2}\right]\leq\frac{4N}{K}\epsilon(M,\delta)^{2}\\ +\frac{2K}{N}(1+8\sigma_{\max}^{2}(\mathbf{V}^{\natural}))\epsilon^{\prime}(M)\\ +\frac{4}{NK}+\frac{12\sqrt{2R_{\texttt{NET}}}}{NK\log 2}+8\sqrt{\frac{2\log(4/\delta)}{NK^{2}}}.\]
## 4 DCC with Noisy Labels: Geometric Regularization
Incorporating Annotation Confusion.We should mention that the generative model in (5) can already model annotation noise to a certain extent: The Bernoulli sampling process can encode some 0-1 flipping probability of observing \(y_{m}\). Nonetheless, this level of noise consideration is not enough, as annotations could be grossly inaccurate. To take more severe annotation errors into consideration, we modify the generative model as follows. We assume that the annotator confuses class \(i\) with class \(j\) with probability \(\mathsf{Pr}(j|i)\). Letting \(A_{i,j}=\mathsf{Pr}(i|j)\), we have a "confusion matrix" \(\mathbf{A}\in\mathbb{R}^{K\times K}\) where \([\mathbf{A}]_{i,j}=A_{i,j}\). Hence, the annotator's "confused membership vector" is modeled as
\[\mathbf{m}_{n}^{\rm confused}=\mathbf{A}\mathbf{m}_{n}^{\natural}=\mathbf{A}\mathbf{f}^{\natural} (\mathbf{x}_{n}). \tag{10}\]
Then, the annotator's output is sampled from the following:
\[y_{m}\sim\mathsf{Bernoulli}(\langle\mathbf{A}\mathbf{f}^{\natural}(\mathbf{x}_{i_{m}}), \mathbf{A}\mathbf{f}^{\natural}(\mathbf{x}_{j_{m}})\rangle). \tag{11}\]
Note that using a confusion matrix to model noisy labels' generating process is widely seen in noisy label learning--but mostly under the supervised learning setting; see, e.g., (Dawid and Skene, 1979; Liu et al., 2012; Zhang et al., 2014; Chu et al., 2021). We argue that this confusion model is also suitable for pairwise annotation. The rationale is that the error happened in comparison is mainly caused by the annotator's confusion on the membership of \(\mathbf{x}_{i_{m}}\)_or_ that of \(\mathbf{x}_{j_{m}}\)--which is exactly reflected in (11).
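Continuing the earlier simulation sketch, annotation confusion can be injected by multiplying the ground-truth memberships by a column-stochastic confusion matrix before sampling; the uniform-confusion matrix and the noise level used below are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, M, noise = 1000, 10, 10_000, 0.2

memberships = rng.dirichlet(np.full(K, 0.1), size=N)            # (N, K) ground-truth m_n
A = (1 - noise) * np.eye(K) + (noise / K) * np.ones((K, K))     # column-stochastic confusion

confused = memberships @ A.T                                    # rows are A m_n, cf. (10)
i_idx = rng.integers(0, N, size=M)
j_idx = rng.integers(0, N, size=M)
probs = np.einsum("mk,mk->m", confused[i_idx], confused[j_idx])  # <A m_i, A m_j>, cf. (11)
y = rng.binomial(1, np.clip(probs, 0.0, 1.0))
```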
Volume Maximization DCC.To proceed, we propose the following modified logistic loss:
\[\mathsf{Loss}^{\prime}_{\rm cc}(\mathbf{\theta},\mathbf{B})=\frac{1}{M}\sum_{m=1}^{M}\Big{(}y_{m}\log\frac{1}{\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}_{i_{m}})^{\top}\mathbf{B}\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}_{j_{m}})}+\] \[(1-y_{m})\log\frac{1}{1-\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}_{i_{m}})^{\top}\mathbf{B}\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}_{j_{m}})}\Big{)},\]
where \(\mathbf{B}\in\mathbb{R}^{K\times K}\) satisfies \(0\leq\mathbf{B}\leq 1\), as it is induced by \(\mathbf{B}=\mathbf{A}^{\top}\mathbf{A}\). Note that we will use \(\mathbf{\theta}\) and \(\mathbf{B}\) as our optimization variables (instead of \(\mathbf{A}\)) as it simplifies the loss function. In addition, as our ultimate goal is to learn \(\mathbf{M}^{\natural}\) and \(\mathbf{f}^{\natural}\), the intermediate variable \(\mathbf{A}\) does not need to be explicitly estimated.
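A sketch of the modified loss in PyTorch, written on precomputed membership vectors so that it can be reused inside a training loop; the clamping constant is an illustrative numerical safeguard.

```python
import torch

def noisy_pairwise_logistic_loss(m, B, pairs, y, eps=1e-6):
    """Loss'_cc: logistic loss with the bilinear form f(x_i)^T B f(x_j), where B = A^T A.

    m     : (N, K) tensor of membership vectors f_theta(x_n)
    B     : (K, K) tensor with entries in [0, 1]
    pairs : (M, 2) long tensor of annotated index pairs
    y     : (M,) float tensor of 0/1 annotations
    """
    mi, mj = m[pairs[:, 0]], m[pairs[:, 1]]
    p = ((mi @ B) * mj).sum(dim=1).clamp(eps, 1.0 - eps)  # f(x_i)^T B f(x_j)
    return -(y * torch.log(p) + (1.0 - y) * torch.log(1.0 - p)).mean()
```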
We show that minimizing \(\mathsf{Loss}^{\prime}_{\rm cc}(\mathbf{\theta},\mathbf{B})\) provably recovers the data membership and finds a generalizable \(\mathbf{f}^{\star}\), with an additional volume requirement satisfied by the solution. Specifically, we have the following theorem:
**Theorem 4.1** (Identifiability of Noisy Case).: _Assume that the assumptions in Lemma 3.3 hold, except that the generative model is replaced by (11). Suppose that \(\mathbf{M}^{\natural}\) satisfies SSC (Assumption 3.6) and that \(\mathbf{f}^{\natural}\in\mathcal{F}\). Also assume that \(\mathrm{rank}(\mathbf{A}^{\top}\mathbf{A}\mathbf{M}^{\natural})=K\). Denote_
\[(\mathbf{\theta}^{\star},\mathbf{B}^{\star})=\arg\min\ \mathsf{Loss}^{\prime}_{\rm cc}(\mathbf{ \theta},\mathbf{B}) \tag{12}\]
_and \(\mathbf{f}^{\star}=\mathbf{f}_{\mathbf{\theta}^{\star}}\) and \(\mathbf{m}_{n}^{\star}=\mathbf{f}^{\star}(\mathbf{x}_{n})\). In addition, assume that \(\mathsf{Loss}^{\prime}_{\rm cc}\) is minimized with a solution \(\mathbf{M}^{\star},\mathbf{B}^{\star}\) such that \(\log\det(\mathbf{M}^{\star}(\mathbf{M}^{\star})^{\top})\) is maximized among all possible optimal solutions. Then, at the limit of \(\max\big{(}\log(1/\alpha),\log(M)\sqrt{R_{\texttt{NET}}}/\alpha\big{)}/\sqrt{M}\to 0\), the following statements hold:_
1. _There exists a permutation matrix_ \(\mathbf{\Pi}^{\star}\) _such that_ \(\mathbf{\Pi}^{\star}\mathbf{M}^{\star}=\mathbf{M}^{\natural}\)_._
2. _The learned neural network_ \(\mathbf{f}^{\star}\) _satisfies_ \[\mathop{\mathbb{E}}_{\mathbf{x}\sim\mathcal{P}_{\mathcal{X}}}\left[\frac{1}{K}\left\|\mathbf{\Pi}^{\star}\mathbf{f}^{\star}(\mathbf{x})-\mathbf{f}^{\natural}(\mathbf{x})\right\|^{2}\right]\leq\frac{8}{NK}+\] \[\frac{12\sqrt{2R_{\texttt{NET}}}}{NK\log 2}+8\sqrt{\frac{2\log(4/\delta)}{NK^{2}}}\] _with probability at least_ \(1-\delta\)_._
The proof of Theorem 4.1 is in Appendix E. The take-home point is that, when one has large samples and an expressive neural network, the membership identifiability and generalization performance of using \(\mathsf{Loss}^{\prime}_{\rm cc}\) can be as good as that of using \(\mathsf{Loss}_{\rm cc}\)--as if there is no annotation confusion.
Of course, there are more requirements to satisfy under Theorem 4.1. Particularly, the maximal \(\log\det(\mathbf{M}^{\star}(\mathbf{M}^{\star})^{\top})\) requirement is nontrivial. In practice, the optimization criterion in Theorem 4.1 can be approximated via a regularized version of \(\mathsf{Loss}^{\prime}_{\rm cc}\) as follows:
\[\mathop{\text{minimize}}_{\mathbf{\theta},\mathbf{0}\leq\mathbf{B}\leq\mathbf{1}}\ \mathsf{Loss}^{\prime}_{\rm cc}+\mathsf{Loss}_{\rm vol}, \tag{13}\]
where \(\mathsf{Loss}_{\rm vol}=-\lambda\log\det(\mathbf{M}\mathbf{M}^{\top})\) and \(\mathbf{m}_{n}=\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}_{n})\). Note that \(\log\det(\mathbf{M}\mathbf{M}^{\top})\) is proportional to the volume of the Gram matrix \(\mathbf{M}\mathbf{M}^{\top}\)(Boyd et al., 2004). Hence, the term can be regarded as a geometry-driven regularization. We name our method using (13) as the _Volume Maximization-Regularized Deep Constrained Clustering_ (VolMaxDCC). An overall architecture is shown in Fig. 2.
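The volume term can be implemented directly as a log-determinant of the Gram matrix of the predicted memberships; the small ridge added below is purely for numerical stability and is not part of the criterion in (13).

```python
import torch

def volume_loss(m, lam=1e-3, ridge=1e-6):
    """Loss_vol = -lambda * logdet(M M^T), with m an (N, K) tensor whose rows are the m_n
    (so m.T plays the role of M in the text and m.T @ m is the K x K Gram matrix M M^T)."""
    gram = m.t() @ m + ridge * torch.eye(m.shape[1], device=m.device)
    return -lam * torch.logdet(gram)
```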
## 5 Related Work
Recovering the underlying unseen matrix from incomplete and binary measurement is related to 1-bit low-rank matrix completion (Davenport et al., 2014), which was often studied in the context of recommender systems. The generative model in (5) is reminiscent of the MMSB (Airoldi et al., 2008; Huang and Fu, 2019) that has been a workhorse in overlapped community detection. The MMSB model with
missing links was also used for clustering-related tasks, e.g., crowdclustering (Gomes et al., 2011). The ASC and SSC are commonly seen conditions in identifiability analysis of nonnegative matrix factorization (Fu et al., 2018, 2015, 2019, 2016; Huang et al., 2014; Donoho & Stodden, 2003; Gillis, 2014; Gillis & Vavasis, 2014; Gillis & Luce, 2014; Gillis, 2020; Nguyen et al., 2022). Using volume maximization/minimization to enhance NMF identifiability appeared as early as 1994 (Craig, 1994) in the context of a blind source separation (BSS) problem in spectral image analysis. The volume-based geometric regularization was connected to ASC- and SSC-like conditions (e.g., the so-called "pure pixel condition" and "local dominance condition") in (Chan et al., 2009) and (Fu et al., 2015, 2016; Lin et al., 2015), respectively, to attain uniqueness of matrix factorization models, again, in the context of BSS. All these models do not involve nonlinear function learning or deep neural networks. In addition, the classic geometric factorization models were developed with continuous low-rank matrix data--instead of binary data generated from models involving complex _unknown_ nonlinear function. Confusion matrices are often used in supervised noisy label learning and crowdsourcing (Dawid & Skene, 1979; Zhang et al., 2014; Rodrigues & Pereira, 2018), to model probabilistic label transition in the annotating process. The ASC and SSC were also used to establish identifiability in supervised (crowdsourced) noisy label learning (Xia et al., 2019; Li et al., 2021a; Ibrahim et al., 2019; Ibrahim & Fu, 2021; Ibrahim et al., 2023). Incorporating the idea of confusion matrix-based modeling in similarity annotation was not seen before. The proposed generative model connects label confusion matrices, volume maximization (more generally, ASC/SSC-based identifiable factor analysis), and DCC for the first time, to our best knowledge.
## 6 Experiments
Datasets.We use STL-10 (Coates et al., 2011), ImageNet-10 (Chang et al., 2017a), and CIFAR-10 (Krizhevsky et al., 2009). For the three datasets, we use \(N=10000\) samples as the seen data, and set aside \(2000\), \(2000\) and \(45000\) unseen samples, respectively, for testing the generalization performance. In all experiments, \(M=10,000\) pairwise constraints are uniformly and randomly drawn from \([N]\times[N]\). This means that no more than 0.02% of the total number of pairs are annotated for each of the three datasets.
Baselines.We compare our method with several baselines, including the classic CC methods, i.e., PCKMeans (Basu et al., 2004a), COP-KMeans (Wagstaff et al., 2001), and the DCC methods, namely, DC-GMM (Manduchi et al., 2021) and C-IDEC (Zhang et al., 2019, 2021). We also include the plain-vanilla K-means as a reference. We use a validation set for the baselines whenever appropriate for parameter tuning and algorithm stopping. The sizes of the validation sets are \(N_{\mathrm{valid}}=1000\) for STL-10 and ImageNet-10 and \(N_{\mathrm{valid}}=5000\) for CIFAR-10. For C-IDEC, the regularization parameter is chosen from \(\{0,1e{-}1,1e{-}2,1e{-}3,1e{-}4,1e{-}5\}\). For DC-GMM, we use their heuristic to set the constraint violation penalty with the true (oracle) label flipping rate. For the proposed VolMaxDCC, we also choose \(\lambda\) from \(\{0,1e{-}1,1e{-}2,1e{-}3,1e{-}4,1e{-}5\}\). We also include the result of using the simple logistic loss \(\mathsf{Loss}_{\mathrm{cc}}\) in (3), which is referred to as VanillaDCC.
Neural Network Settings.For all the DCC methods, we employ the unsupervised pre-training method by (Li et al., 2021b) to convert the images to feature vectors \(\{\mathbf{x}_{1},\dots,\mathbf{x}_{N}\}\subseteq\mathbb{R}^{512}\). The feature vectors are then fed to a two-hidden-layer fully connected neural network \(\mathbf{f_{\theta}}(\cdot)\), where each hidden layer has 512 ReLU activation functions. The output layer of \(\mathbf{f_{\theta}}(\cdot)\) has \(K=10\) dimensions with the softmax constraints. The classic methods also work with the pre-trained feature vectors.
Algorithm Implementation.To tackle the proposed criterion in (13), we first parameterize \(\mathbf{B}\) such that each element \(B_{ij}=1/(1+\exp(-B^{\prime}_{ij}))\), where \(B^{\prime}_{ij}\in\mathbb{R}\) is a trainable parameter. By doing so, (13) becomes an unconstrained optimization problem. We then employ the commonly used stochastic gradient-based solvers to tackle the re-parameterized problem. In our implementation, we use stochastic gradient descent with a batch size of \(128\). We set the learning rate for \(\mathbf{B^{\prime}}\) and \(\mathbf{\theta}\) to be 0.1 and 0.5, respectively. The initialization of \(\mathbf{\theta}\) is chosen randomly following uniform distributions with parameters depending on output dimension of each layer. To initialize \(\mathbf{B^{\prime}}\), we make the
Figure 2: The architecture of the VolMaxDCC approach.
diagonal elements to be \(1\) and the other elements \(-1\). The baselines are handled by their respective author-provided code for optimization.
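Putting the above together, a training-loop sketch following the settings just described (sigmoid re-parameterization of \(\mathbf{B}\), SGD with batch size 128, learning rates 0.5 for \(\mathbf{\theta}\) and 0.1 for \(\mathbf{B^{\prime}}\), and the diagonal/off-diagonal initialization of \(\mathbf{B^{\prime}}\)). It assumes pre-extracted tensors `x` (N x 512 float features), `pairs` (M x 2 long indices), and `y` (M float labels); it is an illustrative reconstruction, not the released code.

```python
import torch
import torch.nn as nn

K, lam = 10, 1e-3
model = nn.Sequential(                      # f_theta: two hidden layers of 512 ReLU units
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, K), nn.Softmax(dim=-1),
)
B_prime = nn.Parameter(2.0 * torch.eye(K) - 1.0)        # +1 on the diagonal, -1 elsewhere
opt = torch.optim.SGD([{"params": model.parameters(), "lr": 0.5},
                       {"params": [B_prime], "lr": 0.1}],
                      lr=0.5)                            # per-group rates override the default

for epoch in range(50):
    perm = torch.randperm(len(y))
    for start in range(0, len(y), 128):                  # mini-batches of annotated pairs
        idx = perm[start:start + 128]
        B = torch.sigmoid(B_prime)                       # enforces 0 <= B <= 1
        m = model(x)                                     # memberships of all seen samples
        mi, mj = m[pairs[idx, 0]], m[pairs[idx, 1]]
        p = ((mi @ B) * mj).sum(dim=1).clamp(1e-6, 1 - 1e-6)
        ce = -(y[idx] * p.log() + (1 - y[idx]) * (1 - p).log()).mean()
        vol = -lam * torch.logdet(m.t() @ m + 1e-6 * torch.eye(K))
        (ce + vol).backward()
        opt.step()
        opt.zero_grad()
```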
### Noisy Machine Annotations.
Noisy Annotation Settings.We first conduct experiments under noise-controlled settings. To be specific, we divide the experiments into two cases, where the annotations are accurate and noisy, respectively. In the former case, the annotations are set to \(1\) if the pair of data are from the same class, and set to \(0\) otherwise. In the latter case, to generate noisy pairwise constraints, several supervised machine annotators (i.e., classifiers) are trained using different numbers of training samples. By doing this, the trained classifiers have different prediction errors. The pairs are then annotated based on the class predictions made by these imperfect classifiers. If the pair of samples share the same predicted membership by the machine annotator, the annotation is set to be 1 (and 0 otherwise). The annotations are noisy as the machine annotators are far from perfect. The annotation noise level can be controlled by tuning the prediction error of the classifiers, by using various amounts of training samples. The statistics of the annotation errors are provided with our experiment results (see the "noise level" column in the tables).
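The machine-annotator protocol can be mimicked as follows: train a deliberately weak classifier on a few labeled samples and mark a pair as similar exactly when the predicted classes agree. The use of logistic regression here is an illustrative stand-in for the classifiers used in the experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def machine_annotate(x_labeled, y_labeled, x, pairs, n_train=200, seed=0):
    """Annotate `pairs` over the samples in `x` with a weak classifier:
    y_m = 1 iff the predicted classes of the two samples agree."""
    rng = np.random.default_rng(seed)
    sub = rng.choice(len(x_labeled), size=n_train, replace=False)
    clf = LogisticRegression(max_iter=1000).fit(x_labeled[sub], y_labeled[sub])
    pred = clf.predict(x)
    return (pred[pairs[:, 0]] == pred[pairs[:, 1]]).astype(int)
```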
Results.We report average performance on the seen set \(\{\mathbf{x}_{1},\dots,\mathbf{x}_{N}\}\) and the unseen data over 5 random trials in Tables 1, 2, and 3--which correspond to STL10, CIFAR10, and ImageNet10, respectively. For K-means, COP-Kmeans, and PCKMeans, we only report results on the seen set since these methods do not have the notion of generalization. The performance is measured by three commonly used metrics, namely, clustering accuracy (ACC) (Cai et al., 2010), normalized mutual information (NMI) (Cai et al., 2010), and adjusted Rand index (ARI) (Yeung and Ruzzo, 2001). For all metrics, a higher score indicates a better performance.
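The three metrics can be computed with standard tools; clustering accuracy additionally requires matching predicted cluster indices to ground-truth classes, which is done below with the Hungarian algorithm. This is a generic sketch rather than the exact evaluation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """Best accuracy over one-to-one relabelings of the predicted clusters."""
    K = int(max(y_true.max(), y_pred.max())) + 1
    count = np.zeros((K, K), dtype=int)
    for t, p in zip(y_true, y_pred):
        count[t, p] += 1
    rows, cols = linear_sum_assignment(-count)   # maximize the matched counts
    return count[rows, cols].sum() / len(y_true)

# nmi = normalized_mutual_info_score(y_true, y_pred)
# ari = adjusted_rand_score(y_true, y_pred)
```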
In the case where accurate pairwise constraints are used (i.e., the rows in all the tables corresponding to "Noise Level = 0%"), most methods work reasonably well. As expected, all the DCC methods exhibit tangible edges over the CC methods that do not use deep neural networks. This is consistent with the observations made from previous DCC works (Manduchi et al., 2021; Zhang et al., 2019, 2021). The good performance of VanillaDCC on both the training and testing sets is also as expected, per our identifiability analysis.
The rows in the tables associated with nonzero noise levels show that the performance of DC-GMM, C-IDEC, and VanillaDCC drops quickly. For example, in Table 2, the ACC of DC-GMM drops from (0.91,0.89) to (0.74,0.74) when the noise level changes from 0% to 10.9%. Similar performance degradation is observed for C-IDEC and VanillaDCC, which are both DCC methods that do not explicitly consider annotation noise. Nonetheless, the proposed VolMaxDCC's performance decline is much more graceful on all three datasets. In particular, Table 3 shows that the ACC of VolMaxDCC is still at (0.91,0.90) when the noise level reaches 11.2%, while the baselines have a best ACC of (0.80,0.81). The results show the usefulness of our confusion model, as well as the effectiveness of our identifiability-driven loss function design. More experiment results can be seen in Appendix F.
### Noisy AMT Annotations.
Data Acquisition.In addition to using machine classifier-annotated data, we also conduct experiments using pairwise annotations that are obtained from the AMT platform. We uploaded \(8994\) data pairs to AMT, where the samples are from the ImageNet10 dataset. The annotators were asked to provide their judgement on the similarity of the pairs. Recall that there are \(N=10000\) samples in the ImageNet10 dataset, which means that \(0.018\)% of all the pairs were annotated. We manually checked the error rate, which was found to be 23.09%. The annotated pairs are also released with the code.
Results.Table 4 shows the results on this AMT dataset. As before, we use the available pairs to learn the membership of the training data and observe the testing accuracy over \(N_{\mathrm{unseen}}=2,000\) samples. One can see that the proposed VolMaxDCC exhibits the highest clustering accuracy over the seen and unseen data. The clustering accuracy of the second best baseline is 4% lower than that of VolMaxDCC over both seen and unseen data. The margins of the proposed method over the baselines in terms of NMI and ARI are also obvious. The performance on the noisy AMT data speaks for the usefulness and effectiveness of the proposed method in real-world scenarios.
## 7 Conclusion
We revisited the pairwise annotation-based DCC problem from a membership identifiability viewpoint. We showed that a recently emerged logistic DCC loss is a sound criterion in terms of model identification--if the annotations are generated following a model that is reminiscent of the MMSB combined with deep learning-based classifier learning. Based on our understanding of the vanilla logistic loss, we moved forward to consider the noisy annotation case and proposed a confusion-matrix based generative model. We proposed a modified logistic loss with a geometric regularization for provable membership identification--whose identifiability guarantee is the first of its kind for noisy annotation-based DCC, to our best knowledge. We tested our new design over various datasets under multiple noise levels. We observed tangible improvements over all cases, showing that our confusion-based modeling and identifiability-driven design are promising.
Limitations.The proposed approach has a couple of notable limitations. First, the model used for annotator noise relies on a confusion matrix model, assuming uniform confusion across all data samples, which may not always hold true in practical scenarios. Developing a framework that takes into account more realistic confusion models could lead to further improvements in performance. Second, the establishment of membership identifiability under the SSC assumption lacks finite sample analysis (cf. Theorem 3.7 and Theorem 4.1). The assumption that \(M\) reaches infinity can never be met in practice. It is of great interest to show how the performance of the volume-based criterion scales with different sample sizes.
Acknowledgement.This work is supported in part by the National Science Foundation (NSF) under project NSF IIS-2007836.
|
2303.02359 | Hitchin map for the moduli space of $Λ$-modules in positive
characteristic | Building on Simpson's original definition over the complex numbers, we
introduce the notion of restricted sheaf $\Lambda$ of rings of differential
operators on a variety defined over a field of positive characteristic. We
define the notion of $p$-curvature for $\Lambda$-modules and the analogue of
the Hitchin map on the moduli space of $\Lambda$-modules. We show that under
certain conditions this Hitchin map descends under the Frobenius map of the
underlying variety and we give examples. | David Alfaya, Christian Pauly | 2023-03-04T08:50:57Z | http://arxiv.org/abs/2303.02359v1 | # Hitchin map for the moduli space of \(\Lambda\)-modules in positive characteristic
###### Abstract.
Building on Simpson's original definition over the complex numbers, we introduce the notion of restricted sheaf \(\Lambda\) of rings of differential operators on a variety defined over a field of positive characteristic. We define the notion of \(p\)-curvature for \(\Lambda\)-modules and the analogue of the Hitchin map on the moduli space of \(\Lambda\)-modules. We show that under certain conditions this Hitchin map descends under the Frobenius map of the underlying variety and we give examples.
Key words and phrases: Hitchin map, Lambda-modules, connections, Higgs bundles, positive characteristic, moduli space. 2010 Mathematics Subject Classification: 14D20, 14G17
## 1. Introduction
The notion of sheaf of rings of differential operators \(\Lambda\) over a smooth variety \(X\) defined over an algebraically closed field \(\mathbb{K}\) and the associated notion of \(\Lambda\)-module for \(\mathcal{O}_{X}\)-modules over \(X\) was introduced in [15] over the complex numbers \(\mathbb{K}=\mathbb{C}\) as a way to give a unifying structure for \(\mathcal{D}_{X}\)-modules, i.e. vector bundles with an integrable connection, and Higgs sheaves over \(X\). Other examples of \(\Lambda\)-modules include connections along a foliation or logarithmic connections.
In this paper we consider Simpson's original definition of sheaf of rings of differential operators \(\Lambda\) over a field \(\mathbb{K}\) of characteristic \(p>0\). Note that the sheaf of rings of crystalline differential operators \(\mathcal{D}_{X}\) (see [1] or [2]) defined as the enveloping algebra of the Lie algebroid \(T_{X}\) is such a sheaf of rings of differential operators, but the usual sheaf of differential operators (e.g. [1, Section 16]) is not. One of the main features of the sheaf of rings \(\mathcal{D}_{X}\) in positive characteristic is its large center, which can be described by using the \(p\)-th power map, or \(p\)-structure, on the Lie algebroid \(T_{X}\). Our first contribution to the general study of \(\Lambda\)-modules in positive characteristic is the definition of _restricted_ sheaf of rings of differential operators (see Definition 2.6) obtained by equipping \(\Lambda\) with a \(p\)-structure. Examples of restricted sheaves of rings of differential operators already appeared in [13] as universal enveloping algebras of restricted Lie algebroids. New non-split examples are given, for instance, by the sheaf of rings of twisted differential operators \(\mathcal{D}_{X}(L)\) for some line bundle \(L\) over \(X\) (see Subsection 4.5).
The main purpose of this paper is to prove a property of the analogue of the Hitchin map for restricted \(\Lambda\)-modules in positive characteristic over a projective variety \(X\). First, we check (Section 5) that the notion of \(p\)-curvature \(\psi_{\nabla}\) of a \(\Lambda\)-module \(E\) over \(X\) adapts to our general set-up and thus defines for each \(\Lambda\)-module structure on the sheaf \(E\) a \(F^{*}H^{\vee}\)-valued Higgs field on \(E\), where \(H\) is the first quotient \(\Lambda_{1}/\Lambda_{0}\) associated to the filtration \(\Lambda_{0}\subset\Lambda_{1}\subset\cdots\subset\Lambda\) and \(F\) is the Frobenius
map of \(X\). Thus by applying the classical Hitchin map to the Higgs field \(\psi_{\nabla}\) we obtain a morphism
\[h_{\Lambda}:\mathcal{M}^{\Lambda}_{X}(r,P)\to\mathcal{A}_{r}(X,F^{*}H^{\vee}),\]
where \(\mathcal{M}^{\Lambda}_{X}(r,P)\) is the moduli space parameterizing Gieseker semi-stable \(\Lambda\)-modules over \(X\) of rank \(r\) and with Hilbert polynomial \(P\), and \(\mathcal{A}_{r}(X,F^{*}H^{\vee})\) is the Hitchin base for the vector bundle \(F^{*}H^{\vee}\). Under the assumption that the anchor map \(\delta:\Lambda_{1}/\Lambda_{0}\to T_{X}\) induced by the commutator between elements of \(\Lambda_{1}\) and local regular functions in \(\mathcal{O}_{X}\) is generically surjective, our main result (Theorem 6.6) says that the coefficients of the characteristic polynomial of \(\psi_{\nabla}\) descend under the Frobenius map \(F\) of the variety \(X\). Equivalently, this means that the Hitchin morphism \(h_{\Lambda}\) factorizes through
\[h^{\prime}_{\Lambda}:\mathcal{M}^{\Lambda}_{X}(r,P)\to\mathcal{A}_{r}(X,H^{ \vee}), \tag{1.1}\]
followed by the pull-back under the Frobenius map \(F\) of global sections. The latter theorem was first proved in [10] for a smooth projective curve \(X\) and for \(\Lambda=\mathcal{D}_{X}\). It was observed in [1, Section 2.5] that in the case \(\Lambda=\mathcal{D}_{X}\) the proof follows rather directly from the fact that the \(p\)-curvature \(\psi_{\nabla}\) is flat for the natural connection on the sheaf \(\operatorname{End}(E)\otimes F^{*}\Omega^{1}_{X}\), already proved in [11, Proposition 5.2.3], and moreover their argument is independent of the dimension of the variety \(X\). In this paper we show that the elegant argument given in [1] can be adapted to general restricted \(\Lambda\)-modules under the assumption that the anchor map \(\delta:\Lambda_{1}/\Lambda_{0}\to T_{X}\) is generically surjective. We also give an example showing that the result is false when \(\delta\) is not generically surjective.
In the last section we present an analogue of the main Theorem in a relative situation by taking the Rees construction \(\Lambda^{R}\) on \(X\times\mathbb{A}^{1}\) over \(\mathbb{A}^{1}\) obtained from a sheaf of rings of differential operators \(\Lambda\) on \(X\). Here we need to restrict attention to sheaves \(\Lambda\) obtained as a universal enveloping algebra of a restricted Lie algebroid \(H\) over \(X\). Our theorem (Theorem 7.1) then gives an explicit deformation over the affine line \(\mathbb{A}^{1}\) of the classical Hitchin map of \(H^{\vee}\)-valued Higgs sheaves to the Hitchin map (1.1) \(h^{\prime}_{\Lambda}\) of \(\Lambda\)-modules. This result was already obtained in [10] for a smooth projective curve \(X\) in the case where \(\Lambda=\mathcal{D}_{X}\) and \(H=T_{X}\), see also [16, Section 4.5] for some partial generalizations.
Finally we mention that the fibers of the Hitchin map (1.1) \(h^{\prime}_{\Lambda}\) are described in [10] for a smooth projective curve \(X\) and for \(\Lambda=\mathcal{D}_{X}\). For general \(X\) and \(\Lambda\), a description of the fibers of \(h^{\prime}_{\Lambda}\) seems to be missing in the literature and studying it would be an interesting future line of work.
We would like to thank Carlos Simpson for many useful discussions during the preparation of this article.
**Acknowledgments.** This work was started during a research stay in 2017 of the first-named author at the Laboratoire J.-A. Dieudonne at the Universite Cote d'Azur and he would like to thank the laboratory for its hospitality. This research was partially funded by MINECO (grants MTM2016-79400-P, PID2019-108936GB-C21 and ICMAT Severo Ochoa project SEV-2015-0554) and the 7th European Union Framework Programme (Marie Curie IRSES grant 612534 project MODULI). During the development of this work, the first-named author was also supported by a predoctoral grant from Fundacion La Caixa - Severo Ochoa International Ph.D. Program and a postdoctoral position associated to the ICMAT Severo Ochoa project.
## 2. Preliminaries on sheaves of rings of differential operators
### Definitions and properties
Let \(\mathbb{K}\) be an algebraically closed field. Let \(X\) and \(S\) be schemes of finite type over \(\mathbb{K}\) and let
\[\pi:X\longrightarrow S\]
be a morphism. We recall from [20, Section 2] the definition of sheaf of rings of differential operators on \(X\) over \(S\). We note that the original definition in [20] was given over \(\mathbb{K}=\mathbb{C}\), but it can be considered over an arbitrary base field \(\mathbb{K}\).
**Definition 2.1**.: _A sheaf of rings of differential operators on \(X\) over \(S\) is a sheaf of associative and unital \(\mathcal{O}_{S}\)-algebras \(\Lambda\) over \(X\) with a filtration \(\Lambda_{0}\subset\Lambda_{1}\subset\cdots\) which satisfies the properties_
1. \(\Lambda=\bigcup_{i=0}^{\infty}\Lambda_{i}\) _and_ \(\Lambda_{i}\cdot\Lambda_{j}\subset\Lambda_{i+j}\)_._
2. _The image of_ \(\mathcal{O}_{X}\to\Lambda\) _equals_ \(\Lambda_{0}\)_._
3. _The image of_ \(\pi^{-1}(\mathcal{O}_{S})\) _in_ \(\mathcal{O}_{X}\) _is contained in the center of_ \(\Lambda\)_._
4. _The left and right_ \(\mathcal{O}_{X}\)_-module structures on_ \(\operatorname{Gr}_{i}(\Lambda):=\Lambda_{i}/\Lambda_{i-1}\) _are equal._
5. _The_ \(\mathcal{O}_{X}\)_-modules_ \(\operatorname{Gr}_{i}(\Lambda)\) _are coherent._
6. _The graded_ \(\mathcal{O}_{X}\)_-algebra_ \(\operatorname{Gr}^{\bullet}(\Lambda):=\bigoplus_{i=0}^{\infty}\operatorname{ Gr}_{i}(\Lambda)\) _is generated by_ \(\operatorname{Gr}_{1}(\Lambda)\)_._
Because of property (4) we have that for each \(D\in\Lambda_{1}\) the commutator \([D,f]\) with \(f\in\mathcal{O}_{X}\) is an element of \(\Lambda_{0}\). Moreover, for each \(D\in\Lambda_{1}\) and each \(f,g\in\mathcal{O}_{X}\) we have
\[[D,fg]=Dfg-fgD=Dfg-fDg+fDg-fgD=[D,f]g+f[D,g].\]
Thus, assuming that \(\Lambda_{0}=\mathcal{O}_{X}\), we see that the map \([D,-]:\mathcal{O}_{X}\to\mathcal{O}_{X}\) is a \(\mathcal{O}_{S}\)-derivation that we will denote by \(\delta_{D}\) (i.e., \(\delta_{D}(f)=[D,f]\)). Moreover, let us denote \(H=\Lambda_{1}/\Lambda_{0}\). Then we have a short exact sequence
\[0\longrightarrow\Lambda_{0}=\mathcal{O}_{X}\longrightarrow\Lambda_{1} \stackrel{{\mathrm{sb}}}{{\longrightarrow}}H\longrightarrow 0. \tag{2.1}\]
We call the map \(\Lambda_{1}\longrightarrow H\) the symbol map and we will denote it by \(\mathrm{sb}\). We also note that the \(\mathcal{O}_{X}\)-linear map \(\delta:D\mapsto\delta_{D}\) factorizes through \(H\), so that we obtain an \(\mathcal{O}_{X}\)-linear map, also denoted
\[\delta:H\longrightarrow\operatorname{Der}_{\mathcal{O}_{S}}(\mathcal{O}_{X},\mathcal{O}_{X})=T_{X/S},\]
called the anchor map. Here \(T_{X/S}\) is the relative tangent sheaf.
**Remark 2.2**.: _The condition that the anchor map \(\delta=0\) is easily seen to be equivalent to the fact that the right and left \(\mathcal{O}_{X}\)-module structures on \(\Lambda\) are the same._
In this paper we will be sometimes interested in sheaves of rings of differential operators having more properties.
**Definition 2.3**.: _Let \(\Lambda\) be a sheaf of rings of differential operators on \(X\) over \(S\) with \(H=\Lambda_{1}/\Lambda_{0}\). We say that \(\Lambda\) is_
* _almost abelian, if the graded algebra_ \(\operatorname{Gr}^{\bullet}(\Lambda)\) _is abelian._
* _almost polynomial, if_ \(\mathcal{O}_{X}=\Lambda_{0}\)_,_ \(H\) _is locally free and the graded algebra_ \(\operatorname{Gr}^{\bullet}(\Lambda)\) _equals the symmetric algebra_ \(\operatorname{Sym}^{\bullet}(H)\)
_
* _split almost polynomial, if_ \(\Lambda\) _is almost polynomial and the exact sequence (_2.1_) is split._
For completeness we recall the following
**Definition 2.4**.: _A \(\mathcal{O}_{S}\)-Lie algebroid on \(X\) over \(S\) is a triple \((H,[-,-],\delta)\) consisting of an \(\mathcal{O}_{X}\)-module \(H\), which is also a sheaf of \(\mathcal{O}_{S}\)-Lie algebras, and an \(\mathcal{O}_{X}\)-linear anchor map \(\delta:H\to T_{X/S}\) satisfying the following condition for all local sections \(f\in\mathcal{O}_{X}\) and \(D_{1},D_{2}\in H\)_
\[[D_{1},fD_{2}]=f[D_{1},D_{2}]+\delta_{D_{1}}(f)D_{2}.\]
**Remark 2.5**.: _If \(\Lambda\) is almost abelian, then \((H=\Lambda_{1}/\Lambda_{0},[-,-],\delta)\) is a \(\mathcal{O}_{S}\)-Lie algebroid on \(X\) (see Proposition 3.4 for the "restricted" version)._
### Restricted sheaf of rings of differential operators
From now on we assume that the characteristic of \(\mathbb{K}\) is \(p>0\). In that situation we introduce the following
**Definition 2.6**.: _A restricted sheaf of rings of differential operators on \(X\) over \(S\) is a sheaf of rings of differential operators \(\Lambda\) on \(X\) over \(S\) together with a map_

\[(-)^{[p]}:\Lambda_{1}\longrightarrow\Lambda_{1},\qquad D\longmapsto D^{[p]},\]

_called a \(p\)-structure, such that for all local sections \(D,D_{1},D_{2}\in\Lambda_{1}\) and every local section \(f\in\mathcal{O}_{X}\) the following properties hold_
1. \(\operatorname{ad}(D^{[p]})=\operatorname{ad}(D)^{p}\)__
2. \((D_{1}+D_{2})^{[p]}=D_{1}^{[p]}+D_{2}^{[p]}+\sum_{i=1}^{p-1}s_{i}(D_{1},D_{2})\)__
3. \((fD)^{[p]}=f^{p}D^{[p]}+\delta_{fD}^{p-1}(f)D\)__
4. \(f^{[p]}=f^{p}\)__
_where \(s_{i}(x,y)\) are the universal Lie polynomials for the commutator in the associative algebra \(\Lambda\), defined by the following expression in \(\Lambda[t]\)_
\[\operatorname{ad}(tx+y)^{p-1}(x)=\sum_{i=1}^{p-1}is_{i}(x,y)t^{i-1}.\]
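For instance, for \(p=2\) the defining expression reduces to \(\operatorname{ad}(tx+y)(x)=[y,x]\), so \(s_{1}(x,y)=[y,x]=[x,y]\) (the sign being irrelevant in characteristic \(2\)), and property (2) reads
\[(D_{1}+D_{2})^{[2]}=D_{1}^{[2]}+D_{2}^{[2]}+[D_{1},D_{2}],\]
in agreement with Jacobson's identity \((D_{1}+D_{2})^{2}=D_{1}^{2}+D_{2}^{2}+[D_{1},D_{2}]\), which holds in any associative ring of characteristic \(2\).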
**Remark 2.7**.: _Note that property (1) is equivalent to the equality \(\operatorname{ad}(D^{[p]})(E)=\operatorname{ad}(D)^{p}(E)\) for any local sections \(D,E\in\Lambda_{1}\). In fact, by Jacobson's identity we have \(\operatorname{ad}(D)^{p}=\operatorname{ad}(D^{p})\), hence if \(D^{[p]}-D^{p}\) commutes with any \(E\in\Lambda_{1}\), it commutes with any \(E\in\Lambda\), since \(\Lambda\) is generated by \(\Lambda_{1}\)._
**Remark 2.8**.: _Let \(F:X\to X\) denote the absolute Frobenius of \(X\) and let \(Z(\Lambda)\) denote the center of \(\Lambda\). We note that the center \(Z(\Lambda)\) does not have the structure of an \(\mathcal{O}_{X}\)-module. However, the left and right \(\mathcal{O}_{X}\)-module structures on the direct image \(F_{*}(Z(\Lambda))\) coincide, since for any local sections \(D\in\Lambda_{1}\) and \(f\in\mathcal{O}_{X}\) we have_
\[[D,f^{p}]=\delta_{D}(f^{p})=0.\]
**Proposition 2.9**.: _For every local sections \(D\in\Lambda_{1}\) and \(f\in\mathcal{O}_{X}\) we have_
\[\delta_{fD}^{p-1}(f)=f\delta_{D}^{p-1}(f^{p-1}).\]
Proof.: The relative tangent sheaf \(T_{X/S}\cong\operatorname{Der}_{\mathcal{O}_{S}}(\mathcal{O}_{X},\mathcal{O}_{X})\) with the standard commutator is a \(\mathcal{O}_{S}\)-Lie algebroid. Moreover this Lie algebroid is equipped with a \(p\)-structure \(\nu\mapsto\nu^{p}\in T_{X/S}\) (see also Remark 3.2). Thus, by the Hochschild identity (see [10, Lemma 1], [11, Lemma 4.3], [12, Lemma 2.1]), we have for every local derivation \(\nu\in T_{X/S}\) and every local section \(f\in\mathcal{O}_{X}\) the equality
\[(f\nu)^{p}=f^{p}\nu^{p}+(f\nu)^{p-1}(f)\nu\]
in the associative \(\mathcal{O}_{S}\)-algebra \(\operatorname{End}_{\mathcal{O}_{S}}(\mathcal{O}_{X})\). On the other hand, we have the following identity from Deligne (cf. [11, Proposition 5.3])
\[(f\nu)^{p}=f^{p}\nu^{p}+f\nu^{p-1}(f^{p-1})\nu.\]
Therefore we have that for every \(\nu\in T_{X/S}\)
\[(f\nu)^{p-1}(f)\nu=f\nu^{p-1}(f^{p-1})\nu.\]
If \(\nu=0\), then clearly \((f\nu)^{p-1}(f)=f\nu^{p-1}(f^{p-1})=0\). Otherwise, the left-hand side and right-hand side of the equality are multiples of the same nonzero section of the torsion free sheaf \(T_{X/S}\), so they are equal if and only if
\[(f\nu)^{p-1}(f)=f\nu^{p-1}(f^{p-1}).\]
Therefore, the latter equality holds for every local derivation \(\nu\in T_{X/S}\) and every local section \(f\in\mathcal{O}_{X}\). The proposition is then obtained by applying the previous equality to \(\nu=\delta_{D}\) and taking into account that \(f\delta_{D}=\delta_{fD}\), i.e. that the anchor map \(\delta\) is \(\mathcal{O}_{X}\)-linear.
**Corollary 2.10**.: _If \(\Lambda\) is a restricted sheaf of differential operators on \(X\) over \(S\), then for every local sections \(D\in\Lambda_{1}\) and \(f\in\mathcal{O}_{X}\) we have_
\[(fD)^{[p]}=f^{p}D^{[p]}+f\delta_{D}^{p-1}(f^{p-1})D.\]
### The map \(\iota:\Lambda_{1}\to Z(\Lambda)\)
Using the \(p\)-structure on \(\Lambda\), we can define the following map, generalizing the difference of \(p\)-th power maps on vector fields
\[\iota:\Lambda_{1}\longrightarrow\Lambda,\qquad D\longmapsto\iota(D)=D^{p}-D^{[p]}.\]
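For instance, anticipating the canonical example of Subsection 4.1 (and assuming, purely for illustration, that \(X=\mathbb{A}^{1}=\operatorname{Spec}\mathbb{K}[x]\) and \(S=\operatorname{Spec}\mathbb{K}\)), take \(\Lambda=\mathcal{D}_{X/S}\) with \(D^{[p]}\) the \(p\)-th power of a vector field taken as a derivation. Then \(\partial^{[p]}=0\), since the derivation \(\partial^{p}\) kills every power of \(x\), and therefore
\[\iota(\partial)=\partial^{p}-\partial^{[p]}=\partial^{p}\in\mathcal{D}_{\mathbb{A}^{1}},\]
the familiar central element of the Weyl algebra in characteristic \(p\).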
**Proposition 2.11**.: _The map \(\iota:\Lambda_{1}\to\Lambda\) is a \(p\)-linear map, i.e., for every local sections \(D,D_{1},D_{2}\in\Lambda_{1}\) and \(f\in\mathcal{O}_{X}\) we have_
1. \(\iota(D_{1}+D_{2})=\iota(D_{1})+\iota(D_{2}),\)__
2. \(\iota(fD)=f^{p}\iota(D).\)__
Proof.: a) Let us apply Jacobson's identity in the associative ring \(\Lambda(U)\), where \(U\) is any open subset where \(D_{1}\) and \(D_{2}\) are both defined
\[(D_{1}+D_{2})^{p}=D_{1}^{p}+D_{2}^{p}+\sum_{i=1}^{p-1}s_{i}(D_{1},D_{2}).\]
On the other hand, as \([p]\) is a \(p\)-structure on \(\Lambda\), we have
\[(D_{1}+D_{2})^{[p]}=D_{1}^{[p]}+D_{2}^{[p]}+\sum_{i=1}^{p-1}s_{i}(D_{1},D_{2}).\]
Therefore, subtracting one from the other yields
\[\iota(D_{1}+D_{2})=(D_{1}+D_{2})^{p}-(D_{1}+D_{2})^{[p]}=D_{1}^{p}+D_{2}^{p}-D_{1 }^{[p]}-D_{2}^{[p]}=\iota(D_{1})+\iota(D_{2}).\]
b) Let us consider \(f\in\mathcal{O}_{X}=\Lambda_{0}\) as a local section of \(\Lambda\). Then we can apply Deligne's identity (cf. [10, Proposition 5.3]) in the associative ring \(\Lambda(U)\) for an open subset \(U\) such that \(f\in\mathcal{O}_{X}(U)\) and \(D\in\Lambda(U)\) and we obtain
\[(fD)^{p}=f^{p}D^{p}+f\operatorname{ad}(D)^{p-1}(f^{p-1})D.\]
As the adjoint of \(D\) applied to any local function is simply \(\delta_{D}\), we obtain
\[(fD)^{p}=f^{p}D^{p}+f\delta_{D}^{p-1}(f^{p-1})D.\]
On the other hand, by Corollary 2.10 we have
\[(fD)^{[p]}=f^{p}D^{[p]}+f\delta_{D}^{p-1}(f^{p-1})D.\]
Therefore, subtracting one from the other yields
\[\iota(fD)=(fD)^{p}-(fD)^{[p]}=f^{p}D^{p}-f^{p}D^{[p]}=f^{p}\iota(D).\]
**Proposition 2.12**.: _The image of \(\iota\) lies in the center \(Z(\Lambda)\) of \(\Lambda\)._
Proof.: Using Jacobson's identity \(\operatorname{ad}(D^{p})=\operatorname{ad}(D)^{p}\) we obtain that for any local sections \(D,E\in\Lambda_{1}\)
\[\operatorname{ad}(\iota(D))(E)=\operatorname{ad}(D^{p}-D^{[p]})(E) = \operatorname{ad}(D^{p})(E)-\operatorname{ad}(D^{[p]})(E)\] \[= \operatorname{ad}(D)^{p}(E)-\operatorname{ad}(D)^{p}(E)=0.\]
So \(\iota(D)\) commutes with every element in \(\Lambda_{1}\). As \(\Lambda_{1}\) generates \(\Lambda\), \(\iota(D)\) commutes with every element in \(\Lambda\).
Observe that for each \(f\in\mathcal{O}_{X}\) we have \(\iota(f)=f^{p}-f^{[p]}=0\) and that for each \(f\in\mathcal{O}_{X}\) and \(D\in\Lambda_{1}\) we have
\[\iota(f+D)=\iota(f)+\iota(D)=\iota(D).\]
So \(\iota\) factorizes through the quotient
\[\iota:\Lambda_{1}/\Lambda_{0}=H\longrightarrow Z(\Lambda).\]
Then, as \(\iota\) is a \(p\)-linear map, it induces an \(\mathcal{O}_{X}\)-linear map
\[\iota:H\longrightarrow F_{*}(Z(\Lambda)),\]
where \(F\) denotes the absolute Frobenius of \(X\). Moreover, \(F_{*}(Z(\Lambda))\) is a commutative \(\mathcal{O}_{X}\)-algebra (see Remark 2.8), so, by the universal property of the symmetric algebra, the map \(\iota\) induces a map of sheaves of commutative \(\mathcal{O}_{X}\)-algebras
\[\iota:\operatorname{Sym}^{\bullet}(H)\longrightarrow F_{*}(Z(\Lambda)).\]
**Proposition 2.13**.: _Suppose that \(\Lambda\) is almost polynomial. Then the induced map \(\iota:\operatorname{Sym}^{\bullet}(H)\to F_{*}(Z(\Lambda))\) is injective._
Proof.: We note that the symbol map \(\operatorname{sb}:\Lambda\longrightarrow\operatorname{Gr}^{\bullet}(\Lambda)\cong\operatorname{Sym}^{\bullet}(H)\) is a multiplicative (but not \(\mathcal{O}_{X}\)-linear) map, so, composing with \(\iota\), we obtain a multiplicative map
\[\operatorname{sb}(\iota)=\operatorname{sb}\circ\iota:\operatorname{Sym}^{\bullet}(H)\longrightarrow\operatorname{Sym}^{\bullet}(H).\]
To prove that \(\ker(\iota)=0\) it is enough to prove that \(\ker(\operatorname{sb}(\iota))=0\). As \(\Lambda\) is almost polynomial, we have for every non-zero local \(D\in H\) and any representative \(\overline{D}\in\Lambda_{1}\) with \(\operatorname{sb}(\overline{D})=D\)
\[\operatorname{sb}(\overline{D}^{p})=D^{p}\in\operatorname{Sym}^{p}(H).\]
So, since \(\overline{D}^{[p]}\in\Lambda_{1}\),
\[\operatorname{sb}(\iota(D))=\operatorname{sb}(\overline{D}^{p}-\overline{D}^{[p]})=\operatorname{sb}(\overline{D}^{p})=D^{p}\neq 0.\]
Moreover, for every local section \(D\in\operatorname{Sym}^{\bullet}(H)\) there exist \(D_{1},\dots,D_{k}\in H\) such that \(D=D_{1}\cdots D_{k}+\tilde{D}\) with \(\tilde{D}\) of degree \(<k\). Therefore
\[\operatorname{sb}(\iota)(D)=\prod_{j=1}^{k}\operatorname{sb}(\iota)(D_{j})= \prod_{j=1}^{k}D_{j}^{p}\neq 0.\]
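In the example of the Weyl algebra (again assuming, purely for illustration, \(X=\mathbb{A}^{1}=\operatorname{Spec}\mathbb{K}[x]\) and \(\Lambda=\mathcal{D}_{\mathbb{A}^{1}}\) with its canonical \(p\)-structure from Subsection 4.1, so that \(H=T_{\mathbb{A}^{1}}\) and \(\operatorname{Sym}^{\bullet}(H)=\mathcal{O}_{\mathbb{A}^{1}}[\xi]\) with \(\xi\) the class of \(\partial\)), the map \(\iota\) is completely explicit: by \(p\)-linearity \(\iota(g\partial)=g^{p}\partial^{p}\), and on the symmetric algebra
\[\iota\left(g(x)\,\xi^{k}\right)=g(x)^{p}\,\partial^{pk}.\]
In particular \(\iota\) is injective, in accordance with Proposition 2.13, and for \(\mathbb{K}\) perfect its image is exactly the classical center \(Z(\mathcal{D}_{\mathbb{A}^{1}})=\mathbb{K}[x^{p},\partial^{p}]\).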
## 3. Properties of almost abelian restricted sheaves of rings of differential operators
Assume that the characteristic of \(\mathbb{K}\) is \(p>0\). Let \(\pi:X\to S\) be a morphism between schemes of finite type over \(\mathbb{K}\).
### Restricted \(\mathcal{O}_{S}\)-Lie algebroid
We need to recall some definitions ([10], [11], [12], [13], [14], [15, Definition 2.2]).
**Definition 3.1**.: _A restricted \(\mathcal{O}_{S}\)-Lie algebroid on \(X\) is a quadruple \((H,[-,-],\delta,[p])\) consisting of an \(\mathcal{O}_{X}\)-module \(H\), which is also a sheaf of restricted \(\mathcal{O}_{S}\)-Lie algebras, a map \([p]:H\to H\) and an \(\mathcal{O}_{X}\)-linear anchor map \(\delta:H\to T_{X/S}\) satisfying the following conditions for all local sections \(f\in\mathcal{O}_{X}\) and \(D,D_{1},D_{2}\in H\)_
1. \([D_{1},fD_{2}]=f[D_{1},D_{2}]+\delta_{D_{1}}(f)D_{2}\)_,_
2. \((fD)^{[p]}=f^{p}D^{[p]}+\delta_{fD}^{p-1}(f)D\)_._
**Remark 3.2**.: _The standard example of restricted \(\mathcal{O}_{S}\)-Lie algebroid on \(X\) over \(S\) is the relative tangent sheaf \(T_{X/S}\cong\operatorname{Der}_{\mathcal{O}_{S}}(\mathcal{O}_{X},\mathcal{O} _{X})\) with the standard Lie bracket, \([p]\) the \(p\)-th power map and \(\delta\) the identity map. Note that condition (2) is then equivalent to the Hochschild identity ([10, Lemma 1])._
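As a concrete check of condition (2) in this standard example (assuming, purely for illustration, \(X=\mathbb{A}^{1}=\operatorname{Spec}\mathbb{K}[x]\) over \(S=\operatorname{Spec}\mathbb{K}\)), take \(f=x\) and \(D=\partial=d/dx\). Then \(\partial^{[p]}=0\) and \(\delta_{x\partial}^{p-1}(x)=(x\partial)^{p-1}(x)=x\), so condition (2) predicts
\[(x\partial)^{[p]}=x^{p}\,\partial^{[p]}+\delta_{x\partial}^{p-1}(x)\,\partial=x\partial,\]
which agrees with the fact that the derivation \((x\partial)^{p}\) equals \(x\partial\): it acts on \(x^{k}\) by the eigenvalue \(k\), and \(k^{p}=k\) in \(\mathbb{F}_{p}\).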
### Examples of almost abelian restricted sheaves of rings of differential operators
We consider a restricted sheaf \(\Lambda\) of rings of differential operators as in Definition 2.6. In this subsection we assume that \(\Lambda\) is almost abelian, i.e., the graded algebra \(\operatorname{Gr}^{\bullet}(\Lambda)\) is abelian. Then for any two local sections \(D_{1},D_{2}\in\Lambda_{1}\) we have
\[[\operatorname{sb}(D_{1}),\operatorname{sb}(D_{2})]_{\operatorname{Gr}^{ \bullet}(\Lambda)}=0\in\Lambda_{2}/\Lambda_{1},\]
so \([D_{1},D_{2}]\in\Lambda_{1}\) and therefore \(\Lambda_{1}\) with the induced commutator and anchor \(\delta_{D}(f)=[D,f]\) for \(D\in\Lambda_{1}\) and \(f\in\mathcal{O}_{X}\) becomes an \(\mathcal{O}_{S}\)-Lie algebroid. In this case, conditions (1)-(3) of Definition 2.6 are equivalent to asking that \((\Lambda_{1},[-,-],\delta,[p])\) is a restricted \(\mathcal{O}_{S}\)-Lie algebroid. Condition (4) is then equivalent to asking that the inclusion of \(\mathcal{O}_{S}\)-Lie algebroids
\[(\mathcal{O}_{X},[-,-]=0,\delta=0,(-)^{p})\hookrightarrow(\Lambda_{1},[-,-], \delta,[p])\]
is a homomorphism of restricted \(\mathcal{O}_{S}\)-Lie algebroids.
We first need some information on the universal Lie polynomials used in Definition 2.6.
**Lemma 3.3**.: _Let \(\Lambda\) be any sheaf of rings of differential operators on \(X\) over \(S\). Let \(D\in\Lambda_{1}\) and \(f\in\mathcal{O}_{X}\). Then for every \(i<p-1\)_
\[s_{i}(D,f)=0\]
_and_
\[s_{p-1}(D,f)=\delta_{D}^{p-1}(f).\]
Proof.: In any associative algebra of characteristic \(p\) it is a classical result that we can write the Lie polynomial \(s_{i}(x_{1},x_{2})\) for \(1\leq i\leq p-1\) as follows
\[\begin{array}{c}s_{i}(x_{1},x_{2})=-\frac{1}{i}\sum_{\begin{array}{c} \sigma:\{1,\ldots,p-1\}\to\{1,2\}\\ |\sigma^{-1}(1)|=i\end{array}}\operatorname{ad}(x_{\sigma(1)})\cdots \operatorname{ad}(x_{\sigma(p-1)})(x_{2}).\end{array}\]
Observe that for \(x_{1}=D\in\Lambda_{1}\) and \(x_{2}=f\in\mathcal{O}_{X}\) we have the following equalities
\[\operatorname{ad}(x_{1})(x_{2})=\delta_{D}(f)\in\mathcal{O}_{X},\ \ \operatorname{ad}(x_{1})(x_{1})=0,\ \ \operatorname{ad}(x_{2})(g)=0\ \ \forall g\in\mathcal{O}_{X}.\]
In particular, observe that for any indices \(i\) and \(j\)
\[\operatorname{ad}(x_{i})(x_{j})\in\mathcal{O}_{X},\ \ \operatorname{ad}(x_{i})(g) \in\mathcal{O}_{X}\ \ \forall g\in\mathcal{O}_{X}.\]
Thus, for \(i=1,2\) and \(g\in\mathcal{O}_{X}\)
\[\operatorname{ad}(x_{2})\operatorname{ad}(x_{i})(g)=0.\]
In particular, if \(\sigma(j)=2\) for some \(j<p-1\) we have that
\[\operatorname{ad}(x_{\sigma(j+1)})\cdots\operatorname{ad}(x_{\sigma(p-1)})(x_ {2})\in\mathcal{O}_{X}.\]
So
\[\operatorname{ad}(x_{2})\operatorname{ad}(x_{\sigma(j+1)})\cdots\operatorname {ad}(x_{\sigma(p-1)})(x_{2})=0,\]
and the corresponding summand in the expression of \(s_{i}(D,f)\) would be zero. Similarly, if \(\sigma(p-1)=2\) we have \(\operatorname{ad}(x_{2})(x_{2})=0\) and the whole expression is zero. Thus
for the sum to be non-zero we must have \(\sigma(j)=1\) for all \(j=1,\ldots,p-1\). Finally, we have that for \(i=p-1\)
\[s_{p-1}(D,f)=-\frac{1}{p-1}\operatorname{ad}(D)^{p-1}(f)=-\frac{1}{p-1}\delta_{D}^{p-1}(f)=\delta_{D}^{p-1}(f),\]
since \(p-1\equiv-1\pmod{p}\).
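As a sanity check of the lemma (not needed in what follows), take \(p=3\) and expand directly in \(\Lambda\), writing \(f^{\prime}=\delta_{D}(f)\) and \(f^{\prime\prime}=\delta_{D}^{2}(f)\):
\[(D+f)^{3}=D^{3}+f^{3}+3\left(fD^{2}+f^{\prime}D+f^{2}D+ff^{\prime}\right)+f^{\prime\prime}=D^{3}+f^{3}+f^{\prime\prime},\]
so Jacobson's identity forces \(s_{1}(D,f)+s_{2}(D,f)=\delta_{D}^{2}(f)\), in agreement with \(s_{1}(D,f)=0\) and \(s_{2}(D,f)=\delta_{D}^{2}(f)\).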
**Proposition 3.4**.: _If \(\Lambda\) is an almost abelian restricted ring of differential operators on \(X\) over \(S\), then \(H=\Lambda_{1}/\Lambda_{0}\) inherits a restricted \(\mathcal{O}_{S}\)-Lie algebroid structure \((H,[-,-]_{H},\delta,[p])\) such that the short exact sequence (2.1) becomes an exact sequence of restricted \(\mathcal{O}_{S}\)-Lie algebroids._
Proof.: First of all, for each \(D_{1},D_{2}\in H\) define \([D_{1},D_{2}]_{H}=\operatorname{sb}([\overline{D_{1}},\overline{D_{2}}]_{ \Lambda})\) for any \(\overline{D_{i}}\) such that \(\operatorname{sb}(\overline{D_{i}})=D_{i}\) for \(i=1,2\). In order to prove that it is well-defined observe that for each \(f_{1},f_{2}\in\mathcal{O}_{X}\) we have
\[\operatorname{sb}([f_{1}+\overline{D_{1}},f_{2}+\overline{D_{2}}]_{\Lambda}) =\operatorname{sb}([\overline{D_{1}},\overline{D_{2}}]_{\Lambda_{1}}+\delta_ {\overline{D_{1}}}(f_{2})-\delta_{\overline{D_{2}}}(f_{1}))=\operatorname{sb} ([\overline{D_{1}},\overline{D_{2}}]_{\Lambda_{1}}).\]
Similarly, as \(\delta_{f}(g)=[f,g]_{\Lambda}=0\) for each \(f,g\in\mathcal{O}_{X}\), clearly \(\delta\) factorizes through \(H\).
Finally, define \(D^{[p]}=\operatorname{sb}(\overline{D}^{[p]})\). Then for each \(f\in\mathcal{O}_{X}\), using property (2) of the definition of a \(p\)-structure and Lemma 3.3, we have
\[\operatorname{sb}((\overline{D}+f)^{[p]})=\operatorname{sb}(\overline{D}^{[p] }+f^{p}+\sum_{i=1}^{p-1}s_{i}(D,f))=\operatorname{sb}(\overline{D}^{[p]}+f^{p }+\delta_{\overline{D}}^{p-1}(f))=\operatorname{sb}(\overline{D}^{[p]}).\]
By construction, taking the symbol of the corresponding expressions in (1), (2) and (3), those properties are also satisfied for the induced \(p\)-structure on \(H\), and the symbol map \(\operatorname{sb}:\Lambda_{1}\longrightarrow H\) is a morphism of restricted \(\mathcal{O}_{S}\)-Lie algebroids.
On the other hand, let us consider a restricted \(\mathcal{O}_{S}\)-Lie algebroid \((H,[-,-],\delta,[p])\). Then the universal enveloping algebra\({}^{1}\) \(\Lambda_{H}\) of the \(\mathcal{O}_{S}\)-Lie algebroid \(H\), as defined e.g. in [13, Section 4.3] or [15, page 515], becomes a split almost polynomial restricted sheaf of rings of differential operators on \(X\) over \(S\) by taking the \(p\)-structure as follows: we have a splitting as \(\mathcal{O}_{X}\)-modules
Footnote 1: This sheaf of algebras is called the universal enveloping algebra of differential operators associated to \(H\) in [15]
\[(\Lambda_{H})_{1}=\mathcal{O}_{X}\oplus H,\]
and we define for every \(D\in H\) and every \(f\in\mathcal{O}_{X}\)
\[(f+D)^{[p]}=f^{p}+D^{[p]}+\delta_{D}^{p-1}(f).\]
We will show in the next proposition that this map endows \(\Lambda_{H}\) with the structure of a restricted sheaf of rings of differential operators. First we will need two lemmas.
**Lemma 3.5**.: _For any local sections \(f_{1},f_{2}\in\mathcal{O}_{X}\) and \(D_{1},D_{2}\in H\) we have the following equality in \(\Lambda_{H}\)_
\[\delta_{D_{1}}^{p-1}(f_{1})+\delta_{D_{2}}^{p-1}(f_{2})+\sum_{i=1}^{p-1}s_{i}( f_{1}+D_{1},f_{2}+D_{2})=\sum_{i=1}^{p-1}s_{i}(D_{1},D_{2})+\delta_{D_{1}+D_{2}}^{p-1 }(f_{1}+f_{2}).\]
Proof.: We will use Jacobson's formula to compute \((f_{1}+D_{1}+f_{2}+D_{2})^{p}\in\Lambda_{H}\) in two different ways. On one hand, taking into account Lemma 3.3 we have
\[((f_{1}+D_{1})+(f_{2}+D_{2}))^{p}=(f_{1}+D_{1})^{p}+(f_{2}+D_{2})^{p}+\sum_{i=1} ^{p-1}s_{i}(f_{1}+D_{1},f_{2}+D_{2})\\ =f_{1}^{p}+D_{1}^{p}+\delta_{D_{1}}^{p-1}(f_{1})+f_{2}^{p}+D_{2}^{ p}+\delta_{D_{2}}^{p-1}(f_{2})+\sum_{i=1}^{p-1}s_{i}(f_{1}+D_{1},f_{2}+D_{2}).\]
On the other hand, we have
\[((f_{1}+f_{2})+(D_{1}+D_{2}))^{p}=f_{1}^{p}+f_{2}^{p}+(D_{1}+D_{2} )^{p}+\delta_{D_{1}+D_{2}}^{p-1}(f_{1}+f_{2})\\ =f_{1}^{p}+f_{2}^{p}+D_{1}^{p}+D_{2}^{p}+\sum_{i=1}^{p-1}s_{i}(D_ {1},D_{2})+\delta_{D_{1}+D_{2}}^{p-1}(f_{1}+f_{2}).\]
Subtracting both expressions yields the desired equality.
**Lemma 3.6**.: _For any local sections \(f,g\in\mathcal{O}_{X}\) and any local section \(D\in H\) we have_
\[\delta_{gD}^{p-1}(gf)=g^{p}\delta_{D}^{p-1}(f)+\delta_{gD}^{p-1}(g)f.\]
Proof.: As it is an equality of local sections in \(\mathcal{O}_{X}\), it is enough to prove that the difference of the sections is zero on an open set. In particular, as the equality clearly holds if \(f=0\), we can assume that \(f\neq 0\) and restrict to the open subset where \(f\) is invertible. Then \(D^{\prime}=D/f\) is an element of \(H\) and we have the following two identities as a consequence of the \(p\)-structure on \(H\)
\[((gf)D^{\prime})^{[p]}=g^{p}f^{p}(D^{\prime})^{[p]}+\delta_{gfD^{\prime}}^{p-1 }(gf)D^{\prime},\]
\[(g(fD^{\prime}))^{[p]}=g^{p}(fD^{\prime})^{[p]}+\delta_{gfD^{\prime}}^{p-1}(g) (fD^{\prime})=g^{p}f^{p}(D^{\prime})^{[p]}+g^{p}\delta_{fD^{\prime}}^{p-1}(f)D ^{\prime}+\delta_{gfD^{\prime}}^{p-1}(g)fD^{\prime}.\]
Subtracting and considering coefficients of \(D^{\prime}\) yields the equality
\[\delta_{gfD^{\prime}}^{p-1}(gf)=g^{p}\delta_{fD^{\prime}}^{p-1}(f)+\delta_{ gfD^{\prime}}^{p-1}(g)f.\]
Taking into account that \(D=fD^{\prime}\) we obtain the result.
**Proposition 3.7**.: _Let \(H\) be a restricted \(\mathcal{O}_{S}\)-Lie algebroid on \(X\) over \(S\). Then the map \([p]:\mathcal{O}_{X}\oplus H\longrightarrow\mathcal{O}_{X}\oplus H\) defined by_
\[(f+D)^{[p]}=f^{p}+D^{[p]}+\delta_{D}^{p-1}(f)\]
_is a \(p\)-structure for the universal enveloping algebra \(\Lambda_{H}\) making the symbol map \(\operatorname{sb}:(\Lambda_{H})_{1}\longrightarrow H\) a morphism of restricted \(\mathcal{O}_{S}\)-Lie algebroids._
Proof.: It will be enough to check the four properties of Definition 2.6.
1. By Jacobson's formula in \(\Lambda_{H}\) and by Lemma 3.3 we have the equality \[(f+D)^{p}=f^{p}+D^{p}+\delta_{D}^{p-1}(f).\] So \[\operatorname{ad}(f+D)^{p}=\operatorname{ad}(f^{p})+\operatorname{ad}(D^{[p]}) +\operatorname{ad}(\delta_{D}^{p-1}(f))=\operatorname{ad}((f+D)^{[p]}).\]
2. To prove additivity we use Lemma 3.5 to obtain \[(f_{1}+D_{1}+f_{2}+D_{2})^{[p]}\] \[= ((f_{1}+f_{2})+(D_{1}+D_{2}))^{[p]}=f_{1}^{p}+f_{2}^{p}+(D_{1}+D_{2} )^{[p]}+\delta_{D_{1}+D_{2}}^{p-1}(f_{1}+f_{2})\] \[= f_{1}^{p}+f_{2}^{p}+D_{1}^{[p]}+D_{2}^{[p]}+\sum_{i=1}^{p-1}s_{i} (D_{1},D_{2})+\delta_{D_{1}+D_{2}}^{p-1}(f_{1}+f_{2})\] \[= f_{1}^{p}+f_{2}^{p}+D_{1}^{[p]}+D_{2}^{[p]}+\delta_{D_{1}}^{p-1} (f_{1})+\delta_{D_{2}}^{p-1}(f_{2})+\sum_{i=1}^{p-1}s_{i}(f_{1}+D_{1},f_{2}+D_{ 2})\] \[= (f_{1}+D_{1})^{[p]}+(f_{2}+D_{2})^{[p]}+\sum_{i=1}^{p-1}s_{i}(f_{1 }+D_{1},f_{2}+D_{2}).\]
3. Let \(f,g\in\mathcal{O}_{X}\) and \(D\in H\). Then by Lemma 3.6 we have \[(g(f+D))^{[p]}\] \[= (gf+gD)^{[p]}=g^{p}f^{p}+(gD)^{[p]}+\delta_{gD}^{p-1}(gf)\] \[= g^{p}f^{p}+g^{p}D^{[p]}+\delta_{gD}^{p-1}(g)D+\delta_{gD}^{p-1}(gf)\] \[= g^{p}f^{p}+g^{p}D^{[p]}+\delta_{gD}^{p-1}(g)D+g^{p}\delta_{D}^{p -1}(f)+\delta_{gD}^{p-1}(g)f\] \[= g^{p}(f+D)^{[p]}+\delta_{gD}^{p-1}(g)(f+D)=g^{p}(f+D)^{[p]}+ \delta_{g(f+D)}^{p-1}(g)(f+D).\]
4. This property is obvious by taking \(D=0\).
To summarize, we have shown that both the \(p\)-structure defined above on the universal enveloping algebra \(\Lambda_{H}\) of a restricted \(\mathcal{O}_{S}\)-Lie algebroid \(H\) and the usual \(p\)-th power map for crystalline differential operators are particular cases of our general definition of a \(p\)-structure for a restricted sheaf of rings of differential operators (Definition 2.6).
## 4. Some examples of restricted sheaves of rings of differential operators
In this section we assume that \(\pi:X\to S\) is a smooth morphism.
### Sheaf of crystalline differential operators \(\mathcal{D}_{X/S}\)
The sheaf of crystalline differential operators (see e.g. [1])
\[\Lambda^{dR}=\mathcal{D}_{X/S}\]
is a split almost polynomial restricted sheaf of rings of differential operators. Its associated restricted \(\mathcal{O}_{S}\)-Lie algebroid \(\Lambda_{1}^{dR}/\Lambda_{0}^{dR}\) is the relative tangent sheaf \(T_{X/S}\), taking the commutator as the Lie bracket of vector fields and taking the identity as the anchor map. The \(\mathcal{D}_{X/S}\)-modules correspond to coherent \(\mathcal{O}_{X}\)-modules with a relative integrable connection.
For every derivation \(\nu\in T_{X/S}\) the \(p\)-th power \(\nu^{p}\) is again a derivation, since by applying Leibniz rule, we have for every local section \(f,g\in\mathcal{O}_{X}\)
\[\nu^{p}(fg)=\sum_{k=0}^{p}\binom{p}{k}\nu^{k}(f)\nu^{p-k}(g)=\nu^{p}(f)g+f\nu^{p }(g)\]
(the middle terms vanish since \(\binom{p}{k}\equiv 0\pmod{p}\) for \(0<k<p\)), so taking \(\nu^{[p]}=\nu^{p}\) gives us a \(p\)-structure \([p]:T_{X/S}\to T_{X/S}\) endowing \(T_{X/S}\) with the structure of a restricted \(\mathcal{O}_{S}\)-Lie algebroid \((T_{X/S},[-,-],\mathrm{id}_{T_{X/S}},[p])\) and, therefore, inducing a \(p\)-structure on \(\mathcal{D}_{X/S}\).
### Trivial \(p\)-structure on the symmetric algebra
Given a locally free \(\mathcal{O}_{X}\)-module \(H\) over \(X\), the symmetric algebra
\[\Lambda^{\mathrm{Higgs}}=\mathrm{Sym}^{\bullet}(H)\]
is a split almost polynomial restricted sheaf of rings of differential operators, when taking the trivial \(p\)-structure on \(\Lambda_{1}=\mathcal{O}_{X}\oplus H\), i.e. we take \([p]:H\to H\) to be the zero map on \(H\)
\[D^{[p]}=0.\]
Then a \(\Lambda^{\mathrm{Higgs}}\)-module corresponds to an \(H^{\vee}\)-valued Higgs bundle \((E,\phi)\), where \(E\) is a vector bundle over \(X\) and \(\phi:E\to E\otimes H^{\vee}\) is a morphism of \(\mathcal{O}_{X}\)-modules satisfying \(\phi\wedge\phi=0\).
As \(\Lambda^{\mathrm{Higgs}}\) is abelian, we have
\[\mathrm{ad}_{\Lambda_{1}}(D)^{p}=0=\mathrm{ad}_{\Lambda_{1}}(D^{[p]}).\]
Moreover \(s_{i}(D_{1},D_{2})=0\) for all \(D_{1},D_{2}\in H\), so
\[(D_{1}+D_{2})^{[p]}=0=D_{1}^{[p]}+D_{2}^{[p]}=D_{1}^{[p]}+D_{2}^{[p]}+\sum_{i= 1}^{p-1}s_{i}(D_{1},D_{2}).\]
Finally, \(\Lambda^{\mathrm{Higgs}}\) being abelian implies \(\delta=0\), so we trivially have
\[0=(fD)^{[p]}=f^{p}D^{[p]}+\delta_{fD}^{p-1}(f)D=0.\]
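For instance (simply to fix ideas), taking \(H=T_{X/S}\), so that \(H^{\vee}=\Omega^{1}_{X/S}\), a \(\Lambda^{\mathrm{Higgs}}\)-module is a Higgs sheaf in the usual sense,
\[\phi:E\longrightarrow E\otimes\Omega^{1}_{X/S},\qquad\phi\wedge\phi=0,\]
and the \(p\)-curvature of such a module (see Sections 5 and 6) is simply the \(p\)-th power of the Higgs field.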
### \(p\)-structure on the reduction to the associated graded of \(\mathcal{D}_{X/S}\)
By the classical Rees construction applied to the filtered sheaf \(\Lambda^{dR}=\mathcal{D}_{X/S}\) (see Subsection 4.1) we obtain a sheaf of rings over \(X\times\mathrm{Spec}(\mathbb{K}[t])=X\times\mathbb{A}^{1}\) defined as
\[\Lambda^{dR,R}=\bigoplus_{i\geq 0}t^{i}\Lambda_{i},\]
where \(t\) acts by multiplication with \(t\) on \(\Lambda^{dR,R}\) using the inclusions \(\Lambda_{i}\subset\Lambda_{i+1}\). Then by construction the fibers over the closed points \(0\) and \(1\) of \(\mathbb{A}^{1}\) equal
\[(\Lambda^{dR,R})_{0}=\mathrm{Sym}^{\bullet}(T_{X/S})\quad\text{and}\quad( \Lambda^{dR,R})_{1}=\mathcal{D}_{X/S}=\Lambda^{dR}.\]
We observe that \(\Lambda^{\mathrm{dR,R}}\) is a split almost polynomial sheaf of rings of differential operators on \(X\times\mathbb{A}^{1}\) relative to \(S\times\mathbb{A}^{1}\) such that the fiber over each \(\lambda\in\mathbb{A}^{1}\) corresponds to the universal enveloping algebra of the \(\mathcal{O}_{S}\)-Lie algebroid \((T_{X/S},\lambda[-,-],\mathrm{id}_{T_{X/S}})\).
We can endow \(\Lambda^{dR,R}\) with a \(p\)-structure as follows. We note that
\[\Lambda^{dR,R}_{1}=\mathcal{O}_{X\times\mathbb{A}^{1}}\oplus T_{X\times\mathbb{ A}^{1}/S\times\mathbb{A}^{1}}\quad\text{and}\quad T_{X\times\mathbb{A}^{1}/S \times\mathbb{A}^{1}}=T_{X/S}.\]
Then the \(p\)-structure on \(\Lambda^{dR,R}_{1}\) over \(X\times\mathbb{A}^{1}\) is defined by
\[[p]^{R}:T_{X/S}\to T_{X/S}\qquad D^{[p]^{R}}=t^{p-1}D^{[p]},\]
where \(t\) is the coordinate on \(\mathbb{A}^{1}\) and \(D^{[p]}\) is the \(p\)-th power of the relative vector field \(D\in T_{X/S}\). By construction of \(\Lambda^{\mathrm{dR},R}\) the commutator of elements in \(\Lambda^{\mathrm{dR},R}_{1}\) is the commutator of differential operators multiplied by the coordinate \(t\), i.e., for every \(D\in\Lambda^{\mathrm{dR},R}_{1}\)
\[\mathrm{ad}_{\Lambda^{\mathrm{dR},R}_{1}}(D)=t\,\mathrm{ad}_{\Lambda^{dR}_{1} }(D)\]
Moreover, as the Lie polynomials \(s_{i}(x,y)\) are homogeneous of degree \(p-1\), we have
\[s_{i}^{\Lambda^{\mathrm{dR},R}}(x,y)=t^{p-1}s_{i}^{\Lambda^{dR}}(x,y).\]
Therefore, the following equalities hold for any local sections \(D\in T_{X/S}\) and \(f\in\mathcal{O}_{X\times\mathbb{A}^{1}}\)
\[\mathrm{ad}_{\Lambda^{\mathrm{dR},R}_{1}}(D^{[p]^{R}})=t\,\mathrm{ad}_{\Lambda^{dR}_{1}}(t^{p-1}D^{[p]})=t^{p}\,\mathrm{ad}_{\Lambda^{dR}_{1}}(D)^{p}=(t\,\mathrm{ad}_{\Lambda^{dR}_{1}}(D))^{p}=\mathrm{ad}_{\Lambda^{dR,R}_{1}}(D)^{p},\\ (D_{1}+D_{2})^{[p]^{R}}=t^{p-1}(D_{1}+D_{2})^{[p]}=t^{p-1}D_{1}^{[p]}+t^{p-1}D_{2}^{[p]}+\sum_{i=1}^{p-1}t^{p-1}s_{i}^{\Lambda^{dR}}(D_{1},D_{2})\\ =D_{1}^{[p]^{R}}+D_{2}^{[p]^{R}}+\sum_{i=1}^{p-1}s_{i}^{\Lambda^{\mathrm{dR},R}}(D_{1},D_{2}),\]
\[(fD)^{[p]^{R}}=t^{p-1}(fD)^{[p]}=t^{p-1}f^{p}D^{[p]}+t^{p-1}\left( \delta_{fD}^{\Lambda^{dR}}\right)^{p-1}(f)D\\ =f^{p}D^{[p]^{R}}+\left(\delta_{fD}^{\Lambda^{\mathrm{dR},R}} \right)^{p-1}(f)D.\]
This proves that \([p]^{R}\) is a \(p\)-structure for \(\Lambda^{\mathrm{dR},R}\).
### \(p\)-structure on the reduction to the associated graded: general case
More generally, let \(\Lambda=\Lambda_{H}\) be the restricted sheaf of rings of differential operators over \(X\) given as the universal enveloping algebra of a restricted \(\mathcal{O}_{S}\)-Lie algebroid \((H,[-,-],\delta,[p])\) -- see Proposition 3.7. Consider the Rees construction \(\Lambda^{R}\) over \(X\times\mathbb{A}^{1}\) relative to \(S\times\mathbb{A}^{1}\) of the filtered sheaf of rings \(\Lambda\). Then the fiber of \(\Lambda^{R}\) over \(\lambda\in\mathbb{A}^{1}\) is the universal enveloping algebra of the \(\mathcal{O}_{S}\)-Lie algebroid \((H,\lambda[-,-],\lambda\delta)\). We also note that \(\Lambda^{R}_{1}/\Lambda^{R}_{0}=p^{*}_{X}(H)\), where \(p_{X}:X\times\mathbb{A}^{1}\to X\) is the projection onto \(X\). The anchor map \(\delta^{R}\) of \(\Lambda^{R}\) equals
\[\delta^{R}=t\delta:\Lambda^{R}_{1}/\Lambda^{R}_{0}=p^{*}_{X}(H)\to p^{*}_{X}( T_{X/S}).\]
Then the previous argument proves that the map \([p]^{R}:p^{*}_{X}(H)\to p^{*}_{X}(H)\) over \(X\times\mathbb{A}^{1}\) given by
\[D^{[p]^{R}}=t^{p-1}D^{[p]}\]
is a \(p\)-structure for \(\Lambda^{R}\). This also yields an explicit deformation of the \(p\)-structure on \(\Lambda\) to the trivial \(p\)-structure on \(\mathrm{Gr}^{\bullet}(\Lambda)\cong\mathrm{Sym}^{\bullet}(H)\).
### \(p\)-structure on the Atiyah algebroid of a line bundle
Let us study an example which is almost polynomial, but not split. Let \(L\) be a line bundle on \(X\) and take \(\Lambda\) to be the sheaf of crystalline differential operators on \(L\), i.e., the subalgebra
\[\Lambda=\mathcal{D}_{X/S}(L)\subset\operatorname{End}_{\mathcal{O}_{S}}(L)\]
generated by the relative Atiyah algebroid \(\operatorname{At}_{X/S}(L)=\operatorname{Diff}^{1}_{\mathcal{O}_{S}}(L,L)\). Note that
\[\Lambda_{1}=\operatorname{At}_{X/S}(L).\]
Local sections of \(\operatorname{At}_{X/S}(L)\) can be identified with local sections \(D\in\operatorname{End}_{\pi^{-1}(\mathcal{O}_{S})}(L)\) such that for each \(f\in\mathcal{O}_{X}\), \([D,f]\in\operatorname{End}_{\mathcal{O}_{X}}(L)=\mathcal{O}_{X}=\mathcal{D}^{ 0}(L)\). Then, for every \(D\in\operatorname{At}_{X/S}(L)\) let us denote by \(\delta_{D}:\mathcal{O}_{X}\to\mathcal{O}_{X}\) the map
\[\delta_{D}(f)=[D,f]\in\mathcal{O}_{X}.\]
Observe that, as \(\Lambda\) is associative, we have that for each \(f,g\in\mathcal{O}_{X}\)
\[\delta_{D}(fg)=[D,fg] = Dfg-fgD=Dfg-fDg+fDg-fgD\] \[= [D,f]g+f[D,g]=\delta_{D}(f)g+f\delta_{D}(g)\]
thus, \(\delta_{D}\) is an \(\mathcal{O}_{S}\)-derivation and we can consider the map \(\delta:\operatorname{At}_{X/S}(L)\longrightarrow T_{X/S}\). So we obtain the short exact sequence
\[0\longrightarrow\mathcal{O}_{X}\longrightarrow\operatorname{At}_{X/S}(L) \stackrel{{\delta}}{{\longrightarrow}}T_{X/S}\longrightarrow 0. \tag{4.1}\]
Thus the triple \((\operatorname{At}_{X/S}(L),[-,-],\delta)\) becomes a \(\mathcal{O}_{S}\)-Lie algebroid. We will now endow this Lie algebroid with a \(p\)-structure.
**Lemma 4.1**.: _Let \(D\in\operatorname{At}_{X/S}(L)\). Then for every \(f\in\mathcal{O}_{X}\), \([D^{p},f]\in\mathcal{O}_{X}\), so \(D^{p}\) can be identified with an element in \(\operatorname{At}_{X/S}(L)\) that we will denote as \(D^{[p]}\)._
Proof.: As \(\Lambda\) is an associative \(\mathcal{O}_{X}\)-algebra of characteristic \(p\) we can apply Jacobson's formula and we have that for every \(D\in\operatorname{At}_{X/S}(L)\) and every \(f\in\mathcal{O}_{X}\)
\[[D^{p},f]=\operatorname{ad}(D^{p})(f)=\operatorname{ad}(D)^{p}(f)=\delta_{D}^{ p}(f)\in\mathcal{O}_{X}.\]
**Proposition 4.2**.: _The map \([p]:\operatorname{At}_{X/S}(L)\to\operatorname{At}_{X/S}(L)\) described in the previous lemma is a \(p\)-structure for \(\Lambda\)._
Proof.: Property (1) was proved in the previous lemma. For the additivity property (2), observe that in \(\Lambda\) Jacobson's formula yields
\[(D_{1}+D_{2})^{p}=D_{1}^{p}+D_{2}^{p}+\sum_{i=1}^{p-1}s_{i}(D_{1},D_{2}).\]
As this is indeed an equality in the \(\mathcal{O}_{X}\)-algebra \(\Lambda\), the commutator of the left and right side of the equation with an element of \(\mathcal{O}_{X}\) must yield the same element of \(\mathcal{O}_{X}\), so both left and right sides remain equal under the identification of \(D_{i}^{p}\) with the corresponding element \(D_{i}^{[p]}\in\operatorname{At}_{X/S}(L)\). For (3), since \(\Lambda\) is associative, we can apply Deligne's identity [47, Proposition 5.3] and we obtain that
\[(fD)^{p}=f^{p}D^{p}+f\operatorname{ad}(D)^{p-1}(f^{p-1})D=f^{p}D^{p}+f\delta_{ D}^{p-1}(f^{p-1})D.\]
Now, applying Proposition 2.9 we have that
\[f^{p}D^{p}+f\delta_{D}^{p-1}(f^{p-1})D=f^{p}D^{p}+\delta_{fD}^{p-1}(f)D\]
and, applying a similar argument to the previous property, we obtain the desired equality. Finally, it is trivial by construction that for every \(f\in\mathcal{O}_{X}\), \(f^{[p]}=f^{p}\).
Finally, we mention that \(\Lambda=\mathcal{D}_{X/S}(L)\) coincides with the Sridharan enveloping algebra \(\Lambda_{\mathrm{At}_{X/S}(L)}\) associated to the non-split extension (4.1) of the Lie algebroid \(T_{X/S}\) by \(\mathcal{O}_{X}\) as constructed in [13, Section 4.3] (see also [13, Example 3.2.3] for this particular case) or [10, page 516].
### \(p\)-structures on the symmetric algebra
Returning to the abelian setting, let us fix \(\Lambda=\mathrm{Sym}^{\bullet}(H)\) for some locally free \(\mathcal{O}_{X}\)-module \(H\) and let us study the possible \(p\)-structures on \(\Lambda\). As before, \(\Lambda\) being abelian implies that for any \(D\in H\)
\[\mathrm{ad}_{\Lambda_{1}}(D)^{p}=0=\mathrm{ad}_{\Lambda_{1}}(D^{[p]})\]
and for any \(D_{1},D_{2}\in H\), \(s_{i}(D_{1},D_{2})=0\). Moreover, for any \(D\in H\), \(\delta_{D}=0\). Therefore, the conditions for a map \([p]:H\longrightarrow H\) to endow \(\Lambda\) with a \(p\)-structure are the following
1. \((D_{1}+D_{2})^{[p]}=D_{1}^{[p]}+D_{2}^{[p]}\),
2. \((fD)^{[p]}=f^{p}D^{[p]}\).
So a \(p\)-structure on \(\mathrm{Sym}^{\bullet}(H)\) is given by a \(p\)-linear map from \(H\) to \(H\), or equivalently by an \(\mathcal{O}_{X}\)-linear map
\[\alpha:F^{*}H\to H,\]
where \(F\) denotes the absolute Frobenius of \(X\).
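For instance, if \(X\) is a smooth projective curve over \(\mathbb{K}\) and \(H=T_{X}\) (an illustrative special case), then, using \(F^{*}T_{X}\cong T_{X}^{\otimes p}\) for the absolute Frobenius,
\[\operatorname{Hom}_{\mathcal{O}_{X}}(F^{*}T_{X},T_{X})\cong H^{0}(X,T_{X}\otimes K_{X}^{\otimes p})\cong H^{0}(X,K_{X}^{\otimes(p-1)}),\]
so the \(p\)-structures on \(\operatorname{Sym}^{\bullet}(T_{X})\) are classified by global sections of \(K_{X}^{p-1}\); this is exactly the situation used in Remark 6.7 below.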
### Classification of \(p\)-structures on a general \(\Lambda\)
In this subsection we will describe all \(p\)-structures on a given sheaf of rings of differential operators \(\Lambda\).
**Proposition 4.3**.: _Let \([p]:\Lambda_{1}\to\Lambda_{1}\) be a \(p\)-structure for \(\Lambda\). Then any other \(p\)-structure \([p]^{\prime}:\Lambda_{1}\to\Lambda_{1}\) is given by_
\[[p]^{\prime}=[p]+\varphi\circ\mathrm{sb}\]
_where \(\varphi:H\longrightarrow Z(\Lambda_{1})\) is a \(p\)-linear map from \(H=\Lambda_{1}/\Lambda_{0}\) to the centralizer \(Z(\Lambda_{1})\) of \(\Lambda_{1}\) in \(\Lambda_{1}\)._
Proof.: We put \(\varphi(D)=D^{[p]}-D^{[p]^{\prime}}\). Then for every local sections \(D,E\in\Lambda_{1}\) we have
\[\mathrm{ad}(\varphi(D))(E)=\mathrm{ad}(D^{[p]})(E)-\mathrm{ad}(D^{[p]^{\prime} })(E)=\mathrm{ad}(D)^{p}(E)-\mathrm{ad}(D)^{p}(E)=0.\]
So \(\varphi(D)\in Z(\Lambda_{1})\) for every \(D\in\Lambda_{1}\). Let \(D_{1},D_{2}\in\Lambda_{1}\). Then
\[\varphi(D_{1}+D_{2})=D_{1}^{[p]}+D_{2}^{[p]}+\sum_{i=1}^{p-1}s_{i}(D_{1},D_{2})-D_{1}^{[p]^{\prime}}-D_{2}^{[p]^{\prime}}-\sum_{i=1}^{p-1}s_{i}(D_{1},D_{2})=\varphi(D_{1})+\varphi(D_{2}).\]
Similarly
\[\varphi(fD)=f^{p}D^{[p]}+\delta_{fD}^{p-1}(f)D-f^{p}D^{[p]^{\prime}}-\delta_{fD }^{p-1}(f)D=f^{p}\varphi(D).\]
So \(\varphi:\Lambda_{1}\to\Lambda_{1}\) is \(p\)-linear. Moreover, clearly
\[\varphi(f)=f^{[p]}-f^{[p]^{\prime}}=f^{p}-f^{p}=0.\]
So \(\varphi\) factors through the quotient \(\varphi:H\longrightarrow Z(\Lambda_{1})\).
Conversely, let \([p]:\Lambda_{1}\to\Lambda_{1}\) be a \(p\)-structure on \(\Lambda\) and let \(\varphi:H\to Z(\Lambda_{1})\) be a \(p\)-linear map. We then define \(D^{[p]^{\prime}}=D^{[p]}+\varphi(\operatorname{sb}(D))\). Then for every local section \(D_{1},D_{2},D,E\in\Lambda_{1}\) and every local section \(f\in\mathcal{O}_{X}\)
\[\operatorname{ad}(D^{[p]^{\prime}})(E)=\operatorname{ad}(D^{[p]})(E)+\operatorname{ad}(\varphi(\operatorname{sb}(D)))(E)=\operatorname{ad}(D)^{p}(E),\]
\[(D_{1}+D_{2})^{[p]^{\prime}}=(D_{1}+D_{2})^{[p]}+\varphi(\operatorname{sb}(D_{1})+\operatorname{sb}(D_{2}))\\ =D_{1}^{[p]}+D_{2}^{[p]}+\sum_{i=1}^{p-1}s_{i}(D_{1},D_{2})+\varphi(\operatorname{sb}(D_{1}))+\varphi(\operatorname{sb}(D_{2}))=D_{1}^{[p]^{\prime}}+D_{2}^{[p]^{\prime}}+\sum_{i=1}^{p-1}s_{i}(D_{1},D_{2})\\ (fD)^{[p]^{\prime}}=(fD)^{[p]}+\varphi(\operatorname{sb}(fD))=f^{p}D^{[p]}+\delta_{fD}^{p-1}(f)D+f^{p}\varphi(\operatorname{sb}(D))\\ =f^{p}D^{[p]^{\prime}}+\delta_{fD}^{p-1}(f)D,\]
\[f^{[p]^{\prime}}=f^{[p]}+\varphi(\operatorname{sb}(f))=f^{p}+\varphi(0)=f^{p}.\]
So \([p]^{\prime}:\Lambda_{1}\to\Lambda_{1}\) induces a \(p\)-structure on \(\Lambda\).
**Corollary 4.4**.: _The \(p\)-structures on \(\Lambda^{dR}=\mathcal{D}_{X/S}\) are classified by global \(1\)-forms \(\omega\in H^{0}(F^{*}\Omega^{1}_{X/S})\) and are given by_
\[(f+v)^{[p]^{\prime}}=f^{p}+v^{[p]}+v^{p-1}(f)+\omega(F^{*}v)\]
_for \(f\in\mathcal{O}_{X}\) and \(v\in T_{X/S}\), where \([p]:T_{X/S}\to T_{X/S}\) denotes the canonical \(p\)-structure on the relative tangent bundle given by the \(p\)-th power of vector fields._
Proof.: We know that the \(p\)-th power on \(T_{X/S}\) induces a \(p\)-structure \([p]\) on \(\mathcal{D}_{X/S}\) given by
\[(f+v)^{[p]}=f^{p}+v^{[p]}+v^{p-1}(f)\]
for \(f\in\mathcal{O}_{X}\) and \(v\in T_{X/S}\). So by Proposition 4.3 any other \(p\)-structure is given by adding a \(\mathcal{O}_{X}\)-linear map \(\varphi:F^{*}T_{X}\longrightarrow Z(\mathcal{D}^{1}_{X/S})\) composed with the symbol. Let us compute the center \(Z(\mathcal{D}^{1}_{X/S})\). Any element of \(Z(\mathcal{D}^{1}_{X/S})\) has to commute in particular with all elements in \(\mathcal{D}^{0}_{X/S}=\mathcal{O}_{X}\). But the elements of \(T_{X/S}\) that commute with \(\mathcal{O}_{X}\) are those in the kernel of the anchor map \(\delta:T_{X/S}\to T_{X/S}\), which is the identity map. Thus we obtain that \(Z(\mathcal{D}^{1}_{X/S})\subset\mathcal{O}_{X}\) and we have
\[F_{*}(Z(\mathcal{D}^{1}_{X/S}))=\mathcal{O}_{X}=\ker(d:F_{*}\mathcal{O}_{X} \to F_{*}\Omega^{1}_{X/S}).\]
Therefore, any other \(p\)-structure \([p]^{\prime}\) must equal \([p]+\varphi\circ\operatorname{sb}\), where \(\varphi:F^{*}T_{X/S}\longrightarrow F^{*}\mathcal{O}_{X}\) is \(\mathcal{O}_{X}\)-linear, which corresponds to a global \(1\)-form in \(H^{0}(F^{*}\Omega^{1}_{X/S})\), yielding the result.
## 5. \(p\)-curvature of a restricted \(\Lambda\)-module
Let \(\Lambda\) be a sheaf of rings of differential operators on \(X\) over \(S\) and let \(E\) be a coherent \(\mathcal{O}_{X}\)-module.
**Definition 5.1**.: _A \(\Lambda\)-module structure on \(E\) is an \(\mathcal{O}_{X}\)-linear map_
\[\nabla:\Lambda\otimes_{\mathcal{O}_{X}}E\longrightarrow E\]
_satisfying the usual module axioms and such that the \(\mathcal{O}_{X}\)-module structure on \(E\) induced by \(\mathcal{O}_{X}\to\Lambda\) coincides with the original one._
We will denote a \(\Lambda\)-module \(E\) by \((E,\nabla)\) and for any local section \(D\in\Lambda\) the \(\mathcal{O}_{S}\)-linear endomorphism of \(E\) induced by the action of \(D\) will be denoted by \(\nabla_{D}\in\operatorname{End}_{\mathcal{O}_{S}}(E)\). Assume now that \(\Lambda\) is restricted, with \(p\)-structure \([p]\). Given a \(\Lambda\)-module \((E,\nabla)\) and a local section \(D\in\Lambda_{1}\) we define the \(p\)-curvature \(\psi_{\nabla}(D):E\longrightarrow E\) as the map
\[\psi_{\nabla}(D)=(\nabla_{D})^{p}-\nabla_{D^{[p]}}\in\operatorname{End}_{ \mathcal{O}_{S}}(E).\]
We observe that we can define the \(p\)-curvature in terms of the map \(\iota:\Lambda_{1}\to\Lambda\) defined in Subsection 2.3 as follows
\[\psi_{\nabla}(D)=(\nabla_{D})^{p}-\nabla_{D^{[p]}}=\nabla_{D^{p}}-\nabla_{D^{[ p]}}=\nabla_{D^{p}-D^{[p]}}=\nabla_{\iota(D)}.\]
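In the classical case \(\Lambda=\mathcal{D}_{X/S}\) this is the usual \(p\)-curvature of a connection. For instance (assuming, purely for illustration, \(X=\mathbb{A}^{1}=\operatorname{Spec}\mathbb{K}[x]\), \(E=\mathcal{O}_{\mathbb{A}^{1}}\) and \(\nabla=d+a\,dx\) with \(a\in\mathbb{K}[x]\), so that \(\nabla_{\partial}=\partial+a\)), Jacobson's identity together with Lemma 3.3 and \(\partial^{[p]}=0\) give
\[\psi_{\nabla}(\partial)=(\partial+a)^{p}-\nabla_{\partial^{[p]}}=\partial^{p}+a^{p}+\partial^{p-1}(a)=a^{p}+\partial^{p-1}(a)\]
as endomorphisms of \(E=\mathcal{O}_{\mathbb{A}^{1}}\), i.e. \(\psi_{\nabla}(\partial)\) is multiplication by the function \(a^{p}+\partial^{p-1}(a)\).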
**Proposition 5.2**.: _For any \(D\in\Lambda_{1}\), \(\psi_{\nabla}(D):E\to E\) is an \(\mathcal{O}_{X}\)-linear map._
Proof.: By definition the \(\mathcal{O}_{X}\)-module structure induced by the action of \(\Lambda\) on \(E\) coincides with the \(\mathcal{O}_{X}\)-module structure of \(E\), so for any local sections \(s\in E\) and \(f\in\mathcal{O}_{X}\) we have
\[fs=\nabla_{f}(s).\]
Moreover, as \(\iota(D)\in Z(\Lambda)\) we have for any local section \(D\in\Lambda_{1}\)
\[\psi_{\nabla}(D)(fs) = \nabla_{\iota(D)}\circ\nabla_{f}(s)=\nabla_{\iota(D)f}(s)\] \[= \nabla_{f\iota(D)}(s)=\nabla_{f}\circ\nabla_{\iota(D)}(s)=f\psi_ {\nabla}(D)(s).\]
This, together with Proposition 2.11 and the fact that \(\iota\) factors through the symbol, proves that the \(p\)-curvature induces a \(p\)-linear map
\[\psi_{\nabla}:H\longrightarrow\operatorname{End}_{\mathcal{O}_{X}}(E),\]
where \(H=\Lambda_{1}/\Lambda_{0}\). So we obtain an \(\mathcal{O}_{X}\)-linear map
\[\psi_{\nabla}:F^{*}H\longrightarrow\operatorname{End}_{\mathcal{O}_{X}}(E).\]
**Proposition 5.3**.: _For each \(\Lambda\)-module \((E,\nabla)\) the \(p\)-curvature \(\psi_{\nabla}\) induces a \(F^{*}H^{\vee}\)-valued Higgs field on \(E\), i.e., a morphism of \(\mathcal{O}_{X}\)-algebras_
\[\tilde{\psi}_{\nabla}:\operatorname{Sym}^{\bullet}F^{*}H\longrightarrow \operatorname{End}_{\mathcal{O}_{X}}(E).\]
_Moreover, for any local sections \(D\in H\) and \(D^{\prime}\in\Lambda\), \(\nabla_{D^{\prime}}\) commutes with \(\psi_{\nabla}(D)\)._
Proof.: We have already proven that the \(p\)-curvature induces an \(\mathcal{O}_{X}\)-linear map \(\psi_{\nabla}:F^{*}H\longrightarrow\operatorname{End}_{\mathcal{O}_{X}}(E)\). In order for this map to lift to a morphism of algebras \(\operatorname{Sym}^{\bullet}F^{*}H\longrightarrow\operatorname{End}_{ \mathcal{O}_{X}}(E)\), it is necessary that for each \(D_{1},D_{2}\in H\)
\[[\psi_{\nabla}(D_{1}),\psi_{\nabla}(D_{2})]=0.\]
But, taking into account that from Proposition 2.12 we know that the image of \(\iota:F^{*}H\to\Lambda\) lies in the center \(Z(\Lambda)\), we have
\[[\psi_{\nabla}(D_{1}),\psi_{\nabla}(D_{2})]=\big{[}\nabla_{\iota(D_{1})},\nabla_ {\iota(D_{2})}\big{]}=\nabla_{[\iota(D_{1}),\iota(D_{2})]}=\nabla_{0}=0.\]
The second part follows from a similar computation
\[[\psi_{\nabla}(D),\nabla_{D^{\prime}}]=\big{[}\nabla_{\iota(D_{1})},\nabla_{D^ {\prime}}\big{]}=\nabla_{[\iota(D),D^{\prime}]}=\nabla_{0}=0.\]
**Remark 5.4**.: _The previous proposition was already obtained in [14, Lemma 4.9] for modules over restricted \(\mathcal{O}_{S}\)-Lie algebroids \(H\), which correspond to \(\Lambda\)-modules, where \(\Lambda=\Lambda_{H}\) is the universal enveloping algebra of the \(\mathcal{O}_{S}\)-Lie algebroid \(H\). We note that the proofs of the two previous propositions are similar to those given in [14], but rely on the more general statement obtained in Proposition 2.12 for general restricted sheaves of rings of differential operators._
## 6. Hitchin map for restricted \(\Lambda\)-modules
In this section we assume that \(X\) is an integral projective scheme over \(S=\operatorname{Spec}(\mathbb{K})\). This assumption is needed in our main Theorem 6.6. We refer the reader to [14] sections 3.5 and 4.5 for a construction of the Hitchin map in the relative case.
Given a restricted sheaf \(\Lambda\) of rings of differential operators on \(X\) and a \(\Lambda\)-module \((E,\nabla)\) of rank \(r\) over \(X\), we have proved in Proposition 5.3 that the \(p\)-curvature of \((E,\nabla)\) induces a \(F^{*}H^{\vee}\)-valued Higgs field on \(E\)
\[\psi_{\nabla}\in H^{0}(X,\operatorname{End}(E)\otimes F^{*}H^{\vee}).\]
Then, by taking the (classical) Hitchin map \(h\) for rank-\(r\) Higgs sheaves we obtain a point in the Hitchin base \(\mathcal{A}_{r}(X,F^{*}H^{\vee})\)
\[h(E,\psi_{\nabla})=(\operatorname{tr}(\wedge^{k}\psi_{\nabla}))_{k=1}^{r}\in \mathcal{A}_{r}(X,F^{*}H^{\vee}):=\bigoplus_{k=1}^{r}H^{0}(X,\operatorname{Sym }^{k}(F^{*}H^{\vee})).\]
Therefore, the \(p\)-curvature map \((E,\nabla)\mapsto\psi_{\nabla}\) composed with the Hitchin map \(h\) defines a map \(h_{\Lambda}\)
\[h_{\Lambda}:\mathcal{M}_{X}^{\Lambda}(r,P)\longrightarrow\mathcal{A}_{r}(X,F^ {*}H^{\vee}),\qquad(E,\nabla)\mapsto h(E,\psi_{\nabla}), \tag{6.1}\]
where \(\mathcal{M}_{X}^{\Lambda}(r,P)\) denotes the coarse moduli space parameterizing Gieseker semi-stable \(\Lambda\)-modules over \(X\) of rank \(r\) and with Hilbert polynomial \(P\) ([15], [14], [14]).
In order to understand the structure of the map \(h_{\Lambda}\), let us first consider the example given by the trivial \(p\)-structure on the symmetric algebra \(\operatorname{Sym}^{\bullet}(H)\) -- see Subsection 4.2. In that case a \(\operatorname{Sym}^{\bullet}(H)\)-module is an \(H^{\vee}\)-valued Higgs sheaf and its \(p\)-curvature is just the \(p\)-th power of the Higgs field
\[\psi_{\nabla}(D)=\nabla_{D^{p}}=(\nabla_{D})^{p}.\]
Then it is easily seen that the coefficients of the characteristic polynomial of \(\psi_{\nabla}\) are pull-backs by the Frobenius map of global sections in \(H^{0}(X,\operatorname{Sym}^{k}(H^{\vee}))\).
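For instance (a purely illustrative special case), suppose \(E=\mathcal{O}_{X}^{r}\) and the Higgs field is diagonal, \(\phi=\operatorname{diag}(\theta_{1},\dots,\theta_{r})\) with \(\theta_{i}\in H^{0}(X,H^{\vee})\). Then \(\psi_{\nabla}=\operatorname{diag}(F^{*}\theta_{1},\dots,F^{*}\theta_{r})\) and, since the pull-back of sections is additive and multiplicative,
\[\operatorname{tr}(\wedge^{k}\psi_{\nabla})=\sum_{i_{1}<\dots<i_{k}}F^{*}\theta_{i_{1}}\cdots F^{*}\theta_{i_{k}}=F^{*}\left(\operatorname{tr}(\wedge^{k}\phi)\right),\]
so the Hitchin invariants of \((E,\psi_{\nabla})\) are the Frobenius pull-backs of those of \((E,\phi)\).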
Before proving our main result on the map \(h_{\Lambda}\), we will need to recall the definition of the canonical connection
\[\nabla^{\operatorname{can}}:F^{*}\mathcal{G}\longrightarrow F^{*}\mathcal{G} \otimes\Omega^{1}_{X}\]
on a pull-back sheaf \(F^{*}\mathcal{G}\) for a coherent \(\mathcal{O}_{X}\)-module \(\mathcal{G}\) under the absolute Frobenius map \(F\) of \(X\). Over an affine open subset \(\operatorname{Spec}(A)=U\subset X\), we denote the \(A\)-module of local sections \(\mathcal{G}(U)\) by \(M\). Then local sections of the pull-back \(F^{*}\mathcal{G}(U)\) correspond to \(A\otimes_{A}M\) with the \(A\)-module structure given by left multiplication and the action of \(A\) on \(A\) given by the Frobenius map \(F\). In other words, we have the identifications \(\lambda^{p}a\otimes_{A}m=a\otimes_{A}\lambda m\) for any \(\lambda,a\in A\) and \(m\in M\). Then with this notation the canonical connection is defined by
\[\nabla^{\operatorname{can}}:a\otimes_{A}m\mapsto da\otimes_{A}m,\]
or equivalently, \(\nabla^{\operatorname{can}}_{\partial}(a\otimes_{A}m)=\partial a\otimes m\) for any vector field \(\partial\).
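Note that \(\nabla^{\operatorname{can}}\) is well defined with respect to the identification \(\lambda^{p}a\otimes_{A}m=a\otimes_{A}\lambda m\): since \(d(\lambda^{p})=p\lambda^{p-1}d\lambda=0\), the two representatives of the same local section have the same image,
\[\nabla^{\operatorname{can}}(\lambda^{p}a\otimes_{A}m)=d(\lambda^{p}a)\otimes_{A}m=\lambda^{p}\,da\otimes_{A}m=da\otimes_{A}\lambda m=\nabla^{\operatorname{can}}(a\otimes_{A}\lambda m).\]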
**Lemma 6.1**.: _Let \(\mathcal{G}\) be a torsion-free \(\mathcal{O}_{X}\)-module over an integral scheme \(X\) and let \(s\in H^{0}(X,F^{*}\mathcal{G})\) be a global section. Suppose that there exists an open subset \(\Omega\subset X\) such that_
\[\nabla^{\operatorname{can}}_{\partial}(s_{|\Omega})=0\]
_for any vector field \(\partial\) over \(\Omega\). Then \(s\) descends under the Frobenius map, i.e. there exists \(s^{\prime}\in H^{0}(X,\mathcal{G})\) such that \(s=F^{*}(s^{\prime})\)._
Proof.: It will be enough to show the statement locally on an affine open subset \(\operatorname{Spec}(A)\) of \(X\). We then apply Cartier's theorem over \(\operatorname{Spec}(K)\), where \(K\) is the fraction field of \(A\), and obtain the existence of the Frobenius descent \(s^{\prime}\) over \(\operatorname{Spec}(K)\). Now the section \(s\) also descends over \(\operatorname{Spec}(A)\) since \(\mathcal{G}\) is torsion-free. The computations are straightforward and left to the reader.
**Lemma 6.2**.: _Let \((E,\nabla)\) be a \(\Lambda\)-module. Then for any local sections \(D\in H=\Lambda_{1}/\Lambda_{0}\) and \(D^{\prime}\in\Lambda_{1}\) we have the commutation relation_
\[\nabla_{D^{\prime}}\circ(\operatorname{Id}\otimes D)=(\operatorname{Id}\otimes D)\circ\tilde{\nabla}_{D^{\prime}}\;:\;E\otimes F^{*}H^{\vee}\longrightarrow E,\]
_where we define the endomorphism \(\tilde{\nabla}_{D^{\prime}}\) by_
\[\tilde{\nabla}_{D^{\prime}}=\nabla_{D^{\prime}}\otimes\operatorname{Id}_{F^{*}H^{\vee}}+\operatorname{Id}_{E}\otimes\nabla^{\operatorname{can}}_{\delta\operatorname{sb}(D^{\prime})}.\]
Proof.: It is enough to work locally over an affine open subset \(U=\operatorname{Spec}(A)\). Consider an irreducible tensor \(v\otimes a\otimes h\in(E\otimes F^{*}H^{\vee})(U)\), where \(v\in E(U)\), \(h\in H^{\vee}(U)\), \(a\in A=\mathcal{O}_{X}(U)\) and the last tensor product is taken over the Frobenius map, i.e.
\[\lambda^{p}v\otimes a\otimes h=v\otimes\lambda^{p}a\otimes h=v\otimes a\otimes \lambda h.\]
Then, using associativity of the ring \(\Lambda\) and the fact that \([D^{\prime},f]=\delta_{\operatorname{sb}(D^{\prime})}(f)\) for any \(D^{\prime}\in\Lambda_{1}\) and \(f\in\mathcal{O}_{X}\), we have the following
\[\tilde{\nabla}_{D^{\prime}}(v\otimes a\otimes h)=\nabla_{D^{\prime }}(v)\otimes a\otimes h+v\otimes\nabla_{\delta\operatorname{sb}(D^{\prime})}^{ \operatorname{can}}(a\otimes h)\\ =\nabla_{D^{\prime}}(v)\otimes a\otimes h+v\otimes\delta_{ \operatorname{sb}(D^{\prime})}(a)\otimes h.\]
Applying \(\operatorname{Id}\otimes D\) for a local section \(D\in H\) we have
\[(\operatorname{Id}\otimes D)\circ\tilde{\nabla}_{D^{\prime}}(v \otimes a\otimes h)=\nabla_{D^{\prime}}(v)\otimes a\otimes\langle h,D\rangle+ v\otimes\delta_{\operatorname{sb}(D^{\prime})}(a)\otimes\langle h,D\rangle\\ =\langle h,D\rangle^{p}a\nabla_{D^{\prime}}(v)+\langle h,D\rangle ^{p}\delta_{\operatorname{sb}(D^{\prime})}(a)v,\]
where \(\langle-,-\rangle\) denotes the standard pairing between \(H^{\vee}\) and \(H\). On the other hand
\[\nabla_{D^{\prime}}\circ(\operatorname{Id}\otimes D)(v\otimes a\otimes h)=\nabla_{D^{\prime}}(\langle h,D\rangle^{p}av)=\langle h,D\rangle^{p}a\nabla_{D^{\prime}}(v)+\delta_{\operatorname{sb}(D^{\prime})}(\langle h,D\rangle^{p}a)v\\ =\langle h,D\rangle^{p}a\nabla_{D^{\prime}}(v)+\langle h,D\rangle^{p}\delta_{\operatorname{sb}(D^{\prime})}(a)v\]
so we obtain the desired equality for an irreducible tensor. By additivity we conclude equality for any local section of \(E\otimes F^{*}H^{\vee}\).
**Corollary 6.3**.: _Let \((E,\nabla)\) be a \(\Lambda\)-module and let \(\psi_{\nabla}:E\longrightarrow E\otimes F^{*}H^{\vee}\) denote its \(p\)-curvature. Then for any local section \(D^{\prime}\in\Lambda_{1}\) we have the commutation relation_
\[\tilde{\nabla}_{D^{\prime}}\circ\psi_{\nabla}=\psi_{\nabla}\circ\nabla_{D^{\prime}}\;:\;E\longrightarrow E\otimes F^{*}H^{\vee},\]
_where \(\tilde{\nabla}_{D^{\prime}}\) was defined in the previous lemma._
Proof.: By Proposition 5.3 we know that for any local sections \(D\in H\) and \(D^{\prime}\in\Lambda_{1}\) the two endomorphisms \(\psi_{\nabla}(D)\) and \(\nabla_{D^{\prime}}\) commute. Moreover, \(\psi_{\nabla}(D):E\to E\) is the composition of the following maps
\[E\xrightarrow{\psi_{\nabla}}E\otimes F^{*}H^{\vee}\xrightarrow{\operatorname{ Id}\otimes D}E\]
so, combining the commutation of \(\psi_{\nabla}(D)\) with \(\nabla_{D^{\prime}}\) (Proposition 5.3) with Lemma 6.2, we obtain that for any \(D\in H\) and \(D^{\prime}\in\Lambda_{1}\)
\[0=\nabla_{D^{\prime}}\circ\psi_{\nabla}(D)-\psi_{\nabla}(D) \circ\nabla_{D^{\prime}}=\nabla_{D^{\prime}}\circ(\operatorname{Id}\otimes D) \circ\psi_{\nabla}-(\operatorname{Id}\otimes D)\circ\psi_{\nabla}\circ\nabla _{D^{\prime}}\\ =(\operatorname{Id}\otimes D)\circ\tilde{\nabla}_{D^{\prime}} \circ\psi_{\nabla}-(\operatorname{Id}\otimes D)\circ\psi_{\nabla}\circ\nabla _{D^{\prime}}\\ =(\operatorname{Id}\otimes D)\circ\left(\tilde{\nabla}_{D^{\prime }}\circ\psi_{\nabla}-\psi_{\nabla}\circ\nabla_{D^{\prime}}\right).\]
As this composition is zero for any \(D\in H\) and the kernel of the evaluation map in \(F^{*}H^{\vee}\) is trivial, we obtain that
\[\tilde{\nabla}_{D^{\prime}}\circ\psi_{\nabla}-\psi_{\nabla}\circ\nabla_{D^{ \prime}}=0.\]
The next proposition will be used in the proof of the main result (Theorem 6.6).
**Proposition 6.4**.: _Assume that \(p=\operatorname{char}(\mathbb{K})>2\). Let \(\Lambda\) be a restricted sheaf of differential operators on \(X\) over \(S\) and let \(\mathcal{E}\) be a coherent \(\mathcal{O}_{X}\)-module together with a morphism of \(\mathcal{O}_{S}\)-modules \(\nabla:\Lambda_{1}\longrightarrow\operatorname{End}_{\mathcal{O}_{S}}( \mathcal{E})\) satisfying for any local sections \(f,g\in\mathcal{O}_{X}\), \(s\in\mathcal{E}\) and \(D\in\Lambda_{1}\)_
\[\nabla_{D}(fs)=f\nabla_{D}(s)+\delta_{\operatorname{sb}(D)}(f)s\quad\text{and}\quad\nabla_{g}(s)=gs.\]
_Let \(\mathcal{G}\) be a coherent \(\mathcal{O}_{X}\)-module and let \(\psi:\mathcal{E}\to\mathcal{E}\otimes F^{*}\mathcal{G}\) be an \(\mathcal{O}_{X}\)-linear map. Suppose that for \(D\in\Lambda_{1}\) we have the commutation relation_
\[\tilde{\nabla}_{D}\circ\psi=\psi\circ\nabla_{D}\;:\;\mathcal{E}\longrightarrow\mathcal{E}\otimes F^{*}\mathcal{G},\]
_where the endomorphism \(\tilde{\nabla}_{D}\) of \(\mathcal{E}\otimes F^{*}\mathcal{G}\) is defined by_
\[\tilde{\nabla}_{D}=\nabla_{D}\otimes\operatorname{Id}_{F^{*}\mathcal{G}}+ \operatorname{Id}_{\mathcal{E}}\otimes\nabla^{\operatorname{can}}_{\delta \operatorname{sb}(D)}.\]
_Then over an open dense subset of \(X\) we have_
\[\nabla^{\operatorname{can}}_{\delta\operatorname{sb}(D)}(\operatorname{tr}( \psi))=0,\]
_where \(\operatorname{tr}(\psi)\in H^{0}(X,F^{*}\mathcal{G})\) denotes the trace of the \(\mathcal{O}_{X}\)-linear endomorphism \(\psi\)._
Proof.: Since \(X\) is integral, we can restrict attention to the open dense subset \(\Omega\subset X\) where both \(\mathcal{E}\) and \(\mathcal{G}\) are locally free. Moreover, it will be enough to check the equality locally. For \(x\in\Omega\) we denote by \(\mathcal{O}\) the local ring of \(\mathcal{O}_{X}\) at the point \(x\). Then we can write
\[\nabla_{D}=\partial+A,\]
where \(\partial=\delta\circ\operatorname{sb}(D)\) is a derivation on \(\mathcal{O}\) and \(A\) is a \(r\times r\) matrix with values in \(\mathcal{O}\) and \(r=\operatorname{rk}(\mathcal{E})\). Similarly, let \(n=\operatorname{rk}(\mathcal{G})\) and choosing an \(\mathcal{O}\)-basis of \(\mathcal{G}_{x}\) then \(\psi\) corresponds to \(n\)\(r\times r\) matrices \(B_{1},\dots,B_{n}\) with values in \(\mathcal{O}\). Then the commutation relations translate into the following \(n\) equalities for \(i=1,\dots,n\) in \(\operatorname{End}(\mathcal{O}^{\oplus r})\)
\[B_{i}(\partial+A) = ((\partial+A)\otimes\operatorname{Id}+\operatorname{Id}\otimes \partial)B_{i}\] \[\Longleftrightarrow B_{i}\partial+B_{i}A = \partial B_{i}+AB_{i}+\partial.B_{i}\] \[\Longleftrightarrow [B_{i},A] = [\partial,B_{i}]+\partial.B_{i}=2\partial.B_{i},\]
where \(\partial.B_{i}\) denotes the matrix obtained from \(B_{i}\) by applying the derivation \(\partial\) to all of its coefficients. We also have used the well-known identity \([\partial,B_{i}]=\partial.B_{i}\). Taking the trace, we obtain
\[0=\operatorname{tr}([B_{i},A])=2\operatorname{tr}(\partial.B_{i})=2\partial( \operatorname{tr}(B_{i})).\]
Hence, since \(p>2\), we obtain
\[\partial(\operatorname{tr}(B_{i}))=0\quad\text{for }i=1,\dots,n.\]
This shows the result.
**Proposition 6.5**.: _Let \((E,\nabla)\) be a restricted \(\Lambda\)-module of rank \(r\) and let \(\psi_{\nabla}:E\to E\otimes F^{*}H^{\vee}\) denote its \(p\)-curvature. Then for \(i=1,\dots,r\) the \(\mathcal{O}_{X}\)-linear composite map_
\[\psi_{i}:\Lambda^{i}E\stackrel{{\Lambda^{i}\psi_{\nabla}}}{{ \longrightarrow}}\Lambda^{i}(E\otimes F^{*}H^{\vee})\stackrel{{ pr}}{{\longrightarrow}}\Lambda^{i}E\otimes F^{*}\operatorname{Sym} ^{i}H^{\vee}\]
_of \(\Lambda^{i}\psi_{\nabla}\) with the natural projection map \(\operatorname{pr}\) satisfies the commutation relations of Proposition 6.4 with \(\mathcal{E}=\Lambda^{i}E\), \(\mathcal{G}=\operatorname{Sym}^{i}H^{\vee}\), \(\psi=\psi_{i}\) and the natural actions of \(\Lambda_{1}\) on \(\mathcal{E}\) and \(\mathcal{G}\) induced by \(\nabla\)._
Proof.: We observe that if \((E,\nabla)\) is a \(\Lambda\)-module, the exterior power \(\Lambda^{i}E\) need not necessarily be a \(\Lambda\)-module, but \(\Lambda^{i}E\) can be equipped with an action of \(\Lambda_{1}\) satisfying the properties given in Proposition 6.4. Since \(\psi_{i}\) is a composite map, it will be enough to check that the two maps \(\Lambda^{i}\psi_{\nabla}\) and \(\operatorname{pr}\) satisfy the commutation relations. Both checks follow immediately from the definitions of the maps.
We can now state our main result.
**Theorem 6.6**.: _Assume that \(p=\operatorname{char}(\mathbb{K})>2\). Let \(\Lambda\) be a restricted sheaf of rings of differential operators over \(X\). We assume that \(H=\Lambda_{1}/\Lambda_{0}\) is locally free and that the anchor map \(\delta:H\to T_{X}\) is generically surjective. Then the coefficients \(\operatorname{tr}(\psi_{i})\) of the characteristic polynomial of the \(p\)-curvature \(\psi_{\nabla}\) of a \(\Lambda\)-module \((E,\nabla)\) over \(X\) are \(p\)-th powers, i.e. descend under the Frobenius map \(F\) of \(X\). This implies that the above defined Hitchin map \(h_{\Lambda}\) (6.1) factorizes through the Hitchin base \(\mathcal{A}_{r}(X,H^{\vee})=\bigoplus_{k=1}^{r}H^{0}(X,\operatorname{Sym}^{k}(H^{\vee}))\), i.e. there is a map \(\mathcal{M}_{X}^{\Lambda}(r,P)\to\mathcal{A}_{r}(X,H^{\vee})\) whose composition with the pull-back map \(F^{*}:\mathcal{A}_{r}(X,H^{\vee})\to\mathcal{A}_{r}(X,F^{*}H^{\vee})\) of global sections under the Frobenius map \(F\) of \(X\) equals \(h_{\Lambda}\)._
Proof.: Let \((E,\nabla)\) be a restricted \(\Lambda\)-module of rank \(r\) with \(p\)-curvature \(\psi_{\nabla}\). Proposition 6.5 shows that the global section \(\psi_{i}:\Lambda^{i}E\to\Lambda^{i}E\otimes F^{*}\operatorname{Sym}^{i}H^{\vee}\) obtained by projecting \(\Lambda^{i}\psi_{\nabla}\) satisfies the commutation relations of Proposition 6.4. Therefore, applying Proposition 6.4, we can conclude that for any local section \(D\in\Lambda_{1}\)
\[\nabla^{\operatorname{can}}_{\delta\operatorname{sb}(D)}(\operatorname{tr}(\psi_{i}))=0\]
over an open subset \(\Omega_{1}\) of \(X\). Let \(\Omega_{2}\) be an open subset where the anchor map \(\delta\) is surjective. Then over \(\Omega_{1}\cap\Omega_{2}\) we have \(\nabla^{\operatorname{can}}_{\partial}(\operatorname{tr}(\psi_{i}))=0\) for any local vector field \(\partial\). Now we can apply Lemma 6.1, since \(X\) is integral and \(H\) locally free.
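As a sanity check in the simplest situation (an affine example, so only the descent statement is being illustrated, and assuming \(\mathbb{K}\) perfect), take \(\Lambda=\mathcal{D}_{\mathbb{A}^{1}}\), \(E=\mathcal{O}_{\mathbb{A}^{1}}\) and \(\nabla=d+a\,dx\) with \(a=\sum_{k}c_{k}x^{k}\in\mathbb{K}[x]\). The \(p\)-curvature is multiplication by \(a^{p}+\partial^{p-1}(a)\) (Jacobson's identity and Lemma 3.3, with \(\partial^{[p]}=0\)), and the falling factorial \(k(k-1)\cdots(k-p+2)\) vanishes modulo \(p\) unless \(k\equiv p-1\pmod{p}\), in which case it equals \((p-1)!\equiv-1\); hence
\[\operatorname{tr}(\psi_{\nabla})=a^{p}+\partial^{p-1}(a)=\sum_{k}c_{k}^{p}x^{kp}-\sum_{k\equiv p-1\,(p)}c_{k}\,x^{k-p+1}\]
is a polynomial in \(x^{p}\) whose coefficients lie in \(\mathbb{K}=\mathbb{K}^{p}\), i.e. a \(p\)-th power, as predicted.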
**Remark 6.7**.: _The following example shows that the assumption that \(\delta:H\to T_{X}\) is generically surjective cannot be dropped in Theorem 6.6. Let \(X\) be a smooth projective curve of genus \(g\geq 2\) over \(S=\operatorname{Spec}(\mathbb{K})\) and let \(T_{X}\) (resp. \(K_{X}\)) be its tangent (resp. canonical) line bundle. We choose a non-zero global section \(\varphi\in H^{0}(X,K_{X}^{p-1})\) with reduced zero divisor. We consider as explained in Subsection 4.6 the symmetric algebra \(\Lambda=\operatorname{Sym}^{\bullet}(T_{X})\) with the \(p\)-structure given by the \(\mathcal{O}_{X}\)-linear
map \(\alpha:F^{*}T_{X}=T_{X}^{\otimes p}\to T_{X}\) corresponding to the multiplication with \(\varphi\). Note that in this case \(\delta=0\). Then a \(\Lambda\)-module \((E,\nabla)\) over \(X\) corresponds to a vector bundle \(E\) together with a Higgs field, i.e., an \(\mathcal{O}_{X}\)-linear map \(\nabla:E\to E\otimes K_{X}\). The \(p\)-curvature \(\psi_{\nabla}\) of \((E,\nabla)\) then corresponds to the \(\mathcal{O}_{X}\)-linear map \(E\to E\otimes F^{*}K_{X}\)_
\[\psi_{\nabla}=\nabla^{p}-\alpha.\nabla,\]
_where \(\alpha.\nabla\) denotes the composite map \((\mathrm{id}_{E}\otimes\alpha^{\vee})\circ\nabla\). Then clearly \(\mathrm{tr}(\psi_{\nabla})\) does not descend under the Frobenius map._
**Remark 6.8**.: _The previous remark shows that asking for a generically surjective anchor \(\delta:H\longrightarrow T_{X}\) is indeed necessary for the Theorem, but in some situations this condition is automatically satisfied. For instance, if \(X\) is a smooth curve, then any nonzero map \(\delta:H\longrightarrow T_{X}\) is generically surjective. As a consequence, for smooth curves, any restricted sheaf of rings of differential operators \(\Lambda\) on \(X\) with nonzero anchor satisfies the hypotheses of Theorem 6.6. In particular, this holds when \(\Lambda\) is the universal enveloping algebra of any restricted Lie algebroid \((H,[-,-],\delta,[p])\) with \(\delta\neq 0\) or, more generally, for any \(\Lambda\) in which the left and right \(\mathcal{O}_{X}\)-module structures are different (see Remark 2.2)._
## 7. Hitchin map for restricted \(\Lambda^{R}\)-modules
The argument used to show Theorem 6.6 can be adapted to the following particular relative case: consider for an integral projective scheme \(X\) over \(S=\mathrm{Spec}(\mathbb{K})\) the restricted sheaf of rings of differential operators \(\Lambda^{R}\) over \(X\times\mathbb{A}^{1}\) relative to \(\mathbb{A}^{1}\) obtained via the Rees construction from the universal enveloping algebra \(\Lambda=\Lambda_{H}\) of a restricted Lie algebroid \((H,[-,-],\delta,[p])\) over \(X\) -- see Subsection 4.4.
We consider the moduli space \(\mathcal{M}^{\Lambda^{R}}_{X\times\mathbb{A}^{1}/\mathbb{A}^{1}}(r,P)\) parameterizing Gieseker semi-stable \(\Lambda^{R}\)-modules over \(X\times\mathbb{A}^{1}/\mathbb{A}^{1}\) of rank \(r\) and with Hilbert polynomial \(P\). Since \(\Lambda^{R}_{1}/\Lambda^{R}_{0}=p^{*}_{X}(H)\) the Hitchin map \(h_{\Lambda^{R}}\) in the relative case (see [12, Sections 3.5 and 4.5]) corresponds to a morphism
\[h_{\Lambda^{R}}:\mathcal{M}^{\Lambda^{R}}_{X\times\mathbb{A}^{1}/\mathbb{A}^{1}}(r,P)\longrightarrow\mathcal{A}_{r}(X,F^{*}H^{\vee})\times\mathbb{A}^{1} \tag{7.1}\]
over \(\mathbb{A}^{1}\). Then we obtain the
**Theorem 7.1**.: _Assume that \(p=\mathrm{char}(\mathbb{K})>2\). Let \(\Lambda=\Lambda_{H}\) be the universal enveloping algebra of a restricted Lie algebroid \((H,[-,-],\delta,[p])\) over an integral projective scheme \(X\) and let \(\Lambda^{R}\) be the restricted sheaf of rings of differential operators over \(X\times\mathbb{A}^{1}\) relative to \(\mathbb{A}^{1}\) obtained via the Rees construction from \(\Lambda_{H}\). We assume that \(H=\Lambda_{1}/\Lambda_{0}\) is locally free and that the anchor map \(\delta:H\to T_{X}\) is generically
surjective. Then the above defined Hitchin map \(h_{\Lambda^{R}}\) (7.1) factorizes as follows_
_where the vertical map is the pull-back map of global sections under the Frobenius map \(F\) of \(X\)._
Proof.: Since the anchor \(\delta:H\to T_{X}\) is generically surjective over \(X\), the anchor \(\delta^{R}=t\delta:p_{X}^{*}(H)\to p_{X}^{*}(T_{X})\) is also generically surjective over \(X\times\mathbb{A}^{1}\). Hence we can apply the same arguments as in the proof of Theorem 6.6 for local relative vector fields \(\partial\in T_{X\times\mathbb{A}^{1}/\mathbb{A}^{1}}=p_{X}^{*}(T_{X})\).
|
2306.10183 | Algorithm MGB to solve highly nonlinear elliptic PDEs in $\tilde{O}(n)$
FLOPS | We introduce Algorithm MGB (Multi Grid Barrier) for solving highly nonlinear
convex Euler-Lagrange equations. This class of problems includes many highly
nonlinear partial differential equations, such as $p$-Laplacians. We prove
that, if certain regularity hypotheses are satisfied, then our algorithm
converges in $\tilde{O}(1)$ damped Newton iterations, or $\tilde{O}(n)$ FLOPS,
where the tilde indicates that we neglect some polylogarithmic terms. This is the
first algorithm whose running time is proven optimal in the big-$\tilde{O}$
sense. Previous algorithms for the $p$-Laplacian required $\tilde{O}(\sqrt{n})$
damped Newton iterations or more. | Sébastien Loisel | 2023-06-16T21:35:10Z | http://arxiv.org/abs/2306.10183v1 | # Algorithm MGB to solve highly nonlinear elliptic PDEs in \(\tilde{O}(n)\) FLOPS.
###### Abstract
We introduce Algorithm MGB (Multi Grid Barrier) for solving highly nonlinear convex Euler-Lagrange equations. This class of problems includes many highly nonlinear partial differential equations, such as \(p\)-Laplacians. We prove that, if certain regularity hypotheses are satisfied, then our algorithm converges in \(\tilde{O}(1)\) damped Newton iterations, or \(\tilde{O}(n)\) FLOPS, where the tilde indicates that we neglect some polylogarithmic terms. This is the first algorithm whose running time is proven optimal in the big-\(\tilde{O}\) sense. Previous algorithms for the \(p\)-Laplacian required \(\tilde{O}(\sqrt{n})\) damped Newton iterations or more.
## 1 Introduction
Let \(\Omega\subset\mathbb{R}^{d}\) be an open domain. Let \(\Lambda:\mathbb{R}^{d}\to\mathbb{R}\) be continuous and convex. Let \(W^{k,p}(\Omega)\) and \(W^{k,p}_{0}(\Omega)\) denote the usual Sobolev spaces. Let \(f(x)\) be an integrable forcing. Consider the Euler-Lagrange problem
\[\inf_{u\in W^{1,\infty}_{0}(\Omega)}J(u)\text{ where }J(u)=\int_{\Omega}f(x)u(x)+ \Lambda(\nabla u(x))\,dx. \tag{1}\]
Problem (1) is our model convex optimization problem in function space. We are using homogeneous Dirichlet conditions to streamline our presentation.
In convex optimization, for a given convex function \(\Lambda(q)\), one defines the **epigraph**
\[Q=\{(q,s)\in\mathbb{R}^{d}\times\mathbb{R}\;:\;s\geqslant\Lambda(q)\}. \tag{2}\]
To the convex \(d+1\)-dimensional set \(Q\), one may associate the infinite dimensional convex set \(\mathcal{Q}\) defined by
\[\mathcal{Q}=\{(q(x),s(x))\in L^{\infty}(\Omega;\mathbb{R}^{d}) \times L^{\infty}(\Omega)\;:\;s(x)\geqslant\Lambda(q(x))\text{ a.e. }x\in\Omega\}.\]
Denote by \(D\) the differential operator (and \(D^{*}\) its adjoint)
\[D=\begin{bmatrix}\nabla&\\ &1\end{bmatrix}\text{ and }D^{*}=\begin{bmatrix}-\nabla\cdot&\\ &1\end{bmatrix}. \tag{3}\]
Here, \(\nabla\) denotes the gradient with respect to \(x\in\Omega\), and \(\nabla\cdot\) denotes the divergence. Note that if \(u(x)\in W^{1,\infty}_{0}(\Omega)\), then \(\nabla u(x)\) is uniformly bounded on \(\Omega\), and hence \(\Lambda(\nabla u(x))\) is also uniformly bounded on \(\Omega\) since \(\Lambda\) is continuous. Thus, denoting \(z(x)=[u^{T}(x),s(x)]^{T}\), problem (1) is equivalent to
\[\inf_{\begin{subarray}{c}z\in W^{1,\infty}_{0}(\Omega)\times L^{ \infty}(\Omega)\\ Dz\in\mathcal{Q}\end{subarray}}\int_{\Omega}c(x)[z(x)]\,dx\text{ where }c(x)= \begin{bmatrix}f(x)\\ 1\end{bmatrix}. \tag{4}\]
Here and elsewhere, we use the notation \(f[u]\), \(f[u,v]\), \(f[u,v,w]\), etc... for the application of a \(k\)-form \(f\) to arguments \(u,v,\ldots\) and we identify a vector \(v\) with the corresponding form, so that \(v[x]=v^{T}x\). In a similar fashion, if \(M\) is a matrix, then \(M[u,v]\) is identified with \(u^{T}Mv\).
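As a concrete illustration (ours, chosen because the \(p\)-Laplacian is mentioned in the abstract), take \(\Lambda(q)=\frac{1}{p}\|q\|_{2}^{p}\) with \(p>1\). Then

\[Q=\left\{(q,s)\in\mathbb{R}^{d}\times\mathbb{R}\;:\;s\geqslant\tfrac{1}{p}\|q\|_{2}^{p}\right\},\qquad\inf_{\begin{subarray}{c}z=(u,s)\\ Dz\in\mathcal{Q}\end{subarray}}\int_{\Omega}f(x)u(x)+s(x)\,dx,\]

and since the slack \(s\) enters the objective with coefficient \(+1\), at the optimum \(s(x)=\frac{1}{p}\|\nabla u(x)\|_{2}^{p}\) a.e., so that (4) reduces to the \(p\)-Dirichlet energy of (1).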
A barrier for \(Q\) is a convex function \(F(w)=F(q,s)\) that is finite on the interior \(Q^{\circ}\) of \(Q\) and such that \(F(w)=+\infty\) for any \(w\in\partial Q\). If \(F(w)\) is a barrier for \(Q\), then
\[\mathcal{F}(w)=\int_{\Omega}F(w(x))\,dx, \tag{5}\]
is a barrier for \(\mathcal{Q}\).1 To the problem (4), one associates the **central path**\(z^{*}(t,x)\), defined by:
Footnote 1: Because function spaces have multiple inequivalent topologies, there are many different possible definitions of \(\partial\mathcal{Q}\). One such definition is to say that \(z\in\partial\mathcal{Q}\) if \(z\in\mathcal{Q}\) and \(\mathcal{F}(z)=\infty\). This definition of \(\partial\mathcal{Q}\) clearly depends on the choice of barrier.
\[z^{*}(t,x) =\operatorname*{arg\,min}_{w(x)\in W^{1,\infty}_{0}(\Omega)\times L^{\infty}(\Omega)}f(w,t)\text{ where} \tag{6}\] \[f(w,t) =\int_{\Omega}tc(x)[w(x)]+F(Dw(x))\,dx.\]
We shall denote \(z^{*}(t,x)=[u^{*}(t,x),s^{*}(t,x)]^{T}\). Equation (6) can be regarded as an "energy minimization formulation" for the central path. The corresponding **weak formulation2** is obtained by a formal computation of the first variation of (6):
Footnote 2: Because our equations are highly nonlinear, it is not immediately obvious what is the most appropriate set of test functions \(\{w(z)\}\). For simplicity, we use \(w\in W^{1,\infty}_{0}\times L^{\infty}\).
\[\int_{\Omega}tc[w]+F^{\prime}(Dz^{*})[Dw]=0\text{ for all }w\in W^{1,\infty}_{0}( \Omega)\times L^{\infty}(\Omega). \tag{7}\]
The **strong formulation** is obtained by formal integration by parts:
\[\begin{cases}tc(x)+D^{*}(F^{\prime}(Dz^{*}(t,x)))=0\text{ for }x\in\Omega\text{ and }\\ u^{*}(t,x)=g(x)\text{ for }x\in\partial\Omega.\end{cases} \tag{8}\]
The PDE portion of (8) is a nonlinear algebraic-elliptic system, as is revealed by the componentwise equations:
\[tf(x)-\nabla\cdot F_{u}(\nabla u(t,x),s(t,x)) =0, \tag{9}\] \[t+F_{s}(\nabla u(t,x),s(t,x)) =0. \tag{10}\]
To obtain a concrete solver for the problem (1), we introduce a quasi-uniform triangulation \(T_{h}\) of \(\Omega\). We shall denote by \(\int_{\Omega}^{(h)}\) a quadrature rule that approximates \(\int_{\Omega}\) on the triangulation \(T_{h}\). We further introduce a piecewise polynomial finite element space \(V_{h}\) on \(T_{h}\) such that \(DV_{h}\) has degree \(\alpha-1\). Define
\[f_{h}(w,t) =\int_{\Omega}^{(h)}tc[w]+F(Dw)\text{ and } \tag{11}\] \[z_{h}^{*}(t,x) =\operatorname*{arg\,min}_{z\in V_{h}}f_{h}(z,t). \tag{12}\]
A basic algorithm for solving (1) is to approximately follow the central path \(z_{h}^{*}(t,x)\) at discrete values of \(t=t_{k}=\rho^{k}t_{0}\), where \(\rho>1\) is the "step size". Concretely, given the (approximate) value of \(z_{h}^{*}(t_{k-1},x)\), one finds \(z_{h}^{*}(t_{k},x)\) by damped Newton iterations on the optimization problem (12). This is called the "return to the central path" or simply "return to the center".
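To fix ideas, the following is a minimal finite-dimensional sketch of this strategy (our own illustration, not the paper's finite-element implementation): it follows the central path of a toy box-constrained linear objective with the standard logarithmic barrier, increasing \(t\) by a short step \(\rho-1\sim\nu^{-1/2}\) and returning to the center by damped Newton iterations. All names and constants below are illustrative choices.

```python
import numpy as np

def grad_hess(x, t, c):
    # f(x,t) = t*c@x - sum(log x) - sum(log(1-x)): log barrier of the box [0,1]^n
    g = t * c - 1.0 / x + 1.0 / (1.0 - x)
    h = 1.0 / x**2 + 1.0 / (1.0 - x)**2          # diagonal Hessian
    return g, h

def recenter(x, t, c, lam_tol=0.25, max_iter=200):
    """Return to the center by damped Newton steps of length 1/(1+lambda)."""
    for _ in range(max_iter):
        g, h = grad_hess(x, t, c)
        lam = np.sqrt(np.sum(g**2 / h))          # Newton decrement
        if lam <= lam_tol:
            break
        x = x - (g / h) / (1.0 + lam)            # damped step stays inside (0,1)
    return x

def short_step_path(c, tol=1e-6):
    n = c.size
    nu = 2 * n                                   # barrier parameter of the box barrier
    rho = 1.0 + 1.0 / (8.0 * np.sqrt(nu))        # a conservative "short step"
    t, x = 1.0, np.full(n, 0.5)                  # analytic center of the box
    while nu / t > tol:                          # gap-style bound c@x - c* <= nu/t at centers
        t *= rho
        x = recenter(x, t, c)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c = rng.standard_normal(20)
    x = short_step_path(c)
    x_exact = (c < 0).astype(float)              # exact minimizer of c@x over the box
    print("objective gap:", c @ x - c @ x_exact)
```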
This strategy was first analyzed in Loisel (2020) for the \(p\)-Laplacian, where it was shown that it converges in \(\tilde{O}(\sqrt{n})\) damped Newton iterations; the tilde indicates that polylogarithmic terms are neglected. Here, \(n=O(h^{-d})\) is the number of grid points. In that paper, it was also mentioned that if one uses an \(\tilde{O}(n)\) FLOPS linear solver for the damped Newton steps, then one obtains an \(\tilde{O}(n^{1.5})\) FLOPS solver for the \(p\)-Laplacian. In the present paper, we introduce a multigrid algorithm for solving the nonlinear problem (1) in \(\tilde{O}(n)\) FLOPS, which is optimal in the \(\tilde{O}\)-sense, since it takes at least \(\tilde{O}(n)\) operations simply to write out a solution to main memory.
Note that each damped Newton iteration requires the solution of a linear system whose structure is that of a moderately heterogeneous elliptic problem. In dimension \(d=1\), this system is tridiagonal, so we may in fact perform each damped Newton step in \(O(n)\) FLOPS. In dimension \(d=2\), on
the one hand, it is known that direct solvers cannot run faster than \(O(n^{1.5})\) FLOPS (Hoffman et al., 1973). Despite this "negative" result, it is folkloric that, for the finite values of \(n\) encountered on laptop computers, direct solvers effectively scale like \(O(n)\) FLOPS, see e.g. Loisel (2021). Although direct solvers do not appear to achieve \(O(n)\) performance in dimension \(d\geqslant 2\), \(H\)-matrix-based solvers can indeed be used in all regimes and run in \(\tilde{O}(n)\) FLOPS, see Bebendorf (2016) and references therein. We also mention domain decomposition preconditioners, e.g. Loisel et al. (2015), Loisel (2013), Subber and Loisel (2014), Loisel et al. (2010), Karangelis and Loisel (2015), Greer and Loisel (2015), Loisel and Nguyen (2017), Loisel et al. (2008). For nonlinear problems, see also Berninger et al. (2014). Such methods are also related to subspace correction methods (Tai and Xu, 2002).
In Loisel (2020), the theoretically optimal step size is found to be the well-known "short step" of convex optimization, which here is \(\rho-1\sim n^{-0.5}\sim h^{d/2}\). In convex optimization, one often prefers to use a step size \(\rho\) that is independent of \(n\), this is called a "long step", for example \(\rho=2\). If, by luck, each return to the center requires \(O(1)\) damped Newton iterations, then the long step approach is a clear winner. Unfortunately, the theory of convex optimization predicts that long-stepping schemes may need as many as \(O(n)\) damped Newton steps to return to the center. In Loisel (2020), it was revealed that the \(p\)-Laplacian triggers this worst-case behavior in long-stepping schemes. In order to obtain the "best of both worlds", an adaptive algorithm was devised in that paper, but the theoretical performance estimate is still governed by the short-step scheme.
In the present paper, we introduce Algorithm MGB, which converges to the solution in \(\tilde{O}(1)\) Newton steps, and \(\tilde{O}(n)\) FLOPS. This is clearly optimal in the big-\(\tilde{O}\) sense since it takes time \(\tilde{O}(n)\) to simply write out the solution to main memory. Algorithm MGB achieves this performance by returning to the center in \(\tilde{O}(1)\) damped Newton steps, and the \(t\) step size is long, in the sense that \((\rho-1)^{-1}=\tilde{O}(1)\). Although the theory says that the \(t\) step size may depend polylogarithmically on \(n\), we found in our numerical experiments that the \(t\) step sizes were completely independent of \(n\).
Iterative numerical algorithms are often parametrized in terms of the problem size \(n\) and the tolerance \(\text{tol}>0\) used to stop the iteration. However, in our case, the number \(\text{tol}\) can be expressed in terms of \(n\), as we now describe.
Although the function \(J(u)\) in (1) is unlikely to be globally twice differentiable, it often happens that \(J^{\prime\prime}(u^{*})[v^{2}]\) is indeed well defined near a minimizer \(u^{*}\) of (1), provided the test function \(v\) is sufficiently regular. When this happens, then \(J(v_{h})-J(u^{*})\sim h^{2\alpha}\) if \(v_{h}-u^{*}\sim h^{\alpha}\). If \(v_{h}=u_{h}^{*}\) is
obtained by a central path, i.e. \(z_{h}^{*}(t,x)=[u_{h}^{*}(t,x),s_{h}^{*}(t,x)]\), then we have
\[J(u_{h}^{*}(t,x))-J(u^{*}(x)) =J(u_{h}^{*}(t,x))-J(u_{h}^{*}(\infty,x))+J(u_{h}^{*}(\infty,x))-J (u^{*}(x))\] \[=O(t^{-1}+h^{2\alpha}), \tag{13}\]
see Lemma 4.1. Equilibrating the error terms produces a termination criterion for the central path: \(t^{-1}\sim h^{2\alpha}\).
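For instance (a hypothetical configuration chosen only to make the scaling concrete): with \(d=2\), elements for which \(DV_{h}\) has degree \(1\) (so \(\alpha=2\)) and \(h=2^{-L}\), the path is terminated once
\[t\gtrsim h^{-2\alpha}=h^{-4}=2^{4L},\]
at which point both error terms in (13) are of size \(O(h^{4})\).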
In the multigrid method, we have a sequence of grid parameters \(1\geq h^{(1)}>\ldots>h^{(L)}>0\) and corresponding quasi-uniform triangulations \(T_{h^{(\ell)}}\). To simplify our exposition, the grid \(T_{h^{(\ell+1)}}\) is obtained by bisecting the edges of \(T_{h^{(\ell)}}\) so that \(h^{(\ell+1)}=0.5h^{(\ell)}\) and the grids are automatically quasi-uniform. To each grid \(T_{h}\), we associate the finite element space \(V_{h}\subset W^{1,\infty}(\Omega)\times L^{\infty}(\Omega)\), and we have that \(V_{h^{(1)}}\subset V_{h^{(2)}}\subset\ldots\subset V_{h^{(L)}}\).
**Definition 1.1** (Naive multigrid algorithm).: _Let \(t_{0}=O(1)\) and \(h_{0}=h^{(1)}\) be the coarsest grid parameter. Assume \(V_{h}\) is a piecewise polynomial finite element space, and the degree of \(DV_{h}\) is \(\alpha-1\). Assume that \(\alpha\geq d\). Assume \(z^{(0)}\in g+V_{h_{0}}\). For \(j=1,2,\ldots\), given \((t_{j-1},h_{j-1})\), define inductively \((t_{j},h_{j})\) using, at each step, either \(t\)-refinement or \(h\)-refinement, as follows._
* \(t\) **refinement:** _set_ \((t_{j},h_{j})=(\rho_{j}t_{j-1},h_{j-1})\)_, where_ \(\rho_{j}>1\) _is a_ \(t\)_-step size._
* \(h\) **refinement:** _set_ \((t_{j},h_{j})=(t_{j-1},\frac{1}{2}h_{j-1})\)_, where the factor_ \(1/2\) _is imposed by the grid subdivision scheme._
Figure 1: The naïve algorithm (left) shrinks the grid parameter \(h\) “as needed” as \(t\) increases, converging in \(\tilde{O}(\sqrt{n})\) damped Newton steps (marked by circles), mostly on the fine grid. Our Algorithm MGB (right) requires \(\tilde{O}(1)\) damped Newton steps.
_Then, \(z^{(j)}\) is found by damped Newton iterations so as to minimize \(f_{h}(z,t_{j})\) over \(z\in g+V_{h_{j}}\) with initial guess \(z=z^{(j-1)}\). The stopping criterion is \(t_{j}>C_{\mathrm{stop}}h^{-2\alpha}\)._
_The_ **schedule** _is the choice of the order of \(h\) and \(t\) refinements, i.e. the sequence \((t_{j},h_{j})\). The "\(h\)-then-\(t\)" schedule is \(t_{0}=\ldots=t_{L-1}=t_{0}\) and \(\{h_{j}\}_{j=0}^{L-1}=\{h^{(1)},\ldots,h^{(L)}\}\) (i.e. \(h\) refinements), followed by \(h_{L}=h_{L+1}=\ldots=h^{(L)}\) and \(t_{j}=\rho_{j}t_{j-1}\) for \(j=L,L+1,\ldots\) (i.e. \(t\) refinements). In other words, the \(h\)-then-\(t\) schedule performs all \(h\) refinements first, and then all \(t\) refinements are done on the finest grid level \(h=h^{(L)}\)._
The "naive multigrid algorithm" is depicted in Figure 1, left. This method is similar to the method described by Schiela and Gunther (2011), where the authors write that "This allows to perform most of the required Newton steps on coarse grids, such that the overall computational time is dominated by the last few steps." Unfortunately, we have found this not to be the case, as we now explain with our first main theorem.
**Theorem 1.2** (Newton iterations of naive multigrid).: _Assume \((T_{h},c,F)\) is regular, as per Definition 4.4. Let \(t_{0}=O(1)\). The "\(h\)-then-\(t\)" schedule converges in_
\[\tilde{O}(h^{-0.5d})=\tilde{O}(n^{0.5})\text{ damped Newton iterations,} \tag{14}\]
_where \(n\) is the number of grid points in \(T_{h}\)._
_For an arbitrary schedule of \(h\) and \(t\) refinements, we estimate the number of damped Newton iterations on the finest grid. The best possible theoretical estimate is \(\tilde{O}(n^{0.5})\) iterations. In other words, the \(h\)-then-\(t\) schedule is optimal in the big-\(\tilde{O}\) sense._
We delay the proof of our main theorems to later sections.
We also mention that there is seemingly no satisfactory analysis of multigrid algorithms for nonlinear problems in the literature. Indeed, Brabazon et al. (2014) write that "To the best of our knowledge there exists no valid theory for the convergence of FAS for the case where the nonlinearity is in the highest order term" (see also references therein). This is despite the methods in (Hackbusch, 2013, Chapter 13).
In order to describe our algorithm, we introduce the notion of **shifted central paths**. For given \(H\geq h\) and \(t_{0}>0\), define
\[z_{h,t_{0},H}^{*}(t)=\operatorname*{arg\,min}_{z\in z_{h}^{*}(t_{0})+V_{H}}f_{ h}(z,t). \tag{15}\]
As before, the parameter \(x\in\Omega\) omitted in the notation is implied.
**Definition 1.3** (Algorithm MGB).: _Let \(t_{0}=O(1)\), \(z^{(0)}\in V_{h^{(1)}}\cap\mathcal{Q}\) be given. Let \(h=h^{(L)}\) be the fine grid. We define \(z^{(k)}\) inductively as follows._
_Given an approximation \(z^{(k)}\approx z^{*}_{h}(t_{k})\), compute \(z^{(k+1)}\approx z^{*}_{h}(t_{k+1})\) as follows, where \(t_{k+1}=\rho_{k}t_{k}\) and \(\rho_{k}>1\) is some step size._
* _For_ \(\ell=1,\ldots,L\)_, find an approximation_ \(z^{(k+\frac{\ell}{L})}\) _of_ \(z^{*}_{h,t_{k},h^{(\ell)}}(t_{k+1})\) _by damped Newton iterations on (_15_), starting with the initial guess_ \(z=z^{(k+\frac{\ell-1}{L})}\)_._
_Stop when \(t_{j}>C_{\mathrm{stop}}h^{-2\alpha}\)._
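The following is a minimal sketch of this sweep (our own illustration): the finite element spaces \(V_{h^{(\ell)}}\) are replaced by nested subspaces of \(\mathbb{R}^{n}\) generated by 1D linear-interpolation prolongations, and the objective is the toy separable barrier problem \(f(z,t)=tc^{T}z-\sum_{i}\log z_{i}\). It is meant only to exhibit the structure of the coarse-to-fine recentering in Definition 1.3, not the paper's method or its complexity guarantees.

```python
import numpy as np

def interpolation(m):
    """Linear-interpolation prolongation from m nodes to 2*m - 1 nodes (nested 1D grids)."""
    P = np.zeros((2 * m - 1, m))
    for j in range(m):
        P[2 * j, j] = 1.0                      # coarse nodes are kept
    for i in range(1, 2 * m - 1, 2):
        P[i, (i - 1) // 2] = 0.5               # new nodes average their neighbours
        P[i, (i + 1) // 2] = 0.5
    return P

def grad_hess(z, t, c):
    # toy objective f(z,t) = t*c@z - sum(log z): log barrier of the positive orthant
    return t * c - 1.0 / z, 1.0 / z**2

def recenter_in_subspace(z, t, c, P, lam_tol=0.25, max_iter=100):
    """Damped Newton on w -> f(z + P w, t): a finite-dimensional analogue of (15)."""
    for _ in range(max_iter):
        g, h = grad_hess(z, t, c)
        gr = P.T @ g                           # restricted gradient
        Hr = P.T @ (h[:, None] * P)            # restricted Hessian P^T diag(h) P
        lam = np.sqrt(gr @ np.linalg.solve(Hr, gr))
        if lam <= lam_tol:
            break
        z = z + P @ np.linalg.solve(Hr, -gr) / (1.0 + lam)   # damped step keeps z > 0
    return z

def mgb_sweep(z, t_new, c, prolongations):
    # one t-step: recenter on the coarsest subspace first, then on finer and finer ones,
    # always measuring the objective on the finest level
    for P in prolongations:
        z = recenter_in_subspace(z, t_new, c, P)
    return z

if __name__ == "__main__":
    sizes = [2**l + 1 for l in range(1, 7)]    # coarse-to-fine dimensions 3, 5, ..., 65
    n = sizes[-1]
    prolongations = []
    for m in sizes:                            # prolongation from each level to the finest
        P, k = np.eye(m), m
        while k < n:
            P, k = interpolation(k) @ P, 2 * k - 1
        prolongations.append(P)
    c = np.random.default_rng(1).uniform(0.5, 2.0, n)
    z, t = 1.0 / c, 1.0                        # exact center for t = 1
    while t < 1e6:
        t *= 2.0                               # a "long" t step
        z = mgb_sweep(z, t, c, prolongations)
    print("distance to the exact center:", np.max(np.abs(z - 1.0 / (t * c))))
```

Because the prolongation ranges are nested, recentering around the previous level's iterate minimizes over the same shifted affine space as in (15); this is the feature the sketch is meant to convey.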
Our second main result shows that Algorithm MGB converges in \(\tilde{O}(1)\) Newton iterations.
**Theorem 1.4** (Newton iterations of Algorithm MGB).: _Assume \((T_{h},c,F)\) is regular, as per Definition 4.4. There is a long step size \(\rho_{k}=t_{k+1}/t_{k}>1\) (i.e. \((\rho-1)^{-1}=\tilde{O}(1)\)) such that Algorithm MGB converges in_
\[\tilde{O}(1)\text{ damped Newton iterations.} \tag{16}\]
Using an \(\tilde{O}(n)\) FLOPS linear solver for the damped Newton steps, our algorithm thus requires \(\tilde{O}(n)\) FLOPS to solve (1), which is optimal in the big-\(\tilde{O}\) sense.
We also mention that in the convex optimization literature, long-stepping schemes generally require \(O(n)\) iterations per centering step, which is often thought to be optimal for long-stepping schemes for general convex optimization problems. Within that framework, our result can be interpreted as a description of a wide class of convex optimization problems, for which return to the center in a long-stepping scheme can be achieved in \(\tilde{O}(1)\) damped Newton steps. We are not aware of any other long-stepping scheme that has this extremely fast convergence property.
Our regularity hypotheses are described in Definition 4.4. It says in part that \(Dz^{*}_{0}(\tau^{-1},x)\) should be sufficiently smooth as a function of \(\tau=t^{-1}\) and \(x\in\Omega\). Such smoothness hypotheses are commonplace in the analysis of PDEs. Mathematicians are usually comfortable with regularity hypotheses of a well-known type; solutions of PDEs are expected to be smooth. However, one of the requirements of Definition 4.4 is that \(\sqrt{F^{\prime\prime}(Dz^{*}_{h,t_{0},H}(t))}\) should be in some uniform reverse Holder class.
We are not aware of any other algorithm that uses reverse Holder classes as a regularity hypothesis. In order for such a class to be suitable as a regularity hypothesis, these functions should somehow be plentiful, especially
as solutions of PDEs. The reverse Holder classes are studied in the context of classical Fourier analysis (Grafakos, 2008), (Cruz-Uribe and Neugebauer, 1995) and have applications to the analysis of nonlinear PDEs (Kinnunen and Lewis, 2000).
In this paper, the symbol \(C\) shall be used to denote a generic constant that may depend on \(\Omega\), \(c\), \(F\) or the regularity parameter of \(T_{h}\) or the degree \(\alpha-1\) of the piecewise polynomial functions \(DV_{h}\), but is otherwise independent of \(h\) and \(t\). The constant \(C\) may not represent the same number from one equation to the next. If multiple constants are involved, we may use the notation \(C_{1},C_{2},\ldots\) to distinguish them.
Our paper is organized as follows. In Section 2, we briefly review the theory of self-concordant calculus in finite dimensions, and collect some essential results for the special case of the epigraph \(Q\). In Section 3, we introduce the quadrature rules that we will be using. In Section 4, we give an abstract theory of self-concordance in function spaces, culminating with the notion of "reverse Holder continuation". In Section 5, we use this theory to analyze the behavior of the naive algorithm, while in Section 6, we analyze Algorithm MGB. In Section 7, we give an "a priori estimate" for the reverse Holder inequality that is part of our regularity hypotheses. This shows that the reverse Holder inequality can be obtained from some smoothness conditions on the solution and on the barrier. In Section 8, we discuss the practical MGB algorithm, which introduces some optimizations and methods for handling floating point inaccuracies. We have some numerical experiments in Section 9, and we end with some conclusions in Section 10.
## 2 Self-concordance in finite dimensions
If \(\|\cdot\|_{+}\) is any norm on \(\mathbb{R}^{n}\), then for each \(k=1,2,3,\ldots\), there is an induced norm on homogeneous polynomials of degree \(k\), defined by \(\|P\|_{+}=\sup|P[u^{k}]|/\|u\|_{+}^{k}\), where the supremum is taken over \(u\neq 0\). If \(\|\cdot\|_{+}\) is a Hilbertian norm, then Banach (1938) showed that the induced norm is also given by \(\|P\|_{+}=\sup|P[u^{(1)},\ldots,u^{(k)}]|/\prod_{j}\|u^{(j)}\|_{+}\), with the supremum taken over nonzero vectors, see also Loisel (2001).
Let \(F(z)\) be a strictly convex thrice-differentiable barrier function on \(Q\). In particular, \(H(z)=F^{\prime\prime}(z)\) is symmetric positive definite (SPD) for every \(z\in Q^{\circ}\), and thus \((Q^{\circ},H)\) is a Riemannian manifold. The norm at \(z\in Q^{\circ}\) is denoted \(\|u\|_{z}=\|u\|_{F^{\prime\prime}(z)}=\sqrt{u^{T}F^{\prime\prime}(z)u}\). Recall that \(F\) is said to
be **standard self-concordant** if
\[|F^{\prime\prime\prime}(z)[u^{3}]|\leq 2\|u\|_{z}^{3}. \tag{17}\]
Equation (17) says that the induced norm \(|\!|\!|F^{\prime\prime\prime}(z)|\!|\!|_{z}\) is at most \(2\).
A standard self-concordant function is further said to be a **self-concordant barrier** with parameter \(1\leq\nu<\infty\) if
\[|F^{\prime}(z)[u]|\leq\sqrt{\nu}|u|_{z}. \tag{18}\]
Equation (18) says that the induced norm \(|\!|\!|F^{\prime}(z)|\!|\!|_{z}\) is at most \(\sqrt{\nu}\). The dual norm is \(\|u\|_{z}^{*}=\sqrt{u^{T}[F^{\prime\prime}(z)]^{-1}u}\) and satisfies the "Cauchy-Schwarz" type inequality \(|u^{T}v|\leq\|u\|_{z}^{*}\|v\|_{z}\). The **Newton decrement** is \(\lambda_{F}(z)=\|F^{\prime}(z)\|_{z}^{*}=\sqrt{F^{\prime}(z)^{T}[F^{\prime\prime}(z)]^{-1}F^{\prime}(z)}\) and one checks that (18) is equivalent to \(\lambda_{F}^{2}(z)\leq\nu\).
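As a sanity check (our own, for the familiar logarithmic barrier \(F(x)=-\sum_{i}\log x_{i}\) of the positive orthant, for which \(\nu=n\)), the snippet below verifies (17), (18) and the identity \(\lambda_{F}^{2}(x)=n\) numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.uniform(0.5, 2.0, n)        # a point in the interior of the positive orthant
u = rng.standard_normal(n)          # an arbitrary direction

Fp   = -1.0 / x                     # so that F'(x)[u] = Fp @ u
Fpp  =  1.0 / x**2                  # F''(x) is the diagonal matrix diag(Fpp)
Fppp = -2.0 / x**3                  # F'''(x)[u,u,u] = Fppp @ u**3

norm_u = np.sqrt(Fpp @ u**2)                                   # ||u||_x
print("|F'''[u^3]| / ||u||_x^3 :", abs(Fppp @ u**3) / norm_u**3, "(<= 2)")
print("|F'[u]|    / ||u||_x   :", abs(Fp @ u) / norm_u, "(<= sqrt(n) =", np.sqrt(n), ")")
print("Newton decrement squared:", np.sum(Fp**2 / Fpp), "(= nu = n =", n, ")")
```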
In order to avoid repeating the well-known theory, we shall frequently quote results from the book of Nesterov and Nemirovskii [1994], e.g. [NN, Theorem 2.5.1] states that there exists a \(\nu\)-self-concordant barrier for the convex set \(Q\), with \(\nu=O(d)\), and the equation (NN2.5.1) is a formula for the "universal barrier".
On the value of \(\nu\) in dimension \(n\), [NN, Section 2.3.4] states that "generally speaking, the parameter cannot be less than \(n\)". As a result, even if \(F(z)\) is a self-concordant barrier, still \({\cal F}(w)\) will not be a self-concordant barrier, since its domain \({\cal Q}\) is infinite dimensional, and hence the parameter of \({\cal F}\) would have to be \(\infty\).
Even though \({\cal F}\) is not a self-concordant barrier for \({\cal Q}\), still a variant of [Nesterov, 2013, Theorem 4.2.7] holds for our central path \(z^{*}(t,x)\), provided \(F(z)\) is a \(\nu\)-self-concordant barrier on \(Q\). As we shall see in Lemma 4.1, our central path \(z^{*}(t,x)\) satisfies
\[c[z^{*}]-c^{*}\leq\frac{|\Omega|\nu}{t}\mbox{ where }c[w]=\int_{\Omega}c(x)[w(x)]\,dx. \tag{19}\]
As usual, \(|\Omega|\) is the Lebesgue measure of \(\Omega\), and we have denoted by \(c^{*}\) the infimum in (4). The bound (19) shows that \(z^{*}(t,\cdot)\) is a minimizing filter in \({\cal Q}\) for the functional \(c\). In other words, the central path can also be used in infinite dimensions to solve convex optimization problems.
Self-concordant functions satisfy the following explicit bounds.
**Lemma 2.1**.: _Let \(G(z)\) be a standard self-concordant function on \(Q\). Define_
\[\psi(\alpha)=\alpha-\log(1+\alpha). \tag{20}\]
_Denote \(\|u\|_{v}^{2}=G^{\prime\prime}(v)[u^{2}]\). For \(y,z\in Q\),_
\[\frac{\|y-z\|_{z}^{2}}{(1+\|y-z\|_{z})^{2}} \leq\|y-z\|_{y}^{2}\text{ and } \tag{21}\] \[G(z)+G^{\prime}(z)[y-z]+\psi(\|y-z\|_{z}) \leq G(y). \tag{22}\]
_Furthermore, if \(\|y-z\|_{z}<1\), then_
\[\|y-z\|_{y}^{2} \leq\frac{\|y-z\|_{z}^{2}}{(1-\|y-z\|_{z})^{2}}\text{ and } \tag{23}\] \[G(y) \leq G(z)+G^{\prime}(z)[y-z]+\psi(-\|y-z\|_{z}) \tag{24}\]
Proof.: The following argument is standard. Let \(\phi(t)=G(z+th)\) with \(h=y-z\). Thus, \(\phi^{\prime\prime}(t)=G^{\prime\prime}(z+th)[h^{2}]\) and
\[|\phi^{\prime\prime\prime}(t)| =|G^{\prime\prime\prime}(z+th)[h^{3}]| \tag{25}\] \[\leq 2(G^{\prime\prime}(z+th)[h^{2}])^{1.5}=2\phi^{\prime\prime}(t)^{1.5}. \tag{26}\]
Solving the extremal differential equations \(\phi^{\prime\prime\prime}(t)=\pm 2\phi^{\prime\prime}(t)^{1.5}\) for the unknown \(\phi^{\prime\prime}(t)\) yields \(-t\leq\phi^{\prime\prime}(t)^{-0.5}-\phi^{\prime\prime}(0)^{-0.5}\leq t\), and hence
\[\frac{\phi^{\prime\prime}(0)}{(1+t\phi^{\prime\prime}(0)^{0.5})^{2}}\leq\phi^ {\prime\prime}(t)\leq\frac{\phi^{\prime\prime}(0)}{(1-t\phi^{\prime\prime}(0)^ {0.5})^{2}}. \tag{27}\]
The lower bound is valid for \(t\geq 0\) and the upper bound is valid for all \(0\leq t<\phi^{\prime\prime}(0)^{-0.5}\). Integrating twice, we find that
\[\psi(t\phi^{\prime\prime}(0)^{0.5})\leq\phi(t)-\phi(0)-\phi^{\prime}(0)t\leq\psi(-t\phi^{\prime\prime}(0)^{0.5}). \tag{28}\]
The results follow by substituting \(t=1\).
We now collect several basic properties from the literature.
**Lemma 2.2**.: _Assume that \(\Lambda(q)\) is convex and satisfies \(\Lambda(q)\geq\alpha\|q\|_{2}+\beta\) for some \(\alpha>0\), \(\beta\in\mathbb{R}\) and all \(q\in\mathbb{R}^{d}\); in other words, the graph of \(\Lambda\) lies above some cone. Denote \(z=(q,s)\in Q\). For any \(q\) and \(t>0\) let \(s^{(t)}(q)\) satisfy \(F_{s}(q,s^{(t)}(q))+t=0\). Then, \(s^{(t)}(q)\) is uniquely defined for any \(q\) and any \(t>0\). Furthermore, let \(K\subset\mathbb{R}^{d}\) be compact. Let \(t_{0}>0\). There is a constant \(C=C(K,F,t_{0})\), \(1\leq C<\infty\), such that the following are true:_
1. \(F_{s}(q,s)\) _is a monotonically increasing function of_ \(s\)_._
2. _For_ \(\epsilon>0\)_,_ \(-\frac{\nu}{\epsilon}\leq F_{s}(q,\Lambda(q)+\epsilon)\leq-\frac{1}{\epsilon}\)_._
3. \(\frac{1}{t}\leqslant s^{(t)}(q)-\Lambda(q)\leqslant\frac{\nu}{t}\)_._
4. _Let_ \(q\in K\) _and_ \(\epsilon:=s-\Lambda(q)\)_. If_ \(0<\epsilon\leqslant\frac{\nu}{t_{0}}\) _then_ \(\sigma(F^{\prime\prime}(q,s))\subset[C_{1},C_{2}\epsilon^{-2}]\)_, where_ \(\sigma(\cdot)\) _denotes the spectrum of a matrix._
5. \(F(z)=-\log\Phi(z)\) _with_ \(-\Phi^{\frac{1}{\nu}}(z)\) _convex and_ \(\{z\ :\ \Phi(z)>0\}\) _is the interior of_ \(Q\)_._
6. _For_ \(x\in Q\) _and_ \(y\in\mathbb{R}^{d+1}\)_, put_ \(r=\|x-y\|_{x}\)_. If_ \(r<1\) _then_ \(y\in Q\) _and also_ \[(1-r)^{2}F^{\prime\prime}(x)\preceq F^{\prime\prime}(y)\preceq \frac{1}{(1-r)^{2}}F^{\prime\prime}(x).\] (29)
Proof.: Recall the function \(\pi_{y}(x)\) [NN] defined by
\[\pi_{y}(x)=\inf\{t\geqslant 0\ :\ \overbrace{y+t^{-1}(x-y)}^{w}\in Q\}. \tag{30}\]
The fact that \(s^{(t)}(q)\) is uniquely defined for any \(t>0\) follows from properties 1-3. We now show all the numbered properties in order.
1. \(F_{s}\) is a monotonically increasing function of \(s\) because \(F\) is strictly convex.
2. Put \(z=(r,\Lambda(r))\) and \(x=(r,\Lambda(r)+\epsilon)\) (note that \(\pi_{z}(x)=0\)) into (NN2.3.7) to find \(F_{s}(r,\Lambda(r)+\epsilon)(-\epsilon)\geqslant 1\). Conversely, put \(y=\Lambda(r)^{+}\) and \(x=\Lambda(r)+\epsilon\) into (NN2.3.2) to find \(F_{s}(r,\Lambda(r)+\epsilon)(-\epsilon)\leqslant\nu\).
3. From the first two properties, \(s^{(t)}(r)\) is uniquely defined. From property 2, \(t+F_{s}(r,\Lambda(r)+\epsilon)\leqslant t-\frac{1}{\epsilon}<0\) if \(\epsilon<1/t\), thus \(s^{(t)}(r)\geqslant\Lambda(r)+\frac{1}{t}\). Also from property 2, \(t+F_{s}(r,\Lambda(r)+\epsilon)\geqslant t-\frac{\nu}{\epsilon}>0\) if \(\epsilon>\frac{\nu}{t}\). Thus, \(s^{(t)}(r)\leqslant\Lambda(r)+\frac{\nu}{t}\).
4. Now we use (NN2.3.9): \[F^{\prime\prime}(z)[h^{2}]^{-0.5}\leqslant q_{z}(h)\leqslant(1+3\nu)F^{ \prime\prime}(z)[h^{2}]^{-0.5},\] (31) where \(q_{z}(h)=\sup\{t\ :\ z\pm th\in Q\}\). The function \(q_{z}(h)\) measures the distance from point \(z\) to some point \(z\pm th\) on \(\partial Q\) in the directions \(\pm h\). Since the graph of \(\Lambda\) lies above some cone, i.e. \(\Lambda(q)\geqslant\alpha\|q\|_{2}+\beta\), we can find an upper bound for \(q_{z}(h)\). To do this, write \(z=(q,s)\) and \(h=(w,\eta)\). From \(z\pm th\in Q\), we find that
\(s\pm t\eta\geqslant\Lambda(q\pm tw)\geqslant\alpha\|q\pm tw\|_{2}+\beta\geqslant\alpha(t\|w\|_{2}-\|q\|_{2})+\beta\). Picking the sign so that \(\pm t\eta<0\) yields \[t\leqslant\frac{s+\alpha\|q\|_{2}-\beta}{(\alpha\|w\|_{2}+|\eta|)}\leqslant\frac{s+\alpha\|q\|_{2}-\beta}{C\|h\|_{2}},\] (32) where we have used that \(\alpha\|w\|_{2}+|\eta|\) is a norm of \(h=(w,\eta)\) that is equivalent to the Euclidean norm \(\|h\|_{2}\) by norm equivalence in finite dimensions. Furthermore, the numerator of (32) is also bounded, since \(z=(q,s)\) ranges in some compact set. This shows that \(t\) is uniformly bounded by \(C/\|h\|_{2}\), and hence \[q_{z}(h)\leqslant\frac{C}{\|h\|_{2}}.\] (33) We now find a lower bound for \(q_{z}(h)\). For \(s>\Lambda(q)\), put \(z=(q,s)\). Since \(\Lambda\) is Lipschitz over \(K\), the epigraph of \(\Lambda\) contains a ball of radius \(C(s-\Lambda(q))=C\epsilon\) centered at \(z\). Thus, \[q_{z}(h)\geqslant C\epsilon/\|h\|_{2}.\] (34) The result follows from (31), (33), (34).
5. This is part (iv) of [NN, Proposition 2.3.2].
6. This is [NN, Theorem 2.1.1].
**Corollary 2.3**.: \[\sigma(F^{\prime\prime}(Dz_{0}^{*}(t,x)))\subset[C_{1},C_{2}t^{2}],\] (35)
_where \(\sigma(M)\) denotes the spectrum of the matrix \(M\)._
For self-concordant functions, damped Newton iterations can be proven to converge, as follows.
**Lemma 2.4**.: _Let \(G(z)\) be convex with a minimum at \(z^{*}\). Define \(\mathcal{L}(\delta)=\{w\ :\ G(w)-G(z^{*})\leqslant\delta\}\). Assume that \(G(z)\) is standard self-concordant on \(\mathcal{L}(\delta)\). Let \(z^{(0)}\in\mathcal{L}(\delta)\) be given, we define the_ **suboptimality gap** _as \(G(z^{(0)})-G(z^{*})\leqslant\delta\). Define the sequence \(z^{(k)}\) for \(k\geqslant 1\) by damped Newton iterations. Assume we stop the damped Newton iterations at the first iteration \(k\) such that \(G(z^{(k)})-G(z^{*})<\epsilon\). Then,_
\[k\leqslant C(G(z^{(0)})-G(z^{*}))+\log_{2}\log_{2}\epsilon^{-1}. \tag{36}\]
Proof.: This is [Boyd and Vandenberghe, 2004, (9.56)], which also reveals that \(C\leq 375\). A much smaller value of \(C\) can be estimated by following [NN, section 2.2.3]. In double precision arithmetic, the expression \(\log_{2}\log_{2}\epsilon^{-1}\) can be bounded by \(6\).
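The two regimes behind this bound -- a damped phase in which \(G\) decreases by at least a fixed amount per step, followed by \(\log_{2}\log_{2}\epsilon^{-1}\) quadratically convergent steps -- are easy to observe numerically. The snippet below is our own illustration on a toy standard self-concordant function and is unrelated to the PDE setting.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
c = rng.uniform(1.0, 3.0, n)
x = np.full(n, 10.0)                 # a deliberately far-from-optimal start
x_star = 1.0 / c                     # exact minimizer of G(x) = c@x - sum(log x)
G_star = c @ x_star - np.sum(np.log(x_star))

for k in range(40):
    g = c - 1.0 / x
    h = 1.0 / x**2
    lam = np.sqrt(np.sum(g**2 / h))  # Newton decrement
    gap = c @ x - np.sum(np.log(x)) - G_star
    print(f"iter {k:2d}   decrement {lam:9.2e}   suboptimality {gap:9.2e}")
    if gap < 1e-10:
        break
    x = x - (g / h) / (1.0 + lam)    # damped Newton step
```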
## 3 Quadrature
**Definition 3.1**.: _Let \(\hat{K}\subset\mathbb{R}^{d}\) be the "reference simplex". For each \(h>0\), assume \(T_{h}\) is a triangulation of \(\Omega\). For each \(K\in T_{h}\), we associate an affine map \(f_{K}(x)=A_{K}x+b_{K}\) such that \(K=f_{K}(\hat{K})\). We say that \(T_{h}\) is quasi-uniform with parameter \(\rho>0\) if \(|\!|\!|A_{K}|\!|\!|\leq h\) and \(|\!|\!|A_{K}^{-1}|\!|\!|^{-1}\geq\rho h\)._
Assume we have a quadrature rule for the reference simplex:
\[I_{\hat{K}}\eta(x):=\sum_{j=1}^{\beta}\omega_{j}\eta(x_{\hat{K},j}). \tag{37}\]
We shall require that the quadrature weights satisfy
\[\omega_{j}>0. \tag{38}\]
From this, we obtain a quadrature rule for \(K\in T_{h}\):
\[\int_{K}^{(h)}\eta(x)=|\det A_{K}|I_{\hat{K}}\eta(A_{K}x+b_{K}). \tag{39}\]
Then, if \(E\) is a union of simplices in \(T_{h}\),
\[\int_{E}^{(h)}\eta(x)=\sum_{\begin{subarray}{c}K\in T_{h}\\ K\subset E\end{subarray}}\int_{K}^{(h)}\eta|_{K}. \tag{40}\]
Here, the notation \(\eta|_{K}\) indicates that we first restrict \(\eta\) to \(K\), and it suffices for \(\eta\) to be continuous on each \(K\in T_{h}\). This allows one to integrate a function \(\eta\) which may have jump discontinuities on edges of \(T_{h}\). In particular, if \(E\) is the union of simplices in \(T_{h}\), then
\[\int_{E}^{(h)}1=|E|. \tag{41}\]
We shall also denote the exact integral by \(\int_{E}^{(0)}=\int_{E}\). Because of (38), we may write the quadrature rule as
\[\int_{\Omega}^{(h)}\eta =\sum_{\begin{subarray}{c}K\in T_{h}\\ k=1,\ldots,\beta\end{subarray}}\omega_{K,k}y_{K,k}\text{ where} \tag{42}\] \[\omega_{K,k} =|\det A_{K}|\omega_{k},\ x_{K,k}=A_{K}x_{\hat{K},k}+b_{K}\text{ and }y_{K,k}=\eta|_{K}(x_{K,k}). \tag{43}\]
Thus,
\[C_{\min}h^{d}\leqslant\{\omega_{K,k},|K|\}\leqslant C_{\max}h^{d}, \tag{44}\]
for some constants \(0<C_{\min}<1<C_{\max}<\infty\). We thus have Jensen's inequality
\[\psi\left(\frac{1}{|E|}\int_{E}^{(h)}\eta\right)\leqslant\frac{1}{|E|}\int_{E} ^{(h)}\psi(\eta), \tag{45}\]
for any convex function \(\psi\). We further define the discrete norms
\[\|\eta\|_{L_{h}^{p}(E)}^{p} =\int_{E}^{(h)}|\eta|^{p}\text{ for }1\leqslant p<\infty, \tag{46}\] \[\|\eta\|_{L_{h}^{\infty}(E)} =\sup_{\begin{subarray}{c}K\in T_{h},\ K\subset E\\ k=1,\ldots,\beta\end{subarray}}|\eta(x_{K,k})|. \tag{47}\]
We then have the discrete Holder inequalities:
\[\int_{E}^{(h)}|\eta\zeta|\leqslant\|\eta\|_{L_{h}^{p}(E)}\|\zeta\|_{L_{h}^{p^{\prime}}(E)}\text{ where }\frac{1}{p}+\frac{1}{p^{\prime}}=1. \tag{48}\]
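A 1D illustration of this construction (our own; the paper works with simplices in \(\mathbb{R}^{d}\)): a two-point Gauss rule on the reference element \([0,1]\) is mapped affinely to every mesh cell, the resulting weights are positive as required by (38), and the discrete norms can be used to check the discrete Holder inequality (48).

```python
import numpy as np

ref_pts = np.array([0.5 - 0.5 / np.sqrt(3.0), 0.5 + 0.5 / np.sqrt(3.0)])   # 2-point Gauss
ref_wts = np.array([0.5, 0.5])                                              # positive weights

def mesh_quadrature(nodes):
    """Quadrature points/weights for the 1D mesh whose cells are [nodes[i], nodes[i+1]]."""
    hK = np.diff(nodes)                                   # |det A_K| in one dimension
    pts = nodes[:-1, None] + hK[:, None] * ref_pts[None, :]
    wts = hK[:, None] * ref_wts[None, :]
    return pts.ravel(), wts.ravel()

def discrete_norm(vals, wts, p):
    return np.sum(wts * np.abs(vals)**p) ** (1.0 / p)

pts, wts = mesh_quadrature(np.linspace(0.0, 1.0, 33))
eta, zeta = np.sin(np.pi * pts), pts**2
p, q = 3.0, 1.5                                           # conjugate exponents, 1/p + 1/q = 1
print("sum of weights (= |Omega|):", np.sum(wts))
print("discrete Holder:", np.sum(wts * np.abs(eta * zeta)),
      "<=", discrete_norm(eta, wts, p) * discrete_norm(zeta, wts, q))
```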
The discretizations of \(\mathcal{F}\) and \(f\) are:
\[\mathcal{F}_{h}(w) =\int_{\Omega}^{(h)}F(w(x))\text{ and} \tag{49}\] \[f_{h}(w,t) =\int_{\Omega}^{(h)}tc(x)[w(x)]+F(Dw(x)). \tag{50}\]
Let \(V_{h}\subset W^{1,\infty}(\Omega)\times L^{\infty}(\Omega)\) be a piecewise polynomial space, such that \(DV_{h}\) is of degree \(\alpha-1\). We define
\[\mathcal{Q}_{h}=\{v\in V_{h}\ :\ v(x_{K,k})\in Q\text{ for all }K\in T_{h},\ k=1,\ldots,\beta\}. \tag{51}\]
Then, we define the discretizations
\[z_{h}^{*}(t) =\operatorname*{arg\,min}_{z\in V_{h}}f_{h}(z,t)\text{ and} \tag{52}\] \[z_{h,t_{1},H}^{*}(t) =\operatorname*{arg\,min}_{z\in z_{h}^{*}(t_{1})+V_{H}}f_{h}(z,t). \tag{53}\]
Note that
\[\int_{\Omega}^{(h)}tc[\phi_{h}]+F^{\prime}(Dz_{h}^{*}(t))[D\phi_{ h}] =0\text{ for all }\phi_{h}\in V_{h}\text{ and} \tag{54}\] \[\int_{\Omega}^{(h)}tc[\phi_{H}]+F^{\prime}(Dz_{h,t_{1},H}^{*}(t))[ D\phi_{H}] =0\text{ for all }\phi_{H}\in V_{H}. \tag{55}\]
Note that (55) uses the quadrature \(\int_{\Omega}^{(h)}\) on the fine grid \(T_{h}\), but the test function \(\phi_{H}\in V_{H}\) is on the coarse grid \(T_{H}\), \(H\geq h\).
## 4 Self-concordance in function spaces
We were not able to find the system (9), (10) in the literature on nonlinear elliptic partial differential equations, but see Gilbarg and Trudinger [1977]. In the present paper, we shall assume that the central path \(z^{*}(t,x)=z_{0}^{*}(t,x)\) exists and is unique for \(t>0\) and solves (6), (7), (8), (9), (10).
**Lemma 4.1**.: _Let \(h\geq 0\). Assume \(F(z)\) is a self-concordant barrier for \(Q\) with parameter \(\nu\). Then,_
\[\int_{\Omega}^{(h)}c[z_{h,t_{0},H}^{*}(t)]-\inf_{z\in(z_{h}^{*}(t_{0})+V_{H})\cap\mathcal{Q}}\int_{\Omega}^{(h)}c[z]\leq\frac{\nu|\Omega|}{t}. \tag{56}\]
Proof.: \[\int_{\Omega}^{(h)}c[z_{h,t_{0},H}^{*}(t)-w] =\frac{1}{t}\int_{\Omega}^{(h)}F^{\prime}(Dz_{h,t_{0},H}^{*}(t))[ D(w-z_{h,t_{0},H}^{*}(t))]\] (57) \[\leq\frac{1}{t}\int_{\Omega}^{(h)}\nu=\frac{\nu|\Omega|}{t},\] (58)
where we have used (7) and (NN2.3.2). The result follows by taking an infimum over admissible \(w(x)\).
**Lemma 4.2**.: _Let \(z\in\mathcal{Q}\cap L^{\infty}\) and \(q\in L^{\infty}\) and \(h\geq 0\). Then,_
\[(\mathcal{F}_{h}^{\prime}(z)[q])^{2}\leq|\Omega|\nu\mathcal{F}_{h}^{\prime\prime}(z)[q^{2}]. \tag{59}\]
Proof.: \[|\mathcal{F}^{\prime}_{h}(z)[q]|=\left|\int_{\Omega}^{(h)}F^{\prime}(z)[q]\right| \leq\int_{\Omega}^{(h)}\sqrt{\nu F^{\prime\prime}(z)[q^{2}]}.\] (60)
The conclusion follows from Jensen's inequality.
If we also had
\[|\mathcal{F}^{\prime\prime\prime}(z)[q^{3}]|\leq C\mathcal{F}^{\prime\prime}(z)[q^{2}]^{1.5}, \tag{61}\]
for all \(z\in\mathcal{Q}\) and \(q\in L^{\infty}\), then we would indeed have a self-concordant barrier on \(\mathcal{Q}\), which we have already noted is impossible in infinite dimensions. We must therefore find some relaxation of (61). We begin with a crude estimate that is useful as a "fallback".
**Lemma 4.3**.: _Let \(C_{\min},C_{\max}\) be as per (44). The function \(C_{\min}^{-1}h^{-d}\mathcal{F}_{h}\) is a self-concordant barrier for \(\mathcal{Q}_{h}\) with parameter_
\[\nu(h)=O(h^{-d}). \tag{62}\]
Proof.: \[C_{\min}^{-1}h^{-d}\mathcal{F}_{h}(w)=\sum_{\begin{subarray}{c}K\in T_{h}\\ k=1,\ldots,\beta\end{subarray}}C_{\min}^{-1}h^{-d}\omega_{K,k}F(w(x_{K,k})).\] (63)
The coefficients \(C_{\min}^{-1}h^{-d}\omega_{K,k}\) of the sum are bounded between \(1\) and \(C_{\max}/C_{\min}\), so the result follows by self-concordant calculus.
Any algorithm that relies on the estimate (62) typically results in \(\tilde{O}(h^{-0.5d})=\tilde{O}(\sqrt{n})\) damped Newton iterations when short \(t\) steps are used. We now discuss a more nuanced theory with better iteration counts for some situations.
**Definition 4.4** (Regularity hypotheses).: _Denote \(\partial_{t^{-1}}=\frac{\partial}{\partial t^{-1}}=-t^{2}\frac{\partial}{\partial t}=-t^{2}\partial_{t}\). Assume \(T_{h}\) is a quasi-uniform triangulation of \(\Omega\). Denote by \(\Pi_{h}\) the interpolation operator for the piecewise polynomial space \(V_{h}\subset W^{1,\infty}(\Omega)\times L^{\infty}(\Omega)\); assume \(DV_{h}\) is of degree \(\alpha-1\), and that \(\alpha\geq d\). We say that \((T_{h},c,F)\) is regular if the following properties are satisfied._
1. \(F\) _is a self-concordant barrier for_ \(Q\) _with parameter_ \(\nu\)_, and_ \(\Lambda(q)\geq\alpha_{1}\|q\|_{2}+\alpha_{2}\) _for some constants_ \(\alpha_{1}>0\) _and_ \(\alpha_{2}\in\mathbb{R}\)_._
2. _The uniform discrete reverse Holder inequality. There is a function_ \(C_{RH}(z_{h}^{*}(t))\) _of_ \((h,t)\) _such that, for all_ \(0<h\leq H\)_,_ \(K\in T_{H}\)_,_ \(t_{0}\leq t<\infty\)_, polynomial_ \(q\) _such that_ \(Dq\) _has degree_ \(\alpha-1\)_, then_ \[\left\|\sqrt{F^{\prime\prime}(Dz_{h}^{*}(t))[(Dq)^{2}]}\right\|_{L_ {h}^{\infty}(K)}\] (64) \[\leq C_{RH}(z_{h}^{*}(t))|K|^{-1}\left\|\sqrt{F^{\prime\prime}(Dz_{ h}^{*}(t))[(Dq)^{2}]}\right\|_{L_{h}^{1}(K)}.\] _The function_ \(C_{RH}(z_{h}^{*}(t))\) _grows no faster than polylogarithmically in_ \((t,h)\)_._
3. _Smoothness:_ \[D\partial_{t^{-1}}z_{0}^{*}\in L^{\infty}([t_{0},\infty];W^{\alpha,\infty}( \Omega)).\] (65)
4. _Optimal approximation property:_ \[\|Dz_{0}^{*}-Dz_{h}^{*}\|_{L^{\infty}([t_{0},\infty]\times\Omega)}+\|D \partial_{t^{-1}}z_{0}^{*}-D\partial_{t^{-1}}z_{h}^{*}\|_{L^{\infty}([t_{0}, \infty]\times\Omega)}\leq Ch^{\alpha}.\] (66)
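To give some intuition for hypothesis 2, the snippet below (our own 1D illustration, with \(F^{\prime\prime}\) replaced by \(1\)) computes the ratio \(\|q\|_{L_{h}^{\infty}(K)}/\big(|K|^{-1}\|q\|_{L_{h}^{1}(K)}\big)\) for random polynomials of fixed degree on shrinking cells. When the cell quadrature is rich enough that the discrete \(L^{1}\) norm is a norm on the polynomial space (here, Simpson points for quadratics), this ratio stays bounded independently of the cell size; this is the inverse-inequality mechanism behind the uniformity requested in (64), not a verification of the hypothesis for an actual central path.

```python
import numpy as np

ref_pts = np.array([0.0, 0.5, 1.0])                 # Simpson points on the reference cell
ref_wts = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])

def reverse_holder_ratio(coeffs, a, b):
    """||q||_{L^inf(K)} / ( |K|^{-1} ||q||_{L^1_h(K)} ) on the cell K = [a, b]."""
    hK = b - a
    disc_l1 = hK * np.sum(ref_wts * np.abs(np.polyval(coeffs, a + hK * ref_pts)))
    linf = np.max(np.abs(np.polyval(coeffs, np.linspace(a, b, 400))))
    return linf / (disc_l1 / hK)

rng = np.random.default_rng(7)
for h in (1.0, 1e-2, 1e-4):
    worst = max(reverse_holder_ratio(rng.standard_normal(3), 0.0, h) for _ in range(200))
    print(f"cell size {h:7.0e}: largest ratio over 200 random quadratics: {worst:.2f}")
```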
In complex analysis, if a function \(f\) is analytic at a point \(z^{*}\), then one may expand it into a power series. If \(w\) is within the region of convergence of this power series, \(f\) is also analytic at \(w\), and one may find a new power series expansion, this time at \(w\), to expand the domain of analyticity of \(f\); this procedure is called "analytic continuation". It turns out that if \(G(z(x))\) satisfies a certain reverse Holder inequality for some \(z\in\mathcal{Q}_{h}\), then \(G(w(x))\) will satisfy a related Holder inequality if \(w\in\mathcal{Q}_{h}\) is in a suitable neighborhood of \(z\), and this procedure can be used to propagate the reverse Holder inequality to larger subsets of \(\mathcal{Q}_{h}\).
**Theorem 4.5** (Reverse Holder continuation).: _Let \(z^{*}\in\mathcal{Q}_{h}\), \(1\geq H\geq h\geq 0\) and \(t>0\). Consider the affine space \(A=z^{*}+V_{H}\) and assume that \(z^{*}=z_{A}^{*}(t)\) minimizes \(z\to f_{h}(z,t)\) over \(z\in A\). Assume that the following reverse Holder inequality holds:_
\[\|\sqrt{F^{\prime\prime}(Dz^{*})[(Dv)^{2}]}\|_{L_{h}^{\infty}(K)}\leq C_{RH}( z^{*})|K|^{-1}\|\sqrt{F^{\prime\prime}(Dz^{*})[(Dv)^{2}]}\|_{L_{h}^{1}(K)}, \tag{67}\]
_for all \(K\in T_{\hat{H}}\), \(1\geq\hat{H}\geq h\), and \(v\) a polynomial such that \(Dv\) has degree \(\alpha-1\). Let \(0\leq\delta<1\). Let \(w\in A\) such that_
\[f_{h}(w,t)-f_{h}(z^{*},t)\leq\delta(1+\sqrt{2|\Omega|})^{-2}C_{RH}^{-2}(z^{*}) C_{\min}^{2}H^{2d}=:\beta. \tag{68}\]
_Then, the following reverse Holder inequality also holds:_
\[\|\sqrt{F^{\prime\prime}(Dw)[(Dv)^{2}]}\|_{L^{\infty}_{h}(K)}\leq C_{ RH}(A,t,\beta)|K|^{-1}\|\sqrt{F^{\prime\prime}(Dw)[(Dv)^{2}]}\|_{L^{1}_{h}(K)} \tag{69}\] \[\text{where }C_{RH}(A,t,\beta)\leq C_{RH}(z^{*})(1-\sqrt{\delta})^{-2}, \tag{70}\]
_for all \(K\in T_{\hat{H}}\), \(1\geq\hat{H}\geq h\)._
Proof.: Note that
\[\partial_{w}f_{h}(z^{*},t)=\int_{\Omega}^{(h)}tc[w]+F^{\prime}(Dz^{*})[Dw]=0. \tag{71}\]
Therefore,
\[f_{h}(w,t)-f_{h}(z^{*},t) =\int_{\Omega}^{(h)}tc[w-z^{*}]+F(Dw)-F(Dz^{*}) \tag{72}\] \[=\int_{\Omega}^{(h)}F(Dw)-F(Dz^{*})-F^{\prime}(Dz^{*})[D(w-z^{*})]. \tag{73}\]
We apply (22) with \(G=F\) and \(y=Dw\) and \(z=Dz^{*}\) to arrive at
\[f_{h}(w,t)-f_{h}(z^{*},t) \geq\int_{\Omega}^{(h)}\psi\left(\sqrt{F^{\prime\prime}(Dz^{*})[ (Dw-Dz^{*})^{2}]}\right) \tag{74}\] \[\geq|\Omega|\psi\left(|\Omega|^{-1}\int_{\Omega}^{(h)}\sqrt{F^{ \prime\prime}(Dz^{*})[(Dw-Dz^{*})^{2}]}\right), \tag{75}\]
where we have used Jensen's inequality. We continue by using the bound \(\psi^{-1}(\beta)\leq\beta+\sqrt{2\beta}\) to arrive at
\[\beta+\sqrt{2|\Omega|\beta} \geq\int_{\Omega}^{(h)}\sqrt{F^{\prime\prime}(Dz^{*})[(Dw-Dz^{*}) ^{2}]} \tag{76}\] \[=\sum_{K\in T_{H}}\|\sqrt{F^{\prime\prime}(Dz^{*})[(Dw-Dz^{*})^{2 }]}\|_{L^{1}_{h}(K)}\] (77) \[\geq\sum_{K\in T_{H}}C_{RH}^{-1}(z^{*})|K|\|\sqrt{F^{\prime \prime}(Dz^{*})[(Dw-Dz^{*})^{2}]}\|_{L^{\infty}_{h}(K)}\] (78) \[\geq C_{RH}^{-1}(z^{*})\max_{K\in T_{H}}|K|\|\sqrt{F^{\prime \prime}(Dz^{*})[(Dw-Dz^{*})^{2}]}\|_{L^{\infty}_{h}(K)}. \tag{79}\]
Hence, for every \(K\in T_{H}\),
\[\left\|\sqrt{F^{\prime\prime}(Dz^{*})[(Dw-Dz^{*})^{2}]}\right\|_{L^{\infty}_{h}(K)}\leq(1+\sqrt{2|\Omega|})C_{RH}|K|^{-1}\sqrt{\beta}=:r, \tag{80}\]
provided \(\beta\leqslant 1\). Then, from (29), if \(r<1\), we find that on \(K\in T_{\hat{H}}\),
\[(1-r)^{2}F^{\prime\prime}(Dz^{*})\preceq F^{\prime\prime}(Dw)\preceq(1-r)^{-2}F^ {\prime\prime}(Dz^{*}). \tag{81}\]
In particular, for any polynomial \(v\) such that \(Dv\) is of degree \(\alpha-1\), we have the following reverse Holder inequality:
\[\|\sqrt{F^{\prime\prime}(Dw)[(Dv)^{2}]}\|_{L_{h}^{\infty}(K)} \leqslant(1-r)^{-1}\|\sqrt{F^{\prime\prime}(Dz^{*})[(Dv)^{2}]}\|_ {L_{h}^{\infty}(K)} \tag{82}\] \[\leqslant(1-r)^{-1}C_{RH}|K|^{-1}\|\sqrt{F^{\prime\prime}(Dz^{*})[ (Dv)^{2}]}\|_{L_{h}^{1}(K)}\] (83) \[\leqslant C_{RH}(1-r)^{-2}|K|^{-1}\|\sqrt{F^{\prime\prime}(Dw)[( Dv)^{2}]}\|_{L_{h}^{1}(K)}, \tag{84}\]
valid for any \(K\in T_{\hat{H}}\).
**Definition 4.6**.: _Let \(z^{*}\in\mathcal{Q}\), \(\beta\geqslant 0\) and \(H\geqslant h\geqslant 0\) and \(t>0\). Consider the affine space \(A=z^{*}+V_{H}\) and assume that \(z^{*}\) minimizes \(z\to f_{h}(z,t)\) over \(z\in A\). Define the "Lebesgue set"_
\[\mathcal{L}_{A,t}(\beta)=\{w\in A\;:\;f_{h}(w,t)-f_{h}(z^{*},t)\leqslant\beta\}. \tag{85}\]
Theorem 4.5 states that reverse Holderness propagates from a single point \(z^{*}\) to a "neighborhood" \(\mathcal{L}_{A,t}(\beta)\). In Section 6, we shall use this continuation procedure iteratively.
## 5 Analysis of the naive algorithm
**Lemma 5.1**.: _Assume \((T_{h},c,F)\) is regular. Then_
\[f_{h}(z_{h}^{*}(t_{1}),t)-f_{h}(z_{h}^{*}(t),t)\leqslant\nu|\Omega|(\rho-\log \rho-1)\text{ where }\rho=\frac{t}{t_{1}}\geqslant 1. \tag{86}\]
\[f_{h}(z_{h}^{*}(t),t)-f_{h}(z_{0}^{*}(t),t)\leqslant C\min\{th^{\alpha},(th^{ \alpha})^{2}\}. \tag{87}\]
Proof.: From the regularity of \((T_{h},c,F)\) and Lemmas 2.1 and 2.2,
\[f_{h}(z_{h}^{*}(t),t)-f_{h}(z_{0}^{*}(t),t) \leqslant\int_{\Omega}^{(h)}\psi\left(-\sqrt{F^{\prime\prime}(Dz_{0}^{*}(t))[(Dz_{h}^{*}(t)-Dz_{0}^{*}(t))^{2}]}\right) \tag{88}\] \[\leqslant\int_{\Omega}^{(h)}\psi\left(-Ct\|Dz_{h}^{*}(t)-Dz_{0}^{*}(t)\|_{2}\right)\] (89) \[\leqslant\int_{\Omega}^{(h)}\psi\left(-Cth^{\alpha}\right)\] (90) \[\leqslant Ct^{2}h^{2\alpha}, \tag{91}\]
valid for \(0\leqslant Cth^{\alpha}\leqslant 0.5\), and where we have used that \(\psi(\alpha)=O(\alpha^{2})\) as \(\alpha\to 0\) and \(\psi\) is monotonically decreasing for \(\alpha\leqslant 0\).
In the regime \(Cth^{\alpha}>0.5\), we use instead the following argument. Put \(g(t)=f_{h}(z_{h}^{*}(t),t)-f_{h}(z_{0}^{*}(t),t)\). Then,
\[g^{\prime}(t) =\int_{\Omega}^{(h)}c[z_{h}^{*}(t)-z_{0}^{*}(t)]\,dx \tag{92}\] \[+\overbrace{\int_{\Omega}^{(h)}tc[\partial_{t}z_{h}^{*}(t)]+F^{ \prime}(Dz_{h}^{*}(t))[D\partial_{t}z_{h}^{*}(t)]\,dx}^{0}\] (93) \[-\overbrace{\int_{\Omega}^{(h)}tc[\partial_{t}z_{0}^{*}(t)]+F^{ \prime}(Dz_{0}^{*}(t))[D\partial_{t}z_{0}^{*}(t)]\,dx}^{0}. \tag{94}\]
The regularity of \((T_{h},c,F)\) then gives \(|g^{\prime}(t)|\leqslant Ch^{\alpha}\) and hence \(g(t)\leqslant g(t_{1})+(t-t_{1})Ch^{\alpha}\leqslant tCh^{\alpha}+O(1)\), which proves (87).
Now set \(g(t)=f_{h}(z_{h}^{*}(t_{1}),t)-f_{h}(z_{h}^{*}(t),t)\). Note that \(g(t_{1})=0\). Furthermore,
\[g^{\prime}(t) =\int_{\Omega}^{(h)}c[z_{h}^{*}(t_{1})-z_{h}^{*}(t)]\,dx \tag{95}\] \[-\overbrace{\int_{\Omega}^{(h)}tc[\partial_{t}z_{h}^{*}(t)]+F^{ \prime}(Dz_{h}^{*}(t))[D\partial_{t}z_{h}^{*}(t)]\,dx}^{0}. \tag{96}\]
We see that \(g^{\prime}(t_{1})=0\). Thus,
\[g^{\prime\prime}(t) =-\int_{\Omega}^{(h)}c[\partial_{t}z_{h}^{*}(t)] \tag{97}\] \[=\frac{1}{t}\int_{\Omega}^{(h)}F^{\prime}(Dz_{h}^{*})[D\partial_{ t}z_{h}^{*}]\] (98) \[\leqslant\frac{\sqrt{\nu}}{t}\int_{\Omega}^{(h)}\sqrt{F^{\prime \prime}[(D\partial_{t}z_{h}^{*})^{2}]}\] (99) \[\leqslant\frac{\sqrt{\nu|\Omega|}}{t}\sqrt{\int_{\Omega}^{(h)}F^{ \prime\prime}[(D\partial_{t}z_{h}^{*})^{2}]}\] (100) \[=\frac{\sqrt{\nu|\Omega|}}{t}\sqrt{-\int_{\Omega}^{(h)}c[ \partial_{t}z_{h}^{*}(t)]}\] (101) \[=\frac{\sqrt{\nu|\Omega|}}{t}\sqrt{g^{\prime\prime}(t)} \tag{102}\]
Thus,
\[g^{\prime\prime}(t)\leq\frac{\nu|\Omega|}{t^{2}}. \tag{103}\]
The result follows by integrating twice.
We now prove our first main theorem.
Proof of Theorem 1.2.: In view of Lemma 4.3, note that short \(t\) steps on the fine grid satisfy \(t_{k+1}=\rho t_{k}\) with \(\rho-1\sim h^{0.5d}\).
We begin with the analysis of the \(h\)-then-\(t\) schedule. The initial step of the algorithm is to start from an admissible \(z^{(0)}\in V_{h^{(1)}}\cap\mathcal{Q}\), the coarsest space, and find the center \(z^{*}_{h^{(1)}}(t_{0})\) by damped Newton iterations. This will require a certain number \(N_{0}\) of damped Newton iterations, but this initial problem is the same regardless of the choice of the finest grid level. In other words, \(N_{0}\) is independent of the fine grid parameter \(h\), so this initial step requires \(O(1)\) damped Newton iterations.
According to Lemma 4.3 and (87), since \(t_{0}=O(1)\), for any grid level \(h^{(\ell)}\geq h^{(L)}=h\), the function \(Ch^{-d}f_{h}(w,t_{0})\) is standard self-concordant on \(V_{h^{(\ell)}}\cap\mathcal{Q}\), and the suboptimality gap is \(O(h^{\alpha-d})=O(1)\) so each \(h\) refinement converges in \(O(1+\log\log\epsilon^{-1})\) damped Newton iterations.
Once on the fine grid, according to (86), the short \(t\) step length is optimal, resulting in \(\tilde{O}(h^{-0.5d})\) damped Newton iterations.
Now consider an arbitrary schedule of \(t\) and \(h\) refinements. We only consider the final grid refinement (i.e. from level \(h^{(L-1)}\) to \(h^{(L)}=h\)), and the subsequent \(t\) refinement on the fine grid \(h\). Say that this occurs at iteration \(j\), i.e. \(h_{j}=h^{(L-1)}\) and \(h_{j+1}=h^{(L)}=h\). We consider two cases. First, if \(t_{j}>h^{-\alpha}\), then the suboptimality gap of \(Ch^{-d}f\) for the final \(h\) refinement given by (87) is at best \(O(h^{-d})\). By the standard theory, short \(t\) steps are theoretically optimal and converge in \(\tilde{O}(h^{-0.5d})\) damped Newton iterations.
We now consider the case \(t_{j}\leq h^{-\alpha}\). We count the \(t\) refinements on the fine grid. Because short \(t\) steps are optimal, and because the stopping criterion is \(t\sim h^{-2\alpha}\), the theoretical estimate must be at least \(\tilde{O}(h^{-0.5d})\) damped Newton iterations for these \(t\) refinements.
## 6 Analysis of Algorithm MGB
**Lemma 6.1**.: _Assume \((T_{h},c,F)\) is regular. There is a constant \(C_{\mathrm{href}}\) such that_
\[f_{h}(z_{h,t_{1},H}^{*}(t),t)-f_{h}(z_{h}^{*}(t),t)\leq C_{\mathrm{href}}^{2}H^{2 \alpha}\left(\frac{t}{t_{1}}-1\right)^{2}, \tag{104}\]
_provided that \(C_{\mathrm{href}}(t/t_{1}-1)\leq 0.6838\)._
Proof.: We shall denote by \(\Pi_{h}\) the interpolation operator for \(V_{h}\). Let
\[w_{h,t_{1},H}(t)=z_{h}^{*}(t_{1})+\Pi_{H}(z_{0}^{*}(t)-z_{0}^{*}(t_{1})). \tag{105}\]
Put \(g(t)=Dw_{h,t_{1},H}(t)-Dz_{h}^{*}(t)\), and note that \(g(t_{1})=0\). Then,
\[\|Dw_{h,t_{1},H}(t)-Dz_{h}^{*}(t)\|_{L_{h}^{\infty}} =\|g(t)\|_{L_{h}^{\infty}}=\|g(t)-g(t_{1})\|_{L_{h}^{\infty}} \tag{106}\] \[=\left\|\int_{t_{1}^{-1}}^{t^{-1}}\partial_{t^{-1}}g(\tau)\,d \tau\right\|_{L_{h}^{\infty}}\] (107) \[\leq\int_{t_{1}^{-1}}^{t^{-1}}\|D\Pi_{H}\partial_{t^{-1}}z_{0}^{ *}(\tau)-D\partial_{t^{-1}}z_{h}^{*}(\tau)\|_{L_{h}^{\infty}}\ d\tau\] (108) \[\leq\int_{t_{1}^{-1}}^{t^{-1}}\|D\Pi_{H}\partial_{t^{-1}}z_{0}^{ *}(\tau)-D\partial_{t^{-1}}z_{0}^{*}(\tau)\|_{L_{h}^{\infty}}\] (109) \[\qquad+\|D\partial_{t^{-1}}z_{0}^{*}(\tau)-D\partial_{t^{-1}}z_{ h}^{*}(\tau)\|_{L_{h}^{\infty}}\,d\tau\] (110) \[\leq CH^{\alpha}(t_{1}^{-1}-t^{-1}). \tag{111}\]
Thus,
\[f_{h}(w_{h,t_{1},H}(t))-f_{h}(z_{h}^{*}(t)) \leq\int_{\Omega}^{(h)}\psi\left(-\sqrt{F^{\prime\prime}(Dz_{h}^ {*})[(Dw_{h,t_{1},H}(t)-Dz_{h}^{*}(t))^{2}]}\right) \tag{112}\] \[\leq\int_{\Omega}^{(h)}\psi\left(-Ct\|Dw_{h,t_{1},H}(t)-Dz_{h}^{ *}(t)\|_{2}\right)\] (113) \[\leq\int_{\Omega}^{(h)}\psi\left(-CH^{\alpha}(t/t_{1}-1)\right)\] (114) \[\leq CH^{2\alpha}(t/t_{1}-1)^{2}, \tag{115}\]
where we have used that \(\psi(\alpha)\leq\alpha^{2}\) when \(-0.6838\leq\alpha\leq 0\) and \(\psi(\alpha)\) is monotonically decreasing for \(\alpha\leq 0\). Since \(w_{h,t_{1},H}(t)\in z_{h}^{*}(t_{1})+V_{H}\) and \(z_{h,t_{1},H}^{*}(t)\) minimizes \(f_{h}(\cdot,t)\) over this affine space, \(f_{h}(z_{h,t_{1},H}^{*}(t),t)\leqslant f_{h}(w_{h,t_{1},H}(t),t)\), and (104) follows.
**Lemma 6.2**.: _Denote by \(L=O(\log h)\) the number of grid levels, from the coarsest level \(h^{(1)}\) to the fine grid level \(h=h^{(L)}\). Assume that \(\alpha\geq d\). We denote \(z_{h}^{*}=z_{h}^{*}(t)\) (i.e. the ommited \((t)\) is implied), but \(z_{h}^{*}(t_{1})\) has its usual meaning. For \(t>0\), assume that \(z_{h}^{*}=z_{h}^{*}(t)\) satisfies the following reverse Holder inequality:_
\[\|\sqrt{F^{\prime\prime}(Dz_{h}^{*})[(Dv)^{2}]}\|_{L_{h}^{\infty }(K)}\leqslant C_{RH}(z_{h}^{*})|K|^{-1}\|\sqrt{F^{\prime\prime}(Dz_{h}^{*})[( Dv)^{2}]}\|_{L_{h}^{1}(K)}, \tag{116}\]
_for all \(1\geq H\geq h\), \(K\in T_{H}\) and polynomial \(v\) such that \(Dv\) is of degree \(\alpha-1\). Define_
\[\tilde{\beta} =4^{-d}e^{-4}(1+\sqrt{2|\Omega|})^{-2}C_{\min}^{2} \tag{117}\] \[\beta =(L+1)^{-2}C_{RH}^{-2}(z_{h}^{*}(t))\tilde{\beta}. \tag{118}\]
_Denote \(\rho=t/t_{1}\) and assume that_
\[\rho<1+C_{\rm{href}}^{-1}\sqrt{\beta}. \tag{119}\]
_For \(\ell=1,\ldots,L\), put \(A_{\ell}=z_{h}^{*}(t_{1})+V_{h^{(\ell)}}\). Then,_
\[f_{h}(z_{h,t_{1},h^{(\ell)}}^{*})-f_{h}(z_{h}^{*}) \leqslant(0.5h^{(\ell)})^{2d}\beta\text{ and} \tag{120}\] \[C_{RH}(A_{\ell},t,(h^{(\ell)})^{2d}\beta) \leqslant e^{2}C_{RH}(z_{h}^{*}). \tag{121}\]
_for all \(K\in T_{H}\) and \(1\geq H\geq h\) and polynomial \(v\) such that the degree of \(Dv\) is \(\alpha-1\)._
_Furthermore, if_
\[w\in\mathcal{L}_{A_{\ell},t}((h^{(\ell)})^{2d}\beta), \tag{122}\]
_then, for an arbitrary test function \(\phi\in V_{H}\),_
\[\left|\mathcal{F}_{h}^{\prime\prime\prime}(Dw)[(D\phi)^{3}]\right| \leqslant 2e^{2}C_{RH}(z_{h}^{*})C_{\min}^{-0.5}H^{-0.5d}(\mathcal{F}_{h}^{\prime\prime}(Dw)[(D\phi)^{2}])^{1.5}. \tag{123}\]
_In particular, the function \(w\to e^{4}C_{RH}^{2}(z_{h}^{*})C_{\min}^{-1}(h^{(\ell)})^{-d}f_{h}(w,t)\) is standard self-concordant on \(\mathcal{L}_{A_{\ell},t}((h^{(\ell)})^{2d}\beta)\) with suboptimality gap bounded by_
\[4^{-d}(h^{(\ell)})^{d}C_{\min}(1+\sqrt{2|\Omega|})^{-2}(L+1)^{-2}. \tag{124}\]
_The damped Newton method on \(\mathcal{L}_{A_{\ell},t}((h^{(\ell)})^{2d}\beta)\) converges in_
\[O(1)+\log\log\epsilon^{-1}\text{ iterations}. \tag{125}\]
Proof.: For \(\ell=1,\ldots,L\), we begin by proving a reverse Holder inequality of the form
\[\|\sqrt{F^{\prime\prime}(Dz^{*}_{h,t_{1},h^{(\ell)}})[(Dv)^{2}]}\|_{L^{\infty}_{h }(K)}\leq C|K|^{-1}\|\sqrt{F^{\prime\prime}(Dz^{*}_{h,t_{1},h^{(\ell)}})[(Dv)^{2 }]}\|_{L^{1}_{h}(K)}, \tag{126}\]
for all \(1\geqslant H\geqslant h>0\) and for all \(K\in T_{H}\). We shall denote by \(C_{RH}(z^{*}_{h,t_{1},h^{(\ell)}})\) the smallest constant \(C\) such that (126) holds. We do a proof by induction "backwards", starting from \(\ell=L\), that the following inequality holds
\[C_{RH}(z^{*}_{h,t_{1},h^{(\ell)}})\leqslant\left(1-\frac{1}{L+1}\right)^{-2(L -\ell)}C_{RH}(z^{*}_{h}). \tag{127}\]
For \(\ell=L\), since \(z^{*}_{h,t_{1},h^{(L)}}=z^{*}_{h,t_{1},h}=z^{*}_{h}\), the induction hypothesis is tautological. We now prove by induction the cases \(\ell=L-1,\ldots,1\). We find that
\[f_{h}(z^{*}_{h,t_{1},h^{(\ell)}})-f_{h}(z^{*}_{h,t_{1},0.5h^{(\ell)}}) \tag{128}\] \[\leqslant f_{h}(z^{*}_{h,t_{1},h^{(\ell)}})-f_{h}(z^{*}_{h})\] (129) \[\stackrel{(104)}{\leqslant}C_{\mathrm{href}}^{2}(h^{(\ell)})^{2d}(\rho-1)^{2}\] (130) \[\leqslant(h^{(\ell)})^{2d}(L+1)^{-2}C_{RH}^{-2}(z^{*}_{h}(t))\tilde{\beta}\] (131) \[=(h^{(\ell)})^{2d}(L+1)^{-2}4^{-d}e^{-4}(1+\sqrt{2|\Omega|})^{-2}C_{RH}^{-2}(z^{*}_{h})C_{\min}^{2}. \tag{132}\]
(In (130) we used (104) with \(H=h^{(\ell)}\), together with \(\alpha\geq d\) and \(h^{(\ell)}\leq 1\).)
Note that \(\left(1-\frac{1}{L+1}\right)^{-2(L-\ell)}\leqslant\left(1-\frac{1}{L+1} \right)^{-2L}\leqslant e^{2}\), so from (127) with \(\ell\) replaced by \(\ell+1\) (i.e. the induction hypothesis), we find that \(C_{RH}(z^{*}_{h,t_{1},h^{(\ell+1)}})\leqslant e^{2}C_{RH}(z^{*}_{h})\), so that
\[f_{h}(z^{*}_{h,t_{1},h^{(\ell)}})-f_{h}(z^{*}_{h,t_{1},0.5h^{( \ell)}}) \leqslant\beta_{\ell+1}\text{ where } \tag{133}\] \[\beta_{\ell} =\frac{C_{\text{min}}^{2}(h^{(\ell)})^{2d}}{(L+1)^{2}(1+\sqrt{2| \Omega|})^{2}C_{RH}^{2}(z^{*}_{h,t_{1},h^{(\ell)}})}. \tag{134}\]
We put \(A=A_{\ell+1}=z^{*}_{h}(t_{1})+V_{0.5h^{(\ell)}}\) and \(\beta=\beta_{\ell+1}\) to find that (68) holds
with \(\delta=(L+1)^{-2}<1\). Thus,
\[C_{RH}(A_{\ell+1},t,\beta_{\ell+1})\stackrel{(70)}{\leqslant}(1-\sqrt{\delta})^{-2}C_{RH}(z_{h,t_{1},0.5h^{(\ell)}}^{*}) \tag{135}\] \[\stackrel{(127)}{\leqslant}\left(1-\frac{1}{L+1}\right)^{-2}\left(1-\frac{1}{L+1}\right)^{-2(L-\ell-1)}C_{RH}(z_{h}^{*})\] (136) \[=\left(1-\frac{1}{L+1}\right)^{-2(L-\ell)}C_{RH}(z_{h}^{*}). \tag{137}\]
Then, from \(z_{h,t_{1},h^{(\ell)}}^{*}\in\mathcal{L}_{A_{\ell+1},t}(\beta_{\ell+1})\), we have that \(C_{RH}(z_{h,t_{1},h^{(\ell)}}^{*})\leqslant C_{RH}(A_{\ell+1},t,\beta_{\ell+1})\), and there follows (127). This completes the induction proof of (127) for \(\ell=1,\ldots,L\).
We now prove the self-concordance of \(\mathcal{F}_{h}\). Let \(\phi\in V_{H}\) be an arbitrary test function, and write \(\phi=\sum_{K\in T_{H}}\mathbbm{1}_{K}v_{K}\), where each \(v_{K}\) is a polynomial such that the degree of \(Dv_{K}\) is \(\alpha-1\).
\[|\mathcal{F}_{h}^{\prime\prime\prime}(Dw)[(D\phi)^{3}]| \tag{138}\] \[\leqslant\int_{\Omega}^{(h)}2|F^{\prime\prime}(Dw)[(D\phi)^{2}]|^{1.5}\] (139) \[\leqslant 2\|F^{\prime\prime}(Dw)[(D\phi)^{2}]\|_{L_{h}^{1}(\Omega)}\|\sqrt{F^{\prime\prime}(Dw)[(D\phi)^{2}]}\|_{L_{h}^{\infty}(\Omega)}\] (140) \[=2\|F^{\prime\prime}(Dw)[(D\phi)^{2}]\|_{L_{h}^{1}(\Omega)}\max_{K\in T_{H}}\|\sqrt{F^{\prime\prime}(Dw)[(Dv_{K})^{2}]}\|_{L_{h}^{\infty}(K)}\] (141) \[\leqslant 2e^{2}C_{RH}(z_{h}^{*})\|F^{\prime\prime}(Dw)[(D\phi)^{2}]\|_{L_{h}^{1}(\Omega)}\max_{K\in T_{H}}|K|^{-1}\|\sqrt{F^{\prime\prime}(Dw)[(Dv_{K})^{2}]}\|_{L_{h}^{1}(K)}\] (142) \[\leqslant 2e^{2}C_{RH}(z_{h}^{*})\|F^{\prime\prime}(Dw)[(D\phi)^{2}]\|_{L_{h}^{1}(\Omega)}\max_{K\in T_{H}}|K|^{-0.5}\sqrt{\|F^{\prime\prime}(Dw)[(Dv_{K})^{2}]\|_{L_{h}^{1}(K)}}\] (143) \[\leqslant 2e^{2}C_{RH}(z_{h}^{*})C_{\min}^{-0.5}H^{-0.5d}\|F^{\prime\prime}(Dw)[(D\phi)^{2}]\|_{L_{h}^{1}(\Omega)}^{1.5}. \tag{144}\]
**Lemma 6.3**.: \[f_{h}(z_{h}^{*}(t_{1}),t)-f_{h}(z_{h,t_{1},H}^{*}(t),t)\leqslant\nu|\Omega|( \rho-\log\rho-1)\text{ where }\rho=\frac{t}{t_{1}}\geqslant 1.\] (145)
Proof.: Let \(g(t)=f_{h}(z_{h}^{*}(t_{1}),t)-f_{h}(z_{h,t_{1},H}^{*}(t),t)\).
\[g^{\prime}(t) =\int_{\Omega}^{(h)}c[z_{h}^{*}(t_{1})-z_{h,t_{1},H}^{*}(t)]\,dx \tag{146}\] \[-\overbrace{\left(\int_{\Omega}^{(h)}tc[\partial_{t}z_{h,t_{1},H}^{*}(t)]+F^{\prime}(Dz_{h,t_{1},H}^{*}(t))[D\partial_{t}z_{h,t_{1},H}^{*}(t)]\,dx\right)}^{0}, \tag{147}\]
where we have used that \(\partial_{t}z_{h,t_{1},H}^{*}(t)\in V_{H}\), the tangent space of \(A=z_{h}^{*}(t_{1})+V_{H}\). We further see that \(g^{\prime}(t_{1})=0\). Thus,
\[|g^{\prime\prime}(t)| =\left|\int_{\Omega}^{(h)}c[\partial_{t}z_{h,t_{1},H}^{*}(t)]\right| \tag{148}\] \[=\frac{1}{t}\left|\int_{\Omega}^{(h)}F^{\prime}(Dz_{h,t_{1},H}^{ *}(t))[D\partial_{t}z_{h,t_{1},H}^{*}(t)]\right|\] (149) \[\leq\frac{1}{t}\int_{\Omega}^{(h)}\sqrt{\nu F^{\prime\prime}(Dz_{ h,t_{1},H}^{*}(t))[(D\partial_{t}z_{h,t_{1},H}^{*}(t))^{2}]}\] (150) \[\leq\frac{1}{t}\sqrt{\nu|\Omega|\int_{\Omega}^{(h)}F^{\prime \prime}(Dz_{h,t_{1},H}^{*}(t))[(D\partial_{t}z_{h,t_{1},H}^{*}(t))^{2}]}\] (151) \[=\frac{1}{t}\sqrt{-\nu|\Omega|\int_{\Omega}^{(h)}c[\partial_{t}z_ {h,t_{1},H}^{*}(t)]}\] (152) \[=\frac{1}{t}\sqrt{\nu|\Omega|\,|g^{\prime\prime}(t)|}. \tag{153}\]
As a result,
\[|g^{\prime\prime}(t)|\leq\frac{\nu|\Omega|}{t^{2}}. \tag{154}\]
**Lemma 6.4**.: _Define_
\[C_{\rm mg}=\min\left\{\frac{\sqrt{\tilde{\beta}}}{C_{\rm href}}, \;\frac{\sqrt{2\tilde{\beta}}\,(h^{(1)})^{d}}{\sqrt{\nu|\Omega|}}\right\}. \tag{155}\]
_Assume_
\[\rho\leq 1+\frac{C_{\rm mg}}{(L+1)C_{RH}(z_{h}^{*}(t))}. \tag{156}\]
_Put \(t=\rho t_{1}\). Then, Algorithm MGB compute \(z_{h}^{\star}(t)\) from \(z_{h}^{\star}(t_{1})\) in_
\[L\left(O(1)+\log\log\epsilon^{-1}\right)\text{ Newton iterations.} \tag{157}\]
Proof.: Denote \(A_{\ell}=z_{h}^{\ast}(t_{1})+V_{h^{(\ell)}}\). From (145), (155), (156) and from \(\rho-\log\rho-1\leqslant 0.5(\rho-1)^{2}\),
\[z_{h}^{\star}(t_{1})\in\mathcal{L}_{A_{1},t}(\tilde{\beta}(h^{(1)})^{2d}(L+1) ^{-2}C_{RH}^{-2}(z_{h}^{\star}(t)))=\mathcal{L}_{A_{1},t}(\beta(h^{(1)})^{2d}). \tag{158}\]
According to (125), the damped Newton methos starting at \(z_{h}^{\star}(t_{1})\) will locate \(z_{h,t_{1},h^{(1)}}^{\ast}(t)\) in
\[O(1)+\log\log\epsilon^{-1}\text{ iterations.} \tag{159}\]
Furthermore, for each \(\ell=1,\ldots,L-1\), we find that
\[z_{h,t_{1},h^{(\ell)}}^{\ast}(t)\in\mathcal{L}_{A_{\ell+1},t}((h^{(\ell+1)})^ {2d}\beta). \tag{160}\]
According to (125), the damped Newton methos starting at \(z_{h,t_{1},h^{(\ell)}}^{\ast}(t)\) will locate \(z_{h,t_{1},h^{(\ell+1)}}^{\ast}(t)\) in (159) iterations. Thus, the total number of iterations is obtained by multiplying (159) by \(L\).
We are now ready to prove our second main theorem.
Proof of Theorem 1.4.: The initial step of Algorithm MGB begins with an admissible \(z^{(0)}\in V_{h^{(1)}}\cap\mathcal{Q}\), finds \(z_{h^{(1)}}^{\ast}(t_{0})\) by damped Newton steps, and from there performs \(h\) refinements to compute \(z_{h}^{\star}(t_{0})\). This procedure is identical to the initial phase of the \(h\)-then-\(t\) schedule of the naive algorithm. The proof of Theorem 1.2 shows that this initial phase requires \(\tilde{O}(1)\) damped Newton steps.
The number of damped Newton iterations to compute \(z_{h}^{\star}(t_{k+1})\) from \(z_{h}^{\ast}(t_{k})\) is given by (157). It thus suffices to count the number of \(t\)-steps. The \(t\) step size is limited by (156). Since we start with \(t_{0}=O(1)\) and end when \(t_{k}\sim h^{2\alpha}\), the total number of \(t\) steps at most
\[O\left((\log(h^{2\alpha}/t_{0}))(1-\log_{2}(h))\sup_{t\in[t_{0},h^{2\alpha}]} C_{RH}(z_{h}^{\star}(t))\right). \tag{161}\]
This whole expression is bounded by a polylogarithm of \(h\).
Reverse inequalities
In the present section, we show that many functions satisfy reverse Holder and Sobolev inequalities. Our goal is to show that the reverse Holder inequality of Definition 4.4 is satisfied if the solution \(z_{0}^{*}\) and the barrier \(F\) satisfy some smoothness conditions.
**Lemma 7.1**.: _Let \(D(x,R)\subset\mathbb{C}\) be a disc centered at \(x\in\mathbb{C}\) of radius \(R\), and let \(r<R\). Then, for any bounded analytic function \(f(x)\) on \(D(x,R)\),_
\[\|f^{\prime}\|_{L^{\infty}(D(x,r))}\leqslant\frac{R}{(R-r)^{2}}\|f\|_{L^{ \infty}(D(x,R))}. \tag{162}\]
Proof.: For \(y\in D(x,r)\), the Cauchy integral formula gives
\[|f^{\prime}(y)| =\left|\frac{1}{2\pi i}\oint\limits_{\partial D(x,R)}\frac{f(z)}{ (z-y)^{2}}\,dz\right| \tag{163}\] \[\leqslant\frac{R}{(R-r)^{2}}\|f\|_{L^{\infty}(\partial D(x,R)}. \tag{164}\]
**Lemma 7.2**.: _Denote by \(P_{\beta}\) the set of polynomials in \(x\in\mathbb{C}\) of degree \(\beta\), and let \(\epsilon>0\). Given \(\beta\), there is a constant \(C=C(\beta,\epsilon)\) such that the following holds. For any \(q\in P_{\beta}\), \(x\in\mathbb{R}\) and \(r>0\),_
\[\|q\|_{L^{\infty}(D(x,r))}\leqslant Cr^{-1}\|q\|_{L^{1}(x-\epsilon r,x+ \epsilon r)}. \tag{165}\]
Proof.: For \(u\in P_{\beta}\), by norm equivalence in finite dimensions, there is a constant \(C\) such that
\[\|u\|_{L^{\infty}(D(0,1))}\leqslant C\|u\|_{L^{1}(-\epsilon,\epsilon)}. \tag{166}\]
For arbitrary \(q\in P_{\beta}\), the substitution \(u(y)=q((y-x)/r)\) gives
\[\|q\|_{L^{\infty}(D(x,r))} =\|u\|_{L^{\infty}(D(0,1))} \tag{167}\] \[\leqslant C\|u\|_{L^{1}(-\epsilon,\epsilon)}\] (168) \[=Cr^{-1}\|q\|_{L^{1}(x-\epsilon r,x+\epsilon r)}. \tag{169}\]
Let \(U\subset\mathbb{C}^{m}\) be a domain. We now recall the Weierstrass preparation theorem, see Krantz [2001, Theorem 6.4.5] for details. A Weierstrass polynomial of degree \(\beta\) is a function \(p(x,y)=x^{\beta}+\sum_{j=0}^{\beta-1}a_{j}(y)x^{j}\) defined on some polydisc \((x,y)\subset B\), such that each function \(a_{j}(y)\) is holomorphic on its domain. A unit \(u(x,y)\) is a nonzero holomorphic function on \(B\). The Weierstrass preparation theorem states that, if \(f\) is holomorphic on \(U\) and \(z\in U\) then \(f(z)=p(z)u(z)\) for some Weierstrass polynomial \(p\), unit \(u\), on some polydisc neighborhood \(B\subset U\) of \(z\).
The Weierstrass polynomial satisfies the formula
\[p(x,y)=\prod_{j=1}^{\beta}(x-\alpha_{j}(y)), \tag{170}\]
where \(\alpha_{1}(y),\ldots,\alpha_{\beta}(y)\) are the roots of the function \(x\to f(x,y)\) on \(B\). The functions \(\{\alpha_{j}(y)\}\) are holomorphic in \(y\).
**Lemma 7.3**.: _With the notation of the Weierstrass preparation theorem, if \(f(x,y)\geq 0\) when \((x,y)\in\mathbb{R}^{m}\), then_
\[p(x,y)=\prod_{j=1}^{\beta/2}(x-\beta_{j}(y))(x-\bar{\beta}_{j}(y)), \tag{171}\]
_where each \(\beta_{j}\) satisfies \(\Re\beta_{j}(y)\geq 0\)._
Proof.: Passing to a smaller neighborhood \(B\) if necessary, \(f(x,y)\) is given by its power series, which has real coefficients, and thus satisfies \(f(\bar{x},y)=\overline{f(x,y)}\). Thus, the roots of \(f(x,y)\) in \(B\) are either real (in which case they must be of even order because \(f\geq 0\)), or they must occur in conjugate pairs, giving rise to the roots \(\beta_{j}(y)\) in (171).
**Corollary 7.4**.: _We set_
\[r(x,y) =\prod_{j=1}^{\beta/2}(x-\beta_{j}(y))\text{ and } \tag{172}\] \[v(z) =\sqrt{u(z)}. \tag{173}\]
_Then, \(f(x,y)=v^{2}(x,y)|r(x,y)|^{2}\) on \(B\cap\mathbb{R}^{m}\), and \(v\) is a unit on \(B\)._
Proof.: The fact that \(r(x,y)\overline{r(\bar{x},y)}=p\) is directly from Lemma 7.3. Passing to a smaller \(B\) if necessary, since \(u\neq 0\) on \(B\), we may assume that \(u(B)\) is contained in a half-plane that excludes the origin. Thus, \(v=\sqrt{u}\) is a well-defined holomorphic unit function.
**Lemma 7.5**.: _Let \(z\in U\cap\mathbb{R}^{m}\), and let \(0<\epsilon<1\). Assume \(f\) is holomorphic on \(U\), and \(f\geq 0\) on \(U\cap\mathbb{R}^{m}\). There is a constant \(C\) and polydisc \(B=\prod_{j}D(z_{j},R_{j})\)such that the following holds. For every \(a\in D(z_{1},R_{1})\cap\mathbb{R}\) and \(\delta>0\) such that \([a-2\delta,a+2\delta]\subset D(z_{1},R_{1})\), and for every \(y\in\mathbb{R}^{m-1}\cap\prod_{j=2}^{m}D(z_{j},R_{j})\),_
\[\delta\|\sqrt{f(\cdot,y)}\|_{L^{\infty}(a-\delta,a+\delta)}+\delta^{2}\|\sqrt{ f(\cdot,y)}^{\prime}\|_{L^{\infty}(a-\delta,a+\delta)}\leq C\|\sqrt{f(\cdot,y)} \|_{L^{1}(a-\epsilon\delta,a+\epsilon\delta)} \tag{174}\]
_where \(\cdot^{\prime}\) denotes the partial derivative with respect to \(x\)._
Proof.: We use the Weierstrass preparation theorem, in the form of Corollary 7.4.
\[\|\sqrt{f(\cdot,y)}\|_{L^{\infty}(a-\delta,a+\delta)} =\|vr\|_{L^{\infty}(a-\delta,a+\delta)} \tag{175}\] \[\leq\|v\|_{L^{\infty}}\|r(\cdot,y)\|_{L^{\infty}(D(a,\delta))}\] (176) \[\overset{\eqref{eq:165}}{\leq}\frac{C}{\delta}\|r(\cdot,y)\|_{L^ {1}(a-\epsilon\delta,a+\epsilon\delta)}\] (177) \[\leq\frac{C}{\delta}\|f(\cdot,y)\|_{L^{1}(a-\epsilon\delta,a+ \epsilon\delta)}. \tag{178}\]
Furthermore,
\[\|\sqrt{f(\cdot,y)}^{\prime}\|_{L^{\infty}(a-\delta,a+\delta)} =\|(v|r|)^{\prime}\|_{L^{\infty}(a-\delta,a+\delta)} \tag{179}\] \[=\|v^{\prime}|r|+v\operatorname{sgn}(r)r^{\prime}\|_{L^{\infty}( a-\delta,a+\delta)}\] (180) \[\leq C(\|r\|_{L^{\infty}(a-\delta,a+\delta)}+\|r^{\prime}\|_{L^{ \infty}(a-\delta,a+\delta)})\] (181) \[\overset{\eqref{eq:162}}{\leq}C\left(\|r\|_{L^{\infty}(D(a, \delta))}+\frac{2}{\delta}\|r\|_{L^{\infty}(D(a,2\delta))}\right)\] (182) \[\overset{\eqref{eq:165}}{\leq}\frac{C}{\delta^{2}}\|r\|_{L^{1}(a -\epsilon\delta,a+\epsilon\delta)}\] (183) \[\leq\frac{C}{\delta^{2}}\|\sqrt{f}\|_{L^{1}(a-\epsilon\delta,a+ \epsilon\delta)}. \tag{184}\]
The following inequality is also sometimes called a reverse Poincare or Friedrichs inequality.
**Lemma 7.6** (Strong reverse Sobolev inequality).: _Let \(U\subset\mathbb{C}^{m}\) be a domain, and \(K\subset U\cap\mathbb{R}^{m}\) be compact. For \(z\in\mathbb{C}^{m}\), denote \(z=(z^{(j)},\tilde{z}^{(j)})\) with \(z^{(j)}\in\mathbb{C}^{j}\). Let \(f\) be holomorphic on \(U\) and \(f\geq 0\) on \(U\cap\mathbb{R}^{m}\) and \(\epsilon>0\)._
_There is a constant \(C\) such that the following holds. If \(z\in K\) and \(\delta>0\), put \(V^{(j)}(\delta)=\prod_{i=1}^{j}[z_{i}^{(j)}-\delta,z_{i}^{(j)}+\delta]\). If \(V^{(j)}(2\delta)\times\{\tilde{z}^{(j)}\}\subset U\), then_
\[\delta^{j}\|\sqrt{f(\cdot,\tilde{z}^{(j)})}\|_{L^{\infty}(V^{(j)} (\delta))}+\delta^{j+1}\|\partial_{z^{(j)}}\sqrt{f(\cdot,\tilde{z}^{(j)})}\|_ {L^{\infty}(V^{(j)}(\delta))}\] \[\leq C\|\sqrt{f(\cdot,\tilde{z}^{(j)})}\|_{L^{1}(V^{(j)}(\epsilon \delta))}. \tag{185}\]
Proof.: We can cover \(K\) by polydiscs \(B\) as per Lemma 7.5, so that we may replace \(U\) by some polydisc \(B\). On \(B\), we proceed by induction on \(j\).
If \(j=1\), then (185) coincides with (174).
For the inductive step, assume that (185) holds for a given value of \(j\), we show that it also holds with \(j\) replaced by \(j+1\).
\[\delta^{j+1}\|\sqrt{f(\cdot,\tilde{z}^{(j+1)})}\|_{L^{\infty}(V^ {(j+1)}(\delta))}+\delta^{j+2}\|\partial_{z_{1}}\sqrt{f(\cdot,\tilde{z}^{(j+1 )})}\|_{L^{\infty}(V^{(j+1)}(\delta))} \tag{186}\] \[=\delta\sup_{\xi\in[z_{j+1}-\delta,z_{j+1}+\delta]}\left(\delta^ {j}\|\sqrt{f(\cdot,\xi,\tilde{z}^{(j+1)})}\|_{L^{\infty}(V^{(j)}(\delta))}\right.\] (187) \[\left.+\delta^{j+1}\|\partial_{z_{1}}\sqrt{f(\cdot,\xi,\tilde{z}^ {(j+1)})}\|_{L^{\infty}(V^{(j)}(\delta))}\right)\] (188) \[\stackrel{{\eqref{eq:K_1}}}{{\leq}}C\delta\sup_{\xi \in[z_{j+1}-\delta,z_{j+1}+\delta]}\|\sqrt{f(\cdot,\xi,\tilde{z}^{(j+1)})}\|_ {L^{1}(V^{(j)}(\delta))}\] (189) \[\leq C\left\|\delta\sup_{\xi\in[z_{j+1}-\delta,z_{j+1}+\delta]} \sqrt{f(\cdot,\xi,\tilde{z}^{(j+1)})}\right\|_{L^{1}(V^{(j)}(\delta))}\] (190) \[\stackrel{{\eqref{eq:K_1}}}{{\leq}}C\left\|\int_{z_{j +1}-\delta}^{z_{j+1}+\delta}\sqrt{f(\cdot,\xi,\tilde{z}^{(j+1)})}\,d\xi\right\| _{L^{1}(V^{(j)}(\delta))}\] (191) \[=C\|\sqrt{f(\cdot,\tilde{z}^{(j)})}\|_{L^{1}(V^{(j)}(\epsilon \delta))}. \tag{192}\]
Permuting the entries of \(z\) if necessary, we see that the partial derivative \(\partial_{z_{1}}\) can be replaced by any \(\partial_{z_{i}}\) with \(i=1,\ldots,j+1\), and the conclusion follows.
**Lemma 7.7**.: _Assume that \(U\times Y\subset\mathbb{C}^{m}\) is a domain, and \(f^{2}:U\times Y\to\mathbb{C}\) is complex analytic. Let \(L\subset U\times Y\cap\mathbb{R}^{m}\) be compact. Assume \(f(w)\geq 0\) for all \(w\in L\). Denote \(w=(x,y)\) with \(x\in\mathbb{C}^{d}\). Assume \(g_{0}:\Omega\times Y\to L\). Assume that the singular values of \(\partial_{x}g_{0}\) are uniformly bounded above and below. For \(H\geq h\geq 0\), assume \(g_{h}:\Omega\times Y\to L\) with \(\|g_{h}(\cdot,y)-g_{0}(\cdot,y)\|_{L^{\infty}(\Omega)}\leq C_{0}h\), where \(C_{0}>0\) is some constant. There are constants \(\epsilon_{0}>0\) and \(C_{1}<\infty\)
_such that for any \(H\) such that \(h\leqslant\epsilon_{0}H\) and \(K\in T_{H}\), there holds the reverse Holder inequality:_
\[\|f(g_{h}(\cdot,y),y)\|_{L^{\infty}_{h}(K)}\leqslant C_{1}H^{-d}\|f(g_{h}(\cdot, y),y)\|_{L^{1}_{h}(K)}. \tag{193}\]
Proof.: For any \(K\in T_{H}\), let \(x(K)\in K\) be the center of mass, and \(z=z(K)=g_{0}(x(K))\). The bounds on the singular values of \(\partial_{x}g_{0}\) guarantee that \(V(\epsilon H)\subset g_{0}(K)\subset V(CH)\), where \(V(\delta)=\prod_{i=1}^{d}[z_{i}-\delta,z_{i}+\delta]\), and \(0<\epsilon<C<\infty\) are constants. In addition, from \(\int_{K}f(g_{0}(x,y),y)\,dx=\int_{g_{0}(K)}f(w)/\det((\partial_{x}g_{0})(g_{0} ^{-1}(w)))\,dx\) and the bounds on \(\partial_{x}g_{0}\), we have
\[C_{3}\|f(\cdot,y)\|_{L^{1}(V(\epsilon H))}\leqslant\|f(g_{0}(\cdot,y),y)\|_{L ^{1}(K)}\leqslant C_{4}\|f(\cdot,y)\|_{L^{1}(V(CH))}. \tag{194}\]
Thus,
\[\left|\int_{K}^{(h)}f(g_{h}(\cdot,y),y)-\int_{K}f(g_{0}(\cdot,y), y)\right| \tag{195}\] \[\leqslant\int_{K}^{(h)}|f(g_{h}(\cdot,y),y)-f(g_{0}(\cdot,y),y)|\] (196) \[+\left|\int_{K}^{(h)}f(g_{0}(\cdot,y),y)-\int_{K}f(g_{0}(\cdot,y),y)\right|\] (197) \[\leqslant|f(\cdot,y)|_{W^{1,\infty}(V(CH))}\|g_{h}(\cdot,y)-g_{0 }(\cdot,y)\|_{L^{1}(K)}+C|f(g_{0}(\cdot,y))|_{W^{1,\infty}(K)}|K|h\] \[\leqslant C|f(\cdot,y)|_{W^{1,\infty}(V(CH))}h|K|\] (198) \[\overset{\eqref{eq:f___1}}{\leqslant}C\|f(\cdot,y)\|_{L^{1}(V( \epsilon H))}H^{-d-1}|K|h\] (199) \[\leqslant C\|f(g_{0}(\cdot,y),y)\|_{L^{1}(K)}\left(\frac{h}{H} \right). \tag{200}\]
Thus, if \(C\left(\frac{h}{H}\right)<0.5\), we have that
\[\|f(g_{h}(\cdot,y),y)\|_{L^{\infty}_{h}(K)} \leqslant\|f(\cdot,y)\|_{L^{\infty}(V(CH))} \tag{201}\] \[\overset{\eqref{eq:f__1}}{\leqslant}CH^{-d}\|f(\cdot,y)\|_{L^{1} (V(\epsilon H))}\] (202) \[\leqslant CH^{-d}\|f(g_{0}(K),y)\|_{L^{1}(K)}\] (203) \[\leqslant 2CH^{-d}\|f(g_{h}(K),y)\|_{L^{1}_{h}(K)}. \tag{204}\]
**Theorem 7.8** ("A priori estimate" for the uniform discrete reverse Holder inequality).: _Let \(U\in\mathbb{C}^{d}\) be a domain and \(L\subset U\cap\mathbb{R}^{d}\) be compact. Assume that \(\partial_{x}u_{h}^{*}(t,x)\in L\) for all \(0\leq h\leq h^{(1)}\), \(t_{0}\leq t\leq\infty\) and \(x\in\operatorname{cl}\Omega\). Further assume that the singular values of the Hessian \(\partial_{x}^{2}u_{0}^{*}(t,x)\) are uniformly bounded above and below. Assume that \(F=-\log\Phi\) and that \(\Phi\) is analytic on \(U\). Assume that \(\Phi_{s}\) is uniformly bounded below on \(L\). There is a constant \(C<\infty\) such that, for all \(H\geq h\geq 0\) and \(K\in T_{H}\), and all polynomial functions \(v\) such that \(Dv\) has degree \(\alpha-1\), then_
\[\|\sqrt{F^{\prime\prime}(Dz_{h}^{*}(t))[(Dv)^{2}]}\|_{L^{\infty}(K)}\leq CH^{- d}\|\sqrt{F^{\prime\prime}(Dz_{h}^{*}(t))[(Dv)^{2}]}\|_{L^{1}(K)}. \tag{205}\]
_This is the uniform discrete reverse Holder inequality of Definition 4.4._
Proof.: From \(t\Phi-\Phi_{s}(q,s^{(t)}(q))=0\) and the implicit function theorem, we see that \(s^{(t)}(q)\) is an analytic function of \((t,q)\). Denoting \(w=(q,s^{(t)}(q))\), the function \(f^{2}(q,t,v)=(Dv)^{T}\Phi^{2}(w)F^{\prime\prime}(w)Dv=(\Phi^{\prime}(w)[Dv])^{ 2}-\Phi(w)\Phi^{\prime\prime}(w)[(Dv)^{2}]\) is complex analytic on \(U\times Y\) where \(Y=[t_{0},\infty]\times S\). Furthermore,
\[\Phi(w)=\frac{1}{t}\Phi_{s}(w)=\Theta(t^{-1}); \tag{206}\]
i.e. \(\Phi(w)t\) is uniformly bounded below and above. We put \(q(x)=g_{h}(x,t):=\partial_{x}u_{h}^{*}(t,x)\). Note that
\[\|Du_{h}^{*}(t)-Du_{0}^{*}(t)\|_{L^{\infty}(\Omega)}\leq Ch. \tag{207}\]
Now let \(\epsilon_{0}>0\) be as in Lemma 7.7. If \(\epsilon_{0}H\leq h=O(h)\) then we use the "rough" quadrature bound
\[\int_{K}^{(h)}\eta=\sum_{i}\omega_{i}\eta(x_{i})\geq\omega_{i_{0}}\eta(x_{i_{ 0}})\geq Ch^{d}\|\eta\|_{L_{h}^{\infty}(K)}, \tag{208}\]
for some suitable \(i_{0}\) such that \(\|\eta\|_{L_{h}^{\infty}(K)}=\eta(x_{i_{0}})\). In this regime, we have \(H=O(h)\) so that
\[\frac{\|\sqrt{F^{\prime\prime}(Dz_{h}^{*}(t))[(Dv)^{2}]}\|_{L_{h}^{\infty}(K) }}{\|\sqrt{F^{\prime\prime}(Dz_{h}^{*}(t))[(Dv)^{2}]}\|_{L_{h}^{1}(K)}}\leq CH^ {-d}. \tag{209}\]
In the regime \(h<\epsilon_{0}H\), (193) is the desired estimate.
Implementation: the practical MGB algorithm
Theorem 1.4 states that Algorithm MGB converges for certain large \(t\) steps, but not arbitrarily large \(t\) steps. Thus, some sort of \(t\) step size adaptation is needed. Furthermore, it was shown in Loisel (2020) that the great majority of the time, \(z_{h}^{*}(t_{k+1})\) can be computed directly from \(z_{h}^{*}(t_{k})\) with a long step size, with very few Newton steps. In view of these two facts, we now introduce the practical MGB algorithm, which operates as follows.
**Definition 8.1** (The practical MGB algorithm).: _To compute \(z_{h}^{*}(t_{k})\) from \(z_{h}^{*}(t_{k-1})\), proceed as follows._
1. _Set_ \(t_{k}=t_{k-1}\rho_{k-1}\) _and attempt to find_ \(z_{h}^{*}(t_{k})\) _by Newton iteration starting from_ \(z_{h}^{*}(t_{k-1})\)_, with a maximum of 5 Newton iterations allowed. We call this a direct step. Denote by_ \(m_{k,0}\) _the number of Newton iterations used here._
2. _If the direct step failed to converge in 5 iterations, compute instead_ \(z_{h}^{*}(t_{k})\) _by the usual MGB algorithm of definition_ 1.3_. Denote by_ \(m_{k,\ell}\) _the number of Newton iterations used on grid level_ \(\ell\)_._
3. _Stepsize adaptation. Denote_ \(m_{k}=\max_{\ell}m_{k,\ell}\)_. Set the step size_ \[\rho_{k}=\begin{cases}\rho_{k-1}^{2}&\text{if }m_{k}\leq 2,\\ \rho_{k-1}&\text{if }3\leq m_{k}\leq 5,\\ \sqrt{\rho_{k-1}}&\text{if }m_{k}\geq 6.\end{cases}\] (210)
Each step of this algorithm requires the minimization of a function by Newton iteration, which we have named Barrier.minimize\((F,c,x^{(0)},R)\). Here, \(F\) is the barrier, \(c=c[x]\) is a functional of \(x\), and \(R\) is a matrix whose columns form a basis for the relevant finite element space \(V_{h^{(\ell)}}\subset W_{0}^{1,\infty}(\Omega)\times L^{\infty}(\Omega)\). The function Barrier.minimize uses damped Newton iterations to solve
\[\texttt{Barrier.minimize}(F,c,x^{(0)},R)\approx\operatorname*{arg\,min}_{y \in x^{(0)}+\operatorname{span}R}\int_{\Omega}^{(h)}c[y]+F(Dy). \tag{211}\]
This architecture allows one to also solve boundary value problems. Indeed, if \(x^{(1)}=\texttt{Barrier.minimize}(F,c,x^{(0)},R)\) and \(x^{(0)}=(u^{(0)},s^{(0)})\) with \(u^{(0)}|_{\partial\Omega}=g\neq 0\), where \(g\) is some Dirichlet data, then since \(\operatorname{span}R\subset W_{0}^{1,\infty}(\Omega)\), we will have that also have \(u^{(1)}|_{\partial\Omega}=g\neq 0\). Thus, Dirichlet data that is injected into the first iterate, will be preserved across all subsequent iterates, allowing one to solve inhomogeneous Dirichlet problems.
### Inhomogeneous Dirichlet problems.
When solving inhomogeneous Dirichlet problems, it is important to find a smooth prolongation of \(g\) to the interior of \(\Omega\). To that end, we define \(u_{h}(g)\) to be the solution of the discrete Poisson problem
\[\Delta_{h}u_{h}=0\text{ in }\Omega\text{ and }u_{h}=g\text{ on }\partial\Omega. \tag{212}\]
Here, \(\Delta_{h}\) is the usual finite element discretization of the Laplacian on the piecewise polynomial space of degree \(\alpha\) on \(T_{h}\).
The user provides boundary data \(g\) and a forcing \(f\). We then automatically produce an initial value for \(z^{(0)}=(u^{(0)},s^{(0)})\) by putting \(u^{(0)}=u_{h}(g)\). For the slack \(s^{(0)}\), we initialize it to the constant \(s^{(0)}=1\) and iteratively double it until \(F(u^{(0)},s^{(0)})<\infty\) for all \(x\in\Omega\). Given this value of \(z^{(0)}\), we may begin the MGB Algorithm to follow the central path.
Although Theorem 1.4 states that it suffices to choose \(t_{0}=O(1)\), we found that it is better to use \(t_{0}\sim h^{d}\). This seems to result in a more moderate number of initial centering steps needed to locate the central path.
### Issues of floating point arithmetic.
One can quickly reach the limits of double precision floating point accuracy. Denote by \(\epsilon\approx 2.22\times 10^{-16}\) the "machine epsilon". We will be using piecewise quadratic elements in dimension \(d=2\), so that our stopping criterion will be \(t\sim h^{-4}\). In view of Lemma 2.3, the condition number \(\kappa\) of \(F^{\prime\prime}(Dz_{h}^{*}(t))\) may be as large as \(h^{-8}\) and it becomes practically impossible to compute Newton steps if \(h^{-8}\sim\epsilon\), i.e. \(h\sim 0.01\). At this point, the matrix \(H=\mathcal{F}_{h}^{\prime\prime}(Dz_{h}^{*}(t))\) becomes numerically singular.
To avoid complications due to floating point roundoff, we regularize our problem as follows. First, we add \(10^{-15}\|H\|_{\infty}I\) to \(H\), which has a negligible effect on \(H\) when it is well-conditioned, but prevents numerical catastrophe when \(H\) becomes extremely ill-conditioned. Second, we limit \(t\) to \(t\leqslant 10^{8}\), beyond which it is numerically futile to continue the optimization.
### The naive algorithm.
We have also implemented the naive algorithm. As it was shown in Loisel (2020) that automatic stepsize adaptation is significantly better in practice than the theoretically optimal short step size, we also use the stepsize adaptation (210) for the naive algorithm. However, (210) can only compensate
for "stiffness" caused by large \(t\) steps, and cannot compensate for the difficulty of refining the \(h\) parameter when \(t\) is already large, as we will see in the numerical experiments.
## 9 Numerical experiments
We have implemented a suite of tests based on the \(p\)-Laplacian, parametrized by \(1\leq p<\infty\), with
\[\Lambda(q)=\|q\|_{2}^{p}. \tag{213}\]
We use the self-concordant barrier
\[F(q,s)=-\log(s^{\frac{2}{p}}-\|q\|_{2}^{2})-2\log s, \tag{214}\]
see Loisel (2020) for more information on the \(p\)-Laplacian and this barrier.
When using homogeneous Dirichlet conditions, a smooth solution \(u\) will have some extrema that are interior to \(\Omega\), and at those points, one will have \(\nabla u(x)=0\). At these points, if \(1\leq p<2\), the function \(\Lambda(\nabla u(x))\) becomes a distribution, so the forcing must be singular. Since our algorithm does not handle distributional data, we prefer to solve a problem with boundary value \(g\) on \(\partial\Omega\), and forcing \(f=0\). Specifically
We report the iteration counts in Figure 2. We vary \(p\in\{1,1.1,1.2,1.5,3,4\}\), \(0.01<h\leq 1.3\), and compare Algorithm MGB and the naive algorithm with a range of \(h\) and \(t\) refinement schedules. We report the number of iterations needed in each case to obtain convergence. Each algorithm is stopped if it runs longer than 5 minutes, in which case it is deemed to have failed to converge.
The naive algorithm is parametrized by its \(h\) and \(t\) refinement schedule. We have used the schedule \(\ell=\theta\log_{2}t\), where \(\ell\) denotes the grid level \(h^{(\ell)}\). The algorithm alluded to by Schiela and Gunther (2011) can be related to the case \(\theta=0.25\); indeed, in that scenario, if large \(t\) steps are used throughout and if the \(h\) refinements converge quickly, then indeed most of the iterations will occur on the coarse grid levels. When \(\theta\geq 0.5\), at least half of the iterations are expected to be computed on the fine grid.
Unfortunately, all the versions of the naive algorithm have trouble converging for small values of \(h\). The failures are all caused by an extremely large number of Newton iterations required to perform the \(h\) refinement when \(t\) becomes large. Note that when failures occur because of large \(t\) step sizes, then smaller \(t\) step sizes can be used to allow the algorithm to
Figure 2: Iteration counts of the MGB algorithm, compared to the naïve algorithm with various refinement schedules.
Figure 4: Step sizes as a function of \(t_{k}\), for the 1.0-Laplacian.
Figure 3: Iteration counts for Algorithm MGB (left) and the naive algorithm (right), as a function of \(t_{k}\), for the 1.0-Laplacian.
converge, but with \(h\) refinements, it is impossible to find intermediate grid levels between a level \(h^{(\ell)}\) and the next one \(h^{(\ell+1)}=\frac{1}{2}h^{(\ell)}\). As a result, the only way of making the naive algorithm work is to refine the \(h\) grid earlier, e.g. as per the \(h\)-then-\(t\) schedule.
The MGB algorithm converges in all cases \(p<2\) and for all grid parameters, and the convergence is quite fast. Algorithm MGB does not converge for all values of \(h\) when \(p>2\), but this is expected because of floating point loss of accuracy in these scenarios, as noted in Loisel (2020). Briefly speaking, it is difficult to precisely locate the minimum of the function \(|x|^{p}\) when \(p\) is large. Despite this, Algorithm MGB is able to converge faster in a wider array of situations, than the naive algorithms.
We have also displayed how many Newton iterations are needed at each \(t_{k}\), for Algorithm MGB and the naive algorithm for the 1-Laplacian. We have displayed iteration counts in different color for each grid level \(\ell\). We see that the naive algorithm's iteration count skyrockets up to 60 when grid transitions occur. By contrast, Algorithm MGB never needs more than 15 iterations, and then only for the very first iteration, which is expected to take \(O(1)\) iterations. We also notice that, at \(t=17.0667\), algorithm MGB used 6 iterations on the coarsest grid level. This is the only iteration, for this specific problem instance, where the "direct step" described in Section 8 failed to converge, and the full MGB step was used instead.
As can be seen in Figure 4, this MGB step allows the path-following algorithm to take large steps at all values of \(t\), and the step size \(\rho\) never decreases to less than 1.18. By contrast, the naive algorithm struggles and resorts to step sizes \(\rho\approx 1.02\). In this case, both algorithm converged, but for larger problems (with smaller values of \(h\)), the naive algorithm fails to converge by taking too many Newton iterations and too much time.
## 10 Conclusions and outlook
Algorithm MGB is the first algorithm that is a provably optimal solver (in the big-\(\tilde{O}\) sense) for convex Euler-Lagrange problems, or nonlinear elliptic PDEs. Its running time is shown to be \(\tilde{O}(n)\) FLOPS. Numerical experiments confirm the analysis.
|
2301.02520 | Analysis of a spatio-temporal advection-diffusion model for human
behaviors during a catastrophic event | In this work, using the theory of first-order macroscopic crowd models, we
introduce a compartmental advection-diffusion model, describing the
spatio-temporal dynamics of a population in different human behaviors (alert,
panic and control) during a catastrophic event. For this model, we prove the
local existence, uniqueness and regularity of a solution, as well as the
positivity and $L^1$--boundedness of this solution. Then, in order to study the
spatio-temporal propagation of these behavioral reactions within a population
during a catastrophic event, we present several numerical simulations for
different evacuation scenarios. | K. Khalil, V. Lanza, D. Manceau, M. A. Aziz-Alaoui, D. Provitolo | 2023-01-06T14:14:14Z | http://arxiv.org/abs/2301.02520v6 | Analysis of a spatio-temporal advection-diffusion model for human behaviors during a catastrophic event
###### Abstract.
In this work, using the theory of first-order macroscopic crowd models, we introduce a compartmental advection-diffusion type model, describing the spatio-temporal dynamics of a population in different human behaviors (alert, panic and control behaviors) during a catastrophic event. For this model, we prove the local existence, uniqueness and regularity of a solution, as well as the positivity and boundedness of this solution that allows the global existence. Then, in order to study the spatio-temporal behavioral dynamics of a population during a disaster event, we present several numerical simulations for different scenarios of evacuation.
Key words and phrases:First-order macroscopic crowd models; human behaviors; mathematical modeling; panic; semigroup theory 2000 Mathematics Subject Classification: 34G20, 47D06.
\({}^{\star}\)Corresponding author: K. Khalil; [email protected]
## 1. Introduction
In the last decades the world has known some radical changes at almost all levels such as technological developments, climatic changes and human evolution. Due to these factors, populations (in both, developed or undeveloped countries) are aggressively facing many natural disasters (tsunamis, earthquakes), technological events and terrorist attacks. In particular situations of sudden, unexpected and without alert disasters require high security measures and strategies in order to predict and manage the movement and the behavior of a crowd. During a catastrophe, people may experience many different behaviors, but there is still few information about the dynamics and the succession of such behaviors during the event, see [15, 37, 38]. Thus, for the development of an efficient disaster management strategy, it becomes necessary not only to take into account the disaster features but also the different psychological human behaviors during the disaster event.
Recently, several pedestrians crowd models have been developed. Their main objective is to predict the movements of a crowd in different environmental situations. Mathematical models of human crowds are mainly divided into two categories, namely, microscopic models and macroscopic ones. In the microscopic approach, individuals are treated separately as particles and the evolution is determined using Newton's second law and by considering physical and social forces that describe the interaction among individuals as well as their interactions with the physical surrounding (for more details we refer to the works of Helbing described in [34]). The macroscopic approach, that we adopt in this paper, considers a crowd as a whole quantity, without recognizing individual differences, and it is therefore more suitable to the study of the movement of an extremely large number of pedestrians. In particular, first-order macroscopic models introduced by Hughes [25] (see also [14]) are based on a mass conservation equation and a density-velocity
closure equation with suitable boundary conditions. Furthermore, several models are devoted to study the dynamics of multiple pedestrian species in the context of macroscopic first-order systems (see [8, 9, 13, 18, 19, 21, 25, 39, 41] and references therein). In [25] Hughes studied crowds with large density of multiple pedestrian classes with different walking characteristics and destinations (identified by the index \(i\)). The system reads as
\[\left\{\begin{aligned} \partial_{t}\rho_{i}+\nabla\cdot q_{i}( \rho)&=0&\text{in }[0,T)\times\Omega,\\ q_{i}\cdot n&=q_{i}^{0}\cdot n&\text{in }[0,T)\times \partial\Omega,\\ \rho_{i}(0)&=\rho_{i}^{0}&\text{in }\Omega, \end{aligned}\right. \tag{1.1}\]
for \(i=1,\ldots,N\,(N\geq 2)\), where \(\Omega\subset\mathbb{R}^{2}\) is a bounded domain with smooth boundary \(\partial\Omega\) and \(q_{i}(\rho)=\rho_{i}v(\rho)\nu_{i}\). The velocity is defined as \(v(\rho)=A-B\tilde{\rho}\), thus it is linear with respect to the total population \(\tilde{\rho}\) and is the same for all populations. On the contrary, each population can have a different direction of the movement \(\nu_{i}\). Finally, for each population \(i\), \(q_{i}^{0}\) is the outflow from the boundary in the direction of the normal vector \(n\) and \(\rho_{i}^{0}\) is the initial data. Moreover, in [8] authors studied a nonlinear drift-diffusion model with in-outflow boundary conditions for the transport of particles. Notice that these systems are consisting of conservative equations (_i.e._ with no reactions terms). A non-conservative system is proposed in [27] but the authors consider only one population species and Neumann homogeneous boundary conditions, namely
\[\left\{\begin{aligned} \partial_{t}\rho+\nabla\cdot q(\rho)& =\alpha(t,x)f(\rho)-\beta(t,x)\rho&\text{in }[0,T)\times \Omega,\\ \nabla\rho\cdot n&=0&\text{in }[0,T) \times\partial\Omega,\\ \rho(0)&=\rho^{0}&\text{in }\Omega, \end{aligned}\right. \tag{1.2}\]
where \(q(\rho)=-\nabla\rho+f(\rho)\nabla V(\rho)\) where \(f(\rho)=\rho(1-\rho)\) and \(V:\mathbb{R}^{n}\longrightarrow\mathbb{R}\) is a potential. In all these papers, either there is no mention about the behaviors of the pedestrians or all the pedestrians have the same emotional state (mainly panic).
In the recent years the RCP (Reflex-Panic-Control) [10] and the APC (Alert-Panic-Control) [31] models have been proposed in order to describe the evolution in time of human behaviors during a catastrophe. They both consist in systems of nonlinear ODEs and have been devised following the structure of the compartmental models in mathematical epidemiology. In [30] the spatial dynamics has been integrated in the APC model, by considering the space as a discrete variable. In [11] the first system of reaction-diffusion equations describing a population with several behaviors has been proposed.
The aim of the present paper is to introduce a spatio-temporal macroscopic first-order non-conservative pedestrians model describing the evolution of a population in a sudden, unexpected and without warning signs disaster. For this purpose, starting from the nonlinear ODE APC model proposed in [30, 31], we introduce a non-conservative first-order macroscopic model to describe the spatio-temporal dynamics of a population exhibiting different behavioral states. Our model reads as
\[\left\{\begin{aligned} \partial_{t}\rho_{i}+\nabla\cdot q_{i}( \rho)&=f_{i}(\rho_{1},\ldots,\rho_{5})&\text{in }[0,T)\times \Omega,\\ q_{i}\cdot n&=q_{i}^{0}\cdot n&\text{in }[0,T) \times\partial\Omega,\\ \rho_{i}(0)&=\rho_{i}^{0}&\text{in }\Omega, \end{aligned}\right. \tag{1.3}\]
where, for each \(i=1,\ldots,5\), \(\rho_{i}\) is the density of population representing a specific human behavior, \(q_{i}:=-d_{i}\nabla\rho_{i}+\rho_{i}\vec{v}_{i}(\rho)\) is the corresponding flux, \(f_{i}\) is a given nonlinear coupling reaction therm
and \(\rho_{i}^{0}\) is the initial population density.
Therefore, we establish a mathematical analysis of the model (1.3). By virtue of the abstract boundary evolution equations and the theory of semigroups of bounded linear operators (see [2, 17, 22]), we prove the local existence, uniqueness and regularity of a solution of this system. Moreover, using the positively invariant regions approach (see [24, 28, 29]), we provide sufficient conditions on the parameters of our model ensuring the positivity as well as uniform boundedness of this solution which gives the global existence. We also provide different numerical simulations for several scenarios of evacuation of a population in an emergency situation.
The organization of this paper is as follows. In Section 2, we briefly present the equations of the APC model, we recall the structure of a first-order macroscopic pedestrian model and we introduce our advection-diffusion pedestrian APC model (1.3). Section 3 is devoted to the mathematical analysis of the model. Finally, Section 4 presents numerical results about different scenarios of evacuation.
## 2. A spatio-temporal advection-diffusion model for human behaviors during a catastrophic event
### The temporal model of the human behaviors of a population during a catastrophic event
In this section, we briefly present a model describing the evolution of a population during a sudden, rapid and unpredictable catastrophic event. The model describes the evolution of different human behaviors during a disaster, see [30, 31]. Depending on the emotional charges and their regulation, the different human reactions of a population in an emergency situation are here subdivided into three main categories, namely, alert, panic and control behaviors. The APC model considers the time evolution of the following five variables:
* the density of individuals in an alert state \(\rho_{1}(t)\),
* the density of individuals that exhibit panic behaviors \(\rho_{2}(t)\),
* the density of individuals in a state of control \(\rho_{3}(t)\),
* the density of individuals in a daily behavior before the catastrophe \(\rho_{4}(t)\),
* the density of individuals in a behavior of everyday life after the disaster \(\rho_{5}(t)\),
* the density of individuals who die during the disaster \(\rho_{6}(t)\).
The corresponding model is given by the following nonlinear ODE system that matches with the classical compartmental SIR models (\(t\geq 0\)):
\[\left\{\begin{array}{ll}\rho_{1}^{\prime}=&-(b_{1}+b_{2}+\delta_{1})\rho_{1 }+\gamma(t)q+b_{3}\rho_{3}+b_{4}\rho_{2}-\mathcal{F}(\rho_{1},\rho_{3})- \mathcal{G}(\rho_{1},\rho_{2}),\\ \rho_{2}^{\prime}=&-(b_{4}+c_{1}+\delta_{2})\rho_{2}+b_{2}\rho_{1}+c_{2}\rho_ {3}+\mathcal{G}(\rho_{1},\rho_{2})-\mathcal{H}(\rho_{2},\rho_{3}),\\ \rho_{3}^{\prime}=&-(b_{3}+c_{2}+\delta_{3})\rho_{3}+b_{1}\rho_{1}+c_{1}\rho_ {5}-\phi(t)\rho_{3}+\mathcal{F}(\rho_{1},\rho_{3})+\mathcal{H}(\rho_{2},\rho_ {3}),\\ \rho_{4}^{\prime}=&-\gamma(t)q,\\ \rho_{5}^{\prime}=&\phi(t)\rho_{3},\\ \rho_{6}^{\prime}=&\delta_{1}\rho_{1}+\delta_{2}\rho_{2}+\delta_{3}\rho_{3}, \end{array}\right. \tag{2.1}\]
with the initial condition \((\rho_{1},\rho_{2},\rho_{3},\rho_{4},\rho_{5},\rho_{6})(0)=(0,0,0,1,0,0)\), since the population is supposed to be in a daily behavior before the onset of the disaster. The detailed description of all the parameters is given in Table 3 in the Appendix. In particular, the transitions among the compartments are of two types since they model two fundamental phenomena (see Figure 1):
* **The intrinsic transitions:** They represent the behavioral transitions that depend on the individual properties (past experiences, level of risk culture etc.) They are modeled
by linear terms in system (2.1). The parameters of these transitions are \(b_{i}>0\) for \(i=1,\ldots,4\) and \(c_{j}>0\) for \(j=1,2\).
* **The imitation phenomenon:** Individuals have a tendency to imitate the behaviors of people around. Here we follow the dominant behavior principle, i.e. in the case of two populations in interaction, the most adopted behavior is the most imitated one. Thus, imitation between two behaviors depend on the ratio of the two populations. Only alert behaviors are not imitable. In system (2.1) the behavioral transitions due to imitation are represented by nonlinear terms defined as: \[\mathcal{F}(\rho_{1},\rho_{3}) :=\alpha_{13}\xi\left(\frac{\rho_{3}}{\rho_{1}+\varepsilon} \right)\rho_{1}\rho_{3},\] (2.2) \[\mathcal{G}(\rho_{1},\rho_{2}) :=\alpha_{12}\xi\left(\frac{\rho_{2}}{\rho_{1}+\varepsilon} \right)\rho_{1}\rho_{2},\] (2.3) \[\mathcal{H}(\rho_{2},\rho_{3}) :=\left(\alpha_{23}\xi\left(\frac{\rho_{3}}{\rho_{2}+\varepsilon} \right)-\alpha_{32}\xi\left(\frac{\rho_{2}}{\rho_{3}+\varepsilon}\right) \right)\rho_{2}\rho_{3}.\] (2.4) The parameter \(0<\varepsilon\ll 1\) is considered here to avoid singularities, and the following function \[\xi(w):=\frac{w^{2}}{1+w^{2}},\;w\in\mathbb{R}.\] (2.5) takes into account the dominant behavior principle, that is the fact that the rate of imitation depends on the ratio of the corresponding populations (see Figure 2). For example, if we consider the imitation phenomenon from alert to panic, we remark that if \(\frac{\rho_{2}}{\rho_{1}+\varepsilon}<1\) is small, then \(\xi\left(\frac{\rho_{2}}{\rho_{1}+\varepsilon}\right)\) is almost equal to zero, so the imitation is weak. Conversely, if \(\frac{\rho_{2}}{\rho_{1}+\varepsilon}\gg 1\) is large, it means that we have a majority of individuals in panic.
Figure 1. The transfer diagram of the APC model. The arrows indicate the transitions among the compartments.
In this case, \(\xi\left(\frac{\rho_{2}}{\rho_{1}+\varepsilon}\right)\) goes to \(1\) and alerted individuals would imitate the panic ones. The same situation holds for the other imitation transitions.
Finally, function \(\gamma\) describes the transition from the daily to the alert behavior at the beginning of the catastrophic event, while function \(\phi\) represents the transition from a control behavior to an everyday life behavior at the end of the event. It is assumed that they are time-dependent functions that depend on the nature of the disaster. In [30] the authors consider the functions \(\phi,\ \gamma:[0,\infty)\longrightarrow[0,1]\) defined as
\[\phi(t):=\zeta(t,\tau_{0},\tau_{1})\quad\text{for}\quad\tau_{0}<\tau_{1}, \tag{2.6}\]
and
\[\gamma(t):=\zeta(t,\sigma_{0},\sigma_{1})\quad\text{for}\quad\sigma_{0}<\sigma _{1}, \tag{2.7}\]
Figure 3. Example of the functions \(\gamma\) and \(\phi\), which describe the transition from the daily to the alert behaviors, and from the control to everyday life behaviors: \(\gamma(t)=\zeta(t,1,3)\) and \(\phi(t)=\zeta(t,20,70)\) respectively.
Figure 2. The function \(\xi\) involved in the imitation terms: the imitation starts very slowly, then it accelerates before slowing down and saturating.
where
\[\zeta(t,z_{0},z_{1}):=\begin{cases}0&\text{if }t<z_{0},\\ \frac{1}{2}-\frac{1}{2}\cos\left(\frac{t-z_{0}}{z_{1}-z_{0}}\pi\right)&\text{if }z_{ 0}\leq t\leq z_{1},\\ 1&\text{elswhere}.\end{cases} \tag{2.8}\]
Here \(\tau_{0}\) is the time at which the daily population starts to be impacted by the event, and \(\tau_{1}\) is the time at which the total daily population becomes alert. Additionally, \(\sigma_{0}\) represents the time at which the first individuals in a control state can go back to a pseudo-daily behavior, while \(\sigma_{1}\) is the time where this transition is highest. See Figure 3 for an example of these two functions.
Summing up the equations of (2.1), we notice that
\[\sum_{i=1}^{6}\rho_{i}(t)=1\ \text{(the total population density)},\ \forall t\geq 0. \tag{2.9}\]
For this reason we only need to solve the system (2.1) without considering the victims density \(\rho_{6}\), since the last equation is a linear combination of the others.
### First-order macroscopic pedestrian models
In this section we give the mathematical framework about first-order macroscopic pedestrian models [14, 25]. We recall that the continuity principle indicates that the variation of the density \(\rho\) of a certain quantity, in \(\Omega\subset\mathbb{R}^{2}\), is given by the balance of the flow \(q\) of this quantity across the boundary \(\partial\Omega\) and the amount of quantity produced or removed inside the domain. Mathematically, this can be expressed as follows:
\[\partial_{t}\rho+\nabla\cdot q=S(t,x,\rho),\quad t\geq 0,\ x\in\Omega, \tag{2.10}\]
where \(S\) is the source term. To be more precise, the flux \(q\) can be advective (\(q_{adv}\)), proportional to a velocity \(\vec{v}(\rho)\) where \(\rho\) is the transported scalar quantity, _i.e._\(q_{adv}=\rho\vec{v}(\rho)\); but it may also be diffusive (\(q_{diff}\)), that is, it consists of a diffusion term \(q_{diff}=-d\nabla\rho\) corresponding to the transportation of the scalar quantity according to its gradient, where \(d\) is the constant diffusion rate. Thus, we have
\[q=q_{adv}+q_{diff}:=-d\nabla\rho+\rho\vec{v}(\rho).\]
Moreover, the source term \(S\) can be divided into a pure and a reaction source terms:
\[S=S_{p}+S_{r}.\]
The pure source term denoted by \(S_{p}\) represents the self-creation/destruction rate inside the domain (using population dynamics terminology, it corresponds to the birth and death terms for example). This term will be denoted by \(S_{p}(t,x,\rho)=g(t,x,\rho)\) where \(g\) is a given linear function with respect to \(\rho\). The reaction term \(S_{r}\) describes the creation/destruction processes as a reaction to this quantity (corresponding to the reaction and interaction terms). This term will be denoted by \(S_{r}(t,x,\rho)=f(t,x,\rho)\), where \(f\) is a given nonlinear function with respect to \(\rho\). Moreover, the associated boundary conditions are expressed as follows
\[q\cdot n=q^{0}\cdot n,\]
where \(n\) is the outward unit normal vector to \(\partial\Omega\).
These boundary conditions mean that the flux crossing the boundary part \(\partial\Omega\) in the direction of the normal vector \(n\) is given by an observed flux \(q_{0}\) in the same direction \(n\). According to that, the complete first-order equation (2.10) in its non-conservative form is given by:
\[\left\{\begin{array}{ll}\partial_{t}\rho&=d\Delta\rho-\nabla\cdot(\rho\vec{v} (\rho))+g(t,x,\rho)+f(t,x,\rho),&\quad t\geq 0,\text{ in }\Omega,\\ q\cdot n&=q^{0}\cdot n,&\quad t\geq 0,\text{ on }\partial\Omega,\\ \rho(0)&=\rho_{0},&\quad\text{in }\Omega,\end{array}\right. \tag{2.11}\]
where \(\rho_{0}\) is the initial condition. For more details we refer to [3] and references therein.
### The spatio-temporal model corresponding to (2.1)
In this section, we present our advection-diffusion APC (Alert-Panic-Control) model using the first-order continuity equations (2.11) presented in Section 2.2. Notice that system (2.11) can be generalized to the case where several populations are in interaction (as is the case for the system (1.1), where each population can for example have different directions of movement). Moreover, our new model takes into account the transition and imitation mechanisms, which are not considered in the conservative system (1.1).
Let \(\Omega\subset\mathbb{R}^{2}\) be a non-empty bounded domain with Lipschitz boundary. Consider the local population densities \(\rho_{i}:[0,+\infty)\times\Omega\longrightarrow\mathbb{R}\) for \(i=1,\ldots,5\) where
* \(\rho_{1}(t,x)\) is the local density of individuals in the alert situation,
* \(\rho_{2}(t,x)\) is the local density of individuals in the panic situation,
* \(\rho_{3}(t,x)\) is the local density of individuals in the control situation,
* \(\rho_{4}(t,x)\) is the local density of individuals in the daily behavior before the disaster,
* \(\rho_{5}(t,x)\) is the local density of individuals corresponding to the daily situation after the disaster,
and let \(\rho\) be given by
\[\rho:=(\rho_{1},\rho_{2},\rho_{3},\rho_{4},\rho_{5})^{*}.\]
As for the model (2.11) we will consider an advective flux and a diffusive flux. The advection phenomenon models the movement of a population in a chosen direction, typically to escape the \(\Omega\) domain. It is therefore natural to incorporate advection terms in our case.
We set
\[q_{i,adv}:=\rho_{i}\vec{v}_{i}(\rho),\]
the advective flux, where \(\vec{v}_{i}(\rho)\) is the corresponding velocity of the population of density \(\rho_{i}\), \(i=1,\ldots,5\).
\[\vec{v}_{i}(\rho)=0,\quad\text{for }i=1,4,5.\]
For \(i=2,3\), we assume that the velocities \(\vec{v}_{i}\) satisfy
\[\vec{v}_{i}(\rho)=V_{i}(\rho)\,\vec{\nu},\quad\text{for }i=2,3,\]
where the vector \(\vec{\nu}:\overline{\Omega}\rightarrow\mathbb{R}^{2}\) represents the direction of the movement and satisfies certain conditions that will be specified later. Moreover, \(V_{2},V_{3}\) are scalar speed-density functions.

Several different types of speed-density functions are used in the literature (see e.g. [7, 14, 42]). Here, we choose a linear dependence:
\[V_{2}(\rho)=V_{2,\max}\left(1-\tilde{\rho}\right)\quad\text{and}\quad V_{3}(\rho )=V_{3,\max}\left(1-\tilde{\rho}\right),\]
where \(V_{2,\max},V_{3,\max}\) are two positive constants and
\[\tilde{\rho}:=\sum_{i=1}^{5}\rho_{i}.\]
Similar assumptions on the panic and the control maximum speeds are used in [32].
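A small sketch of this closure and of the resulting advective fluxes is given below; the maximum speeds are placeholder values, not those used in the simulations.

```python
import numpy as np

def speeds(rho, v2_max=1.5, v3_max=1.0):
    # Linear speed-density closure: V_i(rho) = V_{i,max} (1 - rho_tilde),
    # with a shared congestion factor based on the total density rho_tilde.
    rho_tilde = np.sum(rho, axis=0)            # rho has shape (5, n_points)
    factor = 1.0 - rho_tilde
    return v2_max * factor, v3_max * factor

def advective_fluxes(rho, nu, v2_max=1.5, v3_max=1.0):
    # q_{i,adv} = rho_i V_i(rho) nu for i = 2, 3; zero for i = 1, 4, 5.
    V2, V3 = speeds(rho, v2_max, v3_max)
    q2 = rho[1] * V2 * nu                      # nu: direction field, shape (2, n_points)
    q3 = rho[2] * V3 * nu
    return q2, q3
```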
Moreover, each population diffuses in the spatial domain \(\Omega\) according to the density gradient
\[q_{i,diff}:=-d_{i}\nabla\rho_{i},\]
with constant diffusion coefficients \(d_{i}\) for \(i=1,\ldots,5\). It is assumed that the whole crowd diffuses with different diffusion coefficients depending on the type of behavior.
With these assumptions and notations, the associated fluxes are given by
\[q_{i}:=-d_{i}\nabla\rho_{i}+\rho_{i}\vec{v}_{i}(\rho),\quad\text{for }i=1, \ldots,5.\]
The source (pure and reaction) terms correspond to the intrinsic transitions and the imitation terms described in Section 2.1.
Moreover, we divide the boundary \(\partial\Omega\) of \(\Omega\) in two parts, each of them corresponding to different boundary conditions:
\[\partial\Omega:=\Gamma_{1}\cup\Gamma_{2}\quad\text{with}\quad\Gamma_{1}\cap \Gamma_{2}=\emptyset.\]
Here \(\Gamma_{1}\) corresponds to the part of the boundary that cannot be crossed by the population, while \(\Gamma_{2}\) corresponds to an escape. We define the observed fluxes \(q_{i}^{0}\) on the boundary \(\partial\Omega\) by

\[q_{i}^{0}(\rho_{i}):=\left\{\begin{array}{ll}0&\text{on }\Gamma_{1}\\ \rho_{i}v_{i,\text{out}}\,\vec{\nu}&\text{on }\Gamma_{2}\end{array}\right.\]

where \(v_{i,\text{out}}\geq 0\) is the constant speed at the boundary \(\Gamma_{2}\) and \(\vec{\nu}\) is the direction of the movement. This means that each population cannot cross \(\Gamma_{1}\) and crosses the escape \(\Gamma_{2}\) with speed \(v_{i,\text{out}}\). We assume that the function \(\vec{\nu}\) satisfies:
\[\vec{\nu}(x)=\left\{\begin{array}{ll}(0,0)^{*},&x\in\Gamma_{1}\\ (\nu_{x_{1}}(x),\nu_{x_{2}}(x))^{*},&x\in\Omega\\ n(x),&x\in\Gamma_{2}\end{array}\right. \tag{2.12}\]
where \(n\) is the unit normal vector at the boundary part \(\Gamma_{2}\). This choice of vector \(\vec{\nu}\) means that, at the part of the boundary where pedestrians cannot cross, i.e. \(\Gamma_{1}\), the direction of movement vanishes, but at the target exit, i.e., \(\Gamma_{2}\), the pedestrians cross this part of the boundary in a direction parallel to the normal vector \(n\), while the desired direction of motion inside the domain \(\Omega\) is given in a suitable way that depends on the regularity of \(\Omega\), and it satisfies the following assumption:
\[\vec{\nu}_{|\Omega}\in W^{1,\infty}(\Omega,\mathbb{R}^{2}),\]
such that \(\nabla\cdot\vec{\nu}(x)\leq 0\) for all \(x\in\Omega\). For example, we can take \(\vec{\nu}_{|\Omega}\) to be normalized vectors between any point \(x\in\Omega\) and a centered (fixed) target point that lies outside \(\overline{\Omega}\), see (4.1). Thus,
the observed fluxes on the boundary \(\partial\Omega\) are given by
\[q_{i}^{0}(\rho_{i})=\rho_{i}v_{i,\mathrm{out}}\,\vec{\nu}.\]
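The example direction field mentioned above (unit vectors pointing from each interior point toward a fixed target outside \(\overline{\Omega}\)) can be sketched as follows; the target coordinates are placeholders.

```python
import numpy as np

def direction_field(points, target=np.array([2.0, 0.5])):
    # Unit vectors from each interior point x toward a fixed target point
    # lying outside the closure of Omega (placeholder coordinates).
    diff = target - points                      # points: array of shape (n, 2)
    norms = np.linalg.norm(diff, axis=1, keepdims=True)
    return diff / norms
```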
We assume that at the beginning \(t=0\), the whole population is in a daily behavior, so we consider the following initial conditions: \(\rho(t=0,\cdot)=\rho_{0}\) where \(\rho_{0}\) is given by
\[\rho_{0}:=(0,0,0,\theta,0)^{*}\quad\text{on }\Omega, \tag{2.13}\]
and \(\theta:\Omega\to[0,\infty)\) is an integrable function such that
\[\int_{\Omega}\theta(x)dx=1.\]
Thus, from (2.1) and (2.11), we obtain the following system:
\[\left\{\begin{array}{ll}\partial_{t}\rho_{1}=&d_{1}\Delta\rho_{1}-(b_{1}+b_ {2}+\delta_{1})\rho_{1}+\gamma(t)\rho_{4}+b_{3}\rho_{3}+b_{4}\rho_{2}\\ &-\mathcal{F}(\rho_{1},\rho_{3})-\mathcal{G}(\rho_{1},\rho_{2})\\ \partial_{t}\rho_{2}=&d_{2}\Delta\rho_{2}-(b_{4}+c_{1}+\delta_{2})\rho_{2}+b_ {2}\rho_{1}+c_{2}\rho_{3}\\ &-\nabla\cdot(\rho_{2}\vec{v}_{2}(\rho))+\mathcal{G}(\rho_{1},\rho_{2})- \mathcal{H}(\rho_{2},\rho_{3})\\ \partial_{t}\rho_{3}=&d_{3}\Delta\rho_{3}-(b_{3}+c_{2}+\delta_{3})\rho_{3}+b_ {1}\rho_{1}+c_{1}\rho_{2}\\ &-\phi(t)\rho_{3}-\nabla\cdot(\rho_{3}\vec{v}_{3}(\rho))+\mathcal{F}(\rho_{1},\rho_{3})+\mathcal{H}(\rho_{2},\rho_{3})\\ \partial_{t}\rho_{4}=&d_{4}\Delta\rho_{4}-\gamma(t)\rho_{4}\\ \partial_{t}\rho_{5}=&d_{5}\Delta\rho_{5}+\phi(t)\rho_{3}\end{array}\right. \text{in }\Omega,\ t\geq 0, \tag{2.14}\]
with the boundary conditions
\[d_{i}\nabla\rho_{i}\cdot n=(\rho_{i}\vec{v}_{i}(\rho)-\rho_{i}v_{i,\mathrm{ out}}\vec{\nu})\cdot n\quad\text{on }\partial\Omega,\ t\geq 0,\quad i=1,\ldots 5, \tag{2.15}\]
or more explicitly, using the definition of \(\vec{\nu}\):
\[\begin{cases}d_{i}\nabla\rho_{i}\cdot n=0&\text{on }\Gamma_{1}\ \text{ for }i=1,\cdots,5,\\ d_{i}\nabla\rho_{i}\cdot n=\rho_{i}V_{i}(\rho)-\rho_{i}v_{i,out}&\text{on }\Gamma_{2}\ \text{ for }i=1,\cdots,5,\end{cases}\]
and the initial condition
\[\rho(t=0,\cdot)=\rho_{0}\quad\text{in }\Omega. \tag{2.16}\]
## 3. Well-posedness, positivity, boundedness and global existence of the spatio-temporal model (2.14)-(2.16)
In this section, we prove the well-posedness of the spatio-temporal APC model (2.14), (2.15) and (2.16) introduced previously in Section 2. Then, we establish the positivity of the solutions, the \(L^{1}\)-boundedness of the sum of the population densities and the boundedness of the solution which gives the global existence.
### The abstract formulation and the associated boundary value Cauchy problem
To study the existence and uniqueness of solutions to the spatio-temporal APC system (2.14)-(2.16) we use the abstract formulation and semigroup theory [20, 36]. In order to do that, for \(p>2\), we define the Banach space \(X:=L^{p}(\Omega)^{5}\), the product of the Lebesgue spaces of order \(p\), equipped with the following norm
\[\|\varphi:=(\varphi_{1},\cdots,\varphi_{5})^{*}\|:=\sum_{i=1}^{5}\|\varphi_{i }\|,\]
where \(\|\cdot\|\) is the usual norm in \(L^{p}(\Omega)\). It is clear that \(X\) is a Banach lattice, i.e.,
\[|\varphi_{i}(x)|\leq|\psi_{i}(x)|\text{ for }a.e.\,x\in\Omega\text{ for all }i=1,\cdots,5\text{ implies that }\|\varphi\|\leq\|\psi\|.\]
Moreover, we define the linear closed operator \((\mathcal{A},D(\mathcal{A}))\) on \(X\) by
\[\begin{cases}\quad\mathcal{A}=diag(d_{1}\Delta,\cdots,d_{5}\Delta)\\ D(\mathcal{A})=W^{2,p}(\Omega)^{5}.\end{cases} \tag{3.1}\]
The nonlinear function \(\mathcal{K}:[0,\infty)\times X_{\alpha}\longrightarrow X\) is defined by
\[\mathcal{K}(t,\varphi)=\begin{pmatrix}-(b_{1}+b_{2}+\delta_{1})\varphi_{1}+\gamma(t)\varphi_{4}+b_{3}\varphi_{3}+b_{4}\varphi_{2}-\mathcal{F}(\varphi_{1},\varphi_{3})-\mathcal{G}(\varphi_{1},\varphi_{2})\\ -(b_{4}+c_{1}+\delta_{2})\varphi_{2}+b_{2}\varphi_{1}+c_{2}\varphi_{3}-\nabla\cdot(\varphi_{2}V_{2}(\varphi)\nu)+\mathcal{G}(\varphi_{1},\varphi_{2})-\mathcal{H}(\varphi_{2},\varphi_{3})\\ -(b_{3}+c_{2}+\delta_{3})\varphi_{3}+b_{1}\varphi_{1}+c_{1}\varphi_{2}-\phi(t)\varphi_{3}-\nabla\cdot(\varphi_{3}V_{3}(\varphi)\nu)+\mathcal{F}(\varphi_{1},\varphi_{3})+\mathcal{H}(\varphi_{2},\varphi_{3})\\ -\gamma(t)\varphi_{4}\\ \phi(t)\varphi_{3}\end{pmatrix}, \tag{3.2}\]
where we take
\[\begin{cases}\mathcal{K}_{1}(t,\varphi_{1},\nabla\varphi)=-(b_{1}+b_{2}+\delta_{1})\varphi_{1}+\gamma(t)\varphi_{4}+b_{3}\varphi_{3}+b_{4}\varphi_{2}-\mathcal{F}(\varphi_{1},\varphi_{3})-\mathcal{G}(\varphi_{1},\varphi_{2}),\\ \mathcal{K}_{2}(t,\varphi_{2},\nabla\varphi)=-(b_{4}+c_{1}+\delta_{2})\varphi_{2}+b_{2}\varphi_{1}+c_{2}\varphi_{3}-\nabla\cdot(\varphi_{2}V_{2}(\varphi)\nu)+\mathcal{G}(\varphi_{1},\varphi_{2})-\mathcal{H}(\varphi_{2},\varphi_{3}),\\ \mathcal{K}_{3}(t,\varphi_{3},\nabla\varphi)=-(b_{3}+c_{2}+\delta_{3})\varphi_{3}+b_{1}\varphi_{1}+c_{1}\varphi_{2}-\phi(t)\varphi_{3}-\nabla\cdot(\varphi_{3}V_{3}(\varphi)\nu)+\mathcal{F}(\varphi_{1},\varphi_{3})+\mathcal{H}(\varphi_{2},\varphi_{3}),\\ \mathcal{K}_{4}(t,\varphi_{4},\nabla\varphi)=-\gamma(t)\varphi_{4},\\ \mathcal{K}_{5}(t,\varphi_{5},\nabla\varphi)=\phi(t)\varphi_{3},\end{cases} \tag{3.3}\]
and \(X_{\alpha}:=\{\varphi\in W^{2\alpha,p}(\Omega)^{5}:d_{i}\partial_{n}\varphi_{ i|\partial\Omega}=0\}\) for some (fixed) \(\alpha\in(1/p+1/2,1)\) equipped with the norm \(\|\cdot\|_{0,\alpha}:=\|\cdot\|+\|\nabla\cdot\|+[\cdot]_{\zeta}\) where
\[[\varphi]_{\zeta}:=\left(\int_{\Omega\times\Omega}\frac{|\varphi(x)-\varphi( y)|^{p}}{|x-y|^{2+p\zeta}}dxdy\right)^{1/p},\quad\zeta=2\alpha-1.\]
Hence, \(\|\varphi\|_{\alpha}:=\sum_{i=1}^{5}\|\varphi_{i}\|_{0,\alpha}\) defines a norm on \(X_{\alpha}\), which makes it a Banach space. Then from the Sobolev embedding, we have
\[X_{\alpha}\hookrightarrow C^{1}(\overline{\Omega})^{5}.\]
Define the boundary space \(\partial X:=W^{1-1/p,p}(\partial\Omega)^{5}\) which is equipped with the norm
\[\|\varphi\|_{\partial X}:=\sum_{i=1}^{5}|\varphi_{i}|_{p}\]
where
\[|\varphi|_{p}=\left(\int_{\partial\Omega}|\varphi(x)|^{p}dx+\int_{\partial \Omega\times\partial\Omega}\frac{|\varphi(x)-\varphi(y)|^{p}}{|x-y|^{p}}dxdy \right)^{1/p}.\]
Since \(1-2/p>0\), we obtain the continuous embedding \(\partial X\hookrightarrow C(\partial\Omega)^{5}\). Moreover, we define the boundary operator \(\mathcal{L}:Z\longrightarrow\partial X\) by
\[\mathcal{L}\varphi=\left(d_{1}\partial_{n}\varphi_{1},\cdots,d_{5}\partial_{n }\varphi_{5}\right)^{*}\quad\text{ on }\partial\Omega. \tag{3.4}\]
The nonlinear boundary term \(\mathcal{M}:X_{\alpha}\longrightarrow\partial X\) is given by
\[\mathcal{M}\varphi=\begin{cases}(0,0,0,0,0)^{*}&\text{on }\Gamma_{1}\\ (-v_{1,out}\varphi_{1},\,-v_{2,out}\varphi_{2}+\varphi_{2}V_{2}(\varphi),\,-v_{3,out}\varphi_{3}+\varphi_{3}V_{3}(\varphi),\,-v_{4,out}\varphi_{4},\,-v_{5,out}\varphi_{5})^{*}&\text{on }\Gamma_{2}.\end{cases} \tag{3.5}\]
Consider the Banach space \(Z:=D(\mathcal{A})\) equipped with its usual norm, so that the continuous embedding \(Z\hookrightarrow X_{\alpha}\) holds. Hence the linear operator \(\mathcal{A}:Z\longrightarrow X\) is bounded and \(\mathcal{L}:Z\longrightarrow\partial X\) is bounded and surjective (see [1, 2]). The initial conditions are given by the following vector
\[\rho_{0}=(0,0,0,\theta,0)^{*}. \tag{3.6}\]
Now we can write our boundary evolution system as
\[\begin{cases}u_{t}(t)=&\mathcal{A}u(t)+\mathcal{K}(t,u(t)),\qquad t\geq 0,\\ \mathcal{L}u(t)=&\mathcal{M}(u(t)),\qquad\qquad\qquad t\geq 0,\\ u(0)=&u_{0},\end{cases} \tag{3.7}\]
where
\[u(t):=(\rho_{1}(t,\cdot),\rho_{2}(t,\cdot),\rho_{3}(t,\cdot),\rho_{4}(t,\cdot ),\rho_{5}(t,\cdot))^{*},\]
and
\[u_{0}=\rho_{0}.\]
### Preliminary results
In this section, we give our preliminary results; the proofs, included for the sake of completeness, are provided in Appendix A. In the following, we define \(\mathcal{A}_{0}:=\mathcal{A}_{|\ker(\mathcal{L})}\).
**Definition 3.1**.: _We recall that \(X\) is a Banach lattice._
_(i) A vector_ \(\varphi=(\varphi_{1},\cdots,\varphi_{5})^{*}\in X\) _is said to be positive, i.e.,_ \(\varphi(x)\geq 0\)_, if and only if,_ \(\varphi_{i}(x)\geq 0\) _for_ \(a.e.\,x\in\Omega\) _for all_ \(i=1,\cdots,5\)_. So that,_ \(X^{+}\) _denotes the positive cone of_ \(X\)_._
_(ii) A bounded operator_ \(\mathcal{T}\)_, in the Banach lattice_ \(X\)_, is said to be positive if and only if, for every_ \(\varphi\in X\)_,_ \(\varphi(x)\geq 0\) _implies_ \(\mathcal{T}\varphi(x)\geq 0\) _for_ \(a.e.\,\,x\in\Omega\)_._
_(iii) A semigroup_ \((\mathcal{T}(t))_{t\geq 0}\)_, in the Banach lattice_ \(X\)_, is said to be positive if and only if, for every_ \(\varphi\in X\)_,_ \(\varphi(x)\geq 0\) _implies_ \(\mathcal{T}(t)\varphi(x)\geq 0\) _for all_ \(t\geq 0\) _for_ \(a.e.\,\,x\in\Omega\)_._
**Proposition 3.2**.: _The following assertions hold:_
_(i) The closed operator_ \(\mathcal{A}_{0}\) _generates a contraction holomorphic_ \(C_{0}\)_-semigroup_ \((\mathcal{T}(t))_{t\geq 0}\) _on_ \(X\)_._
_(ii) The semigroup_ \((\mathcal{T}(t))_{t\geq 0}\) _generated by_ \(\mathcal{A}_{0}:=\mathcal{A}_{|\ker(\mathcal{L})}\) _is compact and positive (i.e.,_ \(\mathcal{T}(t)X^{+}\subset X^{+}\)_)._
_Moreover, the semigroup \((\mathcal{T}(t))_{t\geq 0}\) is given by the following matrix-valued operators_
\[\mathcal{T}(t)=diag(\mathcal{T}_{1}(t),\cdots,\mathcal{T}_{5}(t))^{*},\quad t \geq 0.\]
Now, we present the inter- and extrapolation spaces associated to the generator \(\mathcal{A}_{0}\). We define on \(X\) the norm \(\|x\|_{-1}=\|R(\lambda,\mathcal{A}_{0})x\|\) for \(x\in X\) and some fixed \(\lambda\in\rho(\mathcal{A}_{0})\). Then the completion of \((X,\|\cdot\|_{-1})\) is called the extrapolation space of \(X\) associated to \(\mathcal{A}_{0}\) and will be denoted by \(X_{-1}\). This means that \(\mathcal{A}_{0}\) has a unique extension \(\mathcal{A}_{-1}:D(\mathcal{A}_{-1})=X\longrightarrow X_{-1}\). Since, for every \(t\geq 0\), \(\mathcal{T}(t)\) commutes with the resolvent operator \(R(\lambda,\mathcal{A}_{0})\), the extension of \((\mathcal{T}(t))_{t\geq 0}\) to \(X_{-1}\) exists and defines an analytic semigroup \((\mathcal{T}_{-1}(t))_{t\geq 0}\) which is generated by \(\mathcal{A}_{-1}\). For \(\alpha\in(0,1)\), we define the following interpolated extrapolation spaces:
\[X_{\alpha-1}=\overline{X}^{\|\cdot\|_{\alpha-1}},\quad\text{where}\quad\|x\|_{\alpha-1}:=\sup_{\omega>0}\|\omega^{\alpha}R(\omega,\mathcal{A}_{-1}-\lambda)x\|.\]
Then, we have the following continuous embeddings:
\[D(\mathcal{A}_{0})\hookrightarrow X_{\alpha}\hookrightarrow X_{\beta}\hookrightarrow X\]
\[X\hookrightarrow X_{\alpha-1}\hookrightarrow X_{\beta-1}\hookrightarrow X_{-1},\]
for all \(0<\beta<\alpha<1\), where \(D(\mathcal{A}_{0})\) is equipped with the graph norm that makes it a Banach space.
**Remark 3.3**.: _(i) Note that the extrapolated spaces introduced here do not depend on the choice of \(\lambda\in\rho(\mathcal{A}_{0})\): any other choice of \(\lambda\) gives the same extrapolated space with an equivalent norm; this holds by virtue of the resolvent equation, see [4, 20]. We recall that the spectrum of \(\mathcal{A}_{0}\) satisfies \(\sigma(\mathcal{A}_{0})\subset(-\infty,0]\), so that \(\mathbb{C}\setminus(-\infty,0]\subset\varrho(\mathcal{A}_{0})\), see [16]._
**Remark 3.4**.: _It follows from [43, Sections 4.3.3 and 4.6.1], that the spaces \(X_{\alpha}\) for \(0<\alpha<1\) introduced here coincide with real interpolation spaces (of order \(\alpha\)) between \(D(\mathcal{A}_{0})\) and \(X\). Moreover, the embedding \(Z\hookrightarrow X_{\alpha}\) also holds._
**Proposition 3.5**.: _For each \(0\leq\delta\leq 1\), \((\mathcal{T}_{\delta-1}(t))_{t\geq 0}\) is the unique extension semigroup of \((\mathcal{T}(t))_{t\geq 0}\) with the associated generator \(\mathcal{A}_{\delta-1}\) satisfying \(D(\mathcal{A}_{\delta-1})=X_{\delta}\). Moreover, the semigroup \((\mathcal{T}_{\delta-1}(t))_{t\geq 0}\) inherits all the properties of \((\mathcal{T}(t))_{t\geq 0}\). That is, \((\mathcal{T}_{\delta-1}(t))_{t\geq 0}\) is strongly continuous, analytic, compact and positive._
**Definition 3.6**.: _We define the positive cone of \(X_{\alpha}\) by_
\[X_{\alpha}^{+}=X^{+}\cap X_{\alpha},\]
_Similarly, the positive cone of \(X_{\beta-1}\) is defined by_
\[X_{\beta-1}^{+}=X^{+}\cap X_{\beta-1}.\]
Let \(0<a_{i}<+\infty\) for \(i=1,\cdots,5\) and \(\Lambda_{a}\subset\mathbb{R}^{5}\) be such that
\[\Lambda_{a}:=\Pi_{i=1}^{5}[0,a_{i}],\]
A function \(\varphi\in X\) belongs to \(\Lambda_{a}\) if and only if \(0\leq\varphi_{i}(x)\leq a_{i}\) for \(a.e.\)\(x\in\Omega,\,i=1,\cdots,5\). Moreover,
\[\Lambda_{+\infty}:=\Pi_{i=1}^{5}[0,+\infty).\]
In particular, if only \(a_{5}=+\infty\), we set
\[\Lambda_{a,+\infty}:=\Pi_{i=1}^{4}[0,a_{i}]\times[0,+\infty).\]
Using the sets introduced above, and in the spirit of Definition 3.6, we define the following bounded positive cones.
**Definition 3.7**.: _The bounded positive cones of \(X\), \(X_{\beta-1}\) and \(X_{\alpha}\) are defined respectively by:_
\[X^{\Lambda_{a}}=\{\varphi\in X:\varphi(x)\in\Lambda_{a},\;a.e. \;x\in\Omega\},\] \[X^{\Lambda_{a}}_{\beta-1}=\{\varphi\in X_{\beta-1}:\varphi(x)\in \Lambda_{a},\;a.e.\;x\in\Omega\},\]
_and_
\[X^{\Lambda_{a}}_{\alpha}=\{\varphi\in X_{\alpha}:\varphi(x)\in\Lambda_{a},\;a. e.\;x\in\Omega\}.\]
**Remark 3.8**.: _(i) Note that, since \(X_{\alpha}\hookrightarrow C^{1}(\overline{\Omega})^{5}\), the "\(a.e.\,x\in\Omega\)" in the definition of \(X_{\alpha}^{+}\) (resp. of \(X_{\alpha}^{\Lambda_{a}}\)) becomes "for all \(\,x\in\Omega\)"._
_(ii) By definition, we have \(X^{+}=X^{\Lambda_{+\infty}}\), \(X_{\alpha}^{+}=X_{\alpha}^{\Lambda_{+\infty}}\) and \(X_{\beta-1}^{+}=X_{\beta-1}^{\Lambda_{+\infty}}\)._
In order to study the boundary evolution equation (2.14), we proceed as in [2] and later [17] (see [2, Section 12] and also [17, Section 4]), namely, the nonlinear boundary evolution problem (2.14) admits a solution, if and only if, the following semilinear Cauchy problem admits a solution
\[\begin{cases}u_{t}(t)=&\mathcal{A}_{\beta-1}u(t)+\tilde{\mathcal{K}}(t,u(t)),\qquad t\geq 0,\\ u(0)=&u_{0},\end{cases} \tag{3.8}\]
where \(\tilde{\mathcal{K}}:=\mathcal{K}+(\lambda-\mathcal{A}_{\beta-1})\mathcal{D}\mathcal{M}\), with \(\mathcal{D}\) the Dirichlet map associated to the operator \((\lambda-\mathcal{A})\), i.e., \(v=\mathcal{D}w\) is the unique solution of the abstract boundary value problem
\[\begin{cases}(\lambda-\mathcal{A})v=0\\ \qquad\mathcal{L}v=w\end{cases} \tag{3.9}\]
for each \(w\in\partial X\) for some \(\lambda\in\varrho(\mathcal{A})\). In fact, let \(u\in X\) and \(w\in\partial X\). Then, the equation
\[\begin{cases}(\lambda-\mathcal{A})v=u\\ \qquad\mathcal{L}v=w\end{cases} \tag{3.10}\]
admits the solution \(v=R(\lambda,\mathcal{A})u+\mathcal{D}w\). This solution is unique in \(Z\) since \(\lambda-\mathcal{A}\) is injective on \(D(\mathcal{A}_{0}):=\ker(\mathcal{L})\).
This approach of studying the boundary evolution equation (3.7) by equivalently studying the Cauchy problem (3.8) was first introduced, separately, in [2, 22], and was later refined in [17] and elsewhere (see the references therein). The conditions under which this approach applies are stated in [2, 17, 22]; for the sake of completeness, we recall them as follows:
**(C1)** There exists a new norm \(|\,\cdot\,|_{m}\) which is finer than the norm of \(X\), such that the space \(Z:=(D(\mathcal{A}),|\,\cdot\,|_{m})\) is complete, i.e, \(Z\) is continuously embedded in \(X\) and \(\mathcal{A}\in L(Z,X)\).
**(C2)** The restriction operator \(\mathcal{A}_{0}=\mathcal{A}_{|ker(\mathcal{L})}\) generates a strongly continuous analytic semigroup.
**(C3)** The operator \(\mathcal{L}:Z\longrightarrow\partial X\) is bounded and surjective.
**(C4)**\(Z\) is continuously embedded in \(X_{\alpha}\), i.e., \(Z\hookrightarrow X_{\alpha}\) for some \(0<\alpha<1\).
**(C5)** The functions \(\mathcal{K}:[0,+\infty)\times X_{\alpha}\longrightarrow X\) and \(\mathcal{M}:[0,+\infty)\times X_{\alpha}\longrightarrow\partial X\) are locally integrable in the first variable and continuous with respect to the second one.
We mention that all the conditions **(C1)**-**(C5)** are satisfied.
**Lemma 3.9**.: _The operator \((\lambda-\mathcal{A}_{-1})\mathcal{D}\) is bounded from \(\partial X\) to \(X_{-1}\), with norm \(\|(\lambda-\mathcal{A}_{-1})\mathcal{D}\|_{\partial X\to X_{-1}}\leq c\) for some constant \(c\geq 0\)._
**Proposition 3.10**.: _The function \(\tilde{\mathcal{K}}:[0,+\infty)\times X_{\alpha}\longrightarrow X_{\beta-1}\) is Lipschitzian in bounded sets i.e., for all \(R>0\) there exists \(L_{R}\geq 0\) such that_
\[\|\tilde{\mathcal{K}}(t,\rho)-\tilde{\mathcal{K}}(s,\upsilon)\|_{\beta-1}\leq L _{R}(|\,t-s\,\,|\,+\|\rho-\upsilon\|_{\alpha})\quad\text{ for all }\rho,\upsilon\in B(0,R)\,\text{for all }t,s\geq 0. \tag{3.11}\]
**Remark 3.11**.: _Notice that if \(1/p+1/2<\beta<\alpha<1\), then \(X_{\beta}\hookrightarrow C^{1}(\overline{\Omega})^{5}\). So, (3.11) holds also in \(X_{\beta}\)._
The following regularity Lemma is also necessary.
**Lemma 3.12**.: _Let \(0<\beta<1\) and \(\mathcal{B}:[0,T]\longrightarrow X_{\beta-1}\) such that there exist \(0<\eta\leq 1\) and \(l\geq 0\) satisfying_
\[\|\mathcal{B}(t)-\mathcal{B}(s)\|_{\beta-1}\leq l|t-s|^{\eta},\quad t,s\in[0,T]. \tag{3.12}\]
_Then,_
\[v(t)=\int_{0}^{t}\mathcal{T}_{\beta-1}(t-s)\mathcal{B}(s)ds\in D(\mathcal{A}_ {\beta-1})=X_{\beta}\quad\text{ for }0\leq t\leq T.\]
_Moreover, \(v\in C^{1}((0,T],X_{\beta})\)._
**Definition 3.13**.: _[_2, 17_]_ _Let \(u_{0}\in X_{\alpha}\) and \(T>0\). By a solution to equation (3.8), we mean a function \(u\in C([0,T],X_{\alpha})\cap C^{1}([0,T],X_{\beta-1})\), such that \(u(t)\in X_{\beta}\) for \(0\leq t\leq T\) and such that (3.8) is satisfied pointwise. In particular, this solution must satisfy the following integral formula:_
\[u(t)=\mathcal{T}(t)u_{0}+\int_{0}^{t}\mathcal{T}_{\beta-1}(t-s)\left(\mathcal{ K}(s,u(s))+(\omega-\mathcal{A}_{\beta-1})\mathcal{DM}(u(s))\right)ds,\quad t\in[0,T]. \tag{3.13}\]
### Local existence and regularity
In this Section, we prove the local existence, uniqueness and regularity of solutions to equation (3.8) which yields the local well-posedness for the model (2.14)-(2.16).
**Theorem 3.14** (Local existence and regularity).: _For each \(u_{0}\in X_{\alpha}\) there exist a maximal time \(T(u_{0})>0\) and a unique maximal solution \(u(\cdot):=u(\cdot,u_{0})\in C([0,T(u_{0})),X_{\alpha})\cap C^{1}([0,T(u_{0})),X_{\beta})\) of equation (3.8) such that_
\[u(t)=\mathcal{T}(t)u_{0}+\int_{0}^{t}\mathcal{T}_{\beta-1}(t-s)\underbrace{( \mathcal{K}(s,u(s))+(\omega-\mathcal{A}_{\beta-1})\mathcal{DM}(u(s)))}_{= \hat{\mathcal{K}}(s,u(s))}ds,\quad t\in[0,T(u_{0})). \tag{3.14}\]
_Moreover the solution \(u\) satisfies the following blow-up property:_
\[T(u_{0})=+\infty\quad\text{or}\quad\limsup_{t\to T(u_{0})^{-}}\|u(t)\|=+\infty. \tag{3.15}\]
Proof.: Let \(u_{0}\in X_{\alpha}\). Using Proposition 3.10, it follows from [33, Section 7.1] (by taking \(X_{\beta-1}\) instead of \(X\) and \(\mathcal{T}_{\beta-1}\) instead of \(\mathcal{T}\)) that there exist \(T>0\) (small enough) and a unique solution \(u\in C([0,T],X_{\alpha})\cap C^{1}([0,T],X_{\beta-1})\) of equation (3.8) satisfying (3.14). Note that, in our case, \(\mathcal{A}_{0}\) is densely defined in \(X\), so that the continuity at \(t=0\) holds. Now, to conclude, we use the integral formula (3.14) of our solution and Lemma 3.12 to prove that \(u\in C^{1}([0,T],X_{\beta})\).
First, note that \(u_{0}\in X_{\alpha}\hookrightarrow X_{\beta}\) implies that \(\mathcal{T}(t)u_{0}\in C^{1}([0,T],X_{\beta})\). Then, it suffices to prove that
\[t\mapsto v(t)=\int_{0}^{t}\mathcal{T}_{\beta-1}(t-s)\tilde{\mathcal{K}}(s,u(s ))ds\in C^{1}([0,T],X_{\beta}).\]
Note that \(u\) is Hölder continuous in \(X_{\beta-1}\) (since it is \(C^{1}\)), i.e., there exist \(\tilde{l}\geq 0\) and \(0<\vartheta\leq 1\) such that
\[\|u(t)-u(s)\|_{\beta-1}\leq\tilde{l}|t-s|^{\vartheta},\quad t,s\in[0,T]. \tag{3.16}\]
Moreover, since \(u_{0}\in X_{\alpha}\hookrightarrow X_{\beta}\), it follows, using Remark 3.11, that
\[u\in C([0,T],X_{\beta})\cap C^{1}([0,T],X_{\beta-1}).\]
Hence, \(u\) is bounded in \(X_{\beta}\), since it is continuous. Furthermore, using the reiteration theorem, we obtain that \(X_{\beta}=(X_{\alpha},X_{\beta-1})_{\tilde{\theta}}\), with \(0<\tilde{\theta}<1\). That is,
\[\|u(t)-u(s)\|_{\beta}\leq c(\alpha,\beta)\|u(t)-u(s)\|_{\alpha}^{1-\tilde{\theta}}\|u(t)-u(s)\|_{\beta-1}^{\tilde{\theta}},\quad t,s\in[0,T].\]
Therefore, we have
\[\|u(t)-u(s)\|_{\beta}\leq\tilde{c}(\alpha,\beta)|t-s|^{\tilde{\theta}\vartheta },\quad t,s\in[0,T].\]
Note that \(u\) is bounded in \(X_{\beta}\). Hence, by (3.16) and Remark 3.11 (using Proposition 3.10 for \(X_{\beta}\) instead of \(X_{\alpha}\)), we obtain that
\[\|\tilde{\mathcal{K}}(t,u(t))-\tilde{\mathcal{K}}(s,u(s))\|_{ \beta-1} \leq L_{R}(\mid t-s\mid+\|u(t)-u(s)\|_{\beta})\] \[\leq\tilde{L_{R}}(\mid t-s\mid+|t-s|^{\tilde{\theta}\vartheta}), \quad t,s\in[0,T].\]
This proves that \(\tilde{\mathcal{K}}(\cdot,u(\cdot))\) is Hölder continuous in \(X_{\beta-1}\). Then, we conclude using Lemma 3.12, by taking \(\mathcal{B}(\cdot)=\tilde{\mathcal{K}}(\cdot,u(\cdot))\).
Finally, arguing as in [33, Proposition 7.1.8], the solution \(u\) can be extended continuously to a maximal interval \([0,T(u_{0}))\), where \(T(u_{0})>0\) is the maximal existence time, in such a way that the property (3.15) is also satisfied.
**Remark 3.15**.: _We mention that, in [33, Theorem 7.1.2], the result of existence of a solution (without regularity) of equation (3.8) uses the fractional power space \(D(\mathcal{A}_{0}^{\alpha})\) as an intermediate space \(X_{\alpha}\). However, this fact does not affect our existence result since the proof can be given in a similar way for any intermediate Banach space, see the proof of [33, Theorem 7.1.2]._
### Positivity
This section aims to show the positivity of the solution of our model (2.14)-(2.16) obtained in Section 3.3. The results and proofs of the present section are inspired by those in [28, Section 2]; see also [12, Section 6.3] for the case of homogeneous boundary conditions. For \(\varphi\in X_{\alpha}\), we define
\[[(\lambda-\mathcal{A}_{-1})\mathcal{DM}]\varphi(x)=([(\lambda-\mathcal{A}_{- 1})\mathcal{DM}]\varphi_{1}(x),\cdots,[(\lambda-\mathcal{A}_{-1})\mathcal{DM }]\varphi_{5}(x))^{*},\]
and, then
\[\tilde{\mathcal{K}}(t,\varphi)(x):=\tilde{\mathcal{K}}(t,\varphi (x))\] \[= \bigg{(}\mathcal{K}_{1}(t,\varphi_{1}(x),\nabla\varphi(x))+[( \lambda-\mathcal{A}_{-1})\mathcal{DM}]_{1}\varphi_{1}(x),\cdots,\mathcal{K}_{ 5}(t,\varphi_{5}(x),\nabla\varphi(x))+[(\lambda-\mathcal{A}_{-1})\mathcal{DM }]_{5}\varphi_{5}(x)\bigg{)}^{*},\]
for \(a.e.\)\(x\in\Omega\). Then, we have the following positivity result.
**Theorem 3.16** (Positivity).: _For each \(u_{0}\in X_{\alpha}^{+}\) equation (3.8) has a unique maximal solution \(u(\cdot,u_{0})\in C([0,T(u_{0})),X_{\alpha})\cap C^{1}([0,T(u_{0})),X_{\beta})\) such that \(u(t)\in X_{\alpha}^{+}\) for all \(t\in[0,T(u_{0}))\)._
Proof.: From Proposition 3.2, it is clear that \(\mathcal{T}(t)X_{\alpha}^{+}\subset X_{\alpha}^{+}\) for all \(t\geq 0\). Let \(\varphi\in X_{\alpha}^{+}\). So from [28, Corollary 4] it suffices to show that
\[\lim_{h\to 0}h^{-1}d(\varphi+h\tilde{\mathcal{K}}(t,\varphi);X_{\beta-1}^{+})=0 \quad\text{ for each }t\geq 0. \tag{3.17}\]
First, we prove (pointwise) that
\[\lim_{h\to 0}h^{-1}d(\sup_{\omega>0}\omega^{\beta}(R(\omega,\mathcal{A}_{-1}- \lambda)\left(\varphi(x)+h[\tilde{\mathcal{K}}(t,\varphi)](x)\right);\Lambda_{+ \infty})=0\quad\text{ for each }t\geq 0,\,a.e.\,\,x\in\Omega. \tag{3.18}\]
The formula (3.18) holds since the transformation \(\sup_{\omega>0}\omega^{\beta}R(\omega,\mathcal{A}_{-1}-\lambda)\) preserves positivity and, by [28, Remark 1.2], since \(\mathcal{K}_{i}(t,0)\geq 0\) and \([(\omega-\mathcal{A}_{-1})\mathcal{DM}]_{i}0=0\) for all \(t\geq 0\), which gives that \(\tilde{\mathcal{K}}_{i}(t,0)\geq 0\). Note that the operator \(R(\omega,\mathcal{A}_{-1}-\lambda)\) is positive (see [4]), which yields the positivity of \(\sup_{\omega>0}\omega^{\beta}R(\omega,\mathcal{A}_{-1}-\lambda)\). Hence, we aim to prove that (3.17) holds. Let \(|\cdot|_{p}\) be the \(p\)-norm in \(\mathbb{R}^{5}\) defined as \(|(x_{1},\cdots,x_{5})|_{p}=(\sum_{i=1}^{5}|x_{i}|^{p})^{\frac{1}{p}}\). Then, for \(\varphi\in X\), the norm
\[\|\varphi\|_{p}:=(\int_{x\in\Omega}\mid\varphi(x)\mid_{p}dx)^{\frac{1}{p}}\]
is equivalent to the norm on \(X\), this is due the fact that all the norms in \(\mathbb{R}^{5}\) are equivalent. Similarly, using this new norm \(\|\cdot\|_{p}\), we can define an associated equivalent norm for \(X_{\alpha}\), and the new equivalent norm on \(X_{\beta-1}\) which is given by
\[\|\cdot\|_{\beta-1,p}=\sup_{\omega>0}\|\omega^{\beta}R(\omega,\lambda- \mathcal{A}_{-1})\cdot\|_{p}.\]
Let us define the Euclidean projection onto \(\Lambda_{+\infty}\), \(\pi_{\Lambda}:\mathbb{R}^{5}\longrightarrow\Lambda_{0,+\infty}\), by
\[\mid x-\pi_{\Lambda}x\mid=d(x,\Lambda_{+\infty}).\]
Notice that the mapping \(\pi_{\Lambda}\) is well-defined and continuous on \(\mathbb{R}^{5}\) (in fact, it is \(1\)-Lipschitz). Let \(\varepsilon>0\) be given, let \(\delta>0\) be the corresponding constant used in the estimate below, and define
\[\varphi_{h}(x):=\pi_{\Lambda}(\varphi(x)+h[\tilde{\mathcal{K}}(t,\varphi)](x) )\quad\text{for $t\geq 0$, $x\in\Omega$, $h>0$.}\]
So, \(\varphi_{h}\in X_{\beta-1}^{\Lambda_{+\infty}}\) and
\[d(\varphi+h\tilde{\mathcal{K}}(t,\varphi);X_{\beta-1}^{\Lambda_ {+\infty}})^{p} \leq\|\varphi+h\tilde{\mathcal{K}}(t,\varphi)-\varphi_{h}\|_{\beta -1}^{p}\] \[\leq\sup_{\omega>0}\omega^{\beta}\int_{x\in\Omega}\mid R(\omega, \lambda-\mathcal{A}_{-1})\left(\varphi(x)+h[\tilde{\mathcal{K}}(t,\varphi)](x )-\varphi_{h}(x)\right)\mid_{p}^{p}dx\] \[=\int_{x\in\Omega}d(\sup_{\omega>0}\omega^{\beta}R(\omega, \lambda-\mathcal{A}_{-1})\left(\varphi(x)+h[\tilde{\mathcal{K}}(t,\varphi)](x )\right);\Lambda_{0,+\infty})^{p}dx \tag{3.19}\]
Moreover, for \(0<h\leq\delta\) it follows in view of the convexity of the operator distance, by the continuity of \(\tilde{\mathcal{K}}\) and using (3.18), that
\[\int_{\Omega}d(\sup_{\omega>0}\omega^{\beta}(R(\omega,\mathcal{A}_{-1}-\lambda )\left(\varphi(x)+h[\tilde{\mathcal{K}}(t,\varphi)](x)\right);\Lambda_{0,+ \infty})^{p}dx\leq|\Omega|(h\varepsilon)^{p}.\]
Hence, by (3.19) we have
\[d(\varphi+h\tilde{\mathcal{K}}(t,\varphi);X_{\beta-1}^{+})\leq|\Omega|^{\frac{ 1}{p}}h\varepsilon\quad\text{for all $0<h\leq\delta$.}\]
This proves the result.
### \(L^{1}\)-boundedness of the total population density
In this Section, we show the \(L^{1}\)-boundedness of the total population density of our model (2.14)-(2.16). Let \(u_{0}\in X_{\alpha}^{+}\) and let \(u(t,u_{0})=(\rho_{1}(t,\cdot),\cdots,\rho_{5}(t,\cdot))^{*}\) for all \(t\in[0,T(u_{0}))\) be the corresponding maximal solution. It is clear, from Theorem 3.14, that \(u(\cdot,u_{0})\in X_{\alpha}\hookrightarrow C^{1}(\overline{\Omega})^{5}\hookrightarrow L^{1}(\Omega)^{5}\). This means that each \(\rho_{i}\) is bounded on \(\Omega\) and therefore belongs to \(L^{1}(\Omega)\). Hence, the following map
\[t\in[0,T(u_{0}))\longmapsto U(t):=\int_{\Omega}\left[\rho_{1}(t,x)+\cdots+ \rho_{5}(t,x)\right]dx\in\mathbb{R},\]
is well-defined. Furthermore, we have
**Proposition 3.17** (\(L^{1}\)-boundedness).: \[0\leq U(t)\leq 1\quad\text{ for all }t\in[0,T(u_{0})).\]
Proof.: By assumption, we have
\[U(0)=\int_{\Omega}\theta(x)dx=1.\]
On the other hand, the positivity result in Theorem 3.16 gives
\[U(t)\geq 0\quad\text{ for all }t\in[0,T(u_{0})).\]
Moreover, the mapping \(U\) is well-defined and continuously differentiable on \([0,T(u_{0}))\). Next, we show that
\[\frac{d}{dt}U(t)\leq 0,\quad t\in[0,T(u_{0})).\]
Indeed, using the Green-Ostrogradski formula, we obtain that
\[\frac{d}{dt}U(t) =\int_{\Omega}\partial_{t}\left[\rho_{1}+\rho_{2}+\rho_{3}+\rho_{4}+\rho_{5}\right]dx\] \[=\sum_{i=1}^{5}d_{i}\int_{\Omega}\Delta\rho_{i}\,dx-\int_{\Omega}\nabla\cdot(\rho_{2}\vec{v}_{2}(\rho))\,dx-\int_{\Omega}\nabla\cdot(\rho_{3}\vec{v}_{3}(\rho))\,dx-\sum_{i=1}^{3}\delta_{i}\int_{\Omega}\rho_{i}\,dx\] \[=\sum_{i=1}^{5}d_{i}\int_{\partial\Omega}\nabla\rho_{i}\cdot n\,dx-\int_{\partial\Omega}\rho_{2}\vec{v}_{2}(\rho)\cdot n\,dx-\int_{\partial\Omega}\rho_{3}\vec{v}_{3}(\rho)\cdot n\,dx-\sum_{i=1}^{3}\delta_{i}\int_{\Omega}\rho_{i}\,dx\] \[=-\sum_{i=1}^{5}v_{i,out}\int_{\Gamma_{2}}\rho_{i}\,dx-\sum_{i=1}^{3}\delta_{i}\int_{\Omega}\rho_{i}\,dx\leq 0,\quad t\in[0,T(u_{0})).\]
The last estimate is a consequence of the positivity of the terms \(\rho_{i}\), \(i=1,\cdots,5\). Thus, \(U\) is non-increasing, and we have
\[U(t)\leq U(0)=1\quad\text{ for all }t\in[0,T(u_{0})).\]
### Uniform boundedness and global existence
In this Section, using the method of positively invariant regions, we prove that the maximal solution of equation (3.8) admits a bounded positively invariant region. This fact guarantees the uniform boundedness of the solution (see Theorem 3.18), which yields the global existence (see Corollary 3.21). We also recall that the result obtained in this section is new and generalizes those in [28, Section 2] and [12, Section 6.3] to the case of inhomogeneous boundary conditions. Moreover, we take \(a=(a_{1},\cdots,a_{5})^{*}\) to be a constant vector with positive components \(a_{i}\) such that \(\sum_{i=1}^{5}a_{i}>1\).
**Theorem 3.18** (Uniform boundedness).: _For each \(u_{0}\in X^{\Lambda_{a}}\) equation (3.8) has a unique maximal solution \(u(\cdot,u_{0})\in C([0,T(u_{0})),X_{\alpha})\cap C^{1}([0,T(u_{0})),X_{\beta})\) such that \(u(t)\in X^{\Lambda_{a}}\) for all \(t\in[0,T(u_{0}))\), provided that_
\[\begin{cases}a_{4}\leq(b_{1}+b_{2}+\delta_{1})a_{1};\quad b_{3}\leq \alpha_{13}\xi(\frac{a_{3}}{a_{1}+\varepsilon})a_{1};\quad b_{4}\leq\alpha_{12} \xi(\frac{a_{2}}{a_{1}+\varepsilon})a_{1},\\ b_{2}a_{1}+\alpha_{32}\xi(\frac{a_{2}}{a_{3}+\varepsilon})a_{2}a_{3}\leq(b_{4 }+c_{1}+\delta_{2})a_{2};\quad c_{2}a_{3}+\alpha_{12}\xi(\frac{a_{2}}{a_{1}+ \varepsilon})a_{1}a_{2}\leq\alpha_{23}\xi(\frac{a_{3}}{a_{2}+\varepsilon})a_{2 }a_{3},\\ b_{1}a_{1}+\alpha_{23}\xi(\frac{a_{3}}{a_{2}+\varepsilon})a_{2}a_{3}\leq(b_{3 }+c_{2}+\delta_{3})a_{3};\quad c_{1}a_{2}+\alpha_{13}\xi(\frac{a_{3}}{a_{1}+ \varepsilon})a_{1}a_{3}\leq\alpha_{32}\xi(\frac{a_{2}}{a_{3}+\varepsilon})a_{2 }a_{3}.\end{cases} \tag{3.20}\]
Proof.: Let \(\varphi\in X^{\Lambda_{a}}\). First, we prove that \(\mathcal{T}(t)X^{\Lambda_{a}}\subset X^{\Lambda_{a}}\). That is, from the invariance result [36, Theorem 5.1], it suffices to show that \(R(\lambda,\mathcal{A}_{0})X^{\Lambda_{a}}\subset X^{\Lambda_{a}}\) for some \(\lambda\in\rho(\mathcal{A}_{0})\) large enough. This follows immediately since we have
\[(\lambda-\mathcal{A}_{0})a=\lambda a\geq a,\]
for large \(\lambda\in\rho(\mathcal{A}_{0})\). Recall that \(R(\lambda,\mathcal{A}_{0})\) is a positive operator.
Furthermore, we show the invariance for the solution under the set
\[\Lambda_{a,\infty}:=\Pi_{i=1}^{4}[0,a_{i}]\times[0,\infty).\]
which leads to the invariance under the bounded region \(\Pi_{i=1}^{4}[0,a_{i}]\) for the vector components \((\rho_{1},\cdots,\rho_{4})^{*}\); the bound \(\rho_{5}\leq a_{5}\) is treated separately. So it suffices to prove that
\[\lim_{h\to 0}h^{-1}d(\varphi+h\tilde{\mathcal{K}}(t,\varphi);X^{\Lambda_{a, \infty}}_{-1})=0, \tag{3.21}\]
which amounts, in view of the proof of Theorem 3.16, to proving that
\[\lim_{h\to 0}h^{-1}d(\sup_{\omega>0}\omega^{\beta}(R(\omega,\mathcal{A}_{-1}- \lambda)\left(\varphi(x)+h[\tilde{\mathcal{K}}(t,\varphi)](x)\right);\Lambda_{ a,\infty})=0\quad\text{ for all }t\geq 0,\,a.e.\,\,x\in\Omega. \tag{3.22}\]
So, in view of [24, Proposition 12], formula (3.22) holds if we check that for \(\rho=a\), we have
\[\tilde{\mathcal{K}}_{i}(t,a_{i},\nabla a)\leq 0.\]
Thus, it suffices to prove that \(\mathcal{K}_{i}(t,a_{i},\nabla a)+[(\lambda-\mathcal{A}_{-1})\mathcal{D}\mathcal{M}]_{i}a_{i}\leq 0\) for \(i=1,\cdots,4\). Note that, by construction, \(\tilde{\mathcal{K}}_{4}(t,a_{4},\nabla a)\leq 0\). Moreover, since \(\nabla a_{i}\cdot n=0\) for \(i=1,\cdots,5\), we obtain that
\[\mathcal{L}a=(d_{1}\nabla a_{1}\cdot n,d_{2}\nabla a_{2}\cdot n,d_{3}\nabla a_ {3}\cdot n,d_{4}\nabla a_{4}\cdot n,d_{5}\nabla a_{5}\cdot n)^{*}=(0,0,0,0,0)^ {*},\]
which yields that \(\mathcal{D}\mathcal{M}a=(0,0,0,0,0)^{*}\), and then, that
\[(\lambda-\mathcal{A}_{\beta-1})\mathcal{D}\mathcal{M}a=(0,0,0,0,0)^{*}.\]
Therefore, we need only to examine the terms \(\mathcal{K}_{i}(t,a_{i},\nabla a)\). That is, we have
\[\nabla(V_{i,max}a_{i}(1-\sum_{j=1}^{5}a_{j})\vec{\nu}(x))=V_{i,max}a_{i}(1-\sum_{j=1}^{5}a_{j})\nabla(\vec{\nu}(x))\quad\text{for }i=2,3.\]
So, since by definition of \(\vec{\nu}\) we have \(\nabla(\vec{\nu}(x))\leq 0\), it follows that
\[\nabla(V_{i,max}a_{i}(1-\sum_{j=1}^{5}a_{j})\vec{\nu}(x))\geq 0\]
by the fact that \(\sum_{j=1}^{5}a_{j}>1\). Hence, if \((\rho_{1},\cdots,\rho_{4})=(a_{1},\cdots,a_{4})\), then we have
\[\mathcal{K}_{1}(t,a_{1},\nabla a) =-(b_{1}+b_{2}+\delta_{1})a_{1}+\gamma(t)a_{4}+b_{3}a_{3}-\mathcal{ F}(a_{1},a_{3})-\mathcal{G}(a_{1},a_{2})+b_{4}a_{2}\] \[=\underbrace{\gamma(t)a_{4}-(b_{1}+b_{2}+\delta_{1})a_{1}}_{C_{1} ^{1}}+\underbrace{\left[b_{3}-\alpha_{13}\xi(\frac{a_{3}}{a_{1}+\varepsilon})a _{1}\right]}_{C_{2}^{1}}a_{3}+\underbrace{\left[b_{4}-\alpha_{12}\xi(\frac{a_ {2}}{a_{1}+\varepsilon})a_{1}\right]a_{2}}_{C_{3}^{1}}.\]
Hence, \(C_{1}^{1}\leq 0\) if \(a_{4}\leq(b_{1}+b_{2}+\delta_{1})a_{1}\). Moreover, \(b_{3}\leq\alpha_{13}\xi(\frac{a_{3}}{a_{1}+\varepsilon})a_{1}\) implies that \(C_{2}^{1}\leq 0\), and \(C_{3}^{1}\leq 0\) if \(b_{4}\leq\alpha_{12}\xi(\frac{a_{2}}{a_{1}+\varepsilon})a_{1}\). Furthermore,
\[\mathcal{K}_{2}(t,a_{2},\nabla a) =-(b_{4}+c_{1}+\delta_{2})a_{2}+b_{2}a_{1}+c_{2}a_{3}-\nabla(V_{2, max}a_{2}(1-\sum_{j=1}^{5}a_{j})\nu(x))+\mathcal{G}(a_{1},a_{2})-\mathcal{H}(a_{2},a_ {3})\] \[\leq\underbrace{-(b_{4}+c_{1}+\delta_{2})a_{2}+b_{2}a_{1}+\alpha_ {32}\xi(\frac{a_{2}}{a_{3}+\varepsilon})a_{2}a_{3}}_{C_{1}^{2}}\] \[+\underbrace{c_{2}a_{3}+\alpha_{12}\xi(\frac{a_{2}}{a_{1}+ \varepsilon})a_{1}a_{2}-\alpha_{23}\xi(\frac{a_{3}}{a_{2}+\varepsilon})a_{2}a _{3}}_{C_{2}^{2}}.\]
So, \(C_{1}^{2}\leq 0\) if \((b_{4}+c_{1}+\delta_{2})a_{2}\geq b_{2}a_{1}+\alpha_{32}\xi(\frac{a_{2}}{a_{ 3}+\varepsilon})a_{2}a_{3}\), and \(c_{2}a_{3}+\alpha_{12}\xi(\frac{a_{2}}{a_{1}+\varepsilon})a_{1}\leq\alpha_{23 }\xi(\frac{a_{3}}{a_{2}+\varepsilon})a_{3}\) implies that \(C_{2}^{2}\leq 0\). Finally, we have
\[\mathcal{K}_{3}(t,a_{3},\nabla a) =-(b_{3}+c_{2}+\delta_{3})a_{3}+b_{1}a_{1}+c_{1}a_{2}-\psi(t)a_{3}- \nabla(V_{3,max}a_{3}(1-\sum_{j=1}^{5}a_{j})\nu(x))+\mathcal{F}(a_{1},a_{3})+ \mathcal{H}(a_{2},a_{3})\] \[\leq\underbrace{-(b_{3}+c_{2}+\delta_{3})a_{3}+b_{1}a_{1}+\alpha _{23}\xi(\frac{a_{3}}{a_{2}+\varepsilon})a_{2}a_{3}}_{C_{1}^{3}}\] \[+\underbrace{c_{1}a_{2}+\alpha_{13}\xi(\frac{a_{3}}{a_{1}+ \varepsilon})a_{1}a_{3}-\alpha_{32}\xi(\frac{a_{2}}{a_{3}+\varepsilon})a_{2}a _{3}}_{C_{2}^{3}}.\]
That is, we obtain that \((b_{3}+c_{2}+\delta_{3})a_{3}\geq b_{1}a_{1}+\alpha_{23}\xi(\frac{a_{3}}{a_{2}+\varepsilon})a_{2}a_{3}\) implies \(C_{1}^{3}\leq 0\), and \(c_{1}a_{2}+\alpha_{13}\xi(\frac{a_{3}}{a_{1}+\varepsilon})a_{1}\leq\alpha_{32}\xi(\frac{a_{2}}{a_{3}+\varepsilon})a_{2}\) implies \(C_{2}^{3}\leq 0\). Then, (3.22), and so (3.21), holds uniformly in \(x\), in a similar way as in the proof of Theorem 3.16. On the other hand, to show that \(0\leq\rho_{5}(t)\leq a_{5}\) for some \(a_{5}>0\), we use the variation of constants formula, i.e., (3.14). In fact, we have
\[\rho_{5}(t) =\int_{0}^{t}\mathcal{T}_{5}(t-s)\psi(s)\rho_{3}(s)ds\] \[\leq T(u_{0})a_{3}:=a_{5}.\]
This proves the result for \(a_{5}=T(u_{0})a_{3}\).
**Remark 3.19**.: _To verify conditions (3.20), and to see that they do not contradict each other, for simplicity and since the parameters \(a_{i}>0\) for \(i=1,\cdots,4\), we may take for example \(\varepsilon=0\),
\(a_{1}=m_{0}a_{4}=m_{1}a_{2}\) and \(a_{3}=a_{2}=\tilde{a}\) where \(m_{0}>0\) is chosen such that \(1\leq m_{0}(b_{1}+b_{2}+\delta_{1}),\) and \(m_{1}>0\) is such that:_
\[\left\{\begin{aligned} & b_{3}\leq\frac{m_{1}}{1+m_{1}^{2}} \alpha_{13}\tilde{a};\quad b_{4}\leq\frac{m_{1}}{1+m_{1}^{2}}\alpha_{12}\tilde {a}\\ & m_{1}b_{2}\leq c_{1};\quad\frac{\alpha_{32}}{2}\tilde{a}\leq b_{ 4};\quad c_{2}+\frac{m_{1}}{1+m_{1}^{2}}\alpha_{12}\tilde{a}\leq\frac{\alpha_{ 23}}{2}\tilde{a}\\ & m_{1}b_{1}\leq c_{2};\quad\frac{\alpha_{23}}{2}\tilde{a}\leq b_{ 3};\quad c_{1}+\frac{m_{1}}{1+m_{1}^{2}}\alpha_{13}\tilde{a}\leq\frac{\alpha_{ 32}}{2}\tilde{a}.\end{aligned}\right. \tag{3.23}\]
_Thus, for a special choice of the parameters \(m_{0}\), \(m_{1}\) and \(\tilde{a}\), the condition (3.23) holds where we may have \(c_{1}<c_{2}\), \(b_{1}<b_{2}\), \(\alpha_{32}>\alpha_{13}\) and \(\alpha_{12}>\alpha_{23}\) corresponding to the case of a population with low risk culture studied in our simulation results in Section 4._
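As a complementary numerical aid, the inequalities in (3.20) can also be evaluated directly for candidate values of \(a\) and of the model parameters. The sketch below is purely illustrative: the imitation function \(\xi\) is defined in Section 2 and not recalled here, so it is passed as an argument, and the sample call uses a placeholder \(\xi\) together with made-up values of \(a\) and of the \(\delta_{i}\).

```python
def check_invariance_conditions(a, p, xi, eps=1e-3):
    """Evaluate the seven inequalities of (3.20) for a candidate vector
    a = (a1,...,a5) and a parameter dictionary p; recall that the theorem
    also requires sum(a) > 1."""
    a1, a2, a3, a4, _a5 = a
    x13 = p["alpha13"] * xi(a3 / (a1 + eps))
    x12 = p["alpha12"] * xi(a2 / (a1 + eps))
    x23 = p["alpha23"] * xi(a3 / (a2 + eps))
    x32 = p["alpha32"] * xi(a2 / (a3 + eps))
    return [
        a4 <= (p["b1"] + p["b2"] + p["delta1"]) * a1,
        p["b3"] <= x13 * a1,
        p["b4"] <= x12 * a1,
        p["b2"] * a1 + x32 * a2 * a3 <= (p["b4"] + p["c1"] + p["delta2"]) * a2,
        p["c2"] * a3 + x12 * a1 * a2 <= x23 * a2 * a3,
        p["b1"] * a1 + x23 * a2 * a3 <= (p["b3"] + p["c2"] + p["delta3"]) * a3,
        p["c1"] * a2 + x13 * a1 * a3 <= x32 * a2 * a3,
    ]

# Illustrative call: placeholder xi and made-up values of a and delta_i.
params = dict(b1=0.1, b2=0.2, b3=0.001, b4=0.001, c1=0.1, c2=0.4,
              alpha13=0.6, alpha12=0.7, alpha23=0.6, alpha32=0.7,
              delta1=0.1, delta2=0.1, delta3=0.1)
print(check_invariance_conditions((2.0, 1.0, 1.0, 0.5, 1.0), params,
                                  xi=lambda s: s**2 / (1.0 + s**2)))
```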
**Remark 3.20**.: _Notice that the condition (3.20) is sufficient to obtain the pointwise subtangential condition (3.22). That is, by construction, the different cases for \(\tilde{\mathcal{K}}(t,a_{i},0)\) where the \(a_{j}\) vanish for some (not all) \(j\neq i\) hold true under the condition (4)._
**Corollary 3.21** (Global existence).: _Equation (3.8) has a unique global bounded and positive solution \(u(\cdot,u_{0})\in C([0,T(u_{0})),X)\cap C^{1}((0,T(u_{0})),X)\)._
Proof.: From Theorem 3.14 we obtain that equation (3.8) has a unique positive maximal solution \(u(\cdot,u_{0})\in C([0,T(u_{0})),X_{\alpha})\cap C^{1}([0,T(u_{0})),X_{\beta})\) such that (3.15) holds. Therefore, from Theorem 3.18 the solution \(u\) is bounded in \(X_{\alpha}\) which yields from (3.15) that \(T(u_{0})=+\infty\).
## 4. Numerical Simulations
In this section we present several numerical simulations for different scenarios of evacuation of a population during a catastrophic event. In order to highlight the behavior of the populations in such an event, we study the case of no back-to-daily population, which corresponds to the case where
\[\phi(t)=0\quad\text{ for all }t\geq 0,\]
and, for simplicity, we take \(\gamma(t)=1\) for all \(t\geq 0\). In this case the system (3.8) is not time-dependent (it is autonomous) and all the results of Section 3 still hold.
Here the population is assumed to have a low risk culture. All the parameters of the spatio-temporal APC model (2.14)-(2.16) are set as in Table 1.
With regard to the diffusion process, we suppose that the crowd in an alert state hardly diffuses, since in an alert behavior, pedestrians are moving to look for information and to identify the hazard. Thus, the diffusion coefficient \(d_{1}\) should be considered small compared to the ones of the other populations. Moreover, here it is assumed that the most diffusive population is the panic population, since in panic behavior pedestrians move randomly in different directions, and thus \(d_{2}\) should be considered the largest diffusion coefficient.
Thus, for the diffusivity coefficients of the control, daily, and return-to-daily behaviors, we assume \(d_{1}<d_{i}<d_{2}\) for \(i=3,4,5\).
We assume that the domain presents a target escape region (denoted by \(\Gamma_{2}\)), thus the desired direction vector is defined by (2.12) where \(\vec{\nu}(x)_{|\Omega}=(\nu_{x_{1}},\nu_{x_{2}})^{\prime}\) is given by:
\[\left\{\begin{array}{l}\nu_{x_{1}}=-\dfrac{x_{1}-x_{1}^{p}}{\sqrt{(x_{1}-x_{1} ^{p})^{2}+(x_{2}-x_{2}^{p})^{2}}},\\ \nu_{x_{2}}=-\dfrac{x_{2}-x_{2}^{p}}{\sqrt{(x_{1}-x_{1}^{p})^{2}+(x_{2}-x_{2}^{ p})^{2}}},\end{array}\right. \tag{4.1}\]
where \((x_{1}^{p},x_{2}^{p})\) is a point centered with respect to \(\Gamma_{2}\) and located outside \(\overline{\Omega}\), see Figure 4. So, the desired directions in both situations of control and panic are supposed to be the same, given by \(\vec{\nu}(x)\). For more details about the desired direction of pedestrians, we refer to the works of Hughes [25] and also to the references [14, 44].
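Formula (4.1) is straightforward to evaluate numerically. The following sketch computes the unit direction field on a grid; the unit-square domain and the illustrative target point \((x_{1}^{p},x_{2}^{p})=(1.2,0.5)\), placed to the right of the exit \(\Gamma_{2}\), are assumptions made here only for the sake of the example.

```python
import numpy as np

def desired_direction(x1, x2, xp=(1.2, 0.5)):
    """Unit vector field of (4.1), pointing from (x1, x2) towards the
    target point (x1^p, x2^p) located outside the domain behind the exit."""
    dx1, dx2 = x1 - xp[0], x2 - xp[1]
    r = np.sqrt(dx1**2 + dx2**2)
    r = np.where(r == 0.0, 1.0, r)     # guard against division by zero
    return -dx1 / r, -dx2 / r          # (nu_{x1}, nu_{x2})

# Evaluate nu on a uniform grid of the (assumed) unit-square domain.
x1, x2 = np.meshgrid(np.linspace(0.0, 1.0, 51),
                     np.linspace(0.0, 1.0, 51), indexing="ij")
nu1, nu2 = desired_direction(x1, x2)
assert np.allclose(nu1**2 + nu2**2, 1.0)   # nu has unit length everywhere
```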
In the following, we consider three different scenarios for the evacuation of a population whose aim is to escape by the unique exit \(\Gamma_{2}\) (see Figure 5):
**Scenario 1: Evacuation of one centered cluster population.** Here we consider a pedestrian population located in a single group within the domain.
**Scenario 2: Evacuation of a population subdivided into three groups.** Here we consider a population subdivided into three separated groups of pedestrians in different spatial localizations.
**Scenario 3: Evacuation of a population with an obstacle in front of the exit.** Here we take into account the situation in which an obstacle is located between the exit and the population concentrated in a single group in the center of the domain.
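For concreteness, the three initial configurations described above (see also Figure 5) can be encoded, for instance, with Gaussian bumps; the bump centers and widths, the obstacle geometry and the normalization to \(\int_{\Omega}\theta\,dx=1\) used below are illustrative assumptions rather than values specified in the text.

```python
import numpy as np

def gaussian_bump(x1, x2, center, width=0.08):
    return np.exp(-((x1 - center[0])**2 + (x2 - center[1])**2) / (2.0 * width**2))

def initial_density(x1, x2, scenario, h):
    """Initial daily-behaviour density theta for the three scenarios,
    normalized so that its discrete integral equals 1 (cf. U(0)=1)."""
    if scenario == 1:        # Scenario 1: one centered cluster
        theta = gaussian_bump(x1, x2, (0.5, 0.5))
    elif scenario == 2:      # Scenario 2: three separated groups
        theta = sum(gaussian_bump(x1, x2, c)
                    for c in [(0.3, 0.3), (0.3, 0.7), (0.6, 0.5)])
    elif scenario == 3:      # Scenario 3: centered cluster + obstacle near the exit
        theta = gaussian_bump(x1, x2, (0.5, 0.5))
        obstacle = (np.abs(x1 - 0.75) < 0.03) & (np.abs(x2 - 0.5) < 0.15)
        theta = np.where(obstacle, 0.0, theta)   # no pedestrians inside the obstacle
    else:
        raise ValueError("scenario must be 1, 2 or 3")
    return theta / (theta.sum() * h**2)          # enforce int_Omega theta dx = 1

h = 1.0 / 50
x1, x2 = np.meshgrid(np.linspace(0.0, 1.0, 51),
                     np.linspace(0.0, 1.0, 51), indexing="ij")
theta = initial_density(x1, x2, scenario=1, h=h)
```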
\begin{table}
\begin{tabular}{|c|c|}
\hline
**Diffusion** & \(d_{1}=0.001\), \(d_{2}=0.05\), \(d_{3}=0.01\), \(d_{4}=0.01\) \\
\hline
**Advection** & \(V_{2,max}=0.3\), \(V_{3,max}=0.2\) \\
\hline
**Speed at the boundary** & \(v_{1,out}=0.2\), \(v_{2,out}=0.1\), \(v_{3,out}=0.3\), \(v_{4,out}=0.2\) \\
\hline
**Imitation** & \(\alpha_{13}=0.6\), \(\alpha_{12}=0.7\), \(\alpha_{23}=0.6\), \(\alpha_{32}=0.7\) \\
\hline
**Intrinsic transitions** & \(c_{1}=0.1\), \(c_{2}=0.4\), \(b_{1}=0.1\), \(b_{2}=0.2\), \(b_{3}=0.001\), \(b_{4}=0.001\) \\
\hline
\end{tabular}
\end{table}
Table 1. Table of parameter values. Here, we choose \(d_{1}<d_{3}=d_{4}<d_{2}\), since the population in an alert state scarcely diffuses and the panic population is supposed to be the most diffusive one. Moreover, we are interested in considering a population with a low risk culture, so, for example, we take \(c_{2}>c_{1}\) and \(b_{2}>b_{1}\), as in [31].
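To make these values easy to reuse, the sketch below collects them in a dictionary and integrates the spatially homogeneous (reaction-only) APC dynamics, with \(\gamma(t)=1\) and \(\phi(t)=0\), by an explicit Euler scheme. The imitation terms are written as \(\mathcal{F}(\rho_{1},\rho_{3})=\alpha_{13}\xi(\rho_{3}/(\rho_{1}+\varepsilon))\rho_{1}\rho_{3}\), \(\mathcal{G}(\rho_{1},\rho_{2})=\alpha_{12}\xi(\rho_{2}/(\rho_{1}+\varepsilon))\rho_{1}\rho_{2}\) and \(\mathcal{H}(\rho_{2},\rho_{3})=\big(\alpha_{23}\xi(\rho_{3}/(\rho_{2}+\varepsilon))-\alpha_{32}\xi(\rho_{2}/(\rho_{3}+\varepsilon))\big)\rho_{2}\rho_{3}\), as suggested by the estimates in the proof of Theorem 3.18; the particular choice \(\xi(s)=s^{2}/(1+s^{2})\), the value of \(\varepsilon\), and the vanishing mortality rates \(\delta_{i}=0\) (not listed in Table 1) are placeholder assumptions of this illustration only.

```python
import numpy as np

# Parameter values from Table 1; delta_i and eps are illustrative assumptions.
p = dict(b1=0.1, b2=0.2, b3=0.001, b4=0.001, c1=0.1, c2=0.4,
         alpha13=0.6, alpha12=0.7, alpha23=0.6, alpha32=0.7,
         delta1=0.0, delta2=0.0, delta3=0.0, eps=1e-3)

def xi(s):
    # Placeholder imitation intensity (assumption): increasing and bounded by 1.
    return s**2 / (1.0 + s**2)

def apc_rhs(r, p, gamma=1.0, phi=0.0):
    """Reaction part (3.3) for r = (rho1,...,rho5), spatially homogeneous case."""
    r1, r2, r3, r4, _ = r
    e = p["eps"]
    F = p["alpha13"] * xi(r3 / (r1 + e)) * r1 * r3   # alert -> control by imitation
    G = p["alpha12"] * xi(r2 / (r1 + e)) * r1 * r2   # alert -> panic by imitation
    H = (p["alpha23"] * xi(r3 / (r2 + e)) - p["alpha32"] * xi(r2 / (r3 + e))) * r2 * r3
    return np.array([
        -(p["b1"] + p["b2"] + p["delta1"]) * r1 + gamma * r4 + p["b3"] * r3 + p["b4"] * r2 - F - G,
        -(p["b4"] + p["c1"] + p["delta2"]) * r2 + p["b2"] * r1 + p["c2"] * r3 + G - H,
        -(p["b3"] + p["c2"] + p["delta3"]) * r3 + p["b1"] * r1 + p["c1"] * r2 - phi * r3 + F + H,
        -gamma * r4,
        phi * r3,
    ])

# Explicit Euler integration from the "all daily" state (0,0,0,1,0) up to t = 250.
r, dt = np.array([0.0, 0.0, 0.0, 1.0, 0.0]), 0.01
for _ in range(25_000):
    r = r + dt * apc_rhs(r, p)
# For this low-risk-culture parameter set the paper reports that panic dominates.
print(r)
```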
First of all, in order to highlight the time evolution of the different human behaviors, namely, alert, panic, control and daily behaviors, in Figure 6 we present the simulation results of Scenario 1, since the dynamics is analogous in the other scenarios. The more the color goes from light blue to dark red, the higher the population density is.
We notice that at the beginning of the simulation, for \(t=50\), there is a majority of daily and alert populations rather than population in a state of panic and control. This dynamic depends on the structure of the APC model, described in Section 2.1: at \(t=0\) everyone is in a daily behavior, then everyone goes through the state of alert before becoming panicked or controlled. Moreover, since the diffusion coefficients for \(\rho_{1}\) and \(\rho_{2}\) are low, the position of the populations is still more or less the same as at the beginning, see \((a_{1})\)-\((a_{4})\) in Figure 6. For \(t=250\), for all scenarios, the dynamics of the APC is fully developed. Moreover, diffusion and advection phenomena are now visible: the whole population is in panic and control state, and they are concentrated near the exit, while the populations in alert state and in daily behavior are negligible, see \((e_{1})\)-\((e_{4})\) in Figure 6. Moreover, since (see Table 1) the population considered here has a low risk culture, the dominant behavior is that of panic.
Figure 4. The direction vector \(\vec{\nu}(x_{1},x_{2})\) given in (4.1) describing the desired direction of pedestrians to reach the point \((x_{1}^{p},x_{2}^{p})\) which is located outside the domain \(\overline{\Omega}\) since the population looks to escape from the exit \(\Gamma_{2}\) towards this point.
Figure 5. **Initial conditions:** initial location of the population for each scenario: (a) the population is concentrated in a single group in the center of the domain; (b) the population is subdivided into three groups; (c) an obstacle is located between the exit and the population, which is concentrated in a single group within the domain. We recall that the exit is on the right of the domain, see Figure 4.
Comparing the evacuation of the population in panic in the three different scenarios, see Figure 7, \((e_{1})\) for Scenario 1, \((e_{2})\) for Scenario 2 and \((e_{3})\) for Scenario 3, one notices a strong congestion at the level of the exit in the first scenario. In the second scenario, splitting the initial population
Figure 6. In order to present the time evolution of the human behaviors (alert, panic, control and daily behaviors), we show the simulation results of these populations in Scenario 1 at the capture times \(t=50,100,150,200\) and \(250\), respectively. Each row represents a human behavior. We notice that at the beginning of the simulation, there is a majority of daily and alert populations (row 1 and row 4 respectively) rather than population in a state of panic and control (row 2 and row 3 respectively). This dynamic depends on the structure of the APC model, described in Section 2: at t = 0 everyone is in a daily behavior, then everyone goes through the state of alert before becoming panicked or controlled.
into three clusters reduces this congestion. Finally, the presence of an obstacle as in the third scenario further reduces congestion.
## 5. Conclusion
In this work, we introduce a new spatio-temporal APC (alert, panic and control) model describing the evacuation of a population presented via different human behaviors during a catastrophic
Figure 7. Population in panic \(\rho_{2}\) over the three scenarios at the capture times \(t=50,100,150,200\) and \(250\), respectively. Notice that each row represents the (time) evolution of the population in one scenario: at time \(t=50\) of each scenario (column \((a)\)), the population in panic is small with respect to the initial population, since at this time the majority of the population is concentrated in the alert state; but over time, the panicked ones become greater and greater, thanks to the APC dynamics and the fact that the population has a low risk culture; in addition, the phenomena of diffusion and advection are now visible (the population in panic is concentrated near the exit). Furthermore, by comparing the captures in each column \((b)\), \((c)\), \((d)\) and \((e)\), we can observe that in Scenario 2 there is slightly less congestion near the exit than in Scenario 1; this can be highlighted from time \(t=100\) to \(t=250\). Moreover, in Scenario 3, we observe that the congestion is less than in the previous cases. Indeed, the role of the obstacle is to facilitate the access to the exit.
event. First, using the first-order macroscopic crowd theory, we derive a new spatio-temporal APC model. It is a system of advection-diffusion-reaction equations with nonlinear Robin boundary conditions. Then, using a semigroup approach and abstract evolution equations, we prove the local existence and a regularity result of the solutions of our model. Moreover, we establish the positivity of the solution and the existence of bounded positively invariant sets, which leads to the global existence and the boundedness of the solutions. As far as we know, the theoretical results established in this work are new. Finally, to illustrate our results, we present different numerical simulations of the population evacuation, using three different scenarios.
## Appendix A Proofs of the preliminary results of Section 3
**Proof of Proposition 3.2.****(i)** It is well-known, from [33, Section 3.1.1], that the realization of the Laplacian operator \(d_{i}\Delta\) on \(L^{p}(\Omega)\) with Neumann boundary conditions generates a contraction holomorphic \(C_{0}\)-semigroup \((\mathcal{T}_{i}(t))_{t\geq 0}\) of angle \(\pi/2\), for \(i=1,\cdots,5\), see also [16, Sections 1.4] for a similar result. So \(\mathcal{A}_{0}\) generates a contraction holomorphic \(C_{0}\)-semigroup of angle \(\pi/2\) on \(X\) as a diagonal matrix-valued operator.
**(ii)** The compactness of the semigroup \((\mathcal{T}(t))_{t\geq 0}\) (i.e., \(\mathcal{T}(t)\) is a compact operator for each \(t>0\)) follows from [16, Section 1.6]. For the positivity of the semigroup \((\mathcal{T}(t))_{t\geq 0}\) on the Banach lattice \(X\), it suffices to prove that \(R(\omega,\mathcal{A}_{0})\in L(X)\) is a positive operator for \(\omega\in\varrho(\mathcal{A}_{0})\) large enough, i.e., \(\varphi(x)\geq 0\) implies \(R(\omega,\mathcal{A}_{0})\varphi(x)\geq 0\) for all \(x\in\overline{\Omega}\), see [20, Chapter VI, Section 1.8] and also the result on invariance under closed sets [36, Theorem 5.1]. This is equivalent to proving that the solution \(\psi\in D(\mathcal{A}_{0}):=\ker(\mathcal{L})\) of \(\varphi(x)=(\omega-\mathcal{A}_{0})\psi(x)\) is positive whenever \(\varphi\) is (the solution always exists; the question is about its positivity). That is, we have
\[\begin{cases}(\omega-\mathcal{A})\psi(x)=\varphi(x)\geq 0,&x\in\Omega\\ \qquad\mathcal{L}\psi(x)=0,&x\in\partial\Omega.\end{cases}\]
Then, the result holds directly from the maximum principle.
**Proof of Proposition 3.5.** Let \(0\leq\delta\leq 1\). The fact that the extension semigroup \((\mathcal{T}_{\delta-1}(t))_{t\geq 0}\) exists as a strongly continuous positive semigroup with generator \((\mathcal{A}_{\delta-1},D(\mathcal{A}_{\delta-1})=X_{\delta})\) is due to [4]. The analyticity and compactness of \((\mathcal{T}_{\delta-1}(t))_{t\geq 0}\) follow from [22].
**Proof of Lemma 3.9.** This holds by definition of the Dirichlet map \(\mathcal{D}\).
**Proof of Proposition 3.10.** Since \((\lambda-\mathcal{A}_{\beta-1})\mathcal{D}\in L(\partial X,X_{\beta-1})\), it suffices to examine the operators \(\mathcal{K}\) and \(\mathcal{M}\). Indeed, for \(\mathcal{K}\) we proceed term by term. Let \((\varphi_{1},\cdots,\varphi_{5})^{*}:=\varphi,\ (\upsilon_{1},\cdots,\upsilon_{5})^{*}:=\upsilon\in X_{\alpha}\) and \(R>0\) be such that \(\|\varphi\|_{\alpha},\ \|\upsilon\|_{\alpha}\leq R\). By construction, the functions \(\mathcal{F},\ \mathcal{G}\) and \(\mathcal{H}\) are (pointwise) Lipschitzian on bounded sets, i.e.,
\[|\mathcal{F}(\rho_{1},\rho_{3})(x)-\mathcal{F}(\upsilon_{1},\upsilon_{3})(x)| \leq L_{R}^{1}\left(|\rho_{1}(x)-\upsilon_{1}(x)|+|\rho_{3}(x)-\upsilon_{3}( x)|\right),\quad x\in\Omega\]
\[|\mathcal{G}(\rho_{1},\rho_{2})(x)-\mathcal{G}(\upsilon_{1},\upsilon_{2})(x)| \leq L_{R}^{2}\left(|\rho_{1}(x)-\upsilon_{1}(x)|+|\rho_{2}(x)-\upsilon_{2}( x)|\right),\quad x\in\Omega\]
and
\[|\mathcal{H}(\rho_{2},\rho_{3})(x)-\mathcal{H}(\upsilon_{2},\upsilon_{3})(x)|\leq L _{R}^{3}\left(|\rho_{2}(x)-\upsilon_{2}(x)|+|\rho_{3}(x)-\upsilon_{3}(x)|\right), \quad x\in\Omega\]
for some \(L_{R}^{i}\geq 0\), \(i=1,2,3\). So, by passing to the \(L^{p}\)-norm, and using the continuous embedding \(W^{2\alpha,p}\hookrightarrow C^{1}(\overline{\Omega})\) we obtain that
\[\|\mathcal{F}(\rho_{1},\rho_{3})-\mathcal{F}(\upsilon_{1},\upsilon_{3})\|\leq |\Omega|L_{R}^{1}\left(\|\rho_{1}-\upsilon_{1}\|_{0,\alpha}+\|\rho_{3}- \upsilon_{3}\|_{0,\alpha}\right),\] (A.1)
\[\|\mathcal{G}(\rho_{1},\rho_{2})-\mathcal{G}(\upsilon_{1},\upsilon_{2})\|\leq |\Omega|L_{R}^{2}\left(\|\rho_{1}-\upsilon_{1}\|_{0,\alpha}+\|\rho_{2}- \upsilon_{2}\|_{0,\alpha}\right),\] (A.2)
and
\[\|\mathcal{H}(\rho_{2},\rho_{3})-\mathcal{H}(\upsilon_{2},\upsilon_{3})\|\leq |\Omega|L_{R}^{3}\left(\|\rho_{2}-\upsilon_{2}\|_{0,\alpha}+\|\rho_{3}- \upsilon_{3}\|_{0,\alpha}\right).\] (A.3)
Now, we show that the terms \(\nabla\cdot(\varphi_{2}V_{2}(\varphi)\nu))\), \(\nabla\cdot(\varphi_{3}V_{3}(\varphi)\nu)\) are Lipschitzian in bounded sets in \(X_{\alpha}\). So, a straightforward calculus yields that
\[\varphi_{2}V_{2}(\varphi)(x)-\upsilon_{2}V_{2}(\upsilon)(x)=V_{2,max}\left((1- \tilde{\varphi}(x))\left[\varphi_{2}(x)-\upsilon_{2}(x)\right]+\upsilon_{2}(x )\sum_{i=1}^{5}\left[\varphi_{i}(x)-\upsilon_{i}(x)\right]\right),\quad x\in\Omega.\]
Furthermore, using the regularity of \(\varphi\) and \(\upsilon\) and since the gradient operator \(\nabla\) is linear, we obtain that
\[\nabla\cdot(\varphi_{2}V_{2}(\varphi)\nu(x))-\nabla\cdot(\upsilon_{2}V_{2}(\upsilon)\nu(x))\] \[= \nabla\cdot(\varphi_{2}V_{2}(\varphi)\nu(x))-\nabla\cdot(\varphi_{2}V_{2}(\upsilon)\nu(x))+\nabla\cdot(\varphi_{2}V_{2}(\upsilon)\nu(x))-\nabla\cdot(\upsilon_{2}V_{2}(\upsilon)\nu(x))\] \[= \underbrace{\nabla\varphi_{2}(x)\cdot(V_{2}(\varphi)\nu(x))-\nabla\varphi_{2}(x)\cdot(V_{2}(\upsilon)\nu(x))+\varphi_{2}(x)\nabla\cdot(V_{2}(\varphi)\nu(x))-\varphi_{2}(x)\nabla\cdot(V_{2}(\upsilon)\nu(x))}_{I_{1}(x)}\] \[+\underbrace{\nabla\varphi_{2}(x)\cdot(V_{2}(\upsilon)\nu(x))-\nabla\upsilon_{2}(x)\cdot(V_{2}(\upsilon)\nu(x))+\varphi_{2}(x)\nabla\cdot(V_{2}(\upsilon)\nu(x))-\upsilon_{2}(x)\nabla\cdot(V_{2}(\upsilon)\nu(x))}_{I_{2}(x)},\quad x\in\Omega.\]
Then, we have
\[|\;I_{1}(x)\;|\leq M_{V_{2}}\left(|\;\nabla\varphi_{2}(x)\;|\;\sum_{i=1}^{5} \;|\;\varphi_{i}(x)-\upsilon_{i}(x)\;|+|\;\varphi_{2}(x)\;|\;\sum_{i=1}^{5}\;| \;\nabla(\varphi_{i}(x)-\upsilon_{i}(x))\;|\right),\quad x\in\Omega,\]
\[|\;I_{2}(x)\;|\leq M_{V_{2}}\left(|\;\nabla(\varphi_{2}(x)-\upsilon_{2}(x))\;|\;\sum_{i=1}^{5}(1+|\;\varphi_{i}(x)\;|)+|\;\varphi_{2}(x)-\upsilon_{2}(x)\;|\;\sum_{i=1}^{5}\;|\;\nabla\upsilon_{i}(x)\;|\right),\quad x\in\Omega,\]
where \(M_{V_{2}}=V_{2,max}\sup_{x\in\overline{\Omega}}|\nu(x)|\). Hence, we have
\[|\;\nabla\cdot(\varphi_{2}V_{2}(\varphi)\nu(x))-\nabla\cdot(\upsilon_{2}V_{2}(\upsilon)\nu(x))\;|\leq\] \[M_{V_{2}}\left(|\;\nabla\varphi_{2}(x)\;|\;\sum_{i=1}^{5}|\;\varphi_{i}(x)-\upsilon_{i}(x)\;|+|\;\varphi_{2}(x)\;|\;\sum_{i=1}^{5}\;|\;\nabla(\varphi_{i}(x)-\upsilon_{i}(x))\;|\right)\] \[+M_{V_{2}}\left(|\;\nabla(\varphi_{2}(x)-\upsilon_{2}(x))\;|\;\sum_{i=1}^{5}(1+|\;\varphi_{i}(x)\;|)+|\;\varphi_{2}(x)-\upsilon_{2}(x)\;|\;\sum_{i=1}^{5}\;|\;\nabla\upsilon_{i}(x)\;|\right),\quad x\in\Omega.\]
Therefore, by passing to the norms, we have
\[\|\nabla\cdot(\varphi_{2}V_{2}(\varphi)\nu)-\nabla\cdot(\upsilon_{2} V_{2}(\upsilon)\nu)\| \leq |\Omega|V_{2,max}\left((1+\|\varphi\|)\,\|\nabla(\varphi_{2}-\upsilon_{2}) \|_{\infty}+\|\nabla\varphi\nu\|\|\varphi_{2}-\upsilon_{2}\|_{\infty}\right)\] \[+|\Omega|V_{2,max}\left(\|\varphi-\upsilon\|\|\nabla\upsilon_{2} \nu\|_{\infty}+\|\upsilon_{2}\|_{\infty}\|\nabla(\varphi-\upsilon)\nu\|\right).\]
This leads to
\[\|\nabla\cdot(\varphi_{2}V_{2}(\varphi)\nu)-\nabla\cdot(\upsilon_{2}V_{2}(\upsilon)\nu)\|\leq|\Omega|L_{R}^{4}\|\varphi-\upsilon\|_{\alpha}\] (A.4)
Similarly, we obtain that
\[\|\nabla\cdot\varphi_{3}V_{3}(\varphi)\nu-\nabla\cdot\upsilon_{3 }V_{3}(\upsilon)\nu\| \leq |\Omega|V_{3,max}\left((1+\|\varphi\|)\,\|\nabla(\varphi_{3}- \upsilon_{3})\|_{\infty}+\|\nabla\varphi\nu\|\|\varphi_{3}-\upsilon_{3}\|_{ \infty}\right)\] \[+|\Omega|V_{3,max}\left(\|\varphi-\upsilon\|\|\nabla\upsilon_{3} \nu\|_{\infty}+\|\upsilon_{3}\|_{\infty}\|\nabla(\varphi-\upsilon)\nu\|\right),\]
and
\[\|\nabla\cdot\varphi_{3}V_{3}(\varphi)\nu-\nabla\cdot\upsilon_{3 }V_{3}(\upsilon)\nu\| \leq |\Omega|L_{R}^{5}\|\varphi-\upsilon\|_{\alpha}.\] (A.5)
Now, we show that, there exists \(L_{R}^{9}\geq 0\) such that
\[\|\mathcal{M}\varphi-\mathcal{M}\upsilon\|_{\partial X}\leq L_{R}^{9}\|\varphi -\upsilon\|_{\alpha}.\]
To obtain that, we use the embedding \(W^{1,p}(\partial\Omega)^{5}\hookrightarrow\partial X\) and we prove that
\[\|\mathcal{M}\varphi-\mathcal{M}\upsilon\|_{1,p}\leq L_{R}^{9}\|\varphi- \upsilon\|_{\alpha}.\]
Indeed, we have,
\[|\varphi_{2}V_{2}(\varphi)(x)-\upsilon_{2}V_{2}(\upsilon)(x)\mid\leq V_{2, max}\left((1+\mid\tilde{\varphi}(x)\mid)\mid\varphi_{2}(x)-\upsilon_{2}(x)\mid+ \mid\upsilon_{2}(x)\mid\sum_{i=1}^{5}\mid\varphi_{i}(x)-\upsilon_{i}(x)\mid \right),\;x\in\Omega.\]
Hence, using the corresponding norms, we have
\[|\varphi_{2}V_{2}(\varphi)-\upsilon_{2}V_{2}(\upsilon)|_{p}\leq|\Omega|V_{2, max}\left((1+\|\varphi\|)\|\varphi_{2}-\upsilon_{2}\|_{\infty}+\|\upsilon_{2}\|_{ \infty}\|\varphi-\upsilon\|\right),\]
That is using the embedding \(X_{\alpha}\hookrightarrow C^{1}(\overline{\Omega})^{5}\), we obtain that
\[|\varphi_{2}V_{2}(\varphi)-\upsilon_{2}V_{2}(\upsilon)|_{p}\leq L_{R}^{6}\| \varphi-\upsilon\|_{\alpha}.\]
Arguing similarly, we obtain that
\[|\varphi_{3}V_{3}(\varphi)-\upsilon_{3}V_{3}(\upsilon)|_{p}\leq L_{R}^{7}\|\varphi-\upsilon\|_{\alpha}.\]
On the other hand, estimating this time the gradient of the corresponding terms, we obtain that
\[|\nabla\varphi_{2}V_{2}(\varphi)-\nabla\upsilon_{2}V_{2}(\upsilon)|_{p}\leq L _{R}^{8}\|\varphi-\upsilon\|_{\alpha}.\]
Arguing similarly, we obtain that
\[|\nabla\varphi_{3}V_{3}(\varphi)-\nabla\upsilon_{3}V_{3}(\upsilon)|_{p}\leq L _{R}^{9}\|\varphi-\upsilon\|_{\alpha}.\]
Thus, Lemma 3.9 yields that,
\[\|(\omega-\mathcal{A}_{\beta-1})\mathcal{D}(\mathcal{M}\varphi- \mathcal{M}\upsilon)\|_{X_{\beta-1}} \leq c\|\mathcal{M}\varphi-\mathcal{M}\upsilon\|_{\partial X}\] \[\leq cL_{R}^{10}\|\varphi-\upsilon\|_{\alpha}.\]
Furthermore, the fact that the functions \(\gamma\) and \(\phi\) are, respectively, \(l_{\gamma}\)-Lipschitzian and \(l_{\phi}\)-Lipschitzian with respect to \(t\), and that \(0\leq\gamma(t),\phi(t)\leq 1\), yields
\[\|\gamma(t)\varphi_{4}-\gamma(s)\upsilon_{4}\|\leq|\Omega|l_{\gamma}(\|\varphi_ {4}\|_{\infty}\mid t-s\mid+\|\varphi_{4}-\upsilon_{4}\|_{\alpha}),\quad t,s \geq 0,\] (A.8)
and through the same argument, we have
\[\|\phi(t)\varphi_{3}-\phi(s)\upsilon_{3}\|\leq|\Omega|l_{\phi}(\|\varphi_{3} \|_{\alpha}\mid t-s\mid+\|\varphi_{3}-\upsilon_{3}\|_{\alpha}),\quad t,s\geq 0,\] (A.9)
Consequently, from (A.1)-(A.9) we can find \(L_{R}\geq 0\) such that
\[\|\tilde{\mathcal{K}}(t,\rho)-\tilde{\mathcal{K}}(s,\upsilon)\|_{\beta-1} \leq L_{R}(\mid t-s\mid+\|\rho-\upsilon\|_{\alpha})\quad\text{ for all }t,s\geq 0\text{ and }\varphi,\upsilon\in X_{\alpha}.\]
This proves the result.
**Proof of Lemma 3.12.** Let us define
\[v(t)=\int_{0}^{t}\mathcal{T}_{\beta-1}(t-s)\mathcal{B}(t)ds+\int_{0}^{t} \mathcal{T}_{\beta-1}(t-s)\left[\mathcal{B}(s)-\mathcal{B}(t)\right]ds:=v_{1} (t)+v_{2}(t),\quad 0\leq t\leq T.\]
It is clear that \(v_{1}\in C^{1}((0,T],X_{\beta})\). So it suffices to prove that \(v_{2}(t)\in X_{\beta}\) and \(\mathcal{A}_{\beta-1}v_{2}(\cdot)\) is continuous. To this end, let \(\varepsilon>0\) and consider
\[v_{2}^{\varepsilon}(t)=\begin{cases}\int_{0}^{t-\varepsilon} \mathcal{T}_{\beta-1}(t-s)\left[\mathcal{B}(s)-\mathcal{B}(t)\right]ds&\text{ for }t\geq\varepsilon\\ 0&\text{ for }t<\varepsilon.\end{cases}\]
So, the analyticity of the extension semigroup \((\mathcal{T}_{\beta-1}(t))_{t\geq 0}\) on \(X_{\beta-1}\) yields that
\[\mathcal{T}_{\beta-1}(t-s)\left[\mathcal{B}(s)-\mathcal{B}(t)\right]\in D( \mathcal{A}_{\beta-1})=X_{\beta}\quad\text{ for }0\leq s\leq t-\varepsilon.\]
Then, \(v_{2}^{\varepsilon}(t)\in D(\mathcal{A}_{\beta-1})\). Moreover, \(v_{2}^{\varepsilon}(t)\) converges to \(v_{2}(t)\) as \(\varepsilon\to 0\). Since the operator \(\mathcal{A}_{\beta-1}\) is closed, to conclude, we only need to show that \(\mathcal{A}_{\beta-1}v_{2}^{\varepsilon}(t)=\int_{0}^{t-\varepsilon} \mathcal{A}_{\beta-1}\mathcal{T}_{\beta-1}(t-s)\left[\mathcal{B}(s)-\mathcal{ B}(t)\right]ds\) converges in \(X_{\beta-1}\) as \(\varepsilon\to 0\). That is, by the closedness of \(\mathcal{A}_{\beta-1}\) we have
\[\mathcal{A}_{\beta-1}v_{2}^{\varepsilon}(t)-\int_{0}^{t}\mathcal{A}_{\beta-1 }\mathcal{T}_{\beta-1}(t-s)\left[\mathcal{B}(s)-\mathcal{B}(t)\right]ds=\int_ {t-\varepsilon}^{t}\mathcal{A}_{\beta-1}\mathcal{T}_{\beta-1}(t-s)\left[ \mathcal{B}(s)-\mathcal{B}(t)\right]ds.\]
Furthermore, the analyticity of \((\mathcal{T}_{\beta-1}(t))_{t\geq 0}\) yields that
\[\|\mathcal{A}_{\beta-1}\mathcal{T}_{\beta-1}(t)\|_{L(X_{\beta-1})}\leq l_{0}\ t^{-1},\quad t>0\] (A.10)
for some \(l_{0}\geq 0\). Hence, using (3.12)-(A.10), we conclude that
\[\|\mathcal{A}_{\beta-1}v_{2}^{\varepsilon}(t)-\int_{0}^{t}\mathcal{A}_{\beta- 1}\mathcal{T}_{\beta-1}(t-s)\left[\mathcal{B}(s)-\mathcal{B}(t)\right]ds\|_{ \beta-1}\leq ll_{0}\int_{0}^{\varepsilon}\sigma^{\eta-1}d\sigma\to 0\text{ as } \varepsilon\to 0.\]
This proves that \(v_{2}(t)\in X_{\beta}\) and that \(\mathcal{A}_{\beta-1}v_{2}(t)=\int_{0}^{t}\mathcal{A}_{\beta-1}\mathcal{T}_{ \beta-1}(t-s)\left[\mathcal{B}(s)-\mathcal{B}(t)\right]ds\) for \(0<t\leq T\). So, it is clear that \(\mathcal{A}_{\beta-1}v\) is continuous.
## Appendix B Tables of the functions and the parameters of the APC model
In the sequel, we briefly recall the functions and the parameters of the APC model (2.1).
## Acknowledgment
This work has been supported by the French government, through the National Research Agency (ANR) under the Societal Challenge 9 "Freedom and security of Europe, its citizens and residents" with the reference number ANR- 17-CE39-0008, co-financed by French Defence Procurement Agency (DGA) and The General Secretariat for Defence and National Security (SGDSN).
|
2302.04401 | Minimal entropy production in the presence of anisotropic fluctuations | Anisotropy in temperature, chemical potential, or ion concentration, provides
the fuel that feeds dynamical processes that sustain life. At the same time,
anisotropy is a root cause of incurred losses manifested as entropy production.
In this work we consider a rudimentary model of an overdamped stochastic
thermodynamic system in an anisotropic temperature heat bath, and study minimum
entropy production when driving the system between thermodynamic states in
finite time. While entropy production in isotropic temperature environments can
be expressed in terms of the length (in the Wasserstein-2 metric) traversed by
the thermodynamic state of the system, anisotropy complicates substantially the
mechanism of entropy production since, besides dissipation, seepage of energy
between ambient anisotropic heat sources by way of the system dynamics is often
a major contributing factor. A key result of the paper is to show that in the
presence of anisotropy, minimization of entropy production can once again be
expressed via a modified Optimal Mass Transport (OMT) problem. However, in
contrast to the isotropic situation that leads to a classical OMT problem and a
Wasserstein length, entropy production may not be identically zero when the
thermodynamic state remains unchanged (unless one has control over
non-conservative forces); this is due to the fact that maintaining a
Non-Equilibrium Steady-State (NESS) incurs an intrinsic entropic cost that can
be traced back to a seepage of heat between heat baths. As alluded to, NESSs
represent hallmarks of life, since living matter by necessity operates far from
equilibrium. Therefore, the question studied herein, to characterize minimal
entropy production in anisotropic environments, appears of central importance
in biological processes and on how such processes may have evolved to optimize
for available usage of resources. | Olga Movilla Miangolarra, Amirhossein Taghvaei, Tryphon T. Georgiou | 2023-02-09T01:58:42Z | http://arxiv.org/abs/2302.04401v1 | # Minimal entropy production in the presence of anisotropic fluctuations
###### Abstract
Anisotropy in temperature, chemical potential, or ion concentration, provides the fuel that feeds dynamical processes that sustain life. At the same time, anisotropy is a root cause of incurred losses manifested as entropy production. In this work we consider a rudimentary model of an overdamped stochastic thermodynamic system in an anisotropic temperature heat bath, and study minimum entropy production when driving the system between thermodynamic states in finite time.
While entropy production in _isotropic_ temperature environments can be expressed in terms of the length (in the Wasserstein \(W_{2}\) metric) traversed by the thermodynamic state of the system, anisotropy complicates substantially the mechanism of entropy production since, besides dissipation, seepage of energy between ambient anisotropic heat sources by way of the system dynamics is often a major contributing factor. A key result of the paper is to show that in the presence of anisotropy, minimization of entropy production can once again be expressed via a modified Optimal Mass Transport (OMT) problem. However, in contrast to the isotropic situation that leads to a classical OMT problem and a Wasserstein length, entropy production may not be identically zero when the thermodynamic state remains unchanged (unless one has control over non-conservative forces); this is due to the fact that maintaining a Non-Equilibrium Steady-State (NESS) incurs an intrinsic entropic cost that can be traced back to a seepage of heat between heat baths.
As alluded to, NESSs represent hallmarks of life, since living matter by necessity operates far from equilibrium. Therefore, the question studied herein, to characterize minimal entropy production in anisotropic environments, appears of central importance in biological processes and on how such processes may have evolved to optimize for available usage of resources.
Stochastic thermodynamics, Entropy production, Dissipation, Anisotropy, Stochastic control
## I Introduction
Life on Earth is possible thanks to the temperature difference between the hot Sun and the cold starry sky. This difference provides "negative entropy" that organisms and complex biochemical processes feed upon [1]. Indeed, the Sun provides photons at around 6000 Kelvin that are absorbed and then re-emitted back to the cosmos at about 300 Kelvin, a twentyfold decrease. The "hard currency" paid along the way is a positive entropy rate for the universe as a whole, while at the same time, the _anisotropy_ in the thermal environment powers biological engines that make life possible [2, 3].
Our goal in the present work is to quantify the entropy rate of thermodynamic processes while operating in anisotropic temperature environments. Specifically, we study a rudimentary model for a thermodynamic system operating far from equilibrium, in contact with heat baths of different temperatures. The salient feature of the arrangement is that the flux of heat between the heat baths, mediated by the system dynamics, is responsible for maintaining the system far from equilibrium at the cost of a positive entropy rate. Our interest is in minimizing entropy production during thermodynamic transitions, a problem cast within the frame of stochastic control and stochastic thermodynamics.
Thermodynamics was born more than a century ago with the foundational work of Carnot and Clausius [4]. It has since impacted almost every corner of science: chemistry, physics, astronomy, biology... And yet, a great many puzzles, rooted in the core notions of the subject such as irreversibility and the second law (see for example Loschmidt's paradox or Maxwell's demon), continued to be debated until the closing of the 20th century. At that time, a new level of understanding started forming. The catalyst was the discovery of a number of fluctuation theorems [5] (Evans-Searles, Jarzynski, Crooks) and a framework [6] (stochastic energetics) to study thermodynamic systems that are made up of possibly only a few particles and/or operate fast and far from equilibrium. To this end, probability theory and stochastic control proved enabling. Thermodynamic systems are modeled probabilistically. They interact with externally specified conditions that serve as inputs to steer the state according to specifications, while keeping a lid on entropy and energy budgets. Thus, this emergent subject of _stochastic thermodynamics_ falls squarely within the purview of stochastic control.
The presentation in this work focuses on mesoscopic systems modeled by Langevin stochastic differential equations. In this context, it turns out that _dissipation_ can be expressed as a quadratic cost functional that, in geometric terms, is precisely the Wasserstein length traversed by the state of the system [7, 8, 9]. Important new insights have been gained in recent years regarding optimal control laws that extract work while alternating contact with different heat sources in the Carnot model [10], quantifying natural time-constants and establishing uncertainty relations [11, 12, 13], and balancing work with dissipation in finite-time thermodynamic transitions [14, 15]. Within this evolving landscape the study of entropy production in the presence of temperature gradients remains largely unexplored, and constitutes the theme of the present work.
The structure and contributions of the paper are as follows: In Section II we summarize the basic framework of stochastic thermodynamics in continuous space and time. In particular, we express thermodynamic quantities of interest as integral control costs. In Section III we introduce thermodynamic states as probability densities and their tangents as gradient vectors. A weighted inner product is defined on the tangent space to account for anisotropy, leading to a decomposition of velocity fields into gradient and divergence-free parts, _a la_ Helmholtz, that constitutes the backbone of our work.
Section IV considers the problem of minimizing entropy production using non-conservative forces (control inputs). It is shown that the minimal entropic cost for transitioning between states can be expressed as a modified Wasserstein metric, extending earlier results for the isotropic thermal environment of a single heat bath and constituting our first main result. Section V considers the case where actuation takes the form of time-dependent potentials (potential forcing). Our second main result consists of a geometric decomposition of entropy production, based on the weighted inner product introduced in Section III, that generalizes previous works. In this decomposition, the contributions of the gradient and divergence-free components of the velocity field are separated into _excess_ and _housekeeping_ entropy production. Further, divergence-free components are split into steady-state and dynamic contributions. This turns out to be key in our search for optimal protocols to minimize entropy production. To this end, we characterize the set of tangent directions with vanishing _housekeeping entropy_ production, and determine the one direction that in addition has the least _excess entropy_ rate. Moreover, we determine protocols that minimize total entropy production _rate_ and show that, when transitioning between product states, excess and housekeeping entropy production are simultaneously minimized by geodesics in the weighted Wasserstein space.
Finally, Section VI specializes results of the previous sections to quadratic controlling potentials, while in Section VII we give explicit expressions for the entropy production and optimizing controls for a 2-dimensional example. Specifically, we show that i) nontrivial trajectories with vanishing housekeeping entropy production exist, ii) we characterize directions that minimize the production rate for the total entropy, and iii) we explicitly describe trajectories, close to equilibrium, that minimize total entropy production. Lastly, iv) we display closed cycles that generate minimal amount of entropy, which, in contrast to the isotropic case, do not remain static and gravitate towards an equilibrium state.
## II Stochastic Thermodynamic Systems
Stochastic Thermodynamics [6, 16, 17] has been successful in modeling mesoscopic thermodynamic processes that evolve both in discrete as well as in continuous state space, utilizing Master Equations or Langevin Stochastic Differential equations, respectively; herein we restrict our attention to the latter. A thermodynamic system, at a mesoscopic scale, can be conceptualized as a collection of particles in contact with heat baths, modeled as sources of stochastic excitation, while driven under the influence of external forces. These forces that may represent control inputs can be conservative (gradients) or non-conservative, and the system dynamics may or may not include inertial effects.
In the present work we focus on overdamped dynamics, that is, we neglect inertial effects. Such models are typical when considering colloidal mesoscopic particle systems and models of biological processes [16, 18, 19]. While conventional overdamped systems represent a collection of particles in contact with a single heat bath at any time, we consider a more general setting, where the thermodynamic system has \(n\) coupled degrees of freedom which are subject to fluctuations of different intensities; that is, the system is in contact with multiple heat baths at the same time. Thus, our basic model is the following Langevin system of stochastic differential equations
\[dX_{t}=-\gamma^{-1}\nabla U(t,X_{t})dt+\gamma^{-1}f(t,X_{t})dt+\sqrt{2D}dB_{t}, \tag{1}\]
where \(X_{t}\in\mathbb{R}^{n}\), \(t\in\mathbb{R}\) represents time, \(\nabla U(t,X_{t})\) represents the conservative forces of the drift term (typically constituting our control), \(f\) represents non-conservative forces, \(B_{t}\in\mathbb{R}^{n}\) is a standard Brownian motion, and \(D\) represents the diffusion tensor that abides by the Einstein relation
\[D=k_{B}\gamma^{-1}T,\]
with
\[T=\operatorname{diag}\left(T_{1},T_{2},\ldots,T_{n}\right),\]
a diagonal matrix whose entries are the temperatures (in Kelvin) along the specified \(n\) degrees of freedom, \(\gamma\) a scalar friction coefficient, and \(k_{B}\) the Boltzmann constant1. The degrees of freedom in (1) may represent voltages in an electrical circuit with resistors subject to different Johnson-Nyquist thermal noise [20], or a single particle subject to radiation of different intensity from different directions [21].
Footnote 1: The Boltzmann constant has dimensions [energy/degree Kelvin], the same as the units of entropy as is the convention in the physics literature, cf. Section II-B, Equation (6).
The state of the thermodynamic system is represented by the probability density function \(\rho(t,x)\) that satisfies the Fokker-Planck equation2
Footnote 2: As is common, for \(\partial_{i}\coloneqq\frac{\partial}{\partial x_{i}}\), \(\nabla\coloneqq(\partial_{1},\ldots,\partial_{n})^{\prime}\) denotes the gradient and "\(\nabla\cdot\)" the divergence.
\[\partial_{t}\rho(t,x)+\nabla\cdot J(t,x)=0, \tag{2}\]
with \(x=(x_{1},\ldots,x_{n})^{\prime}\in\mathbb{R}^{n}\) and _probability current_
\[J(t,x) =-\rho(t,x)\gamma^{-1}\left(\nabla U(t,x)-f(t,x)+k_{B}T\nabla \log\rho(t,x)\right)\] \[=:\rho(t,x)v(t,x). \tag{3}\]
Thereby, \(v(t,x)\) defined above may be seen to represent the _velocity field_ of an ensemble of particles. For the most part (Section V and on), we will assume the absence of non-conservative forces, i.e., that \(f=0\).
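As a minimal numerical sketch (not part of the original development, with all parameter values purely illustrative), the dynamics (1) can be simulated with an Euler-Maruyama discretization; a quadratic potential \(U(x)=x^{\prime}Kx/2\) and \(f=0\) are assumed here only for concreteness.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the overdamped Langevin dynamics (1),
#   dX = -gamma^{-1} grad U dt + sqrt(2D) dB,   D = kB * T / gamma (Einstein relation),
# with anisotropic temperatures. The quadratic potential U(x) = x'Kx/2 and all
# numerical values below are illustrative assumptions.
rng = np.random.default_rng(0)

kB, gamma = 1.0, 1.0                     # units with kB = gamma = 1
T = np.diag([2.0, 0.5])                  # anisotropic temperatures, T_1 > T_2
K = np.array([[1.0, 0.3], [0.3, 1.0]])   # stiffness of the illustrative quadratic potential
D = kB * T / gamma                       # diffusion tensor (diagonal here)

dt, n_steps, n_particles = 1e-3, 5000, 2000
X = rng.standard_normal((n_particles, 2))        # initial ensemble

sqrt2D = np.sqrt(2.0 * np.diag(D))               # componentwise noise amplitudes
for _ in range(n_steps):
    drift = -(X @ K.T) / gamma                   # -gamma^{-1} grad U, with grad U = K x
    noise = sqrt2D * rng.standard_normal(X.shape)
    X = X + drift * dt + np.sqrt(dt) * noise

print("empirical covariance of the ensemble:\n", np.cov(X.T))
```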
### _The first law_
During thermodynamic transitions, energy is continuously exchanged between particles, external actuation and the surrounding heat bath through heat and work [6].
The incremental change in internal energy \(E_{t}=U(t,X_{t})\) of a single particle at location \(X_{t}\) can be expressed as
\[dE_{t}=\partial_{t}U(t,X_{t})dt+\nabla U(t,X_{t})^{\prime}\circ dX_{t}\]
where \(\circ\) denotes Stratonovich integration. The heat exchanged at the scale of a single particle is due to forces3
Footnote 3: Below, \(\frac{dB_{t}}{dt}\) is formal and represents white noise and similarly for \(\frac{dX_{t}}{dt}\), but these can also be interpreted e.g., as in [22].
\[-\gamma\frac{dX_{t}}{dt}+\sqrt{2k_{B}\gamma T}\frac{dB_{t}}{dt}\]
applied by the heat bath (see [23, page 19], [17, page 63]); the term \(-\gamma\frac{dX_{t}}{dt}\) is due to dissipation, while \(\sqrt{2k_{B}\gamma T}\frac{dB_{t}}{dt}\) is due to fluctuations. Thus, the energy exchange between the heat bath and the particle is formally expressed as forces times displacement,
\[dq=\Big{(}-\gamma\frac{dX_{t}}{dt}+\sqrt{2k_{B}\gamma T}\frac{dB_{t}}{dt} \Big{)}^{\prime}\circ dX_{t},\]
which, in combination with (1), takes the precise mathematical form
\[dq=(\nabla U(t,X_{t})-f(t,X_{t}))^{\prime}\circ dX_{t}.\]
The incremental work on the other hand, effected through interaction of the particle with the potential \(U\) and, possibly, an external nonconservative force \(f\), is
\[dw=\partial_{t}U(t,X_{t})dt+f(t,X_{t})^{\prime}\circ dX_{t}.\]
Direct inspection validates the first law of thermodynamics, that energy is conserved,
\[dE=d\,q+d\,w.\]
Note that in the above, \(d\) designates an inexact differential, in that \(\int d\,w\) along a curve depends on the choice of curve and not only the endpoints. Also, note that the sign convention is chosen such that both work and heat are positive when supplied to the particle.
The total heat differential is the sum of contributions from heat baths, namely,
\[d\,q=\sum_{i}d\,q_{i}=\sum_{i}(\partial_{x_{i}}U(t,X_{t})-f_{i}(t,X_{t}))\circ(dX_{t})_{i}. \tag{4}\]
Upon using the Ito rule to express this as an Ito differential and taking the expectation, the heat rate that flows into the thermodynamic _system_ from the \(i\)-th reservoir becomes4
Footnote 4: Throughout, \(dx\) is a short for the volume form \(dx_{1}\dots dx_{n}\).
\[\dot{Q}_{i}=\int(\partial_{x_{i}}U(t,x)-f_{i}(t,x))J_{i}(t,x)dx. \tag{5}\]
Here, the influx of heat from the \(i\)-th reservoir takes place along the \(i\)-th degree of freedom, which is coupled to the rest via the potential \(U\).
### _The second law_
During thermodynamic transitions, the entropy production includes two terms, entropy production within the system and entropy change in the environment,
\[\dot{S}_{\rm tot}=\dot{S}_{\rm sys}+\dot{S}_{\rm env}.\]
The entropy of the system, \(S_{\rm sys}=-k_{B}\int\log(\rho)\rho dx\), changes with the rate5,
Footnote 5: Throughout, we use the notation \(\langle v_{1},v_{2}\rangle=v_{1}^{\prime}v_{2}\) for the standard Euclidean inner product, where "\({}^{\prime}\)" denotes transpose. Also, we use the notation \(\langle v_{1},v_{2}\rangle_{M}=v_{1}^{\prime}Mv_{2}\) for the weighted inner product with a symmetric matrix \(M\) and, accordingly, \(\|v\|_{M}^{2}:=\langle v,v\rangle_{M}\) for the corresponding norm.
\[\dot{S}_{\rm sys}=-k_{B}\int\log(\rho)\partial_{t}\rho\,dx=-k_{B}\int\langle J,\nabla\log\rho\rangle dx \tag{6}\]
where the second equality follows from \(\partial_{t}\rho=-\nabla\cdot J\) and integration by parts. The entropy of the environment changes due to the heat exchange according to
\[\dot{S}_{\rm env}=-\sum_{i}\frac{\dot{Q}_{i}}{T_{i}}=-\int\langle J,T^{-1}( \nabla U-f)\rangle dx, \tag{7}\]
where we have used (5). The minus sign is due to positive heat rate \(\dot{Q}_{i}\) being taken out of the environment and into the system.
Together (6) and (7) give the total entropy production
\[\dot{S}_{\rm tot} =-\int\langle J,T^{-1}\,(\nabla U-f+k_{B}T\nabla\log\rho)\rangle dx=\int\frac{1}{\rho}\|J\|_{\gamma T^{-1}}^{2}dx, \tag{8a}\] using (3). Therefore, a non-vanishing probability current \(J\) irreversibly increases the total entropy; this constitutes the second law of thermodynamics, i.e. \(\Delta S_{\rm tot}\geq 0\). Alternatively, the entropy production rate expressed in terms of \(v\) is \[\dot{S}_{\rm tot}=\gamma\int\rho\|v\|_{T^{-1}}^{2}dx. \tag{8b}\]
Non-vanishing probability currents arise during thermodynamic transitions, but also at certain steady-states termed _non-equilibrium steady-states_ (NESS). At a NESS (i.e., where \(\nabla\cdot J=0\) but \(J\neq 0\)) the probability current mediates heat transfer between thermal baths and an increase in the entropy of the environment. The condition \(J=0\), where entropy production vanishes, is referred to as _detailed balance_ or _micro-canonical reversibility_, and the state as an equilibrium steady-state.
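The entropy production rate (8b) lends itself to direct Monte Carlo evaluation. A minimal sketch follows, assuming an illustrative zero-mean Gaussian state and a linear velocity field \(v(x)=Vx\) (neither specified in the text); for this special case the expectation also has a closed trace form, used below only as a cross-check.

```python
import numpy as np

# Sketch: Monte-Carlo estimate of the entropy production rate (8b),
#   S_dot = gamma * E_rho[ ||v(X)||^2_{T^{-1}} ],
# for an illustrative zero-mean Gaussian state and a linear field v(x) = V x.
# In that case the expectation equals gamma * trace(V' T^{-1} V Sigma).
rng = np.random.default_rng(1)

gamma = 1.0
T = np.diag([2.0, 0.5])
Tinv = np.linalg.inv(T)
Sigma = np.array([[1.0, 0.4], [0.4, 0.8]])      # illustrative state covariance
V = np.array([[-0.5, 0.2], [-0.3, -0.7]])       # illustrative linear velocity field

X = rng.multivariate_normal(np.zeros(2), Sigma, size=100_000)
vX = X @ V.T
S_dot_mc = gamma * np.mean(np.einsum('ni,ij,nj->n', vX, Tinv, vX))
S_dot_exact = gamma * np.trace(V.T @ Tinv @ V @ Sigma)
print(S_dot_mc, S_dot_exact)   # the two values should agree closely
```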
### _A fluctuation theorem_
Early achievements that helped launch the subject of stochastic thermodynamics took the form of fluctuation theorems that quantified the probability of thermodynamic transitions as a function of entropy production, see e.g., [5, 24]. In a similar spirit, we herein present a fluctuation theorem that applies in the context of anisotropic environments, where the temperature \(T\) is a two-tensor.
Consider a single random realization \(\{X_{t}\in\mathbb{R}^{n};t\in[0,t_{f}]\}\), interpreted as the trajectory of a particle that is
an element of an ensemble distributed according to \(\rho(t,x)\). Much like internal energy, heat, and work that can be defined at the level of individual particles, the entropy of the system too can be defined for an individual particle at \(X_{t}\) within the ensemble as \(-k_{B}\log\rho(t,X_{t})\). Thus, the entropy of the system at the level of the ensemble, \(-k_{B}\int\rho(t,x)\log\rho(t,x)dx\), can be interpreted as the mean entropy of particles.
Therefore, the change in the entropy of the system, at the level of a single particle, is
\[\Delta s_{\mathrm{sys}} =-k_{B}\log\rho(t_{f},X_{t_{f}})+k_{B}\log\rho(0,X_{0}),\] \[=-\int_{0}^{t_{f}}k_{B}\nabla\log(\rho(t,X_{t}))^{\prime}\circ dX _{t}-\int_{0}^{t_{f}}k_{B}\frac{\partial_{t}\rho(t,X_{t})}{\rho(t,X_{t})}dt.\]
Similarly, the change in the entropy of the environment is
\[\Delta s_{\mathrm{env}} =-\int_{0}^{t_{f}}\sum_{i=1}^{n}\frac{dq_{i}}{T_{i}}\] \[=-\int_{0}^{t_{f}}(T^{-1}\nabla U(t,X_{t})-T^{-1}f(t,X_{t}))^{ \prime}\circ dX_{t}.\]
**Proposition 1**: _The total entropy production for a single trajectory can be expressed as_
\[\Delta s_{\mathrm{tot}} =\Delta s_{\mathrm{env}}+\Delta s_{\mathrm{sys}}\] \[=\int_{0}^{t_{f}}\gamma T^{-1}v(t,X_{t})^{\prime}\circ dX_{t}+k_ {B}\int_{0}^{t_{f}}\frac{\nabla\cdot J(t,X_{t})}{\rho(t,X_{t})}dt,\]
_and satisfies the identity_
\[\mathbb{E}\left[\exp\left(-\frac{\Delta s_{\mathrm{tot}}}{k_{B}}\right)\right] =1. \tag{9}\]
The proof of this statement is given in the Appendix, where we also prove a detailed fluctuation theorem.
**Remark 1**: _The fluctuation theorem (9) provides a stochastic description of the second law of thermodynamics, highlighting the fact that the decrease of total entropy at the level of a single trajectory is possible, even if unlikely. One can use Jensen's inequality to derive the standard second law, \(\Delta S_{\mathrm{tot}}\geq 0\), that is, the total entropy production at the level of the ensemble cannot decrease. \(\Box\)_
## III Thermodynamic Space and Decomposition of Vector Fields
Thermodynamic states are represented by probability distributions on \(\mathbb{R}^{n}\). Throughout they are assumed to have finite variance and to be absolutely continuous with respect to the Lebesgue measure, thus represented by density functions6. We additionally consider the following mild assumptions on thermodynamic states and corresponding vector-fields:
Footnote 6: The theory that follows can be developed for spaces of probability measures \(\rho\)[25, Sec. 8]; we specialize to probability densities for simplicity of the exposition.
**Assumption A1:** For all thermodynamic states \(\rho(x)\) the following hold:
1. \(\|\nabla^{2}\log\rho\|_{\infty}<\infty\)__
2. \(\|\nabla\log\rho(x)\|\to\infty\) _as_ \(\|x\|\to\infty\)_._
**Assumption A2:** Vector-fields \(v(t,x)\) are Lipschitz.
Assumption A1 ensures that \(\rho\) satisfies the Poincare inequality7[26], i.e. that there exists a positive constant \(C>0\) such that
Footnote 7: The Poincaré inequality essentially expresses that \(0\) is an isolated eigenvalue of a corresponding Laplace operator.
\[\int|h-\bar{h}|^{2}\rho dx\leq C\int\|\nabla h\|^{2}\rho dx\ \ \text{for all}\ h\in H_{\rho}^{1},\]
where \(\bar{h}=\int h\rho dx\) and \(H_{\rho}^{1}\) is the Sobolev space of functions where the function and its derivative, defined in the weak sense, are square integrable with respect to \(\rho\), i.e., that both \(\|h\|_{\rho}^{2}:=\int h^{2}\rho dx\), and \(\int\|\nabla h\|^{2}\rho dx\) are bounded. The Poincare inequality is of importance as it provides a sufficient condition for existence and uniqueness of the solution \(\phi\in H_{\rho}^{1}\) to the (weighted) Poisson equation
\[\mathcal{L}_{\rho}\phi=\rho h\]
where \(\mathcal{L}_{\rho}(\cdot):=\nabla\cdot(\rho\nabla(\cdot))\) is the (weighted) Laplacian and \(\|h\|_{\rho}<\infty\)[26].
The space of thermodynamic states \(\rho\) is denoted by \(P_{2}(\mathbb{R}^{n})\) (or, \(P_{2}\) for simplicity). Interestingly, this space admits a very rich structure that renders it almost a Riemannian manifold [25, page 168]. Much of what follows to a large degree can be traced to this fact.
Our starting point is the following inner-product between vector fields on \(\mathbb{R}^{n}\)
\[\langle v_{1},v_{2}\rangle_{\rho}=\int\langle v_{1}(x),v_{2}(x)\rangle\rho(x)\,dx, \tag{10}\]
with induced norm \(\|v\|_{\rho}:=\sqrt{\langle v,v\rangle_{\rho}}\). The inner-product defines an orthogonal decomposition
\[v=\Pi_{\rho}\,v+\chi,\]
where \(\Pi_{\rho}\) is the projection operator given by
\[\Pi_{\rho}v:=\arg\min_{w}\{\|w\|_{\rho}\ |\ w=v-\chi\ \text{and}\ \nabla\cdot(\rho\chi)=0\}.\]
The projection is unique and belongs to the closure of the space of vector fields of gradient form with respect to the \(\|\cdot\|_{\rho}\) topology [25, Lemma 8.4.2]. Under assumptions A1 and A2, the projection is exactly of gradient form \(\nabla\phi\), where \(\phi\in H_{\rho}^{1}\) solves the (weighted) Poisson equation \(\mathcal{L}_{\rho}\phi=\nabla\cdot(\rho v)\). The decomposition \(v=\nabla\phi+\chi\), into the gradient and divergence-free parts, is known as the _Helmholtz-Hodge decomposition_[27, 28].
This decomposition is used to construct the tangent space at any \(\rho\in P_{2}\). In particular, for an admissible rate of change \(\dot{\rho}=-\nabla\cdot(\rho v)\), induced by the vector field \(v\), there is a corresponding gradient vector field \(\nabla\phi=\Pi_{\rho}\,v\), which constitutes the tangent vector. The correspondence between \(\dot{\rho}\) and \(\nabla\phi\) is used to equip \(P_{2}\) with the Riemannian metric
\[\langle\dot{\rho}_{1},\dot{\rho}_{2}\rangle_{gp}:=\langle\nabla\phi_{1}, \nabla\phi_{2}\rangle_{\rho}.\]
The Riemannian metric allows computing length of paths between densities; the smallest distance (geodesic) between
any two given densities \(\rho_{0}\) and \(\rho_{f}\) is known as the _Wasserstein metric_\(W_{2}(\rho_{0},\rho_{f})\). This is given by
\[W_{2}(\rho_{0},\rho_{f})^{2}=\min_{\rho}\int_{0}^{1}\|\dot{\rho}\|_{g_{\rho}}^{2 }dt \tag{11}\]
subject to \(\rho(0)=\rho_{0}\), \(\rho(1)=\rho_{f}\).
The geometrical construction described above is generalized by replacing the inner-product (10) by a weighted inner-product
\[\langle v_{1},v_{2}\rangle_{\rho M}=\int\rho(x)\langle v_{1}(x),v_{2}(x) \rangle_{M}dx,\]
where \(M\) is a symmetric positive-definite \(n\times n\) matrix, and \(\langle v_{1}(x),v_{2}(x)\rangle_{M}:=v_{1}(x)^{\prime}Mv_{2}(x)\). This is important, in light of (8b), where a weighted inner-product with \(M=\gamma T^{-1}\) characterizes the entropy production. In a similar manner as before, vector fields can be decomposed into gradient and divergence-free parts,
\[v=M^{-1}\nabla\phi+\chi, \tag{12}\]
where \(M^{-1}\nabla\phi=\Pi_{\rho M}v\) is now the projection with respect to the weighted metric \(\langle\cdot,\cdot\rangle_{\rho M}\), and \(\nabla\cdot(\rho\chi)=0\). Analogously, we introduce the Riemannian metric
\[\langle\dot{\rho}_{1},\dot{\rho}_{2}\rangle_{g_{\rho M}}:=\langle M^{-1} \nabla\phi_{1},M^{-1}\nabla\phi_{2}\rangle_{\rho M},\]
and define the weighted Wasserstein metric
\[W_{2,M}^{2}(\rho_{0},\rho_{f})=\min_{\rho}\int_{0}^{1}\|\dot{\rho}\|_{g_{\rho M}}^{2}dt \tag{13}\]
subject to \(\rho(0)=\rho_{0}\), \(\rho(1)=\rho_{f}\).
## IV Non-conservative actuation
We start by considering the problem of minimizing entropy production under the full authority of non-conservative actuation, that is, with control actuation that entails both a gradient \(\nabla U\) of a potential as well as a non-zero term \(f\) in (1) contributing with a divergence-free component. This amounts to full control authority over the velocity vector-field \(v\). It turns out that the minimal entropy production relates to a suitably weighted Wasserstein distance between states. This result extends the geometric characterization of entropy production in [7] to the case of anisotropic thermal environment.
### _Entropy production as a weighted Wasserstein length_
The entropy production can be expressed in terms of the weighted Wasserstein distance between states as follows.
**Proposition 2**: _It holds that_
\[\min_{\rho,v}\int_{0}^{t_{f}}\dot{S}_{\mathrm{tot}}\,dt=\frac{1}{t_{f}}W_{2,M} ^{2}(\rho_{0},\rho_{f}) \tag{14}\]
_where \(M=\gamma T^{-1}\) and the optimization is subject to the continuity equation \(\partial_{t}\rho+\nabla\cdot(\rho v)=0\) together with the end-points \(\rho_{0}\), \(\rho_{f}\) of a path \(\rho(t,\cdot)\), \(t\in[0,t_{f}]\), reflected via a control that includes gradient \(\nabla U\) as well as divergence free \(f\) component._
It readily follows by comparing the expression for the least entropy production (8b) over paths \(\rho(t,\cdot)\), \(t\in[0,t_{f}]\), between end-point states, with the definition of the weighted Wasserstein metric (13).
**Remark 2**: _If a bound is imposed on the total entropy production \(S_{\mathrm{tot}}\), then (14) provides a lower bound (speed limit) on the time needed for traversing a path that joins \(\rho_{0}\) to \(\rho_{f}\), namely,_
\[t_{f}\geq\frac{W_{2,M}^{2}(\rho_{0},\rho_{f})}{S_{\mathrm{tot}}}.\ \Box\]
For computational purposes it is useful to relate the weighted Wasserstein distance \(W_{2,M}\) to an un-weighted (corresponding to the identity matrix as weight) Wasserstein distance. Thereby, the entropy production can likewise be expressed in terms of the (unweighted) Wasserstein length. This is given below.
**Proposition 3**: _It holds that_
\[\min_{\rho,v}\int_{0}^{t_{f}}\dot{S}_{\mathrm{tot}}\,dt=\frac{1}{t_{f}}\frac{\gamma}{\sqrt[n]{\det(T)}}W_{2}^{2}(\mathbf{T}^{-\frac{1}{2}}\#\rho_{0},\mathbf{T}^{-\frac{1}{2}}\#\rho_{f}),\]
_where \(\mathbf{T}=T/\sqrt[n]{\det(T)}\) is volume preserving and the optimization is subject to the continuity equation \(\partial_{t}\rho+\nabla\cdot(\rho v)=0\) together with the end-point conditions \(\rho(0)=\rho_{0}\) and \(\rho(t_{f})=\rho_{f}\)._
The statement follows immediately after we express the weighted Wasserstein distance in terms of the ordinary (unweighted) distance. To this end, we first invoke the fact that an optimal transportation plan requires constancy of the velocity along paths (in Lagrangian view point); this is standard and follows using the Cauchy-Schwartz inequality. Thus, for a mass element (particle) that starts at location \(x\) and terminates at \(y\) over the time interval \([0,t_{f}]\), the optimal velocity remains constant and equal to
\[v(X(x,t),t)=(y-x)/t_{f}\]
with the path traversed by the particular particle being the line segment \(X(x,t)=x+tv\) for \(t\in[0,t_{f}]\). This well-known fact turns the dynamic optimal transport (13) into a static (Kantorovich-type) problem, so as to be subsequently cast as an unweighted transport problem via a change of variables, as follows.
Let \(\pi\) be a distribution on the product space \((x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\) that represents the law of pairing origin \(x\) to destination \(y\), under a transport policy. Thus \(\pi\) is a coupling of random variables \(X(x,0)\) and \(X(y,t_{f})\), with probability density functions \(\rho_{0}(x)\) and \(\rho_{f}(y)\), respectively; these are marginal distributions of \(\pi\) and this is the only condition for \(\pi\) to be a "coupling." Then,
\[W_{2,M}^{2}(\rho_{0},\rho_{f}) =\min_{\pi}\int\|x-y\|_{M}^{2}d\pi\] \[=\min_{\pi}\int\|M^{\frac{1}{2}}x-M^{\frac{1}{2}}y\|^{2}d\pi\] \[=W_{2}^{2}(M^{\frac{1}{2}}\#\rho_{0},M^{\frac{1}{2}}\#\rho_{f})\]
where, with a slight abuse of notation, \(M^{\frac{1}{2}}\#\rho_{0}\) denotes the push-forward with the map8\(g:x\mapsto M^{\frac{1}{2}}x\). Using standard theory [29], the optimal transport map for unweighted transport is given by the gradient of a convex function \(\varphi\), and hence we now have \(x\mapsto y=M^{-\frac{1}{2}}\nabla\varphi(M^{\frac{1}{2}}x)\); here, \(\nabla\varphi\) is the optimal transport map between \(M^{\frac{1}{2}}\#\rho_{0}\) and \(M^{\frac{1}{2}}\#\rho_{f}\) for unweighted cost. In light of (14), we arrive at the claimed expression.
Footnote 8: The density of the push-forward \(\rho_{1}=g\sharp\rho_{0}\) for a differentiable \(g:x\mapsto y=g(x)\) is \(\rho_{1}(y)=\sum_{x\in g^{-1}(y)}\frac{\rho_{0}(x)}{|\det\nabla g(x)|}\).
A geometrical procedure to find the optimal transportation is to "warp" the space according to \(\mathbf{T}^{-1/2}\), identify the optimal transport in the usual way, and then "warp" back.
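A minimal numerical sketch of this warping procedure on equal-weight point clouds (samples, temperatures, and sizes are illustrative assumptions): after mapping by \(M^{1/2}\), the standard OMT step reduces to an assignment problem, solved here with `scipy.optimize.linear_sum_assignment`.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch of the "warp" procedure: the weighted cost ||x - y||_M^2 equals the
# ordinary squared distance between M^{1/2}x and M^{1/2}y, so weighted OMT
# between two equal-weight point clouds becomes an assignment problem.
rng = np.random.default_rng(2)

gamma = 1.0
T = np.diag([2.0, 0.5])
M = gamma * np.linalg.inv(T)
M_sqrt = np.sqrt(M)                       # valid componentwise since M is diagonal here

X0 = rng.multivariate_normal([0, 0], np.diag([1.0, 1.0]), size=300)   # samples of rho_0
Xf = rng.multivariate_normal([1, -1], np.diag([0.5, 2.0]), size=300)  # samples of rho_f

U0, Uf = X0 @ M_sqrt, Xf @ M_sqrt         # warp both clouds by M^{1/2}
C = ((U0[:, None, :] - Uf[None, :, :]) ** 2).sum(-1)   # squared Euclidean costs
row, col = linear_sum_assignment(C)       # optimal pairing of samples
W2M_sq = C[row, col].mean()               # empirical W_{2,M}^2(rho_0, rho_f)
print("empirical weighted Wasserstein cost:", W2M_sq)
```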
### _Dissipation for Gaussian thermodynamic states._
In general, for standard optimal mass transport problems, explicit solutions are hard to come by and need to be computed numerically. One exception is the case where the transport traces paths on the submanifold of Gaussian distributions driven by a quadratic potential. In such cases the Wasserstein distance can be written down explicitly.
For later reference, we provide here the expression for the weighted Wasserstein-2 distance between two normal distributions:
\[W_{2,M}(\rho_{0},\rho_{f}) =\Big{[}\|\mu_{0}-\mu_{f}\|_{M}^{2}+\text{trace}\left\{\Sigma_{0} M+\Sigma_{f}M\right.\] \[\left.-2(\Sigma_{f}^{1/2}M\Sigma_{0}M\Sigma_{f}^{1/2})^{1/2}\right\} \Big{]}^{1/2},\]
where \(\rho_{0}=\mathcal{N}(\mu_{0},\Sigma_{0})\) and \(\rho_{f}=\mathcal{N}(\mu_{f},\Sigma_{f})\) are Gaussian with mean \(\mu_{0}\) and \(\mu_{f}\), and covariance \(\Sigma_{0}\) and \(\Sigma_{f}\), respectively. The corresponding optimal trajectory (displacement interpolation) is \(\{\rho(t)=\mathcal{N}(\mu_{t},\Sigma(t))\mid t\in[0,t_{f}]\}\), where
\[\mu_{t} =\mu_{0}+\frac{t}{t_{f}}(\mu_{f}-\mu_{0}), \tag{15a}\] \[\Sigma(t) =\left(\Big{(}1-\frac{t}{t_{f}}\Big{)}\text{Id}+\frac{t}{t_{f}}A \right)\Sigma_{0}\bigg{(}\Big{(}1-\frac{t}{t_{f}}\Big{)}\text{Id}+\frac{t}{t_ {f}}A\bigg{)}^{\prime}, \tag{15b}\]
with \(A=\Sigma_{f}^{1/2}\big{(}\Sigma_{f}^{1/2}M\Sigma_{0}M\Sigma_{f}^{1/2}\big{)}^ {-1/2}\Sigma_{f}^{1/2}M\), and \(\text{Id}\) is the identity matrix. One can derive these results from the standard (unweighted) Gaussian expressions, as explained in Proposition 3.
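A short sketch evaluating these closed-form expressions, assuming zero means and illustrative covariances and temperatures (none taken from the text):

```python
import numpy as np
from scipy.linalg import sqrtm

# Sketch: closed-form weighted Wasserstein distance between two zero-mean
# Gaussians, together with the displacement interpolation (15b).
gamma = 1.0
T = np.diag([2.0, 0.5])
M = gamma * np.linalg.inv(T)

Sigma0 = np.diag([1.0, 1.0])
Sigmaf = np.array([[0.5, 0.1], [0.1, 2.0]])

Sf_half = np.real(sqrtm(Sigmaf))
cross = np.real(sqrtm(Sf_half @ M @ Sigma0 @ M @ Sf_half))
W2M_sq = np.trace(Sigma0 @ M + Sigmaf @ M - 2 * cross)       # squared distance (zero means)

A = Sf_half @ np.linalg.inv(cross) @ Sf_half @ M             # matrix A of (15b)
def Sigma_t(s):                                               # s = t / t_f in [0, 1]
    G = (1.0 - s) * np.eye(2) + s * A
    return G @ Sigma0 @ G.T

print("W_{2,M}^2 =", W2M_sq)
print("covariance at midpoint:\n", Sigma_t(0.5))              # Sigma_t(1) recovers Sigmaf
```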
## V Conservative actuation
Herein we consider entropy production when the control actuation is limited to conservative forces, i.e., \(f=0\) and the forcing is exerted solely by varying the potential \(U\). This is physically more meaningful and easier to realize experimentally.
We note that even if the control is so constrained, it is possible to steer the thermodynamic system between arbitrary thermodynamic states by controlling the potential function \(U\). To see this, note that a selection of \(\nabla U\) can specify the gradient part of
\[v=-(\gamma^{-1}\nabla U+D\nabla\log\rho), \tag{16}\]
and therefore, any value for \(\partial_{t}\rho\) in (2). Specifically, for any \(\partial_{t}\rho=\dot{\rho}\) with \(\|\frac{\dot{\rho}}{\rho}\|_{\rho}^{2}<\infty\), the equation
\[\nabla\cdot(\rho\gamma^{-1}(\nabla U+k_{B}T\nabla\log\rho))=\dot{\rho}\]
is a Poisson equation and has a (unique) solution for \(U\).
### _Geometric decomposition of entropy production_
We now study the source of entropy production by identifying the contribution of the orthogonal components of the velocity field that drives the thermodynamic states. This will prove helpful in identifying optimal protocols in what follows.
We start by considering a trajectory \(\rho(t,\cdot)\in P_{2}\) that connects end-point states \(\rho_{0},\rho_{f}\), and we let \(\dot{\rho}:=\partial_{t}\rho\). According to the discussion presented in Section III, any vector-field \(v\) that realizes the trajectory, i.e. that \(\nabla\cdot(\rho v)+\dot{\rho}=0\), admits an orthogonal decomposition, with respect to the metric \(\langle\cdot,\cdot\rangle_{\rho T^{-1}}\), as
\[v=T\nabla\phi+\chi, \tag{17}\]
where the gradient part \(T\nabla\phi\) is the projection \(\Pi_{\rho T^{-1}}v\) and \(\nabla\cdot(\rho\chi)=0\). The orthogonal decomposition implies that the entropy production \(\gamma\int_{0}^{t_{f}}\|v\|_{\rho T^{-1}}^{2}dt\) can be expressed as
\[\gamma\int_{0}^{t_{f}}\|T\nabla\phi\|_{\rho T^{-1}}^{2}dt+\gamma\int_{0}^{t_{f }}\|\chi\|_{\rho T^{-1}}^{2}dt. \tag{18}\]
The first of these two contributions represents the minimal entropy production that is attainable when we allow non-conservative actuation to drive the system over the specified time interval between the two states (cf. Section IV) - it is precisely the Wasserstein action integral for the space equipped with the \(\langle\cdot,\cdot\rangle_{g_{\rho\gamma T^{-1}}}\) Riemannian metric. Thus, it constitutes a lower bound to the total entropy production (18). It can be thought of as the entropic cost related to driving the thermodynamic system between the two states in finite time and will be denoted by \(S_{\text{ex}}\) (for _excess_ cost).
The second term in (18) represents a contribution to the entropy production that is due to circulation in the velocity field. Such circulation, for instance, is needed to sustain a non-equilibrium steady-state (NESS). Thus, under conservative actuation, minimal entropy production can no longer be expressed in terms of a distance between end-point states, since maintaining a stationary state incurs positive entropy production. The contribution to entropy production due to circulatory currents that are generated when steering a system out of equilibrium by non-conservative forcing in a uniform heat bath [11, 13, 30] has been referred to as "housekeeping entropy production." We will follow a similar convention and label the second term in (18) as _housekeeping_, and denote it by \(S_{\text{hk}}\).
Thus, the decomposition of entropy production in (18) can be seen as a generalization to anisotropic temperature fields of analogous decompositions discussed in earlier works [11, 13, 30, 31]; these works consider non-conservative forcing and a heat bath with uniform temperature (a single heat bath, with \(T\) scalar). At the time the present work was being completed, Yoshimura et al. [32] proposed a decomposition of entropy production into housekeeping and excess entropy terms for applications to chemical reactions that is analogous to the one presented herein, albeit developed in [32] for discrete spaces of chemical reactants.
The nature of \(S_{\rm{hk}}\) when the thermodynamic system is steered in the presence of thermal anisotropy is considerably more involved than when sustaining a NESS9. While a significant component is due to leakage of heat between the heat baths by way of the coupling between the degrees of freedom, the dynamics of the system also mediate such leakage.
Footnote 9: When maintaining a NESS \(\rho\) with two degrees of freedom (subject to "hot" and "cold" thermal excitation, respectively), \(S_{\rm{hk}}=Q\left(\frac{1}{T_{c}}-\frac{1}{T_{h}}\right)\), for \(Q\) the heat that flows from the hot to the cold heat bath, at temperatures \(T_{h}\) and \(T_{c}\), respectively.
We make this more precise by expressing the circulation as summation of circulation due to transition and the circulation necessary to maintain a NESS. The definition of the velocity field (16) and its orthogonal decomposition (17) imply the relationship
\[v=-\gamma^{-1}\nabla U-D\nabla\log\rho=T\nabla\phi+\chi. \tag{19}\]
Consider a steady-state, for which \(T\nabla\phi=0\), and let \(U_{\rm{ss}}\) and \(\chi_{\rm{ss}}\) denote the potential function and circulation at steady-state. Then, we have the identity
\[-D\nabla\log\rho=\gamma^{-1}\nabla U_{\rm{ss}}+\chi_{\rm{ss}},\]
implying that \(U_{\rm{ss}}\) and \(\chi_{\rm{ss}}\) are the terms in the Helmholtz-Hodge decomposition of \(-D\nabla\log\rho\), i.e. that
\[\chi_{\rm{ss}} =(\Pi_{\rho}-{\rm{Id}})D\nabla\log\rho,\] \[\nabla U_{\rm{ss}} =-\gamma\Pi_{\rho}D\nabla\log\rho.\]
In general, when \(T\nabla\phi\neq 0\), the potential function and circulation have additional terms; they are given by
\[\chi =\chi_{\rm{ss}}+(\Pi_{\rho}-{\rm{Id}})T\nabla\phi, \tag{20a}\] \[\nabla U =\nabla U_{\rm{ss}}-\gamma\Pi_{\rho}T\nabla\phi. \tag{20b}\]
This concludes the decomposition of the circulation to contributions from transitioning and maintaining a steady-state. The following lemma states that the circulation at steady-state is zero (implying equilibrium) if and only if all degrees of freedom are independent.
**Lemma 1**: _Let \(T_{i}\neq T_{j}\) for all \(i\neq j\). The entropy production rate for sustaining steady-state at \(\rho\) vanishes if and only if_
\[\rho(x)=\prod_{i=1}^{n}\rho_{i}(x_{i}), \tag{21}\]
_that is, all degrees of freedom are independent._
The entropy production is zero at steady-state iff \(\chi_{\rm{ss}}=0\), implying \(D\nabla\log\rho\) is of gradient form. As a result, the orthogonality condition,
\[\langle T\nabla\log\rho,\chi\rangle_{\rho}=0,\quad\forall\chi\quad{\rm{s.t.}} \quad\nabla\cdot(\rho\chi)=0,\]
holds. Let \(\chi=\frac{1}{\rho}\Omega\nabla\psi\) for arbitrary skew-symmetric matrix \(\Omega\) and function \(\psi\). The divergence-free condition is satisfied because
\[\nabla\cdot(\rho\chi) =\nabla\cdot(\Omega\nabla\psi)=\sum_{i,j=1}^{n}\partial_{i}( \Omega_{ij}\partial_{j}\psi)\] \[=\frac{1}{2}\sum_{i,j=1}^{n}(\Omega_{ij}+\Omega_{ji})\partial_{ ij}\psi=0.\]
The orthogonality condition implies
\[\int(\nabla\log\rho)^{\prime}T\Omega\nabla\psi dx=-\int{\rm{trace}}\,(\Omega T \nabla^{2}\log\rho)\psi dx=0\]
Requiring this identity to hold for any choice of \(\psi\) and \(\Omega\) implies that \(T\nabla^{2}\log\rho\) is symmetric which, since the temperatures are distinct, holds iff \(\partial_{ij}\log(\rho)=0\) for \(i\neq j\). This implies \(\log(\rho(x))=\sum_{i=1}^{n}\log\rho_{i}(x_{i})\), concluding that the distribution is of product form and all degrees of freedom are mutually independent. Conversely, if \(\rho\) is of product form, then \(T\nabla\log\rho(x)=\nabla(\sum_{i=1}^{n}T_{i}\log\rho_{i}(x_{i}))\), thus orthogonal to any divergence-free vector-field \(\chi\).
**Remark 3**: _The statement of the lemma can be easily extended to the case where some degrees of freedom correspond to the same temperature. In that case, \(\rho(x)\) factors as \(\prod_{i=1}^{m}\rho_{i}(\tilde{x}_{i})\) where \(m\) is the number of different temperatures and \(\tilde{x}_{i}\) is the collection of degrees of freedom corresponding to \(T_{i}\). \(\Box\)_
### _Directions of vanishing housekeeping entropy production_
Unlike the steady-state circulation which is zero only if the distribution is of product form, the total circulation can vanish by steering the system in specific directions. These directions result in a dynamical contribution to \(\chi\) that cancels out the steady-state circulation \(\chi_{ss}\) (see (20)). In fact, there are infinitely many such directions where the circulation vanishes. We characterize these next.
**Proposition 4**: _A choice of potential \(U(x)\) results in zero housekeeping entropy production (i.e., such that \(\chi=0\)) if and only if \(\nabla U\) lies in the range of \(\Pi_{\rho T^{-1}}\). Specifically, if \(T_{i}\neq T_{j}\) for all \(i\neq j\),_
\[\nabla U=-\gamma T\nabla\psi, \tag{22}\]
_where \(\psi\) is any function of the form \(\psi(x)=\sum_{i=1}^{n}\psi_{i}(x_{i})\)._
Setting \(\chi=0\) in (19) gives that
\[\gamma^{-1}\nabla U=-D\nabla\log\rho-T\nabla\phi.\]
Since both \((\Pi_{\rho T^{-1}}-{\rm{Id}})D\nabla\log\rho\) and \((\Pi_{\rho T^{-1}}-{\rm{Id}})T\nabla\phi\) are zero, \(\gamma^{-1}\nabla U=-T\nabla\psi\) for a suitable \(\psi\). It follows that if \(T_{i}\neq T_{j}\) for all \(i\neq j\), \(\psi\) must be such that \(T\nabla\psi=\nabla\psi_{T}\) where \(\psi_{T}(x)=\sum_{i=1}^{n}T_{i}\psi_{i}(x_{i})\).
A potential \(U\) as in the proposition leads to tangent directions of the form
\[T\nabla\phi=-D\nabla\log\rho+T\nabla\psi, \tag{23}\]
having vanishing housekeeping entropy production. Among such directions, it is interesting to characterize the one with minimum excess entropy production:
\[\min\,\dot{S}_{\text{ex}}=\min_{\nabla\phi}\gamma\|T\nabla\phi\|_{\rho T^{-1}}^ {2},\quad\text{s.t.}\quad\chi=0. \tag{24}\]
This is the content of the following proposition.
**Proposition 5**: _Let \(T_{i}\neq T_{j}\) for all \(i\neq j\). The minimum excess entropy production rate with zero circulation,_
\[\gamma^{-1}k_{B}^{2}\|\nabla\log\rho-\nabla\log\bar{\rho}\|_{\rho T}^{2}, \tag{25}\]
_is attained using \(\nabla U=-\gamma T\nabla\psi\) with_
\[\psi=\gamma^{-1}k_{B}\sum_{i=1}^{n}\log\rho_{i}(x_{i}). \tag{26}\]
_Here, \(\rho_{i}(x_{i})=\int\rho(x)dx_{/i}\), with \(dx_{/i}\) denoting integration with respect to all coordinates except \(x_{i}\), is the \(i\)-th marginal of \(\rho\), and \(\bar{\rho}(x)=\prod_{i=1}^{n}\rho_{i}(x_{i})\)._
The formula for tangent directions with vanishing circulation (23) implies that
\[\gamma\|T\nabla\phi\|_{\rho T^{-1}}^{2} =\gamma\|\gamma^{-1}k_{B}T\nabla\log\rho-T\nabla\psi\|_{\rho T^{-1}}^{2}=\gamma^{-1}k_{B}^{2}\|\nabla\log\rho-\nabla\tilde{\psi}\|_{\rho T}^{2},\]
with \(\tilde{\psi}=k_{B}^{-1}\gamma\psi\). Minimizing the squared-norm
\[\|\nabla\log\rho-\nabla\tilde{\psi}\|_{\rho T}^{2}=\sum_{i=1}^{n}T_{i}\int( \partial_{i}\log\rho(x)-\partial_{i}\tilde{\psi}_{i}(x_{i}))^{2}\rho(x)dx,\]
involves \(n\) independent minimizations, each over \(\partial_{i}\tilde{\psi}_{i}(x_{i})\) for \(i=1,2,\ldots,n\). The solution to each minimization problem is the conditional expectation
\[\partial_{i}\tilde{\psi}_{i}(x_{i})=\int\partial_{i}\log\rho(x)\rho(x_{/i}|x_ {i})\,dx_{/i},\]
where \(\rho(x_{/i}|x_{i})\) is the conditional density of \(x_{/i}\) given \(x_{i}\). Upon using the definition of conditional density \(\rho(x_{/i}|x_{i})=\rho(x)/\rho_{i}(x_{i})\),
\[\partial_{i}\tilde{\psi}_{i}(x_{i})= \int\frac{\partial_{i}\rho(x)}{\rho(x)}\frac{\rho(x)}{\rho_{i}(x _{i})}\,dx_{/i}=\frac{1}{\rho_{i}(x_{i})}\partial_{i}\int\rho(x)dx_{/i}\] \[= \partial_{i}\log\rho_{i}(x_{i}),\]
concluding the result.
**Remark 4**: _Vanishing minimum excess entropy production in (25) is achieved if and only if \(\rho\) is of product form, i.e., as in Lemma 1. This stands to reason, since \(\chi_{ss}\neq 0\) implies that the only way to make \(\chi\) vanish in (20) is through a non-zero velocity \(T\nabla\phi\), leading to non-zero excess entropy. A similar remark applies to the statement of the following subsection. \(\Box\)_
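For a zero-mean Gaussian state the above specializes neatly: the marginals of \(\rho=\mathcal{N}(0,\Sigma)\) are Gaussian, so \(\bar{\rho}=\mathcal{N}(0,\mathrm{diag}(\Sigma))\) and (25) reduces to a trace expression. A minimal sketch of this specialization (which is not spelled out in the text; all values illustrative):

```python
import numpy as np

# Sketch: minimum excess entropy production rate (25) with zero circulation
# for a zero-mean Gaussian state rho = N(0, Sigma).  Then grad log rho = -inv(Sigma) x,
# grad log rho_bar = -inv(diag(Sigma)) x, and (25) becomes a trace.
kB, gamma = 1.0, 1.0
T = np.diag([2.0, 0.5])
Sigma = np.array([[1.0, 0.4], [0.4, 0.8]])     # illustrative covariance

Dm = np.diag(np.diag(Sigma))                   # covariance of the product of marginals
G = np.linalg.inv(Sigma) - np.linalg.inv(Dm)   # grad log rho - grad log rho_bar = -G x
S_ex_min = (kB**2 / gamma) * np.trace(G @ T @ G @ Sigma)
print("minimum excess entropy rate with zero circulation:", S_ex_min)
```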
### _Direction of least entropy production_
Leaving trajectories with vanishing housekeeping entropy production aside, directions minimizing total entropy production rate can be identified. These are of interest, not only for standard energetic considerations, but also because gradient flows that minimize entropy rate are envisioned to have physical and biological significance. Thus, below we identify the potential \(U\) that minimizes instantaneous entropy production _rate_.
**Proposition 6**: _The vector-field \(\nabla U\) that minimizes the entropy production rate (8b) is given in terms of the orthogonal projection of \(-k_{B}\nabla\log(\rho)\) with respect to \(\langle\cdot,\cdot\rangle_{\rho T}\) onto gradient fields (cf. (12)), i.e.,_
\[\nabla U=-k_{B}T\Pi_{\rho T}(\nabla\log(\rho)). \tag{27}\]
_The potential function \(U\in H^{1}(\rho)\) is the unique solution to the Poisson equation_
\[\mathcal{L}_{\rho T^{-1}}U=-k_{B}\Delta\rho, \tag{28}\]
_where \(\mathcal{L}_{\rho M}(\cdot):=\nabla\cdot(\rho M\nabla(\cdot))\)._
Using the decomposition
\[k_{B}\nabla\log\rho=T^{-1}\nabla\psi+\chi,\]
_where \(T^{-1}\nabla\psi=k_{B}\Pi_{\rho T}\nabla\log\rho\) and \(\nabla\cdot(\rho\chi)=0\), the entropy production rate_
\[\gamma\dot{S}_{\text{tot}} =\|\nabla U+k_{B}T\nabla\log\rho\|_{\rho T^{-1}}^{2}\] \[=\|T^{-1}\nabla U+k_{B}\nabla\log\rho\|_{\rho T}^{2}\] \[=\|T^{-1}\nabla U+T^{-1}\nabla\psi\|_{\rho T}^{2}+\|\chi\|_{\rho T} ^{2}\]
_is minimized by \(\nabla U=-\nabla\psi=-k_{B}T\Pi_{\rho T}\nabla\log\rho\). The Poisson equation follows from the definition of the projection and the fact that \(\nabla\cdot(\rho\chi)=0\). Existence and uniqueness of a (weak) solution to the Poisson equation follows from the Poincare inequality and finiteness of \(\|\frac{\Delta\rho}{\rho}\|_{\rho}^{2}\), which hold under Assumption A1 [26]._
### _Minimal entropy production between end-points_
We now consider minimizing entropy production along _a path_ between two distributions driven by conservative actuation.
We first rewrite the rate of entropy production as
\[\dot{S}_{\text{tot}}= \gamma^{-1}\int\|\nabla U+k_{B}T\nabla\log\rho\|_{T^{-1}}^{2}\rho dx\] \[= \gamma^{-1}\Big{[}\int\|\nabla U\|_{T^{-1}}^{2}\rho dx-k_{B}^{2} \int\|\nabla\log\rho\|_{T}^{2}\rho dx\Big{]}+2\dot{S}_{\text{sys}},\]
using that \(\dot{S}_{\text{sys}}=-k_{B}\int\partial_{t}\rho\log\rho dx=-k_{B}\int(\nabla\log\rho)^{\prime}v\rho dx\) via integration by parts. Since \(\int_{0}^{t_{f}}\dot{S}_{\text{sys}}dt=S_{\text{sys}}(\rho(t_{f}))-S_{\text{sys}}(\rho(0))\) only depends on the end-point distributions, minimizing entropy production over the transition amounts to solving
\[\min_{U,\rho}\gamma^{-1}\int_{0}^{t_{f}}\int\Big{[}\|\nabla U\|_{T^{-1}}^{2}-k_{B }^{2}\|\nabla\log\rho\|_{T}^{2}\Big{]}\rho dxdt, \tag{29}\]
subject to the continuity equation (2) and the end-point conditions. Necessary conditions for optimality are stated next, expressed as equations (30a-30c).
**Proposition 7**: _A path \(\rho(t,\cdot)\) between specified terminal states, along with the corresponding control protocol \(U(t,\cdot)\) that solve_
\[\min_{U,\rho}\Big{\{}\int_{0}^{t_{f}}\dot{S}_{\text{tot}}dt\mid(2)\text{ and }\rho(0)= \rho_{0},\ \rho(t_{f})=\rho_{f}\Big{\}},\]
_satisfies the equations_
\[\gamma\partial_{t}\rho= \nabla\cdot(\rho(\nabla U+k_{B}T\nabla\log\rho)), \tag{30a}\] \[\gamma\partial_{t}\lambda= \|\nabla U\|_{T^{-1}}^{2}+\frac{2k_{B}^{2}}{\rho}\nabla\cdot(T \nabla\rho)-k_{B}^{2}\|\nabla\log\rho\|_{T}^{2}\] \[+(\nabla\lambda,\nabla U+k_{B}T\nabla\log\rho)-\frac{1}{\rho} \nabla\cdot(\rho k_{B}T\nabla\lambda),\] (30b) \[0= \mathcal{L}_{\rho T^{-1}}U+\frac{1}{2}\nabla\cdot(\rho\nabla \lambda). \tag{30c}\]
_with boundary values \(\rho(0)=\rho_{0}\) and \(\rho(t_{f})=\rho_{f}\)._
We use the expression in (29) to write the following augmented Lagrangian:
\[J= \gamma^{-1}\int_{0}^{t_{f}}\int\Big{[}(\nabla U)^{\prime}T^{-1}\nabla U-k_{B}^{2}(\nabla\log\rho)^{\prime}T\nabla\log\rho\Big{]}\rho dxdt\] \[+\int_{0}^{t_{f}}\int\lambda\Big{[}\partial_{t}\rho-\gamma^{-1}\nabla\cdot(\rho(\nabla U+k_{B}T\nabla\log\rho))\Big{]}dxdt,\]
with \(\lambda\) a Lagrange multiplier. The first variation is
\[\delta J= \gamma^{-1}\int_{0}^{t_{f}}\int\Big{[}2(\nabla\delta_{U})^{\prime}T^{-1}\nabla U\rho+(\nabla U)^{\prime}T^{-1}\nabla U\delta_{\rho}-2k_{B}^{2}\Big{(}\nabla\frac{\delta_{\rho}}{\rho}\Big{)}^{\prime}T\nabla\log\rho\,\rho-k_{B}^{2}(\nabla\log\rho)^{\prime}T\nabla\log\rho\,\delta_{\rho}\] \[+\lambda\big{\{}\gamma\partial_{t}\delta_{\rho}-\nabla\cdot((\nabla U+k_{B}T\nabla\log\rho)\delta_{\rho})-\nabla\cdot((\nabla\delta_{U}+k_{B}T\nabla\frac{\delta_{\rho}}{\rho})\rho)\big{\}}\] \[+\delta_{\lambda}\big{(}\gamma\partial_{t}\rho-\nabla\cdot(\rho(\nabla U+k_{B}T\nabla\log\rho))\big{)}\Big{]}dxdt.\]
Integrating by parts and setting this to zero for all perturbations \(\delta_{U},\delta_{\rho},\delta_{\lambda}\) we obtain (30).
The equations (30) have the structure of a coupled system of partial differential equations and, in general, need to be solved numerically. However, closed-form solutions to this problem can be obtained for the following special case.
### _Minimal entropy production between product states_
We herein focus on minimizing entropy production while transitioning between two states with mutually independent degrees of freedom, i.e.
\[\min_{U,\rho}\Big{\{}\int_{0}^{t_{f}}\dot{S}_{\text{tot}}dt\mid(2),\ \rho(0)=\prod_{i=1}^{n}\rho_{i}^{0}(x_{i}),\ \rho(t_{f})=\prod_{i=1}^{n}\rho_{i}^{f}(x_{i})\Big{\}}. \tag{31}\]
It turns out that this is equivalent to minimizing excess entropy production, and that the thermodynamic state retains independence along degrees of freedom, resulting in the following statement.
**Proposition 8**: _The solution to (31) coincides with the solution to the unconstrained problem (14) with end-points \(\rho(0)\) and \(\rho(t_{f})\) as above._
From Section V-A we know that the minimal excess entropy production \(S_{\text{ex}}\), when transitioning between \(\rho_{0}=\prod_{i=1}^{n}\rho_{i}^{0}(x_{i})\) and \(\rho_{f}=\prod_{i=1}^{n}\rho_{i}^{f}(x_{i})\), is
\[\min_{\rho,v}\int_{0}^{t_{f}}\dot{S}_{\text{ex}}\,dt=\frac{1}{t_{f}}W_{2,M}^{2}(\rho_{0},\rho_{f}),\]
for \(M=\gamma T^{-1}\). We have that
\[W_{2,M}^{2}(\rho_{0},\rho_{f})=\min_{\pi}\int\|x-y\|_{M}^{2}d\pi(x,y),\]
where \(\pi(x_{1},x_{2},\ldots,y_{1},y_{2},\ldots)\) is a coupling having marginals \(\rho_{i}^{0}(x_{i})\) and \(\rho_{i}^{f}(y_{i})\), with \(x_{i},y_{i}\) denoting coordinates along the same \(i\)-th degree of freedom at the start and end of the transition, respectively. We note that, since \(M\) is diagonal, the cost \(\|x-y\|_{M}^{2}\) splits as \(\sum_{i=1}^{n}M_{ii}(x_{i}-y_{i})^{2}\), leading to \(n\) uncoupled weighted OMT problems,
\[W_{2,M}^{2}(\rho_{0},\rho_{f})=\sum_{i=1}^{n}\min_{\pi_{i}}\int M_{ii}(x_{i}-y_{i})^{2}d\pi_{i}(x_{i},y_{i}),\]
where now \(\pi_{i}\) is a coupling between the starting and ending marginals of the \(i\)-th degree of freedom \(\rho_{i}^{0}\) and \(\rho_{i}^{f}\). Thus, the optimal velocity field along the \(i\)-th degree of freedom is of the form \(T_{i}\partial_{x_{i}}\phi_{i}(x_{i})\), so that the optimal velocity field for the original problem is of the form \(T\nabla\phi=\nabla\phi_{T}\) where \(\phi_{T}(x)=\sum_{i=1}^{n}T_{i}\phi_{i}(x_{i})\). Moreover, the resulting path of thermodynamic states \(\rho(t,x)\) retains the product structure of the starting and ending marginals. This path can be realized by a control of the form \(\nabla U=-\gamma T\nabla\psi\) as in Proposition 4, resulting in vanishing housekeeping entropy production. Therefore, we have shown that the optimal protocol for minimizing excess entropy also minimizes housekeeping entropy, as claimed.
## VI Control via a quadratic potential
We now specialize to the case where the controlling potential is quadratic, namely, \(U(t,x)=x^{\prime}K(t)x/2\) with \(x\in\mathbb{R}^{n}\) and \(K(t)=K(t)^{\prime}\). Starting from an initial zero-mean Gaussian distribution, the thermodynamic state traces a path on the submanifold of Gaussian distributions
\[\rho(t,x)=\frac{1}{(2\pi)^{n/2}\det(\Sigma(t))^{1/2}}e^{-\frac{1}{2}\|x\|_{\Sigma(t)^{-1}}^{2}}, \tag{32}\]
where the covariance \(\Sigma\) satisfies the differential Lyapunov equation (corresponding to (2))
\[\gamma\dot{\Sigma}(t)=-K(t)\Sigma(t)-\Sigma(t)K(t)+2k_{B}T. \tag{33}\]
In this case, the velocity field (16) takes form
\[v(t,x)=-\gamma^{-1}K(t)x+D\Sigma(t)^{-1}x, \tag{34}\]
linear in \(x\), since \(\nabla\log\rho(t,x)=-\Sigma(t)^{-1}x\).
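A minimal sketch propagating the covariance by an explicit Euler discretization of the Lyapunov ODE (33), assuming an illustrative constant gain \(K\) (time-varying gains would simply be evaluated inside the loop):

```python
import numpy as np

# Sketch: explicit Euler integration of the Lyapunov ODE (33),
#   gamma * dSigma/dt = -K Sigma - Sigma K + 2 kB T,
# for an illustrative constant gain K.
kB, gamma = 1.0, 1.0
T = np.diag([2.0, 0.5])
K = np.array([[1.0, 0.3], [0.3, 1.0]])

Sigma = np.eye(2)              # initial covariance
dt, n_steps = 1e-3, 5000
for _ in range(n_steps):
    Sigma = Sigma + (dt / gamma) * (-K @ Sigma - Sigma @ K + 2 * kB * T)

print("Sigma(t_f):\n", Sigma)  # approaches the steady state of (35) for constant K
```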
When the potential remains constant, with \(K(t)=K_{c}\) symmetric and positive definite, the system reaches a steady-state distribution, which is Gaussian \(\mathcal{N}(0,\Sigma_{ss})\) with the steady-state covariance \(\Sigma_{ss}\) satisfying the algebraic Lyapunov equation
\[K_{c}\Sigma_{ss}+\Sigma_{ss}K_{c}=2k_{B}T. \tag{35}\]
The solution of (35) is unique and can be expressed as
\[\Sigma_{ss}=2k_{B}\int_{0}^{\infty}e^{-\tau K_{c}}Te^{-\tau K_{c}}d\tau=L_{K_{c} }(2k_{B}T),\]
where
\[X\mapsto L_{A}(X):=\int_{0}^{\infty}e^{-\tau A}Xe^{-\tau A^{\prime}}d\tau,\]
is a linear operator that depends on \(A\).
It is seen that the detailed balance condition (\(J=0\)) is special in that it requires that \(K_{c}\) and \(T\) commute. This holds when \(K_{c}\) is diagonal, since \(T\) is already diagonal. In this case, \(\Sigma_{ss}=k_{B}TK_{c}^{-1}\) is also diagonal and results in zero probability current according to (34); heat cannot transfer between the degrees of freedom. When \(K_{c}\) and \(T\) do not commute, detailed balance breaks down and a non-vanishing probability current materializes leading to a non-equilibrium steady-state with non-vanishing heat transfer between the heat baths.
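Equation (35) is a standard continuous Lyapunov equation and can be solved numerically off the shelf. A minimal sketch (values illustrative) that also evaluates the steady-state velocity gain, which vanishes exactly when \(K_{c}\) and \(T\) commute:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: steady state under a constant quadratic potential, via (35),
#   K_c Sigma_ss + Sigma_ss K_c = 2 kB T.
kB, gamma = 1.0, 1.0
T = np.diag([2.0, 0.5])
Kc = np.array([[1.0, 0.3], [0.3, 1.0]])      # illustrative; does not commute with T

Sigma_ss = solve_continuous_lyapunov(Kc, 2 * kB * T)

# Steady-state velocity field v(x) = V x, cf. (34); a nonzero V signals a NESS
# with circulating probability current (detailed balance needs Kc T = T Kc).
V = -Kc / gamma + (kB / gamma) * T @ np.linalg.inv(Sigma_ss)
print("Sigma_ss:\n", Sigma_ss)
print("commutator Kc T - T Kc:\n", Kc @ T - T @ Kc)
print("steady-state V (zero iff detailed balance):\n", V)
```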
We now depart from this steady-state analysis and focus on entropy production in a dynamic setting, for the special case of Gaussian distributions.
### _Geometric decomposition of entropy production_
The geometric decomposition of entropy production of Section V-A, specialized to the case of a Gaussian path of distributions, is as follows. Always assuming zero-mean, the path corresponds to a curve of covariance matrices \(\{\Sigma(t):t\in[0,t_{f}]\}\), while the entropy production is given by
\[\int_{0}^{t_{f}}\dot{S}_{\text{tot}}dt=\gamma\int_{0}^{t_{f}}\text{trace}\left[V^{\prime}T^{-1}V\Sigma\right]dt.\]
Here, following (17), \(v=Vx=-\left(\gamma^{-1}K-D\Sigma^{-1}\right)x\) admits the orthogonal decomposition
\[Vx=\underbrace{TAx}_{T\nabla\phi}+\underbrace{\Omega\Sigma^{-1}x}_{\chi}, \tag{36}\]
where \(A\) is a symmetric matrix and \(\Omega\) is skew-symmetric, possibly time-varying. Accordingly, thanks to the orthogonality condition \(\text{trace}\left[A\Omega\right]=0\), the entropy production decomposes into two parts
\[S_{\text{ex}}+S_{\text{hk}}=\gamma\int_{0}^{t_{f}}\text{trace}\left[ATA\Sigma\right]dt+\gamma\int_{0}^{t_{f}}\text{trace}\left[\Omega^{\prime}T^{-1}\Omega\Sigma^{-1}\right]dt, \tag{37}\]
in agreement with (18).
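Numerically, the split (36)-(37) can be computed from \(V\) and \(\Sigma\) alone: skew-symmetry of \(\Omega=(V-TA)\Sigma\) forces the symmetric \(A\) to satisfy \(TA\Sigma+\Sigma AT=V\Sigma+\Sigma V^{\prime}\), which can be solved by vectorization. A minimal sketch with illustrative values (not part of the original exposition):

```python
import numpy as np

# Sketch: numerical decomposition (36)-(37) of a linear velocity field v(x) = V x
# with respect to a Gaussian state of covariance Sigma.  The symmetric A solves
#   T A Sigma + Sigma A T = V Sigma + Sigma V',
# solved here by Kronecker vectorization; then Omega = V Sigma - T A Sigma is skew.
gamma = 1.0
T = np.diag([2.0, 0.5])
Sigma = np.array([[1.0, 0.4], [0.4, 0.8]])
V = np.array([[-0.5, 0.2], [-0.3, -0.7]])

n = T.shape[0]
S = V @ Sigma + Sigma @ V.T
L = np.kron(Sigma, T) + np.kron(T, Sigma)        # vec(T A Sigma + Sigma A T) = L vec(A)
A = np.linalg.solve(L, S.flatten(order='F')).reshape((n, n), order='F')
A = 0.5 * (A + A.T)                              # clean up numerical asymmetry
Omega = V @ Sigma - T @ A @ Sigma                # skew-symmetric circulation term

Tinv = np.linalg.inv(T)
S_ex_rate = gamma * np.trace(A @ T @ A @ Sigma)                                # excess part of (37)
S_hk_rate = gamma * np.trace(Omega.T @ Tinv @ Omega @ np.linalg.inv(Sigma))    # housekeeping part
print("Omega + Omega' (should be ~0):\n", Omega + Omega.T)
print("excess rate:", S_ex_rate, " housekeeping rate:", S_hk_rate)
print("total, for comparison:", gamma * np.trace(V.T @ Tinv @ V @ Sigma))
```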
Echoing the development in Section V, for the special case of Gaussian densities and quadratic potential, we decompose the circulation and potential into their steady-state and dynamical components. At steady-state (where \(A=0\)), the decomposition (36) implies that
\[D\Sigma^{-1}-\gamma^{-1}K_{ss}=\Omega_{ss}\Sigma^{-1}.\]
Using the fact that the right hand side of \(\gamma^{-1}K_{ss}=D\Sigma^{-1}-\Omega_{ss}\Sigma^{-1}\) must be symmetric (since \(K_{ss}\) is), we obtain
\[\Omega_{ss}\Sigma^{-1}+\Sigma^{-1}\Omega_{ss}=D\Sigma^{-1}-\Sigma^{-1}D,\]
and hence,
\[\Omega_{ss}=L_{\Sigma^{-1}}(D\Sigma^{-1}-\Sigma^{-1}D).\]
Similarly, since \(\Omega_{ss}=D-\gamma^{-1}K_{ss}\Sigma\) must be skew-symmetric, we obtain
\[K_{ss}=2k_{B}L_{\Sigma}(T).\]
When not at steady-state, \(A\neq 0\) in (36) introduces an extra term leading to
\[\Omega =\Omega_{ss}-L_{\Sigma^{-1}}(TA-AT),\] \[K =K_{ss}-\gamma L_{\Sigma}(TA\Sigma+\Sigma AT),\]
similarly to (20).
### _Directions of vanishing housekeeping entropy production_
We now explain the content of Proposition 4, that characterizes directions with vanishing housekeeping entropy production, as it pertains to the Gaussian case.
In light of (36), for zero housekeeping entropy production (\(\chi=0\)), the velocity field must be of the form \(v=TAx\), with \(A\) symmetric. At the same time, \(v=-\gamma^{-1}(K-k_{B}T\Sigma^{-1})x\). Thus, \(K\), which is symmetric, must be such that \(T^{-1}K\) is also symmetric. (This argumentation recapitulates the reasoning that leads to \(\nabla U\) being in the range of \(\Pi_{\rho T^{-1}}\).) It follows that \(K\) must be diagonal. In conclusion, the system is steered in a direction with vanishing housekeeping entropy production if and only if \(K\) is diagonal.
We can further identify choices of \(K\) that, besides ensuring \(\dot{S}_{\text{hk}}=0\), minimize excess entropy production _rate_ echoing Proposition 5. In the present case where \(\nabla U=Kx\) with \(K\) diagonal,
\[\dot{S}_{\text{ex}} =\gamma\|D\nabla\log\rho+\gamma^{-1}\nabla U\|_{\rho T^{-1}}^{2}\] \[=\gamma^{-1}\|k_{B}T\Sigma^{-1}x-Kx\|_{\rho T^{-1}}^{2}\] \[=\gamma^{-1}\text{trace}\left(k_{B}^{2}\Sigma^{-1}T-2k_{B}K+KT^{- 1}K\Sigma\right).\]
This is minimized for \(K_{ii}=k_{B}T_{i}(\Sigma_{ii})^{-1}\), in agreement with (26).
### _Direction of least entropy production rate_
We can readily obtain the potential function (i.e., the gain \(K(t)\)) that minimizes the entropy production _rate_. Indeed, with \(U(x)=x^{\prime}Kx/2\) and Gaussian distribution (32), equation (28) translates into
\[T^{-1}K\Sigma+\Sigma KT^{-1}=2k_{B}I,\]
or, equivalently,
\[K\Sigma T+T\Sigma K=2k_{B}T^{2},\]
with unique solution
\[K=2k_{B}L_{T\Sigma}(T^{2}).\]
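This is again a Lyapunov-type equation, now with the (generally non-symmetric) coefficient \(T\Sigma\), and can be solved numerically. A minimal sketch with illustrative values:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: the gain minimizing the instantaneous entropy production rate solves
#   T Sigma K + K Sigma T = 2 kB T^2,
# a Lyapunov equation with coefficient matrix T Sigma.
kB, gamma = 1.0, 1.0
T = np.diag([2.0, 0.5])
Sigma = np.array([[1.0, 0.4], [0.4, 0.8]])     # illustrative covariance

K_star = solve_continuous_lyapunov(T @ Sigma, 2 * kB * T @ T)
print("least-entropy-rate gain K:\n", K_star)
print("residual (should be ~0):\n", T @ Sigma @ K_star + K_star @ Sigma @ T - 2 * kB * T @ T)
```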
### _Minimal entropy production between end-points_
Next we specialize the first-order optimality condition (30) for _transition between end-point states_ to Gaussian states and transition path. We adopt the ansatz that the Lagrange multiplier is of the form
\[\lambda(t,x)=\frac{1}{2}x^{\prime}\Lambda(t)x+c(t).\]
The optimal \(K,\Sigma\) and \(\Lambda\) satisfy
\[\gamma\dot{\Sigma}= -K\Sigma-\Sigma K+2k_{B}T,\] \[\gamma\dot{\Lambda}= \Lambda K+K\Lambda+2KT^{-1}K+2k_{B}^{2}\Sigma^{-1}T\Sigma^{-1},\] \[\gamma\dot{c}= -2k_{B}^{2}\operatorname{trace}{(T\Sigma^{-1})}-k_{B} \operatorname{trace}{[T\Lambda]},\] \[K= -\frac{1}{2}L_{T\Sigma}(T\Lambda\Sigma T+T\Sigma\Lambda T),\]
translating (30) to the quadratic actuation case. This is a set of coupled algebraic-differential equations with two-point boundary conditions, that can be solved numerically (e.g., via a shooting method).
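A possible numerical treatment is sketched below as a simple shooting scheme: the \(\Sigma\) and \(\Lambda\) equations are integrated forward from a guessed \(\Lambda(0)\), which is then adjusted until \(\Sigma(t_{f})\) matches the target. The end-point data, the constants \(\gamma=k_{B}=1\), and the reading of \(L_{T\Sigma}(M)\) as the solution \(X\) of \((T\Sigma)X+X(T\Sigma)=M\) are assumptions made only for illustration; the decoupled equation for \(c\) is omitted.

```python
# Sketch of a shooting method for the two-point boundary-value system above.
# Assumptions (illustrative only): n = 2, gamma = kB = 1, and L_{T Sigma}(M) is
# read as the solution X of (T Sigma) X + X (T Sigma) = M.
import numpy as np
from scipy.linalg import solve_sylvester
from scipy.integrate import solve_ivp
from scipy.optimize import root

gamma, kB, tf = 1.0, 1.0, 1.0
T = np.diag([2.0, 1.0])                       # hypothetical temperatures
Sigma0 = np.eye(2)                            # initial covariance
Sigmaf = np.diag([1.5, 0.8])                  # target covariance at t = tf

def K_of(Lam, Sig):
    # K = -1/2 L_{T Sigma}(T Lam Sig T + T Sig Lam T)
    M = T @ Lam @ Sig @ T + T @ Sig @ Lam @ T
    return -0.5 * solve_sylvester(T @ Sig, T @ Sig, M)

def rhs(t, y):
    Sig, Lam = y[:4].reshape(2, 2), y[4:].reshape(2, 2)
    K = K_of(Lam, Sig)
    Sinv = np.linalg.inv(Sig)
    dSig = (-K @ Sig - Sig @ K + 2 * kB * T) / gamma
    dLam = (Lam @ K + K @ Lam + 2 * K @ np.linalg.inv(T) @ K
            + 2 * kB ** 2 * Sinv @ T @ Sinv) / gamma
    return np.concatenate([dSig.ravel(), dLam.ravel()])

def mismatch(lam0):
    Lam0 = np.array([[lam0[0], lam0[1]], [lam0[1], lam0[2]]])
    y0 = np.concatenate([Sigma0.ravel(), Lam0.ravel()])
    SigT = solve_ivp(rhs, (0.0, tf), y0, rtol=1e-8).y[:4, -1].reshape(2, 2)
    E = SigT - Sigmaf
    return [E[0, 0], E[0, 1], E[1, 1]]        # independent entries of the symmetric gap

res = root(mismatch, np.zeros(3))             # shoot on Lambda(0)
print(res.success, res.x)
```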
### _Minimal entropy production between Gaussian product states_
We now focus on minimizing entropy production while transitioning between two Gaussian states with mutually independent degrees of freedom, i.e., with diagonal covariances. In analogy with Section V-E, this is equivalent to solving the unconstrained problem (14), subject to a quadratic potential with end-points \(\rho_{0}=\mathcal{N}(0,\Sigma_{0})\) and \(\rho_{f}=\mathcal{N}(0,\Sigma_{f})\) with \(\Sigma_{0}\) and \(\Sigma_{f}\) diagonal. To see this, note that the optimal solution to (14) with Gaussian end-points is given by the Gaussian interpolation (15), where \(\Sigma(t)\) remains diagonal at all times, as \(\Sigma_{0}\) and \(\Sigma_{f}\) are diagonal (recall that \(M\) in (15b) is diagonal). Consequently, \(K(t)\) in (33) must also be diagonal. Since \(\Sigma(t)\) and \(K(t)\) are diagonal, the matrix \((-\gamma^{-1}K(t)+D\Sigma(t)^{-1})\) that defines the velocity field is also diagonal. Hence, the velocity field has no circulation (\(\Omega=0\)) and thus, the housekeeping entropy production vanishes. Evidently, this is due to the fact that \(T\), \(K\) and \(\Sigma\) are "aligned."
The above considerations tell us very little about optimal trajectories that start or end at non-equilibrium steady-states. Next, we provide insight to such a scenario for a two-dimensional setting.
## VII A two-dimensional case
We now consider that \(n=2\), where we may picture a particle with two degrees of freedom subject to respective stochastic excitations that correspond to temperatures \(T_{1}>T_{2}\). This system has attracted considerable attention in works that have focused on quantifying heat transfer and torque produced in stationary states [33, 34, 20, 35, 36]. The goal of the present section is to characterize entropy production, resulting from heat transfer as well as the dynamics in a non-stationary setting, and to determine time-varying control that precisely minimizes this entropy production.
### _Explicit expressions of \(S_{\mathrm{ex}}\) and \(S_{\mathrm{hk}}\)_
Starting from (37), the total entropy production can be expressed as
\[\gamma\int_{0}^{t_{f}}\operatorname{trace}{[ATA\Sigma]}dt+\gamma\int_{0}^{t_{f}}\operatorname{trace}{[\Sigma^{1/2}T^{-1}\Sigma^{1/2}\tilde{\Omega}\tilde{\Omega}^{\prime}]}\,dt,\]
where \(\tilde{\Omega}=\Sigma^{-1/2}\Omega\Sigma^{-1/2}\) is still a skew-symmetric matrix. Since we have defined our decomposition so that
\[V\Sigma=-\gamma^{-1}K\Sigma+\gamma^{-1}k_{B}T=TA\Sigma+\Omega,\]
we write
\[\gamma V\Sigma+\gamma\Sigma V^{\prime}=-K\Sigma-\Sigma K+2k_{B}T=\gamma TA\Sigma+\gamma\Sigma AT.\]
Using this expression, the Lyapunov equation (33) becomes
\[\dot{\Sigma} =TA\Sigma+\Sigma AT,\ \mathrm{or},\] \[\dot{\tilde{\Sigma}} =\tilde{A}\tilde{\Sigma}+\tilde{\Sigma}\tilde{A}, \tag{40}\]
for \(\tilde{A}:=T^{1/2}AT^{1/2}\) and \(\tilde{\Sigma}:=T^{-1/2}\Sigma T^{-1/2}\). The excess entropy production can now be written as
\[S_{\mathrm{ex}} =\frac{\gamma}{2}\int_{0}^{t_{f}}\operatorname{trace}{[\dot{\Sigma}A]}dt=\frac{\gamma}{2}\int_{0}^{t_{f}}\operatorname{trace}{[\dot{\tilde{\Sigma}}\tilde{A}]}dt\] \[=\frac{\gamma}{2}\int_{0}^{t_{f}}\operatorname{trace}{[L_{\tilde{\Sigma}}(\dot{\tilde{\Sigma}})\dot{\tilde{\Sigma}}]}dt.\]
To obtain the second equality above we have used the identity \(\tilde{A}=L_{\tilde{\Sigma}}(\dot{\tilde{\Sigma}})\) as the solution to (40).
To simplify calculations we introduce the parametrization
\[\tilde{\Sigma}(r,\theta)=R\Big{(}-\frac{\theta}{2}\Big{)}\sigma^{2}(r)R\Big{(} \frac{\theta}{2}\Big{)}, \tag{41}\]
where
\[R(\vartheta)=\begin{bmatrix}\cos(\vartheta)&\sin(\vartheta)\\ -\sin(\vartheta)&\cos(\vartheta)\end{bmatrix}\ \text{and}\ \sigma^{2}(r)=\frac{l_{c}^{2}}{ \sqrt{T_{1}T_{2}}}\begin{bmatrix}e^{r}&0\\ 0&e^{-r}\end{bmatrix},\]
are matrices, orthogonal and diagonal, respectively, and where \(l_{c}=\sqrt[4]{\det(\Sigma(t))}\) is a (constant) _characteristic length_ of the system. With this parametrization, one can explicitly express the excess part of entropy production as (see [14] for similar computations and more details)
\[S_{\mathrm{ex}}=k_{B}\tau\int_{0}^{t_{f}}\left(\cosh(r)\dot{r}^{2}+\sinh(r) \tanh(r)\dot{\theta}^{2}\right)dt, \tag{42}\]
where \(\tau=\gamma l_{c}^{2}/(2k_{B}\sqrt{T_{1}T_{2}})\) is a _characteristic time_ constant in that it is the average time that a Brownian motion with intensity \(\sqrt{2\gamma^{-1}k_{B}\sqrt{T_{1}T_{2}}}\) needs to traverse a distance \(l_{c}\). Note that (42) is quadratic in the velocities \((\dot{r},\dot{\theta})\) and therefore vanishes as \(t_{f}\to\infty\). It is precisely the weighted Wasserstein action integral, expressed in terms of \(r\) and \(\theta\).
Let us look back at the housekeeping term and define \(\omega\) through
\[\tilde{\Omega}=\omega\hat{\Omega}\ \text{with}\ \hat{\Omega}=\left[\begin{array}{cc}0&-1 \\ 1&0\end{array}\right].\]
Thereby, \(\tilde{\Omega}\tilde{\Omega}^{\prime}=\omega^{2}I\) and the housekeeping entropy production can be written as
\[S_{\mathrm{hk}} =\gamma\int_{0}^{t_{f}}\omega^{2}\operatorname{trace}{[\Sigma^{1/2}T^{-1}\Sigma^{1/2}]}dt=\gamma\int_{0}^{t_{f}}\omega^{2}\operatorname{trace}{[\tilde{\Sigma}]}dt\] \[=\frac{2\gamma l_{c}^{2}}{\sqrt{T_{1}T_{2}}}\int_{0}^{t_{f}}\omega^{2}\cosh(r)dt,\]
where \(\omega\) is to be determined so that
\[K=-\gamma TA-\omega\gamma\Sigma^{1/2}\hat{\Omega}\Sigma^{-1/2}+k_{B}T\Sigma^{-1}\]
is symmetric. Imposing \(\operatorname{trace}\left[K\hat{\Omega}\right]=0\), which is equivalent to \(K\) being symmetric, we obtain
\[\omega=\frac{\Delta T}{2}\frac{\left(\dot{r}+\tau^{-1}\sinh(r)\right)\sin\theta+\dot{\theta}\tanh(r)\cos\theta}{\tilde{T}\cosh(r)+\Delta T\sinh(r)\cos\theta},\]
where \(\Delta T:=T_{1}-T_{2}\) and \(\tilde{T}:=T_{1}+T_{2}\). Hence,
\[S_{\text{hk}}=k_{B}\tau\Delta T^{2}\int_{0}^{t_{f}}\frac{1}{\cosh(r)}\left(\frac{\sigma(r,\dot{r},\theta,\dot{\theta})}{\tilde{T}+\Delta T\tanh(r)\cos\theta}\right)^{2}dt, \tag{43}\]
with
\[\sigma(r,\dot{r},\theta,\dot{\theta})=\left(\dot{r}+\tau^{-1}\sinh(r)\right) \sin\theta+\dot{\theta}\tanh(r)\cos\theta.\]
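Both quantities are easy to evaluate along any sampled path; a minimal sketch follows, in which the test path and the values of \(\tau\), \(T_{1}\), \(T_{2}\) are illustrative assumptions.

```python
# Sketch: numerically evaluate S_ex in (42) and S_hk in (43) along a sampled
# path (r(t), theta(t)); the path and parameter values are illustrative.
import numpy as np

kB, tau = 1.0, 1.0
T1, T2 = 1.05, 0.95
dT, Tbar = T1 - T2, T1 + T2                   # Delta T and T tilde

t = np.linspace(0.0, 10.0, 2001)
r = np.exp(-t / 3.0)                          # arbitrary test path
theta = 1.0 + 0.2 * np.sin(t)
rdot, thdot = np.gradient(r, t), np.gradient(theta, t)

S_ex = kB * tau * np.trapz(np.cosh(r) * rdot ** 2
                           + np.sinh(r) * np.tanh(r) * thdot ** 2, t)

sigma = (rdot + np.sinh(r) / tau) * np.sin(theta) + thdot * np.tanh(r) * np.cos(theta)
S_hk = kB * tau * dT ** 2 * np.trapz(
    (sigma / (Tbar + dT * np.tanh(r) * np.cos(theta))) ** 2 / np.cosh(r), t)

print(S_ex, S_hk)
```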
### _Directions of vanishing housekeeping entropy production_
Having derived explicit expressions for \(S_{\text{ex}}\) and \(S_{\text{hk}}\), equations (42) and (43), it is illuminating to consider points and trajectories on which \(S_{\text{hk}}\) vanishes. Clearly this happens when \(\theta(t)=0\), \(\theta(t)=\pi\), or \(r(t)=0\). This should come as no surprise since these parameters render the covariance \(\Sigma(t)\) diagonal. However, these are not the only trajectories for which \(S_{\text{hk}}=0\).
**Proposition 9**: _Trajectories corresponding to vanishing housekeeping entropy \(S_{\text{hk}}\) are given by solutions to the system of equations_
\[\dot{\theta} =\tan(\theta)u(t) \tag{44a}\] \[\dot{r} =-\tau^{-1}\sinh(r)-\tanh(r)u(t), \tag{44b}\]
_for any choice of function \(u(t)\)._
Housekeeping entropy vanishes iff \(\sigma(r,\dot{r},\theta,\dot{\theta})=0\). Separating the variables \(r\) and \(\theta\) yields
\[\frac{\dot{\theta}}{\tan\theta}=-\tau^{-1}\cosh(r)-\frac{\dot{r}}{\tanh(r)}.\]
Setting both sides equal to an arbitrary function of time \(u(t)\) leads to (44).
It is interesting to study solutions of (44), and thereby flows that maintain \(S_{\text{hk}}=0\). We explore this next.
We first note that the choice \(u(t)=\frac{\kappa}{\tan(\theta_{0}+\kappa t)}\), with \(\theta_{0}=\theta(0)\) and \(\kappa\geq 0\), reduces (44a) to \(\dot{\theta}=\kappa\), while (44b) becomes
\[\dot{r}=-\tau^{-1}\sinh(r)-\frac{\kappa}{\tan(\theta_{0}+\kappa t)}\tanh(r) \quad r(0)=r_{0}. \tag{45}\]
This equation has a unique solution as long as \(\theta_{0}+\kappa t\in(0,\pi)\operatorname{mod}\pi\). If we fix \(\dot{\theta}=0\), that is, \(u(t)=0\), then \(\dot{r}=-\tau^{-1}\sinh(r)\) is always negative (except when \(r=0\)). Therefore, these trajectories always point towards the equilibrium point at \(r=0\). Similarly, one can choose \(u(t)\) to keep \(\dot{r}\) constant. If \(\dot{r}\) is kept at \(0\), then \(\dot{\theta}=-\tau^{-1}\cosh(r)\tan(\theta)\). These trajectories point towards either the \(\theta=0\) or \(\theta=\pi\) equilibrium points depending on the initial state. The two sets of trajectories (with \(\dot{\theta}=0\) and \(\dot{r}=0\)) are displayed in Figure 1 for different initial conditions.
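A minimal sketch of how such trajectories can be generated numerically is given below; it integrates (44) for the two choices of \(u(t)\) just discussed, with \(\tau=1\) and an initial condition chosen only for illustration.

```python
# Sketch: integrate (44) for u = 0 (theta held constant) and for
# u = -cosh(r)/tau (r held constant); tau and the initial state are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

tau = 1.0

def rhs(t, y, u_of):
    r, th = y
    u = u_of(r, th)
    return [-np.sinh(r) / tau - np.tanh(r) * u,   # (44b)
            np.tan(th) * u]                       # (44a)

u_theta_const = lambda r, th: 0.0
u_r_const = lambda r, th: -np.cosh(r) / tau       # makes rdot vanish in (44b)

y0 = [1.0, 1.0]
sol1 = solve_ivp(rhs, (0.0, 10.0), y0, args=(u_theta_const,))
sol2 = solve_ivp(rhs, (0.0, 10.0), y0, args=(u_r_const,))
print(sol1.y[:, -1])   # r -> 0 with theta fixed
print(sol2.y[:, -1])   # theta -> 0 with r fixed
```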
Thus, interestingly, from any initial state one can find trajectories with vanishing housekeeping entropy production that steer the system to any of the possible equilibrium states (\(\theta\in\{0,\pi\}\), or \(r=0\)). On the other hand, steering between states (not necessarily equilibrium states) while maintaining \(S_{\text{hk}}=0\) impinges upon the controllability of the control affine system (44) viewing \(u(t)\) as a control input. It is clear that both \(\theta=\{0,\pi\}\) and \(r=0\) constitute obstructions to controllability of (44), since the right hand sides of (44a) and (44b) vanish, respectively. Moreover, from any state \((r,\theta)\), with \(\theta\notin\{0,\pi\}\) and \(r\neq 0\), a flow that maintains \(S_{\text{hk}}=0\) can be selected within a cone of \(\pi\) radians. Specifically, accessible directions \(\tan^{-1}(r\dot{\theta}/\dot{r})\) from \((r,\theta)\) are within the interval \([\alpha,\alpha+\pi]\), for
\[\alpha=\tan^{-1}\left(-\frac{r\tan(\theta)}{\tanh(r)}\right).\]
In light of (44), directions and therefore trajectories with vanishing housekeeping entropy production cannot have arbitrarily small velocity fields; this can be traced to the fact that to eliminate circulation, the dynamical component in the decomposition (20) needs to cancel the steady-state component. This leads unavoidably to positive excess entropy production (\(S_{\text{ex}}\)).
While maintaining \(\dot{S}_{\text{hk}}=0\), we seek tangent directions \((\dot{r},\dot{\theta})\) that also minimize excess entropy, i.e.,
\[\min_{u}\;k_{B}\tau\int_{0}^{t_{f}}\left(\cosh(r)\dot{r}^{2}+\sinh(r)\tanh(r)\dot{\theta}^{2}\right)dt,\]
with \(\dot{r}\) and \(\dot{\theta}\) as in (44). This leads to the optimal choice of \(u(t)\),
\[u^{\star}(t)=-\tau^{-1}\frac{\cosh(r)}{1+(\tan\theta)^{2}}.\]
Corresponding trajectories are displayed in the left subplot of Figure 2, for a choice of parameters, and converge to a \(\theta\in\{0,\pi\}\) equilibrium state.
Fig. 1: Trajectories of vanishing \(S_{\text{hk}}\) in the \((x,y)\) plane, where \(x=r\cos(\theta)\) and \(y=r\sin(\theta)\), and \(\tau=1\) in (44b). Left subplot: \(u(t)\) is chosen such that \(\dot{\theta}=0\). Right subplot: \(u(t)\) is chosen such that \(\dot{r}=0\).
### _Direction of least entropy production rate_
We characterize the direction of minimal entropy production _rate_ in the following proposition.
**Proposition 10**: _For any given \((r,\theta)\in[0,\infty)\times[0,2\pi)\), the directions \((\dot{r},\dot{\theta})\) that minimize the entropy production rate \(\dot{S}\) are given by_
\[\dot{r} =-\tau^{-1}h(r,\theta)\sinh(r)\sin(\theta), \tag{46a}\] \[\dot{\theta} =-\tau^{-1}h(r,\theta)\cosh(r)\cos(\theta), \tag{46b}\]
_where_
\[h(r,\theta)=\frac{(\tanh(r))^{2}\sin(\theta)}{(\frac{\tilde{T}}{\Delta T}+\cos(\theta)\tanh(r))^{2}(\sinh(r))^{2}+(\tanh(r))^{2}}.\]
From (42) and (43), the entropy rates for \(\dot{S}_{\text{ex}}\) and \(\dot{S}_{\text{hk}}\) are
\[\dot{S}_{\text{ex}} =k_{B}\tau\left(\cosh(r)\dot{r}^{2}+\sinh(r)\tanh(r)\dot{\theta}^{2}\right),\] \[\dot{S}_{\text{hk}} =k_{B}\tau\frac{\Delta T^{2}}{\cosh(r)}\left(\frac{\sigma(r,\dot{r},\theta,\dot{\theta})}{\tilde{T}+\Delta T\tanh(r)\cos\theta}\right)^{2}.\]
Then, letting \(\xi=\left[\dot{r},\dot{\theta}\right]^{\prime}\), \(\dot{S}_{\text{tot}}=\dot{S}_{\text{ex}}+\dot{S}_{\text{hk}}\) can be written as
\[(k_{B}\tau)^{-1}\dot{S}_{\text{tot}}=\xi^{\prime}A\xi+b^{\prime}\xi+c,\]
where
\[c =\tau^{-2}\Delta T^{2}\tanh(r)\sinh(r)(\sin\theta)^{2}/d(r, \theta),\] \[b =\frac{2\tau^{-1}\Delta T^{2}\tanh(r)\sin(\theta)}{d(r,\theta)} \left[\begin{array}{c}\sin\theta\\ \tanh(r)\cos(\theta)\end{array}\right]\]
and
\[A=\left[\begin{array}{cc}\cosh(r)+\frac{\Delta T^{2}(\sin(\theta))^{2}}{ \cosh(r)d(r,\theta)}&\frac{\Delta T^{2}\sin(\theta)\cos(\theta)\tanh(r)}{\cosh (r)d(r,\theta)}\\ \frac{\Delta T^{2}\sin(\theta)\cos(\theta)\tanh(r)}{\cosh(r)d(r,\theta)}& \frac{(\sinh r)^{2}}{\cosh(r)}+\frac{\Delta T^{2}(\cos(\theta)\tanh(r))^{2}} {\cosh(r)d(r,\theta)}\end{array}\right],\]
with \(d(r,\theta)=(\tilde{T}+\Delta T\tanh(r)\cos\theta)^{2}\). Since \(A\) is positive definite for all \(r>0\) and \(\theta\), we obtain that \(\xi=-A^{-1}b/2\) minimizes \(\dot{S}_{\text{tot}}\) over all possible \(\xi\).
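The minimization in Proposition 10 is a two-dimensional quadratic problem and can be transcribed directly; a sketch follows, with \(\tau\), \(T_{1}\), \(T_{2}\) as illustrative assumptions.

```python
# Sketch of Proposition 10: assemble A and b of the quadratic form for the
# total entropy production rate and return xi = -A^{-1} b / 2.
# tau, T1, T2 are illustrative assumptions.
import numpy as np

tau, T1, T2 = 1.0, 1.05, 0.95
dT, Tbar = T1 - T2, T1 + T2

def min_rate_direction(r, th):
    d = (Tbar + dT * np.tanh(r) * np.cos(th)) ** 2
    off = dT ** 2 * np.sin(th) * np.cos(th) * np.tanh(r) / (np.cosh(r) * d)
    A = np.array([
        [np.cosh(r) + dT ** 2 * np.sin(th) ** 2 / (np.cosh(r) * d), off],
        [off, np.sinh(r) ** 2 / np.cosh(r)
              + dT ** 2 * (np.cos(th) * np.tanh(r)) ** 2 / (np.cosh(r) * d)],
    ])
    b = (2 * dT ** 2 * np.tanh(r) * np.sin(th) / (tau * d)) * np.array(
        [np.sin(th), np.tanh(r) * np.cos(th)])
    return -0.5 * np.linalg.solve(A, b)           # (rdot, thetadot)

print(min_rate_direction(1.0, 1.0))
```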
Optimal trajectories for different initial conditions are drawn in the right subplot in Figure 2. It is worth noting that these streamlines are similar _in form_ to those to the left, that correspond to trajectories that minimize excess entropy production while vanishing housekeeping entropy production.
### _Minimal entropy production close to equilibrium_
We now return to the problem of minimizing total entropy production over a path that joins two (possibly non-equilibrium) end-points in finite time. To gain insight as to the nature of optimal trajectories, we consider staying close to the equilibrium that corresponds to \(r=0\).
To this end, we let \(r(t)=\varepsilon\hat{r}(t)\), \(\hat{x}=\hat{r}\cos\theta\) and \(\hat{y}=\hat{r}\sin\theta\), and expand the expression for \(S\) in terms of \(\varepsilon>0\), assumed small, to obtain
\[S_{\text{tot}} =\varepsilon^{2}k_{B}\tau\int_{0}^{t_{f}}\left(\dot{\hat{r}}^{2}+\hat{r}^{2}\dot{\theta}^{2}+\Big{(}\frac{\Delta T}{\tilde{T}}\Big{)}^{2}\big{(}(\dot{\hat{r}}+\tau^{-1}\hat{r})\sin\theta+\dot{\theta}\hat{r}\cos\theta\big{)}^{2}\right)dt+O(\varepsilon^{3})\] \[=\underbrace{\varepsilon^{2}k_{B}\tau\int_{0}^{t_{f}}\Big{(}\dot{\hat{x}}^{2}+\dot{\hat{y}}^{2}+\Big{(}\frac{\Delta T}{\tilde{T}}\Big{)}^{2}\big{(}\dot{\hat{y}}+\tau^{-1}\hat{y}\big{)}^{2}\Big{)}dt}_{S_{\varepsilon}(\hat{x},\hat{y})}+O(\varepsilon^{3}).\]
Thus, the entropy production up to second order in \(\varepsilon\), \(S_{\varepsilon}(\hat{x},\hat{y})\), is as specified above.
**Proposition 11**: _The trajectory that minimizes entropy production between two states close to the equilibrium at \(r=0\), up to second order,_
\[(\hat{x}^{*},\hat{y}^{*})=\operatorname{argmin}\{S_{\varepsilon}(\hat{x},\hat{y})\mid\hat{x}(0)=\hat{x}_{0},\ \ \hat{y}(0)=\hat{y}_{0}, \tag{47}\] \[\hat{x}(t_{f})=\hat{x}_{t_{f}},\ \hat{y}(t_{f})=\hat{y}_{t_{f}}\},\]
_is of the form_
\[\hat{x}^{*}(t)=\hat{x}_{0}+\frac{t}{t_{f}}(\hat{x}_{t_{f}}-\hat{x}_{0}), \tag{48a}\] \[\hat{y}^{*}(t)=c_{+}e^{t/\hat{\tau}}+c_{-}e^{-t/\hat{\tau}}. \tag{48b}\]
_where_
\[\hat{\tau}=\tau\big{(}1+\big{(}\frac{\tilde{T}}{\Delta T}\big{)}^{2}\big{)}^{1/2}\text{ and }c_{\pm}=\frac{\hat{y}_{0}-\hat{y}_{t_{f}}e^{\pm t_{f}/\hat{\tau}}}{1-e^{\pm 2t_{f}/\hat{\tau}}}.\]
The Euler-Lagrange equations for minimizing \(S_{\varepsilon}\) take the form
\[\ddot{\hat{x}} =0,\quad\hat{x}(0)=\hat{x}_{0},\ \ \hat{x}(t_{f})=\hat{x}_{t_{f}}, \tag{49}\] \[\ddot{\hat{y}} =\frac{1}{\hat{\tau}^{2}}\hat{y},\quad\hat{y}(0)=\hat{y}_{0},\ \ \hat{y}(t_{f})=\hat{y}_{t_{f}}. \tag{50}\]
Solving these equations and imposing the boundary conditions we obtain the sought result.
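A small numerical sketch of (48) is given below; the boundary values and parameters are illustrative assumptions.

```python
# Sketch: evaluate the closed-form minimizer (48); end-points and parameters
# are illustrative assumptions, not taken from the paper.
import numpy as np

tau, T1, T2, tf = 1.0, 1.05, 0.95, 5.0
dT, Tbar = T1 - T2, T1 + T2
tau_hat = tau * np.sqrt(1.0 + (Tbar / dT) ** 2)

x0, y0, xf, yf = 0.5, 0.5, 0.8, 0.1            # boundary values of x-hat, y-hat

def c(sign):
    return (y0 - yf * np.exp(sign * tf / tau_hat)) / (1.0 - np.exp(sign * 2 * tf / tau_hat))

t = np.linspace(0.0, tf, 6)
x_opt = x0 + (t / tf) * (xf - x0)
y_opt = c(+1) * np.exp(t / tau_hat) + c(-1) * np.exp(-t / tau_hat)
print(y_opt[0], y_opt[-1])                     # recovers y0 and yf
```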
It is also insightful to consider fixing the starting state at \((\hat{x}_{0},\hat{y}_{0})\), near the equilibrium at \(r=0\) as before, and consider
Fig. 2: Left: Trajectories that minimize excess entropy production while vanishing housekeeping entropy production. Right: Trajectories that minimize entropy production rate \(\dot{S}_{\text{tot}}\). All trajectories are displayed in coordinates \(x=r\cos(\theta)\), \(y=r\sin(\theta)\) for the following choice of parameters: \(\tau=1=\tilde{T}\) and \(\Delta T=0.1\).
the trajectory departing from this state with terminal time \(t_{f}\rightarrow\infty\) and the final state unconstrained. The optimal solution is then given by \(\hat{x}(t)=\hat{x}_{0}\) and \(\hat{y}(t)=\hat{y}_{0}e^{-t/\tau}\), as any other \(\hat{y}(t)\) would lead to infinite entropy production. Therefore, as long as \(r\) is small enough, trajectories minimizing entropy production over an unbounded time interval end up at one of the \(\theta\in\{0,\pi\}\) equilibrium states (as opposed to the one corresponding to \(r=0\)).
**Remark 5**: _It is interesting to observe that trajectories minimizing total entropy production over an infinite time window, for \(r\) small enough, share some resemblance with those of vanishing housekeeping entropy production while minimizing excess entropy rate, as well as those minimizing total entropy production rate, portrayed in the left and right subplots of Figure 2, respectively. Indeed, they approach equilibrium for \(\theta\in\{0,\pi\}\) almost vertically and have \(\tau\) as a natural time constant. \(\Box\)_
### _Entropy minimizing cycles_
In this last section we turn to cycles that generate minimal entropy production. The expressions we obtain did not allow for closed-form solutions, and hence we have resorted to numerically computing optimal trajectories. Valuable insights are gained in that we observe a natural tendency of trajectories to gravitate towards an equilibrium state, as much as time allows, before returning to their starting point. This is in stark contrast to the isotropic case with a single heat bath, in which optimal trajectories are trivially constant.
We selected as the starting and ending point \((r_{0},\theta_{0})=(1,1)\), and computed closed trajectories, portrayed in Figure 3, of different periods. The natural tendency is to gravitate towards an equilibrium state where \(S_{\rm hk}=0\), interestingly, one that corresponds to \(\theta=0\). We have seen the same tendency in the analysis of small \(r\), where entropy minimizing trajectories over an infinite time window converge to an equilibrium at \(y=0\Leftrightarrow\theta=0\); this is apparent in Figure 4.
Figure 4 compares the analytic solution when minimizing entropy production for \(r\) vanishingly small (47), to the numerical solution in Figure 3 that is computed for \(r\) large. At the outset, there is no apparent reason why these should compare. Yet, it is observed that the \(\theta\) component of trajectories, _whether \(r\) is small or not_, follows a similar path. To see this, we drew in Figure 4 the \(\theta\) component of the analytic solution (48) as a function of time (continuous curve), this is \(\theta(t)=\tan^{-1}\left(\frac{\hat{y}^{*}(t)}{\hat{x}^{*}(t)}\right)\) with \(\hat{x}^{*}\) and \(\hat{y}^{*}\) as in (48). A very good agreement with the numerical solution is readily seen (marked with \(\times\)'s). The \(r\)-components cannot be compared and thus respective plots are omitted.
An additional reason for focusing on the \(\theta\) component of trajectories is that this component reveals an apparent natural time constant \(\hat{\tau}\) that dictates an optimal balance between \(S_{\rm hk}\) and \(S_{\rm ex}\), as the trajectories approach a suitable point of equilibrium at \(\theta=0\). Indeed, when the period is sufficiently large, trajectories spend any "extra" time allocated near the equilibrium, as seen in Figure 4. Interestingly, within the time-window that the trajectory stays near equilibrium (near \(\theta=0\)), assuming \(t_{f}\) is sufficiently large, \(r\) linearly decreases as seen in Figure 3. Apparently this helps in reducing the housekeeping cost during the return leg of the trajectory back to \((r,\theta)=(1,1)\).
A final important point is that entropy minimizing cycles are work consuming. Specifically, to steer the thermodynamic state along these closed trajectories, the controlling input (actuating potential) needs to supply work to the system. This is due to the fact that these trajectories are traversed clockwise (as can be seen from the asymmetry in the \(r\) plot), which based on previous work [14] indicates work consumption. We do not further expand on this point since it is tangential to the present work. However, we would like to underscore the importance of controlling the thermodynamic system for positive work output, balancing the work produced with total entropy generated. This optimal control problem remains open at present.
Fig. 4: Solid lines represent \(\theta\) component of entropy minimizing cycles in the limit of vanishing \(r\) for different final times \(t_{f}\). Crosses denote the \(\theta\) component of entropy minimizing cycles obtained numerically (not requiring \(r\) small). The rest of the parameters are set to \(1\), but for \(\Delta T=0.1\).
Fig. 3: Entropy minimizing cycles starting and ending at \((r,\theta)=(1,1)\), with final times \(t_{f}=10,25,50\) and \(150\) seconds, respectively. The rest of the parameters are set to \(1\), but for \(\Delta T=0.1\).
## VIII Conclusions
In these pages we sought to understand how to limit entropy production that is inherent to stochastic systems with anisotropic thermal excitation. We highlighted the necessity of non-conservative forcing for stalling entropy production, identified sources that contribute to entropy production, and characterized control actions that minimize them. In doing this, the structure inherited from a naturally weighted inner product has taken a central role shaping optimal protocols and trajectories. We explicitly worked out a two-dimensional case so as to illustrate the problem's intrinsic quirks, such as the existence of zero-housekeeping entropy producing trajectories and the propensity towards equilibrium of entropy minimizing cycles.
Several theoretical issues remain, such as the construction of general optimal entropy-minimizing controls, the robustness of control protocols to uncertainty in the constituents and the environment and, most importantly, the study of trade-offs between work extraction and entropy production. In our previous works [14, 15] we focused on steering a thermodynamic system, similarly subject to anisotropic temperatures, over a cycle that maximizes work extraction. In light of the present results, it is worth considering maximizing work output subject to a bound on the entropy production over a cycle. Such a bound is natural, especially in biological engines, where sources that sustain chemical gradients are taxed by their entropy production. Optimal control of thermodynamic systems for the purpose of trading off entropy for work may prove essential in the understanding of biological mechanisms that enable life.
## Acknowledgments
The research was supported in part by the AFOSR under grant FA9550-20-1-0029, and ARO under W911NF-22-1-0292. O.M.M was supported by "la Caixa" Foundation (ID 100010434) with code LCF/BQ/AA20/11820047.
|
2303.13254 | Paraconsistent Transition Systems | Often in Software Engineering, a modeling formalism has to support scenarios
of inconsistency in which several requirements either reinforce or contradict
each other. Paraconsistent transition systems are proposed in this paper as one
such formalism: states evolve through two accessibility relations capturing
weighted evidence of a transition or its absence, respectively. Their weights
come from a specific residuated lattice. A category of these systems, and the
corresponding algebra, is defined as providing a formal setting to model
different application scenarios. One of them, dealing with the effect of
quantum decoherence in quantum programs, is used for illustration purposes. | Ana Cruz, Alexandre Madeira, Luís Soares Barbosa | 2023-03-23T13:37:49Z | http://arxiv.org/abs/2303.13254v1 | # Paraconsistent Transition Systems+
###### Abstract
Often in Software Engineering a modelling formalism has to support scenarios of inconsistency in which several requirements either reinforce or contradict each other. Paraconsistent transition systems are proposed in this paper as one such formalism: states evolve through two accessibility relations capturing weighted evidence of a transition or its absence, respectively. Their weights come from a specific residuated lattice. A category of these systems, and the corresponding algebra, is defined providing a formal setting to model different application scenarios. One of them, dealing with the effect of quantum decoherence in quantum programs, is used for illustration purposes.
## 1 Introduction
Dealing with application scenarios where requirements either reinforce or contradict each other is not uncommon in Software Engineering. One such scenario comes from current practice in quantum computation in the context of NISQ (_Noisy Intermediate-Scale Quantum_) technology [12] in which levels of decoherence of quantum memory need to be articulated with the length of the circuits to assess program quality.
In a recent paper [8], the authors introduced a new kind of weighted transition systems which records, for each transition, a positive and negative weight which, informally, capture the degree of effectiveness (_'presence'_) and of impossibility (_'absence'_) of a transition. This allows the model to capture both _vagueness_, whenever both weights sum less than 1, as usual e.g. in fuzzy systems, and _inconsistency_, when their sum exceeds 1. This last feature motivates the qualifier _paraconsistent_ borrowed from the work on paraconsistent logic [10, 6], which accommodates inconsistency in a controlled way, treating inconsistent information as potentially informative. Such logics were originally developed in Latin America in the 1950s and 1960s, mainly by F. Asenjo and Newton da Costa. Quickly, however, the topic attracted attention in the international community and the original scope of mathematical applications broadened out, as witnessed in a recent book emphasizing the engineering potential of paraconsistency [3]. In particular, a number of applications to themes from quantum mechanics and quantum information theory have been studied by D. Chiara [5] and W. Carnielli and his collaborators [2, 7].
This paper continues such a research program in two directions. First it introduces a suitable notion of morphism for paraconsistent labelled transition systems (PLTS) leading to the definition of the corresponding category and its algebra. Notions of simulation, bisimulation and trace for PLTS are also discussed. On a second direction, the paper discusses an application of PLTS to reason about the effect of quantum decoherence in quantum programs.
Paper structure.After recalling the concept of a PLTS and defining their morphisms in section 2, section 3 discusses suitable notions of simulation, bisimulation and trace. Compositional constructions of (pointed) PLTS are characterised in section 4 by exploring the relevant category, following G. Winskel and M. Nielsen's 'recipe' [13]. Section 5 illustrates their use to express quantum circuits with decoherence. Finally, section 6 concludes and points out a number of future research directions.
## 2 Paraconsistent labelled transition systems
A _paraconsistent labelled transition system_ (PLTS) incorporates two accessibility relations, classified as positive and negative, respectively, which characterise each transition in opposite ways: one represents the evidence of its presence and the other the evidence of its absence. Both relations are weighted by elements of a residuated lattice \(\Sigma=\langle A,\wedge,\vee,\odot,\rightarrow,1,0\rangle\), where \(\langle A,\wedge,\vee,1,0\rangle\) is a lattice, \(\langle A,\odot,1\rangle\) is a monoid, and operation \(\odot\) is residuated with \(\rightarrow\), i.e. for all \(a,b,c\in A\), \(a\odot b\leq c\Leftrightarrow b\leq a\to c\). A Gödel algebra \(G=\langle[0,1],min,max,min,\rightarrow,0,1\rangle\) is an example of such a structure, that will be used in the sequel. Operators _max_ and _min_ retain the usual definitions, whereas implication is given by
\[a\to b=\begin{cases}1,\text{ if }a\leq b\\ b,\text{ otherwise}\end{cases}.\]
Our constructions, however, are, to a large extent, independent of the particular residuated lattice chosen. The definition below extends the one in reference [7] to consider labels in an explicit way. Thus,
**Definition 1**.: _A **paraconsistent labelled transition system** (PLTS) over a residuated lattice \(A\) and a set of atomic actions \(\Pi\) is a structure \(\langle W,R,\Pi\rangle\) where \(W\) is a non-empty set of states, \(\Pi\) is a set of labels, and \(R\subseteq W\times\Pi\times W\times A\times A\) characterises its dynamics, subject to the following condition: between two arbitrary states there is at most one transition involving label \(a\), for every \(a\in\Pi\). Each tuple \((w_{1},a,w_{2},\alpha,\beta)\in R\) represents a transition from \(w_{1}\) to \(w_{2}\) labelled by \((a,\alpha,\beta)\), where \(\alpha\) is the degree to which the action \(a\) contributes to a transition from \(w_{1}\) to \(w_{2}\), and \(\beta\), dually, expresses the degree to which it prevents its occurrence._
The condition imposed in the definition above makes it possible to express relation \(R\) in terms of a _positive_ and a _negative_ accessibility relation \(r^{+},r^{-}:\Pi\longrightarrow A^{W\times W}\), with
\[r^{+}(\pi)(w,w^{\prime})=\begin{cases}\alpha\text{ if }(w,\pi,w^{\prime}, \alpha,\beta)\in R\\ 0\text{ otherwise}\end{cases}\]
and \(r^{-}\) defined similarly. These two relations jointly express different behaviours associated to a transition:
* _inconsistency_, when the positive and negative weights are contradictory, i.e. they sum to some value greater than 1; this corresponds to the upper triangle in the picture below, filled in grey.
* _vagueness_, when the sum is less than 1, corresponding to the lower, periwinkle triangle in the same picture;
* _consistency_, when the sum is exactly 1, which means that the measures of the factors enforcing or preventing a transition are complementary, corresponding to the red line in the picture.
Morphisms between PLTS respect, as one would expect, the structure of both accessibility relations. Formally,
**Definition 2**.: _Let \(T_{1}=\langle W_{1},R_{1},\Pi\rangle\), \(T_{2}=\langle W_{2},R_{2},\Pi\rangle\) be two PLTSs defined over the same set of actions \(\Pi\). A **morphism** from \(T_{1}\) to \(T_{2}\) is a function \(h:W_{1}\to W_{2}\) such that_
\[\forall_{a\in\Pi},\;r_{1}^{+}(a)(w_{1},w_{2})\leq r_{2}^{+}(a)(hw_{1},hw_{2}) \text{ and }r_{1}^{-}(a)(w_{1},w_{2})\geq r_{2}^{-}(a)(hw_{1},hw_{2})\]
**Example 1**.: _Function \(h=\{w_{1}\mapsto v_{1},w_{2}\mapsto v_{2},w_{3}\mapsto v_{3}\}\) is a morphism from \(M_{1}\) to \(M_{2}\), over \(\Pi=\{a,b,c,d\}\), depicted below_
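For concreteness, a small executable sketch (not part of the paper) of finite PLTSs and of the morphism condition of Definition 2 follows; transitions are stored in a dictionary over the Gödel algebra on \([0,1]\), absent transitions are read as weight \((0,0)\), and all state and action names are hypothetical.

```python
# Sketch: finite PLTSs as dictionaries mapping (state, action, state) to a
# weight pair (alpha, beta), and a direct check of the morphism condition.
def rplus(T, a, w1, w2):
    return T.get((w1, a, w2), (0.0, 0.0))[0]

def rminus(T, a, w1, w2):
    return T.get((w1, a, w2), (0.0, 0.0))[1]

def is_morphism(h, T1, states1, T2, actions):
    return all(rplus(T1, a, u, v) <= rplus(T2, a, h[u], h[v]) and
               rminus(T1, a, u, v) >= rminus(T2, a, h[u], h[v])
               for a in actions for u in states1 for v in states1)

# hypothetical three-state systems
T1 = {("w1", "a", "w2"): (0.3, 0.6), ("w2", "b", "w3"): (0.5, 0.5)}
T2 = {("v1", "a", "v2"): (0.4, 0.5), ("v2", "b", "v3"): (0.7, 0.3)}
h = {"w1": "v1", "w2": "v2", "w3": "v3"}
print(is_morphism(h, T1, {"w1", "w2", "w3"}, T2, {"a", "b"}))  # True
```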
## 3 Simulation and Bisimulation for PLTS
Clearly, PLTSs and their morphisms form a category, with composition and identities borrowed from Set. To compare PLTSs it is also useful to define what simulation and bisimulation mean in this setting. Thus, under the same assumptions on \(T_{1}\) and \(T_{2}\),
**Definition 3**.: _A relation \(S\subseteq W_{1}\times W_{2}\) is a **simulation** provided that, for all \(\langle p,q\rangle\in S\) and \(a\in\Pi\),_
\[p\xrightarrow{(a,\alpha,\beta)}_{T_{1}}p^{\prime}\Rightarrow\langle\exists_{ q^{\prime}\in W_{2}}.\exists_{\gamma,\delta\in[0,1]}.\ q\xrightarrow{(a,\ \gamma,\ \delta)}_{T_{2}}q^{\prime}\ \wedge\ \langle p^{\prime},q^{\prime}\rangle\in S\ \wedge\ \gamma\geq\alpha\ \wedge\ \delta\leq\beta\rangle\]
_which can be abbreviated to_
\[p\xrightarrow{(a,\alpha,\beta)}_{T_{1}}p^{\prime}\Rightarrow\langle\exists_{q^{\prime}\in W_{2}}.\ q\xrightarrow{(a,\ \gamma:\ \gamma\geq\alpha,\ \delta:\ \delta\leq\beta)}_{T_{2}}q^{\prime}\ \wedge\ \langle p^{\prime},q^{\prime}\rangle\in S\rangle\]
_Two states \(p\) and \(q\) are **similar**, written \(p\lesssim q\), if there is a simulation \(S\) such that \(\langle p,q\rangle\in S\)._
Whenever the definition above is restricted to requiring only the existence of a value \(\gamma\) with \(\gamma\geq\alpha\) (resp. of a value \(\delta\) with \(\delta\leq\beta\)), the corresponding simulation is called _positive_ (resp. _negative_).
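Definition 3 can be checked directly on finite systems; a sketch follows, with hypothetical transition data.

```python
# Sketch: test whether a relation S is a simulation in the sense of Definition 3.
# PLTSs are given as lists of transitions (w, a, w', alpha, beta).
def is_simulation(S, R1, R2):
    for (p, q) in S:
        for (w, a, w1, alpha, beta) in R1:
            if w != p:
                continue
            matched = any(v == q and b == a and gamma >= alpha and delta <= beta
                          and (w1, v1) in S
                          for (v, b, v1, gamma, delta) in R2)
            if not matched:
                return False
    return True

# hypothetical systems: the second simulates the first from (w1, v1)
R1 = [("w1", "a", "w2", 0.2, 0.8)]
R2 = [("v1", "a", "v2", 0.5, 0.5)]
print(is_simulation({("w1", "v1"), ("w2", "v2")}, R1, R2))   # True
```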
**Example 2**.: _In the PLTSs depicted below, \(w_{1}\lesssim v_{1}\), witnessed by_
\[S=\{\langle w_{1},v_{1}\rangle,\langle w_{2},v_{2}\rangle,\langle w_{3},v_{2} \rangle,\langle w_{4},v_{3}\rangle,\langle w_{5},v_{4}\rangle\}\]
Finally,
**Definition 4**.: _A relation \(B\subseteq W_{1}\times W_{2}\) is a **bisimulation** if for \(\langle p,q\rangle\in B\) and \(a\in\Pi\)_
\[p\xrightarrow{(a,\alpha,\beta)}_{M_{1}}p^{\prime} \Rightarrow\langle\exists q^{\prime}\in W_{2}:q\xrightarrow{(a, \alpha,\beta)}_{M_{2}}q^{\prime}\wedge\langle p^{\prime},q^{\prime}\rangle \in B\rangle\] \[q\xrightarrow{(a,\alpha,\beta)}_{M_{2}}q^{\prime} \Rightarrow\langle\exists p^{\prime}\in W_{1}:p\xrightarrow{(a, \alpha,\beta)}_{M_{1}}p^{\prime}\wedge\langle p^{\prime},q^{\prime}\rangle \in B\rangle\]
_Two states \(p\) and \(q\) are **bisimilar**, written \(p\sim q\), if there is a bisimulation \(B\) such that \(\langle p,q\rangle\in B\)._
**Example 3**.: _Consider the two PLTSs depicted below. Clearly, \(w_{1}\sim v_{1}\)._
**Example 4**.: _Consider again the two PLTSs given in Example 2. The weighted traces from \(w_{1}\) are \(\{t_{1}=\langle[a,b],0.2,0.8\rangle,t_{2}=\langle[a,c],0.2,0.9\rangle\}\) and the ones from \(v_{1}\) are \(\{t^{\prime}_{1}=\langle[a,b],0.5,0.5\rangle,t^{\prime}_{2}=\langle[a,c],0.5,0. 5\rangle\}\). Clearly, \(t_{1}\) (resp. \(t_{2}\)) is a weighted subtrace of \(t^{\prime}_{1}\) (resp. \(t^{\prime}_{2}\))._
**Lemma 2**.: _Consider two PLTSs, \(T_{1}=\langle W_{1},R_{1}\rangle\) and \(T_{2}=\langle W_{2},R_{2}\rangle\). If two states \(p\in W_{1}\) and \(q\in W_{2}\) are similar (resp. bisimilar), i.e., \(p\lesssim q\) (resp. \(p\sim q\)), then the set of weighted traces from \(p\), \(X\), and the set of weighted traces from \(q\), \(Y\), are such that \(X\sqsubseteq Y\) (resp. coincide)._
Proof.: If \(p\lesssim q\) each trace \(t\) from \(p\) is a prefix of a trace \(t^{\prime}\) from \(q\). Let \([\alpha_{1},\alpha_{2},...,\alpha_{m}]\) and \([\beta_{1},\beta_{2},...,\beta_{m}]\) be the sequences of positive and negative weights associated to \(t\). Similarly, let \([\alpha^{\prime}_{1},\alpha^{\prime}_{2},...,\alpha^{\prime}_{n}]\) and \([\beta^{\prime}_{1},\beta^{\prime}_{2},...,\beta^{\prime}_{n}]\) be the corresponding sequences for \(t^{\prime}\); of course \(m\leq n\). As \((p,q)\) belongs to a simulation, \(\alpha^{\prime}_{i}\geq\alpha_{i}\) and \(\beta^{\prime}_{i}\leq\beta_{i}\), for all \(i\leq m\). So, \(Min[\alpha^{\prime}_{1},\alpha^{\prime}_{2},...,\alpha^{\prime}_{m}]\geq Min[\alpha_{1},\alpha_{2},...,\alpha_{m}]\) and \(Max[\beta^{\prime}_{1},\beta^{\prime}_{2},...,\beta^{\prime}_{m}]\leq Max[\beta_{1},\beta_{2},...,\beta_{m}]\). Note that \(Min\) and \(Max\) correspond to \(\bigwedge\) and \(\bigvee\) in a Gödel algebra. Thus,
\[\langle t,Min[\alpha_{1},\alpha_{2},...,\alpha_{m}],Max[\beta_{1},\beta_{2},...,\beta_{m}]\rangle\]
is a weighted subtrace of \(\langle t^{\prime}|_{m},Min[\alpha^{\prime}_{1},\alpha^{\prime}_{2},...,\alpha^{\prime}_{m}],Max[\beta^{\prime}_{1},\beta^{\prime}_{2},...,\beta^{\prime}_{m}]\rangle\), where \(t^{\prime}|_{m}\) is the prefix of \(t^{\prime}\) with \(m\) elements. The statement for \(\sim\) follows similarly.
Note that the converse of this lemma does not hold, as shown by the following counterexample.
**Example 5**.: _Consider the PLTS depicted below._
\(X=\{\langle[a],0.5,0.3\rangle,\langle[a,b],0.5,0.3\rangle\}\) _is the set of weighted traces from \(w_{1}\). Similarly,_
\(Y=\{\langle[a],0.7,0.2\rangle,\langle[a,b],0.5,0.3\rangle\}\) _is the corresponding set from \(w_{2}\). Clearly \(\langle[a],0.5,0.3\rangle\) is a weighted subtrace of \(\langle[a],0.7,0.2\rangle\). Thus \(X\sqsubseteq Y\). However, \(w_{1}\not\lesssim w_{2}\)._
## 4 New PLTS from old
New PLTS can be built compositionally. This section introduces the relevant operators by exploring the structure of the category \(\mathsf{Pt}\) of _pointed_ PLTS, i.e. the category whose objects are PLTSs with a distinguished initial state, \(\langle W,i,R,\Pi\rangle\), where \(\langle W,R,\Pi\rangle\) is a PLTS and \(i\in W\). Arrows in \(\mathsf{Pt}\) are allowed between PLTSs with different sets of labels, therefore generalizing Definition 2 as follows:
**Definition 7**.: _Let \(T_{1}=\langle W_{1},i_{1},R_{1},\Pi\rangle\) and \(T_{2}=\langle W_{2},i_{2},R_{2},\Pi^{\prime}\rangle\) be two pointed PLTSs. A morphism in \(\mathsf{Pt}\) from \(T_{1}\) to \(T_{2}\) is a pair of functions \((\sigma:W_{1}\to W_{2}\), \(\lambda:\Pi\rightarrow_{\perp}\Pi^{\prime})\) such that\({}^{1}\) \(\sigma(i_{1})=i_{2}\), and, if \((w,a,w^{\prime},\alpha,\beta)\in R_{1}\) then \((\sigma(w),\lambda(a),\sigma(w^{\prime}),\alpha^{\prime},\beta^{\prime})\in R_{2}^{\perp}\), with \(\alpha\leq\alpha^{\prime}\) and \(\beta^{\prime}\leq\beta\), where, for an accessibility relation \(R\), \(R^{\perp}=R\cup\{(w,\perp,w,1,0)\mid w\in W\}\) denotes \(R\) enriched with \(\mathsf{idle}\) transitions in each state._
Footnote 1: Notation \(\lambda:\Pi\rightarrow_{\perp}\Pi^{\prime}\) stands for the totalization of a partial function by mapping to \(\perp\) all elements of \(\Pi\) for which the function is undefined.
Clearly \(\mathsf{Pt}\) forms a category, with composition inherited from \(\mathsf{Set}\) and \(\mathsf{Set}_{\perp}\), the latter standing for the category of sets and partial functions, with \(T_{nil}=\langle\{*\},*,\emptyset,\emptyset\rangle\) as both the initial and final object. The corresponding unique morphisms are \(!\,:T\to T_{nil}\), given by \(\langle\underline{*},()\rangle\), and \(?:T_{nil}\to T\), given by \(\langle\underline{i},()\rangle\), where \(()\) is the empty map and notation \(\underline{x}\) stands for the constant, everywhere \(x\), function.
An algebra of PLTS typically includes some form of parallel composition, disjoint union, restriction, relabelling and prefixing, as one is used to from the process algebra literature [3]. Accordingly, these operators are defined along the lines proposed by G. Winskel and M. Nielsen [13] for the standard, more usual case.
Restriction.The restriction operator is intended to control the interface of a transition system, preserving, in the case of a PLTS, the corresponding positive and negative weights. Formally,
**Definition 8**.: _Let \(T=\langle W,i,R,\Pi\rangle\) be a PLTS, and \(\lambda:\Pi^{\prime}\to\Pi\) be an inclusion. The **restriction** of \(T\) to \(\lambda\), \(T\upharpoonright\lambda\), is a PLTS \(\langle W,i,R^{\prime},\Pi^{\prime}\rangle\) over \(\Pi^{\prime}\) such that \(R^{\prime}=\{(w,\pi,w^{\prime},\alpha,\beta)\in R\mid\pi\in\Pi^{\prime}\}\)._
There is a morphism \(f=(1_{W},\lambda)\) from \(T\upharpoonright\lambda\) to \(T\), and a functor \(P:\mathsf{Pt}\to\mathsf{Set}_{\perp}\) which sends a morphism \((\sigma,\lambda):T\to T^{\prime}\) to its label component \(\lambda\). Clearly, \(f\) is the Cartesian lifting of morphism \(P(f)=\lambda\) in \(\mathsf{Set}_{\perp}\). Being Cartesian means that for any \(g:T^{\prime}\to T\) in \(\mathsf{Pt}\) such that \(P(g)=\lambda\) there is a unique morphism \(h\) such that \(P(h)=1_{\Pi^{\prime}}\) making the following diagram commute:
Note that, in general, restriction does not preserve reachable states. Often, thus, the result of a restriction is itself restricted to its reachable part.
Relabelling.In the same group of _interface-modifier_ operators, is _relabelling_, which renames the labels of a PLTS according to a total function \(\lambda:\Pi\to\Pi^{\prime}\).
**Definition 9**.: _Let \(T=\langle W,i,R,\Pi\rangle\) be a PLTS, and \(\lambda:\Pi^{\prime}\to\Pi\) be a total function. The **relabelling** of \(T\) according to \(\lambda\), \(T\{\lambda\}\) is the PLTS \(\langle W,i,R^{\prime},\Pi^{\prime}\rangle\) where \(R^{\prime}=\{(w,\lambda(a),w^{\prime},\alpha,\beta)\mid(w,a,w^{\prime},\alpha, \beta)\in R\}\)._
Dually to the previous case, there is a morphism \(f=(1_{W},\lambda)\) from \(T\) to \(T\{\lambda\}\) which is the cocartesian lifting of \(\lambda\) (\(=P(f)\)).
Parallel composition.The product of two PLTSs combines their state spaces and includes all _synchronous_ transitions, triggered by the simultaneous occurrence of an action of each component, as well as _asynchronous_ ones in which a transition in one component is paired with an _idle_ transition, labelled by \(\perp\), in the other. Formally,
**Definition 10**.: _Let \(T_{1}=\langle W_{1},i_{1},R_{1},\Pi_{1}\rangle\) and \(T_{2}=\langle W_{2},i_{2},R_{2},\Pi_{2}\rangle\) be two PLTS. Their **parallel composition**\(T_{1}\times T_{2}\) is the PLTS \(\langle W_{1}\times W_{2},(i_{1},i_{2}),R,\Pi^{\prime}\rangle\), such that \(\Pi^{\prime}=\Pi_{1}\times_{\perp}\Pi_{2}=\{(a,\perp)\mid a\in\Pi_{1}\}\cup\{ (\perp,b)\mid b\in\Pi_{2}\}\cup\{(a,b)\mid a\in\Pi_{1},b\in\Pi_{2}\}\), and \((w,a,w^{\prime},\alpha,\beta)\in R\) if and only if \((\pi_{1}(w),\pi_{1}(a),\pi_{1}(w^{\prime}),\alpha_{1},\beta_{1})\in R_{1}{}^{\perp}\), \((\pi_{2}(w),\pi_{2}(a),\pi_{2}(w^{\prime}),\alpha_{2},\beta_{2})\in R_{2}{}^{\perp}\), \(\alpha=min(\alpha_{1},\alpha_{2})\) and \(\beta=max(\beta_{1},\beta_{2})\)._
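A minimal executable sketch of this construction is given below, with the idle label rendered as None and hypothetical component systems.

```python
# Sketch of Definition 10 on finite transition lists; None plays the role of
# the idle label, and idle transitions carry the weights (1, 0).
def idle_closure(R, states):
    return list(R) + [(w, None, w, 1.0, 0.0) for w in states]

def parallel(R1, states1, R2, states2):
    R = []
    for (u, a, u2, a1, b1) in idle_closure(R1, states1):
        for (v, b, v2, a2, b2) in idle_closure(R2, states2):
            if a is None and b is None:        # (bot, bot) is not a label
                continue
            R.append(((u, v), (a, b), (u2, v2), min(a1, a2), max(b1, b2)))
    return R

# hypothetical two-state components
R1 = [("i1", "a", "w", 0.7, 0.2)]
R2 = [("i2", "b", "v", 0.6, 0.3)]
for tr in parallel(R1, {"i1", "w"}, R2, {"i2", "v"}):
    print(tr)
```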
**Lemma 3**.: _Parallel composition is the product construction in \(\mathsf{Pt}\)._
Proof.: In the diagram below let \(g_{i}=(\sigma_{i},\lambda_{i})\), for \(i=1,2\), and define \(h\) as \(h=(\langle\sigma_{1},\sigma_{2}\rangle,\langle\lambda_{1},\lambda_{2}\rangle)\), where \(\langle f_{1},f_{2}\rangle(x)=(f_{1}(x),f_{2}(x))\) is the universal arrow in a product diagram in \(\mathsf{Set}\). Clearly, \(h\) lifts universality to \(\mathsf{Pt}\), as the unique arrow making the diagram commute. It remains to show it is indeed an arrow in the category. Indeed, let \(T=\langle W,i,R,\Pi\rangle\), \(T_{1}=\langle W_{1},i_{1},R_{1},\Pi_{1}\rangle\), and define \(T_{1}\times T_{2}=\langle W_{1}\times W_{2},(i_{1},i_{2}),R^{\prime},\Pi^{\prime}\rangle\) according to Definition 10. Thus, for each \((w,a,w^{\prime},\alpha,\beta)\in R\), there is a transition \((\sigma_{1}(w),\lambda_{1}(a),\sigma_{1}(w^{\prime}),\alpha_{1},\beta_{1})\in R_{1}{}^{\perp}\) such that \(\alpha\leq\alpha_{1}\) and \(\beta\geq\beta_{1}\); and also a transition \((\sigma_{2}(w),\lambda_{2}(a),\sigma_{2}(w^{\prime}),\alpha_{2},\beta_{2})\in R_{2}{}^{\perp}\) such that \(\alpha\leq\alpha_{2}\) and \(\beta\geq\beta_{2}\). Moreover, there is a transition
\[(\langle\sigma_{1},\sigma_{2}\rangle(w),\langle\lambda_{1},\lambda_{2}\rangle(a ),\langle\sigma_{1},\sigma_{2}\rangle(w^{\prime}),min(\alpha_{1},\alpha_{2}), max(\beta_{1},\beta_{2}))\in R^{\prime}\]
Thus, there is a transition \((\langle\sigma_{1},\sigma_{2}\rangle(w),\langle\lambda_{1},\lambda_{2}\rangle(a),\langle\sigma_{1},\sigma_{2}\rangle(w^{\prime}),\alpha^{\prime},\beta^{\prime})\in R^{\prime}\), for any \((w,a,w^{\prime},\alpha,\beta)\in R\), such that \(\alpha\leq\alpha^{\prime}\) and \(\beta\geq\beta^{\prime}\). Furthermore, \(\langle\sigma_{1},\sigma_{2}\rangle(i)=(\sigma_{1}(i),\sigma_{2}(i))=(i_{1},i_{2})\). This establishes \(h\) as a \(\mathsf{Pt}\) morphism.
**Example 6**.: _Consider the two PLTSs, \(T_{1}\) and \(T_{2}\), depicted below._
_Their product \(T\) is the PLTS_
A suitable combination of parallel composition and restriction may enforce different synchronization disciplines. For example, _interleaving_ or _asynchronous product_\(T_{1}\interleave T_{2}\) is defined as \((T_{1}\times T_{2})\upharpoonright\lambda\) with the inclusion \(\lambda:\Pi\rightarrow\Pi_{1}\times_{\perp}\Pi_{2}\) for \(\Pi=\{(a,\perp)\mid a\in\Pi_{1}\}\cup\{(\perp,b)\mid b\in\Pi_{2}\}\). This results in a PLTS \(\langle W_{1}\times W_{2},(i_{1},i_{2}),R,\Pi\rangle\) such that \(R=\{(w,a,w^{\prime},\alpha,\beta)\in R^{\prime}\mid a\in\Pi\}\).
Similarly, the _synchronous product_\(T_{1}\otimes T_{2}\) is also defined as \((T_{1}\times T_{2})\upharpoonright\lambda\), taking now \(\Pi=\{(a,b)\mid a\in\Pi_{1}\) and \(b\in\Pi_{2}\}\) as the domain of \(\lambda\).
**Example 7**.: _Interleaving and synchronous product of \(T_{1}\) and \(T_{2}\) as in Example 6 are depicted below._
* \(t\in R\) _if and only if there exists a transition_ \((w,a,w^{\prime},\alpha,\beta)\in R_{1}\) _such that_ \(t=(\iota_{1}(w),a,\iota_{1}(w^{\prime}),\alpha,\beta)\)_, or a transition_ \((w,a,w^{\prime},\alpha,\beta)\in R_{2}\) _such that_ \(t=(\iota_{2}(w),a,\iota_{2}(w^{\prime}),\alpha,\beta)\)__
_where \(\iota_{1}\) and \(\iota_{2}\) are the left and right injections associated to a coproduct in \(\mathsf{Set}\), respectively._
Sum is actually a coproduct in \(\mathsf{Pt}\) (the proof follows the argument used for the product case), making \(T_{1}+T_{2}\) dual to \(T_{1}\times T_{2}\).
**Example 8**.: _The sum \(T_{1}+T_{2}\), for \(T_{1},T_{2}\) defined as in Example 6, is given by_
Prefixing.As a limited form of sequential composition, prefix appends to a pointed PLTS a new initial state and a new transition to the previous initial state, after which the system behaves as the original one.
**Definition 12**.: _Let \(T=\langle W,i,R,\Pi\rangle\) be a PLTS and \(w_{new}\) a fresh state identifier not in \(W\). Given an action \(a\), and \(\alpha,\beta\in[0,1]\), the prefix \((a,\alpha,\beta)T\) is defined as \(\langle W\cup\{w_{new}\},w_{new},R^{\prime},\Pi\cup\{a\}\rangle\) where \(R^{\prime}=R\cup(w_{new},a,i,\alpha,\beta)\)._
Since it is not required that the prefixing label is distinct from the ones in the original system, prefixing does not extend to a functor in \(\mathsf{Pt}\), as illustrated in the counterexample below. This is obviously the case for a category of classical labelled transition systems as well. In both cases, however, prefix extends to a functor if the corresponding categories are restricted to action-preserving morphisms, i.e. in which the action component of a morphism is always an inclusion.
**Example 9**.: _Consider two pointed PLTS \(T_{1}\) and \(T_{2}\)_
_connected by a morphism \((\sigma,\lambda):T_{1}\to T_{2}\) such that \(\sigma(i_{1})=i_{2}\), \(\sigma(w)=v\) and \(\lambda(a)=b\). Now consider the prefixes \((a,1,0)T_{1}\) and \((a,1,0)T_{2}\) depicted below._
_Clearly, a suitable mapping from the actions in \((a,1,0)T_{1}\) to the actions in \((a,1,0)T_{2}\) does not exist, so neither does a morphism between the two systems._
Functorial extensions.Other useful operations between PLTSs, typically acting on transitions' positive and negative weights, and often restricted to PLTSs over a specific residuated lattice, can be defined functorially in \(\mathsf{Pt}\). An example involving a PLTS defined over a Gödel algebra is an operation that uniformly increases or decreases the value of the positive (or the negative, or both) weight in all transitions. Let
\[a\oplus b=\begin{cases}1\text{ if }a+b\geq 1\\ 0\text{ if }a+b\leq 0\\ a+b\text{ otherwise}\end{cases}\]
Thus,
**Definition 13**.: _Let \(T=\langle W,i,R,\Pi\rangle\) be a PLTS. Taking \(v\in[-1,1]\), the **positive \(v\)-approximation \(T_{\oplus_{v}^{+}}\)** is a PLTS \(\langle W,i,R^{\prime},\Pi\rangle\) where_
\[R^{\prime}=\{(w,\pi,w^{\prime},\alpha\oplus v,\beta)\mid(w,\pi,w^{\prime}, \alpha,\beta)\in R\}.\]
The definition extends to a functor in \(\mathsf{Pt}\) which is the identity in morphisms. Similar operations can be defined to act on the negative accessibility relation or both.
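A one-line executable rendering of \(\oplus\) and of the positive \(v\)-approximation on a finite transition list is sketched below; the transition used is hypothetical.

```python
# Sketch: truncated addition and the positive v-approximation of Definition 13.
def oplus(a, b):
    return min(1.0, max(0.0, a + b))

def positive_approximation(R, v):
    return [(w, a, w2, oplus(alpha, v), beta) for (w, a, w2, alpha, beta) in R]

R = [("w1", "a", "w2", 0.4, 0.5)]              # hypothetical transition
print(positive_approximation(R, 0.2))          # [('w1', 'a', 'w2', 0.6, 0.5)]
```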
Another useful operation removes all transitions in a pointed PLTS for which the positive accessibility relation is below a certain value and the negative accessibility relation is above a certain value. Formally,
**Definition 14**.: _Let \(T=\langle W,i,R,\Pi\rangle\) be a pointed PLTS, and \(p,n\in[0,1]\). The **purged** PLTS \(T_{p\uparrow\downarrow n}\) is defined as \(\langle W,i,R^{\prime},\Pi\rangle\) where_
\[R^{\prime}=\{(w,\pi,w^{\prime},\alpha,\beta)\mid(w,\pi,w^{\prime},\alpha,\beta)\in R\text{ and }\alpha\geq p\text{ and }\beta\leq n\}\]
Clearly, the operation extends to a functor in \(\mathsf{Pt}\), mapping morphisms to themselves.
## 5 An application to quantum circuit optimization
In a quantum circuit [10] decoherence consists in decay of a qubit in superposition to its ground state and may be caused by distinct physical phenomena. A quantum circuit is effective only if gate operations and measurements are performed to superposition states within a limited period of time after their preparation. In this section pointed PLTS will be used to model circuits incorporating qubit decoherence as an error factor. Typically, coherence is specified as an interval corresponding to a worst and a best case. We employ the two accessibility relations in a PLTS to model both scenarios simultaneously.
An important observation for the conversion of quantum circuits to PLTS is that quantum circuits always have a sequential execution. Simultaneous operations performed to distinct qubits are combined using the tensor product \(\otimes\) into a single operation to the whole collection of qubits which forms the state of the circuit. The latter is described by a sequence of executions \(e_{1},e_{2},e_{3},...\) where each \(e_{i}\) is the tensor product of the operations performed upon the state at each step. The conversion to a PLTS is straightforward, labelling each transition by the tensor of the relevant gates \(O_{1}\otimes\cdots\otimes O_{m}\), for \(m\) gates involved, but for the computation of the positive and negative accessibility relations, \(r^{+}\) and \(r^{-}\).
The weights of a transition corresponding to the application of a gate \(O\) acting over \(n\) qubits \(q_{1}\) to \(q_{n}\) are given by
\[v(O)=\begin{cases}(1,0)\text{ if qubits }q_{1},\cdots q_{n}\text{ are in a definite state}\\ (\text{Max}_{i}\text{ }f_{\text{max}}(q_{i}),\text{Min}_{i}\text{ }f_{\text{min}}(q_{i}))\text{ otherwise}\end{cases}\]
where \(f_{\text{max}}(q)=\frac{\tau_{\text{max}}(q)-\tau_{\text{prep}}(q)}{100}\) and \(f_{\text{min}}(q)=\frac{\tau_{\text{min}}(q)-\tau_{\text{prep}}(q)}{100}\), \(\tau_{\text{max}}(q)\) and \(\tau_{\text{min}}(q)\) are the longest and shortest coherence times of \(q\), respectively, and \(\tau_{\text{prep}}(q)\) is the time from the preparation of \(q\)'s superposition to the point after the execution of \(O\). The latter are fixed for each type of quantum gate; reference [14] gives experimentally computed values for them as well as for maximum and minimum values for qubit decoherence.
Consider now a transition \(t\) labelled by \(O_{1}\otimes...\otimes O_{m}\). Then, \(r^{+}=\text{Max}_{i=1}^{m}\{\pi_{1}(v(O_{i}))\}\) and \(r^{-}=1-\text{Min}_{i=1}^{m}\{\pi_{2}(v(O_{i}))\}\).
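The computation just described is easy to mechanize; a minimal sketch follows, in which the per-qubit data (coherence times, preparation-to-gate time, superposition flag) are illustrative assumptions rather than values read from a particular circuit.

```python
# Sketch: weights of one execution step O1 (x) ... (x) Om from the formulas above.
# Times are in microseconds; the sample data below is an assumption.
def v(gate_qubits):
    # gate_qubits: per-qubit dicts with tau_max, tau_min, tau_prep, superposed
    if all(not q["superposed"] for q in gate_qubits):
        return (1.0, 0.0)
    f_max = max((q["tau_max"] - q["tau_prep"]) / 100 for q in gate_qubits)
    f_min = min((q["tau_min"] - q["tau_prep"]) / 100 for q in gate_qubits)
    return (f_max, f_min)

def transition_weights(step):
    # step: list of gates, each given as the list of qubits it acts upon
    vs = [v(g) for g in step]
    return (max(p for (p, _) in vs), 1 - min(m for (_, m) in vs))

q = {"tau_max": 100, "tau_min": 70, "tau_prep": 40, "superposed": True}
print(transition_weights([[q], [q]]))          # (0.6, 0.7)
```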
**Example 10**.: _Consider the following circuits designed with IBM Quantum Composer:_
_Assume that the execution time of a single qubit gate is \(\tau_{G}=20\mu s\) and of a two qubit gate is \(2\tau_{G}=40\mu s\)[14], and that both qubits have the same coherence times \(\tau_{max}(q_{1})=\tau_{max}(q_{2})=100\mu s\) and \(\tau_{min}(q_{1})=\tau_{min}(q_{2})=70\mu s\). Thus the circuit on the left (resp. right) translates into \(T_{1}\) (on the left) and \(T_{2}\) (on the right)._
_As both circuits implement the same quantum algorithm and our focus is only on the effectiveness of the circuits, we may abstract from the actual sequences of labels and consider instead \(T_{1}\{\lambda\}\) and \(T_{2}\{\lambda\}\), for \(\lambda\) mapping each label to a unique label \(\star\). Their maximal weighted traces\({}^{2}\) are_
Footnote 2: Such maximal traces are easily identifiable given the peculiar shape of a PLTS corresponding to a quantum circuit.
\[t_{T_{1}\{\lambda\}}=\langle[*,*,*],0.4,0.9\rangle\,\,\,\text{and}\,\,\,t_{T_{2 }\{\lambda\}}=\langle[*,*,*],0.6,0.7\rangle\]
_Clearly \(t_{T_{1}\{\lambda\}}\) is a weighted subtrace of \(t_{T_{2}\{\lambda\}}\), therefore suggesting a criterion for comparing the effectiveness of circuits. Indeed, a circuit is more effective (i.e. less affected by qubit decoherence) than another if the maximal weighted trace of the other's (relabelled) PLTS is a weighted subtrace of its own._
_The second circuit is obviously more efficient than the first. This suggests we could use the weighted subtrace relation as a metric to compare circuit quality, for circuits implementing equivalent algorithms._
Reference [14] introduces a tool which tries to transform a circuit so that the lifetime of quantum superpositions is shortened. They give several examples of circuits and show how the application of the tool results in a circuit performing the same algorithm but with a reduced error rate. Our next example builds on one of their examples, computes the corresponding PLTSs and compares the maximal weighted traces.
**Example 11**.: _Consider the following circuits reproduced from [14], which in ideal quantum devices would be indistinguishable._
_These circuits are represented as_
_where \(H\) and \(CX\) are indexed by the numeric identifiers of the qubit(s) to which they apply in each execution step. The maximal weighted trace of the (relabelled PLTS corresponding to the) circuit on the left, \(\langle[*,*,*,*,*],0,1\rangle\), is a weighted subtrace of the one corresponding to the circuit on the right, \(\langle[*,*,*,*,*,*,*],0.6,0.7\rangle\). Thus, the circuit on the right is more effective than the one on the left, as experimentally verified in [14]._
**Example 12**.: _As a final example consider two circuits differing only on the time points in which measurements are placed._
_The corresponding PLTSs, computed again with the values given in reference [14] (where the execution time of a measurement is \(\tau_{M}=300ns\sim 1\mu s\)), are depicted below_
\[\begin{array}{c|c}s_{1}&r_{1}\\ \left(H_{1}\otimes H_{2},1,0\right)&\\ s_{2}&r_{2}\\ \left(M_{3},0.99,0.31\right)&\\ s_{3}&r_{3}\\ \left(M_{2},0.98,0.32\right)&\\ s_{4}&r_{4}\\ \left(CX_{0,1},0.58,0.72\right)&\\ s_{5}&r_{5}\\ \left(M_{1},0.99,0.31\right)&\\ s_{6}&r_{6}\\ \left(M_{0},0.98,0.32\right)&\\ s_{7}&r_{7}\\ \end{array}\]
_The maximal weighted trace \(\langle[*,*,*,*,*,*],0.6,0.7\rangle\) corresponding to the circuit on the right is a weighted subtrace of the corresponding one for the circuit on the left, \(\langle[*,*,*,*,*,*],0.58,0.72\rangle\). This shows that measuring can be safely postponed to the end of a circuit, as experimentally verified._
## 6 Conclusions and future work
The paper introduced a category of a new kind of labelled transition systems, able to capture both _vagueness_ and _inconsistency_ in software modelling scenarios. The structure of this category was explored to define a number of useful operators to build such systems in a compositional way. Finally, PLTS were used to model effectiveness concerns in the analysis of quantum circuits. In this case the weight corresponding to the 'presence' of a transition captures an index measuring its effectiveness under the best-case value for qubit decoherence. On the other hand, the weight corresponding to the 'absence' of a transition measures the possibility of non-occurrence under the worst-case value for qubit decoherence.
A lot remains to be done. First of all, a process logic, as classically associated with labelled transition systems [12], i.e. a modal logic with label-indexed modalities, can be designed for pointed PLTS. This will provide not only yet another behavioural equivalence, based on the set of formulas satisfied by two systems, but also a formal way to express safety and liveness properties of these systems.
This will be extremely useful to express and verify properties related to the effectiveness of quantum circuits, therefore pushing further the application scenario proposed in section 5. Finally, automating the construction of a pointed PLTS for a given circuit, parametric on the qubit coherence and gate execution times found experimentally, and adding a prover for the logic suggested above, will provide an interesting basis to support quantum circuit optimization. Reliable, mathematically sound approaches and tools to support quantum computer programming and verification will be part of the quantum research agenda for the years to come. Indeed, their absence may put at risk the expected quantum advantage of the new hardware.
2303.04728 | A probabilistic approach to Lorentz balls | We develop a probabilistic approach to study the volumetric and geometric
properties of unit balls $\mathbb B_{q,1}^n$ of finite-dimensional Lorentz
sequences spaces $\ell_{q,1}^n$. More precisely, we show that the empirical
distribution of a random vector $X^{(n)}$ uniformly distributed on the volume
normalized Lorentz ball in $\mathbb R^n$ converges weakly to a compactly
supported symmetric probability distribution with explicitly given density; as
a consequence we obtain a weak Poincar\'e-Maxwell-Borel principle for any fixed
number $k\in\mathbb N$ of coordinates of $X^{(n)}$ as $n\to\infty$. Moreover,
we prove a central limit theorem for the largest coordinate of $X^{(n)}$,
demonstrating a quite different behavior than in the case of the $\ell_q^n$
balls, where a Gumbel distribution appears in the limit. Last but not least, we
prove a Schechtman-Schmuckenschl\"ager type result for the asymptotic volume of
intersections of volume normalized Lorentz and $\ell^n_p$ balls. | Zakhar Kabluchko, Joscha Prochno, Mathias Sonnleitner | 2023-03-08T17:15:33Z | http://arxiv.org/abs/2303.04728v1 | # A probabilistic approach to Lorentz balls
###### Abstract
We develop a probabilistic approach to study the volumetric and geometric properties of unit balls \(\mathbb{B}_{q,1}^{n}\) of finite-dimensional Lorentz sequence spaces \(\ell_{q,1}^{n}\). More precisely, we show that the empirical distribution of a random vector \(X^{(n)}\) uniformly distributed on the volume normalized Lorentz ball in \(\mathbb{R}^{n}\) converges weakly to a compactly supported symmetric probability distribution with explicitly given density; as a consequence we obtain a weak Poincare-Maxwell-Borel principle for any fixed number \(k\in\mathbb{N}\) of coordinates of \(X^{(n)}\) as \(n\to\infty\). Moreover, we prove a central limit theorem for the largest coordinate of \(X^{(n)}\), demonstrating a quite different behavior than in the case of the \(\ell_{q}^{n}\) balls, where a Gumbel distribution appears in the limit. Last but not least, we prove a Schechtman-Schmuckenschlager type result for the asymptotic volume of intersections of volume normalized Lorentz and \(\ell_{p}^{n}\) balls.
**Keywords.** Asymptotic volume, central limit theorem, concentration of measure, convex body, Lorentz space, maximum entropy principle, Poincare-Maxwell-Borel principle
**MSC.** Primary 46B09, 52A23, 60F05; Secondary 46B06, 46B20, 46B45, 94A17
## 1 Introduction
The last decades have revealed a deep connection between the geometry of high-dimensional convex bodies and probability theory, and each field fruitfully influenced the other. Probabilistic methods naturally come into play when one considers a (high-dimensional) convex body in \(\mathbb{R}^{n}\), i.e., a compact and convex set with non-empty interior, as a probability space when it is endowed with the canonical Borel \(\sigma\)-field and the normalized uniform measure.
Probably the most prominent family of convex bodies is the unit balls \(\mathbb{B}_{p}^{n}\) of the classical finite-dimensional sequence spaces \(\ell_{p}^{n}\) (\(1\leq p\leq\infty\)). This parametric family of bodies is arguably one of the most studied ones in geometric functional analysis and their analytic and geometric properties are quite well understood today. In numerous instances it is the already mentioned probabilistic point of view that gives access to understanding the asymptotic structure as the space dimension \(n\) tends to infinity, because there is a rather simple probabilistic representation of the uniform distribution on \(\mathbb{B}_{p}^{n}\). This representation allows one to go from a random vector uniformly distributed on \(\mathbb{B}_{p}^{n}\), and thus from one having dependent coordinates for \(p<\infty\), to a random vector whose entries are independent and identically distributed according to the so-called \(p\)-Gaussian distribution. This heavily facilitates computations and is therefore one of the key tools used in the study of \(\ell_{p}^{n}\) balls. It was Schechtman and Zinn [56], and independently Rachev and Ruschendorf [51], who showed that if \(X=(X_{1},\ldots,X_{n})\)
is distributed uniformly at random on \(\mathbb{B}_{p}^{n}\), then
\[X\stackrel{{\mathrm{d}}}{{=}}U^{1/n}\frac{Y}{\|Y\|_{p}}, \tag{1}\]
where \(U\) is distributed uniformly on \([0,1]\) and \(Y=(Y_{1},\ldots,Y_{n})\) is independent of \(U\) with \(Y_{1},\ldots,Y_{n}\) being independent and identically distributed with Lebesgue density on \(\mathbb{R}\) given by
\[f_{p}(x):=\begin{cases}\frac{1}{2p^{1/p}\Gamma(1+\frac{1}{p})}e^{-|x|^{p}/p}& \colon 1\leq p<\infty\\ \frac{1}{2}\mathbb{1}_{[-1,1]}(x)&\colon p=\infty.\end{cases}\]
Here and below, \(\stackrel{{\mathrm{d}}}{{=}}\) denotes equality in distribution. The previous result was lifted in [7] to a wider class of distributions related to \(\ell_{p}^{n}\) balls that include the uniform distribution and the distribution with respect to the cone probability measure as special cases. As we mentioned above, the representation presented as well as its generalization are frequently used in the asymptotic analysis of geometric and volumetric aspects of those unit balls and we refer, for instance, to [3, 4, 22, 30, 32, 46, 47, 55, 57] and the survey article [50].
A natural and frequently studied generalization of \(\ell_{p}\) spaces (and their function space counterparts), which is classical not only in functional analysis [12, 40], but also in harmonic analysis [25] and optimization [38], is the class of Orlicz spaces. The finite-dimensional Orlicz space \(\ell_{M}^{n}\) is \(\mathbb{R}^{n}\) endowed with the norm
\[\|(x_{i})_{i=1}^{n}\|_{M}:=\inf\left\{\rho>0:\sum_{i=1}^{n}M\left(\frac{|x_{i}|}{\rho}\right)\leq 1\right\},\]
where \(M:\mathbb{R}\to\mathbb{R}\) is symmetric, convex, and satisfies both \(M(0)=0\) and \(M(x)>0\) for \(x\neq 0\) (when \(M(t)=|t|^{p}\), we have \(\|\cdot\|_{M}=\|\cdot\|_{p}\)). As in the case of \(\ell_{p}^{n}\) spaces, the Orlicz spaces belong to the important class of finite-dimensional \(1\)-symmetric Banach spaces, i.e., spaces \((X,\|\cdot\|_{X})\) with a basis \(e_{1},\ldots,e_{n}\in X\) such that
\[\left\|\,\sum_{i=1}^{n}x_{i}e_{i}\,\right\|_{X}=\left\|\,\sum_{i=1}^{n} \varepsilon_{i}x_{\pi(i)}e_{i}\,\right\|_{X}\]
holds for all \(x_{1},\ldots,x_{n}\in\mathbb{R}\), all signs \(\varepsilon_{1},\ldots,\varepsilon_{n}\in\{-1,1\}\), and all permutations \(\pi\) of the numbers \(\{1,\ldots,n\}\); Orlicz spaces are intensively studied in the functional analysis literature and we refer to [24, 34, 39, 49, 62] and references cited therein. Trying to lift or generalize results that can be obtained for the spaces \(\ell_{p}^{n}\) by employing the Schechtman-Zinn probabilistic representation (1), one soon hits a dead end, because such a representation in the case of unit balls \(\mathbb{B}_{M}^{n}\) of Orlicz spaces \(\ell_{M}^{n}\) is not known. As a consequence, generalizations of results regarding the asymptotic structure of \(\mathbb{B}_{p}^{n}\) (as mentioned above) remained inaccessible for quite a while. Recently, Kabluchko and Prochno [31], using maximum entropy considerations in the framework of non-interacting particles that have their origin in statistical mechanics, derived an _asymptotic_ version of a Schechtman-Zinn type representation for Orlicz spaces, relating the uniform distribution on Orlicz balls to Gibbs distributions with potential given by the Orlicz functions. This connection made it possible to obtain a number of further results in the general setting of Orlicz balls, which can be found, for instance, in [2, 8, 21, 28, 35].
Alongside Orlicz spaces, the second natural class of generalizations of \(\ell_{p}\) spaces are Lorentz spaces, which were introduced by George Lorentz in the 1950s [41, 42]. In fact, it had already been observed
by Marcinkiewicz in [43] that Lebesgue spaces are not sufficient to capture the fine properties of operators on \(L_{p}\) spaces. Since their introduction, Lorentz spaces have found numerous applications in different areas of mathematics such as approximation theory [16], harmonic analysis [23], interpolation theory [9], and signal processing [20]. We are interested in the finite-dimensional Lorentz space \(\ell_{q,p}^{n}\) with \(1\leq p\leq q\leq\infty\), which is merely \(\mathbb{R}^{n}\) with the norm
\[\|(x_{i})_{i=1}^{n}\|_{q,p}:=\Big{(}\sum_{i=1}^{n}|i^{1/q-1/p}x_{i}^{*}|^{p} \Big{)}^{1/p},\]
where \(x_{1}^{*},\ldots,x_{n}^{*}\) is the non-increasing rearrangement of the numbers \(|x_{1}|,\ldots,|x_{n}|\) (when \(q=p\), we have \(\|\cdot\|_{q,p}=\|\cdot\|_{p}\)). Those spaces again belong to the important class of \(1\)-symmetric Banach spaces and have also been intensively studied in the functional analysis literature, for instance, in [5, 18, 26, 27, 48, 53, 54, 60, 61, 64]. Indeed, unexpected phenomena can occur in these spaces; for example, as proved in [19], they are a counterexample to a conjecture on the interpolation behaviour of entropy numbers which holds true for \(\ell_{p}\) spaces. Results regarding a probabilistic approach to the asymptotic volumetric and geometric structure of unit balls \(\mathbb{B}_{q,p}^{n}\) in Lorentz spaces \(\ell_{q,p}^{n}\), of the same flavor as the ones mentioned for \(\mathbb{B}_{p}^{n}\) or \(\mathbb{B}_{M}^{n}\) above, are completely absent from the literature. One reason is arguably that the non-increasing rearrangement that appears in the definition of the norm makes any analysis quite delicate.
The motivation of our paper is thus twofold, namely to develop a first probabilistic approach on the one hand and apply this to study some asymptotic properties of Lorentz spaces \(\ell_{q,p}^{n}\) on the other. In view of the vast literature regarding a probabilistic take on \(\ell_{p}^{n}\) or more generally \(\ell_{M}^{n}\) spaces and their unit balls, this is a first step towards approaching numerous natural questions that arise, like central limit theorems, conditional limit theorems, or moderate and large deviations [3, 4, 21, 22, 30, 32, 36], asymptotic volume ratios and thin-shell measure concentration [2, 31], asymptotic independence of coordinates [8] or asymptotic thin-shell results [35], just to mention a few.
## 2 Main results
We shall now present our main results, starting with the probabilistic approach to the uniform distribution on Lorentz balls and continuing with the applications to geometric and volumetric properties. The results are restricted to the case of Lorentz balls with parameters \(p=1\) and \(1<q\leq\infty\), but we present a conjecture together with heuristic arguments on what asymptotic probabilistic representation to expect in the general setting.
We shall denote by
\[\tilde{\mathbb{D}}_{q,p}^{n}:=n^{1/q}\mathbb{B}_{q,p}^{n}=\bigg{\{}x\in \mathbb{R}^{n}:\frac{1}{n}\sum_{i=1}^{n}\Big{(}\frac{i}{n}\Big{)}^{p/q-1}|x_{i }^{*}|^{p}\leq 1\bigg{\}}\]
for \(q<\infty\) and
\[\tilde{\mathbb{D}}_{\infty,1}^{n}:=\log(n+1)\mathbb{B}_{\infty,1}^{n}=\bigg{\{} x\in\mathbb{R}^{n}:\frac{1}{\log(n+1)}\sum_{i=1}^{n}\frac{1}{i}|x_{i}^{*}|\leq 1 \bigg{\}}\]
for \(q=\infty\) the normalized unit balls such that, asymptotically up to constants, they have Lebesgue volume \(1\). Indeed, it follows from a classical result on the volume of unit balls in Banach spaces
due to Schutt [59] that \(\operatorname{vol}_{n}(\mathbb{B}_{q,1}^{n})^{1/n}\asymp_{q}n^{-1/q}\) and, as shown in [18, Theorem 7], for \(q=\infty\) we have \(\operatorname{vol}_{n}(\mathbb{B}_{\infty,1}^{n})^{1/n}\asymp(\log(n+1))^{-1}\); here \(\asymp_{q}\) denotes equivalence up to constants depending only on \(q\) and \(\asymp\) equivalence up to absolute constants.
The following stochastic representation constitutes the basis for our results and is of independent interest. It is the probabilistic entry point into the study of the asymptotic structure of Lorentz balls in the case \(p=1\).
**Theorem A**.: Let \(1\leq q\leq\infty\), \(n\in\mathbb{N}\) and assume \(X^{(n)}\) is uniformly distributed on \(\mathbb{B}_{q,1}^{n}\). Then
\[X^{(n)}\stackrel{{\mathrm{d}}}{{=}}\left(\varepsilon_{1}\frac{ \sum_{j=\pi(1)}^{n}\kappa_{q}(j)^{-1}E_{j}}{\sum_{j=1}^{n+1}E_{j}},\ldots, \varepsilon_{n}\frac{\sum_{j=\pi(n)}^{n}\kappa_{q}(j)^{-1}E_{j}}{\sum_{j=1}^{n +1}E_{j}}\right),\]
where \(E_{1},\ldots,E_{n+1}\) are standard exponential random variables, \(\varepsilon=(\varepsilon_{1},\ldots,\varepsilon_{n})\) is uniformly distributed on \(\{-1,1\}^{n}\), \(\pi\) is uniformly distributed on the symmetric group \(S_{n}\) of permutations of \(\{1,\ldots,n\}\), and for \(j\in\{1,\ldots,n\}\),
\[\kappa_{q}(j):=\sum_{i=1}^{j}i^{1/q-1}.\]
All random objects are independent of each other.
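The representation in Theorem A is easy to simulate. The following numpy sketch is ours and purely illustrative (the function and variable names are not part of the paper); it samples a point uniformly from \(\mathbb{B}_{q,1}^{n}\) directly along the lines of the theorem and checks that its Lorentz norm does not exceed \(1\).

```python
import numpy as np

def kappa(q, n):
    # kappa_q(j) = sum_{i=1}^j i^(1/q - 1); q = np.inf corresponds to the exponent -1
    i = np.arange(1, n + 1, dtype=float)
    expo = -1.0 if np.isinf(q) else 1.0 / q - 1.0
    return np.cumsum(i ** expo)

def sample_lorentz_ball(q, n, rng=None):
    """Sample X^(n) uniformly from B_{q,1}^n via the representation of Theorem A."""
    rng = np.random.default_rng() if rng is None else rng
    E = rng.exponential(size=n + 1)                       # E_1, ..., E_{n+1}
    tails = np.cumsum((E[:n] / kappa(q, n))[::-1])[::-1]  # sum_{j >= i} E_j / kappa_q(j), i = 1, ..., n
    coords = tails / E.sum()                              # divide by E_1 + ... + E_{n+1}
    signs = rng.choice([-1.0, 1.0], size=n)               # uniform random signs
    return signs * rng.permutation(coords)                # uniform random permutation of the coordinates

# Sanity check: the Lorentz (q,1)-norm of a sample never exceeds 1.
q, n = 3.0, 10_000
x = sample_lorentz_ball(q, n)
x_star = np.sort(np.abs(x))[::-1]  # non-increasing rearrangement
print(np.sum(np.arange(1, n + 1) ** (1.0 / q - 1.0) * x_star))  # a value in (0, 1)
```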
According to Theorem A, the first (or any) coordinate of a random vector uniformly distributed on \(\widetilde{\mathbb{D}}_{q,1}^{n}=n^{1/q}\mathbb{B}_{q,1}^{n}\) for \(q<\infty\) is equal in distribution to
\[\varepsilon_{1}\frac{\frac{1}{n}\sum_{i=u_{n}}^{n}\frac{n^{1/q}}{\kappa_{q}(i)}E_{i}}{\frac{1}{n}\sum_{i=1}^{n+1}E_{i}}, \tag{2}\]
where \(u_{n}\) is uniformly distributed on \(\{1,\ldots,n\}\) and \(\varepsilon_{1}\) is uniformly distributed on \(\{-1,1\}\).
**Remark 1**.: Note that the special case \(q=1\) of Theorem A corresponds to the \(\ell_{1}^{n}\) ball and is consistent to the well-known fact that the order statistics of \(n\) independent standard exponential random variables are distributed as
\[\frac{E_{n}}{n},\frac{E_{n}}{n}+\frac{E_{n-1}}{n-1},\ldots,\sum_{j=1}^{n} \frac{E_{j}}{j}\]
beginning with the smallest (\(n^{\text{th}}\)) order statistic up to the largest (\(1^{\text{st}}\)) (see, e.g., [14, Eq. (2.5.5)]).
As a consequence of the proof of Theorem A, we can deduce the following precise asymptotics for the volume radius; the following can also be obtained by refining the calculations from [18].
**Corollary 1**.: _Let \(1\leq q\leq\infty\). As \(n\to\infty\), we have_
\[\operatorname{vol}_{n}(\mathbb{B}_{q,1}^{n})^{1/n}\sim\begin{cases}\frac{2}{q} e^{1/q}n^{-1/q}&:1\leq q<\infty,\\ 2(\log n)^{-1}&:q=\infty.\end{cases} \tag{3}\]
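As a quick numerical illustration (ours, not part of the paper's argument), the exact volume radius \(\operatorname{vol}_{n}(\mathbb{B}_{q,1}^{n})^{1/n}=2\prod_{i=1}^{n}\kappa_{q}(i)^{-1/n}\), which is derived in the proof of Corollary 1 below, can be compared with the asymptotics (3); log-sums are used for numerical stability.

```python
import numpy as np

def log_kappa_sum(q, n):
    # sum_{i=1}^n log kappa_q(i), with kappa_q(i) = sum_{j <= i} j^(1/q - 1)
    i = np.arange(1, n + 1, dtype=float)
    expo = -1.0 if np.isinf(q) else 1.0 / q - 1.0
    return np.log(np.cumsum(i ** expo)).sum()

n = 10**5
for q in [2.0, 5.0, np.inf]:
    exact = 2.0 * np.exp(-log_kappa_sum(q, n) / n)  # vol_n(B_{q,1}^n)^{1/n} = 2 prod_i kappa_q(i)^{-1/n}
    asym = 2.0 / np.log(n) if np.isinf(q) else (2.0 / q) * np.exp(1.0 / q) * n ** (-1.0 / q)
    print(q, exact / asym)  # the ratio approaches 1 as n grows (slowly for q = infinity)
```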
In our next theorem, we establish a weak convergence result for the empirical distribution. It is also used to establish the asymptotic distribution of a single coordinate presented afterwards.
**Theorem B**.: Let \(1<q\leq\infty\) and for each \(n\in\mathbb{N}\) assume that \(\widetilde{X}^{(n)}=(\widetilde{X}^{(n)}_{1},\ldots,\widetilde{X}^{(n)}_{n})\) is a random vector uniformly distributed on \(\widetilde{\mathbb{D}}^{n}_{q,1}\). Then, for every bounded continuous function \(f\colon\mathbb{R}\to\mathbb{R}\), we have
\[\frac{1}{n}\sum_{i=1}^{n}f(\widetilde{X}^{(n)}_{i})\,\tfrac{\mathbb{P}}{n\to \infty}\int_{\mathbb{R}}f(x)\,\nu_{q,1}(\mathrm{d}x),\]
where \(\nu_{q,1}\) is a symmetric probability measure on \(\mathbb{R}\) with Lebesgue density \(f_{q,1}\colon\mathbb{R}\to\mathbb{R}\) given by
\[f_{q,1}(x):=\frac{1}{2}\begin{cases}q(1-(q-1)|x|)^{1/(q-1)}\mathbbm{1}_{[- \frac{1}{q-1},\frac{1}{q-1}]}(x)&:q<\infty\\ \mathbbm{1}_{[-1,1]}(x)&:q=\infty.\end{cases}\]
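As a numerical sanity check of Theorem B (ours; it plays no role in the proofs), one can sample from \(\widetilde{\mathbb{D}}_{q,1}^{n}\) via Theorem A and compare an empirical average with the corresponding integral against \(\nu_{q,1}\). For instance, a direct computation with the density above gives \(\int_{\mathbb{R}}|x|\,\nu_{q,1}(\mathrm{d}x)=\frac{1}{2q-1}\) for \(q<\infty\).

```python
import numpy as np

q, n = 3.0, 200_000
rng = np.random.default_rng(0)

# |coordinates| of a uniform point on the normalized ball n^{1/q} B_{q,1}^n (Theorem A);
# for statistics over all coordinates the random signs and the random permutation are irrelevant.
E = rng.exponential(size=n + 1)
kap = np.cumsum(np.arange(1, n + 1, dtype=float) ** (1.0 / q - 1.0))
abs_coords = n ** (1.0 / q) * np.cumsum((E[:n] / kap)[::-1])[::-1] / E.sum()

print(np.mean(abs_coords), 1.0 / (2.0 * q - 1.0))  # both values are close to 0.2 for q = 3
```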
Essentially due to exchangeability of the coordinates, it follows from Theorem B that any fixed choice of \(k\in\mathbb{N}\) coordinates is asymptotically distributed as the \(k\)-fold product of \(\nu_{q,1}\) as the dimension of the ambient space tends to infinity. We will obtain the following weak Poincare-Maxwell-Borel principle for normalized Lorentz balls.
**Corollary 2**.: _Let \(1<q\leq\infty\) and for each \(n\in\mathbb{N}\) assume that \(\widetilde{X}^{(n)}=(\widetilde{X}^{(n)}_{1},\ldots,\widetilde{X}^{(n)}_{n})\) is uniformly distributed on \(\widetilde{\mathbb{D}}^{n}_{q,1}\). For every \(k\in\mathbb{N}\) and with \(\nu_{q,1}\) as in Theorem B, for any bounded and continuous function \(f\colon\mathbb{R}^{k}\to\mathbb{R}\), we have_
\[\mathbb{E}[f(\widetilde{X}^{(n)}_{1},\ldots,\widetilde{X}^{(n)}_{k})]\stackrel{{ n\to\infty}}{{\longrightarrow}}\int_{\mathbb{R}^{k}}f(x)\,\nu_{q,1}^{\otimes k }(\mathrm{d}x),\]
_that is, \((\widetilde{X}^{(n)}_{1},\ldots,\widetilde{X}^{(n)}_{k})\) converges in distribution to a vector \(Y^{(k)}=(Y_{1},\ldots,Y_{k})\sim\nu_{q,1}^{\otimes k}\)._
**Remark 2**.: The case \(q=1\), for which \(\mathbb{B}^{n}_{1}=\mathbb{B}^{n}_{1,1}\), is already known [51]; in this case the asymptotic coordinate distributions are two-sided exponential. By considering \(q_{N}:=1+1/N\to 1\), we see that this is consistent with our result due to the limiting behavior
\[f_{q_{N}}(x)=\frac{1+1/N}{2}(1-|x|/N)^{N}\mathbbm{1}_{[-N,N]}(x)\stackrel{{ N\to\infty}}{{\longrightarrow}}\frac{1}{2}\exp(-|x|),\quad x\in\mathbb{R}.\]
**Remark 3**.: A notable difference in the asymptotic probabilistic behavior of vectors chosen uniformly at random from the volume normalized versions of \(\mathbb{B}^{n}_{q,1}\) and \(\mathbb{B}^{n}_{q}\) is that in the former case the asymptotic distribution of a single coordinate has compact support whereas in the latter case one obtains unbounded support (see (1) and the definition of the \(q\)-Gaussian distribution).
In contrast to the maximum norm of uniform random vectors in normalized \(\ell^{n}_{q}\) balls, which has Gumbel fluctuations [32, Theorem 1.1 (c)], we have a central limit theorem for the Lorentz balls for \(q>2\), i.e., while the volume radius in terms of the dimensions is the same in both cases, the probabilistic behavior is quite different. In between, that is, for \(q\in(1,2]\), we have a different type of limit theorem which seems to interpolate between the Gumbel distribution and the normal distribution.
**Theorem C**.: Let \(1\leq q<\infty\), \(r\in(0,\infty)\) and for each \(n\in\mathbb{N}\) assume that \(\widetilde{X}^{(n)}\) is uniformly distributed on \(\widetilde{\mathbb{D}}^{n}_{q,1}\). Then we have:
1. For \(1\leq q<2\) \[n^{1-1/q}\big{(}\|\widetilde{X}^{(n)}\|_{\infty}-\mu_{q,n}\big{)}\xrightarrow[n\to\infty]{\mathrm{d}}R_{q}\stackrel{{\mathrm{d}}}{{=}}\sum_{j=1}^{\infty}\frac{E_{j}-1}{\kappa_{q}(j)},\]
2. For \(q=2\) \[\frac{\sqrt{n}}{\sqrt{\log n}}\big{(}\|\widetilde{X}^{(n)}\|_{\infty}-\mu_{2,n} \big{)}\xrightarrow[n\to\infty]{\mathrm{d}}R_{2}\sim\mathcal{N}(0,1/4),\]
3. For \(2<q<\infty\) \[\sqrt{n}\big{(}\|\widetilde{X}^{(n)}\|_{\infty}-\mu_{q,n}\big{)}\xrightarrow[n \to\infty]{\mathrm{d}}\mathcal{N}(0,\sigma_{q}^{2}).\]
Here, \(R_{1}+\gamma\) has a Gumbel law with distribution function \(x\mapsto e^{-e^{-x}}\), \(\gamma\) is the Euler-Mascheroni constant,
\[\mu_{q,n}:=\frac{1}{n}\sum_{j=1}^{n}\frac{n^{1/q}}{\kappa_{q}(j)}\qquad\text{ and}\qquad\sigma_{q}^{2}:=\frac{1}{q(q-1)^{2}(q-2)}.\]
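The following small simulation (ours; only a plausibility check, with ad hoc sample sizes and variable names) illustrates case (iii): for \(q>2\) the empirical standard deviation of the rescaled maximum is close to \(\sigma_{q}\).

```python
import numpy as np

q, n, reps = 4.0, 5_000, 2_000
rng = np.random.default_rng(1)

kap = np.cumsum(np.arange(1, n + 1, dtype=float) ** (1.0 / q - 1.0))
mu_qn = np.mean(n ** (1.0 / q) / kap)  # mu_{q,n} = (1/n) sum_j n^{1/q} / kappa_q(j)

samples = np.empty(reps)
for k in range(reps):
    E = rng.exponential(size=n + 1)
    # largest coordinate of a uniform point on n^{1/q} B_{q,1}^n (Theorem A)
    max_coord = n ** (1.0 / q) * np.sum(E[:n] / kap) / E.sum()
    samples[k] = np.sqrt(n) * (max_coord - mu_qn)

sigma_q = 1.0 / np.sqrt(q * (q - 1.0) ** 2 * (q - 2.0))
print(samples.std(), sigma_q)  # both approximately 0.118 for q = 4
```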
We can also use the probabilistic representation to study the \(\ell_{r}^{n}\) norm of a random vector uniformly distributed in \(\mathbb{D}_{q,1}^{n}:=\mathrm{vol}_{n}(\mathbb{B}_{q,1}^{n})^{-1/n}\mathbb{B} _{q,1}^{n}\), i.e., we determine its asymptotic length with respect to \(\|\cdot\|_{r}\); we shall apply that result later to study the volumetric behavior of certain intersections. We have the following weak law of large numbers.
**Theorem D**.: Let \(1<q\leq\infty\), \(1<r\leq\infty\), and for each \(n\in\mathbb{N}\) assume that \(X^{(n)}\) is uniformly distributed on \(\mathbb{D}_{q,1}^{n}\). Then
\[n^{-1/r}\|X^{(n)}\|_{r}\xrightarrow[n\to\infty]{\mathrm{P}}m_{q,r},\]
where for \(r<\infty\) we have
\[m_{q,r}:=\frac{1}{2e^{1/q}}\frac{q}{q-1}\left(\frac{\Gamma(r+1)\Gamma\Big{(}1 +\frac{q}{q-1}\Big{)}}{\Gamma\Big{(}r+1+\frac{q}{q-1}\Big{)}}\right)^{1/r} \text{ if }q<\infty\quad\text{and}\quad m_{\infty,r}:=\frac{1}{2}\Big{(}\frac{1}{r+1} \Big{)}^{1/r}\text{ if }q=\infty,\]
and for \(r=\infty\) we have
\[m_{q,\infty}:=\frac{1}{2e^{1/q}}\frac{q}{q-1}\text{ if }q<\infty\quad\text{and} \quad m_{\infty,\infty}:=\frac{1}{2}.\]
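Again purely as an illustration (ours), the constant \(m_{q,r}\) can be checked against a single large sample from \(\mathbb{D}_{q,1}^{n}\); the exact volume radius from the proof of Corollary 1 is used for the normalization.

```python
import numpy as np
from math import lgamma, exp

q, r, n = 3.0, 2.0, 500_000
rng = np.random.default_rng(2)

# m_{q,r} for q, r < infinity as in Theorem D, evaluated via log-gamma for stability
a = q / (q - 1.0)
m_qr = (1.0 / (2.0 * exp(1.0 / q))) * a * exp((lgamma(r + 1.0) + lgamma(1.0 + a) - lgamma(r + 1.0 + a)) / r)

# uniform point on D_{q,1}^n = vol_n(B_{q,1}^n)^{-1/n} B_{q,1}^n, sampled via Theorem A (absolute values suffice)
kap = np.cumsum(np.arange(1, n + 1, dtype=float) ** (1.0 / q - 1.0))
E = rng.exponential(size=n + 1)
coords = np.cumsum((E[:n] / kap)[::-1])[::-1] / E.sum()  # point in B_{q,1}^n, up to signs and a permutation
vol_radius = 2.0 * np.exp(-np.log(kap).sum() / n)        # vol_n(B_{q,1}^n)^{1/n} = 2 prod_i kappa_q(i)^{-1/n}
X = coords / vol_radius

print(n ** (-1.0 / r) * np.linalg.norm(X, ord=r), m_qr)  # both approximately 0.26 for q = 3, r = 2
```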
As a consequence we can deduce the following Schechtman-Schmuckenschlager type result on the asymptotic volume of intersections of Lorentz and \(\ell_{r}^{n}\) balls, thereby complementing the results from [29, 30, 31, 32, 33, 55, 56, 58]. In the following corollary, we shall denote \(\mathbb{D}_{r}^{n}:=\mathrm{vol}_{n}(\mathbb{B}_{r}^{n})^{-1/n}\mathbb{B}_{r}^ {n}\).
**Corollary 3**.: _Let \(1<q\leq\infty\) and \(1<r\leq\infty\). Then, for all \(t>0\),_
\[\mathrm{vol}_{n}(\mathbb{D}_{q,1}^{n}\cap t\mathbb{D}_{r}^{n})\xrightarrow[n\to\infty]{}\begin{cases}1&:\,A_{q,r}t>1\\ 0&:\,A_{q,r}t<1,\end{cases}\]
_where for \(r<\infty\), we have_
\[A_{q,r}:=\begin{cases}e^{1/q-1/r}\,\frac{q-1}{q}\,\frac{1}{r^{1/r}\Gamma\big{(}1+\frac{1}{r}\big{)}}\left(\frac{\Gamma\big{(}r+1+\frac{q}{q-1}\big{)}}{\Gamma(r+1)\Gamma\big{(}1+\frac{q}{q-1}\big{)}}\right)^{1/r}&:\,q<\infty\\ \frac{1}{\Gamma\big{(}1+\frac{1}{r}\big{)}}\left(\frac{r+1}{er}\right)^{1/r}&:\,q=\infty.\end{cases}\]
**Remark 4**.: Comparing with analogous results for the \(\ell_{p}^{n}\) balls obtained in [55], we see that for \(q=\infty\) and \(r<\infty\) the obtained threshold is the same. Therefore, asymptotically \(\mathbb{D}_{\infty,1}^{n}\) behaves somewhat like \(\mathbb{D}_{\infty}^{n}\) when intersected with an \(\ell_{r}^{n}\) ball.
Also for \(r<\infty\), using \(\Gamma(x+\alpha)\sim\Gamma(x)x^{\alpha}\) for \(x\to\infty\), we obtain that
\[\lim_{q\to 1}A_{q,r}=\frac{e^{1-1/r}}{\Gamma(1+1/r)\Gamma(r+1)^{1/r}r^{1/r}},\]
which equals the threshold one would get for \(\mathbb{D}_{1}^{n}\) intersected with \(\mathbb{D}_{r}^{n}\). That is, in the boundary cases \(q\in\{1,\infty\}\) we have that \(\mathbb{D}_{q,1}^{n}\) behaves similarly to \(\mathbb{D}_{q}^{n}\). In between, however, the behaviour of the threshold constants appears to be different, for example as \(q\to\infty\) the constant \(A_{q,r}\) grows for \(\mathbb{D}_{q}^{n}\) but converges for \(\mathbb{D}_{q,1}^{n}\).
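The limiting behaviour described in this remark can also be checked numerically. The sketch below (ours) evaluates the expression for \(A_{q,r}\) displayed in Corollary 3, using log-gamma values for numerical stability, and compares it for \(q\) close to \(1\) with the stated limit.

```python
from math import exp, lgamma, gamma

def A(q, r):
    # A_{q,r} for 1 < q < infinity and 1 < r < infinity as displayed in Corollary 3
    a = q / (q - 1.0)
    ratio = exp((lgamma(r + 1.0 + a) - lgamma(r + 1.0) - lgamma(1.0 + a)) / r)
    return exp(1.0 / q - 1.0 / r) * ((q - 1.0) / q) * ratio / (r ** (1.0 / r) * gamma(1.0 + 1.0 / r))

r = 2.0
limit_q_to_1 = exp(1.0 - 1.0 / r) / (gamma(1.0 + 1.0 / r) * gamma(r + 1.0) ** (1.0 / r) * r ** (1.0 / r))
print(A(1.001, r), limit_q_to_1)  # both approximately 0.93
print(A(10.0, r), A(100.0, r))    # the constant stabilizes as q grows
```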
For parameters \(p>1\), we currently do not have a probabilistic representation. However, we can use heuristic arguments based on maximum entropy considerations in order to derive the following conjecture about the limiting distribution.
**Conjecture 1**.: _Let \(1\leq p\leq q<\infty\) and for each \(n\in\mathbb{N}\) assume that \(\widetilde{X}^{(n)}=(\widetilde{X}_{1}^{(n)},\ldots,\widetilde{X}_{n}^{(n)})\) is uniformly distributed on \(\widetilde{\mathbb{D}}_{q,p}^{n}\). Then, for every bounded continuous function \(f\colon\mathbb{R}\to\mathbb{R}\), we have_
\[\frac{1}{n}\sum_{i=1}^{n}f(\widetilde{X}_{i}^{(n)})\xrightarrow[n\to\infty]{\mathbb{P}}\int_{\mathbb{R}}f(x)\,\nu_{q,p}(\mathrm{d}x),\]
_where \(\nu_{q,p}\) is a symmetric probability measure on \(\mathbb{R}\) absolutely continuous with respect to Lebesgue measure and with density function \(f_{q,p}\colon\mathbb{R}\to[0,\infty)\) satisfying \(f_{q,p}(x)=\frac{1}{2}G^{\prime}(|x|)\), where \(G\colon[0,\infty)\to\mathbb{R}\) is the unique solution to the differential equation_
\[G^{\prime\prime}(x)=-G^{\prime}(x)\big{(}1-G(x)\big{)}^{p/q-1}x^{p-1},\quad x \in(0,r_{p,q}),\]
_with \(G(0)=0\) and \(\lim_{x\uparrow r_{p,q}}G^{\prime}(x)=0\), where \(r_{p,q}\in(0,\infty]\) is the first point such that \(G(r_{p,q})=1\). Further, we conjecture that \(r_{p,q}=\infty\) if and only if \(p=q\)._
Varying the free parameter \(G^{\prime}(0)>0\), we obtain different solutions, see Figure 1 for a simulation. If \(G^{\prime}(0)\) is smaller than some critical value \(c_{p,q}\), then we conjecture that \(\lim_{x\to\infty}G(x)<1\). If \(G^{\prime}(0)=c_{p,q}\), then there is a (minimal) value \(r_{p,q}\) such that \(G(r_{p,q})=1\) and we have \(\lim_{x\uparrow r_{p,q}}G^{\prime}(x)=0\). This critical solution is the one that appears in the above conjecture.
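A picture in the spirit of Figure 1 can be reproduced with a few lines of code. The sketch below (ours; a crude explicit Euler scheme with ad hoc step size and stopping rule) integrates the differential equation of Conjecture 1 for several values of \(G^{\prime}(0)\).

```python
import numpy as np

def integrate_G(p, q, dG0, h=1e-4, x_max=20.0):
    """Explicit Euler scheme for G'' = -G' (1 - G)^(p/q - 1) x^(p - 1), G(0) = 0, G'(0) = dG0.
    Integration stops once G reaches 1, where the factor (1 - G)^(p/q - 1) blows up."""
    xs, Gs = [0.0], [0.0]
    G, dG, x = 0.0, dG0, 0.0
    while x < x_max and G < 1.0 - 1e-9:
        ddG = -dG * (1.0 - G) ** (p / q - 1.0) * x ** (p - 1.0) if x > 0.0 else 0.0
        G, dG, x = G + h * dG, dG + h * ddG, x + h
        xs.append(x)
        Gs.append(G)
    return np.array(xs), np.array(Gs)

for s in [0.5, 0.8, 1.1]:  # different values of G'(0)
    xs, Gs = integrate_G(p=2.0, q=4.0, dG0=s)
    print(s, xs[-1], Gs[-1])  # where the integration stopped and the value of G there
```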
**Remark 5**.: The conjecture encapsulates the case \(p=q\) for \(\ell_{p}^{n}\) balls, where it correctly returns \(p\)-Gaussian densities, and the case \(p=1\), where it gives the limiting distribution obtained in Theorem B.
## 3 Proofs of the main results
We shall now present the proofs of our main results presented in Section 2. In what follows, given a set \(A\subseteq\mathbb{R}^{n}\), we shall denote by \(\operatorname{conv}(A)\) the convex hull of the set \(A\) and by \(\partial A\) its boundary. Moreover, we denote by \(e_{1},\ldots,e_{n}\) the standard unit vectors in \(\mathbb{R}^{n}\).
### Proof of Theorem A
Note that due to the \(1\)-symmetry of Lorentz norms (and thus unit balls) it is sufficient to look at the Weyl chamber
\[W:=\left\{x\in\mathbb{R}^{n}\colon x_{1}\geq\cdots\geq x_{n}\geq 0\right\} \tag{4}\]
intersected with \(\mathbb{B}_{q,1}^{n}\). This set then takes the form
\[\mathbb{B}_{q,1}^{n}\cap W=\left\{x\in W\colon\sum_{i=1}^{n}i^{1/q-1}x_{i}\leq 1 \right\}.\]
In a first step, we determine the extreme points (vertices) of the polytope \(\mathbb{B}_{q,1}^{n}\cap W\).
**Lemma 4**.: _Let \(1<q\leq\infty\). The extreme points of \(\mathbb{B}_{q,1}^{n}\cap W\) are given by_
\[0,Me_{1},Me_{2},\ldots,Me_{n},\]
_where_
\[M:=\begin{pmatrix}\kappa_{q}(1)^{-1}&\kappa_{q}(2)^{-1}&\cdots&\kappa_{q}(n)^ {-1}\\ 0&\kappa_{q}(2)^{-1}&\cdots&\kappa_{q}(n)^{-1}\\ 0&0&\ddots&\vdots\\ 0&0&\cdots&\kappa_{q}(n)^{-1}\end{pmatrix}\quad\text{with}\quad\kappa_{q}( j)=\sum_{i=1}^{j}i^{1/q-1},\quad j\in\{1,\ldots,n\}.\]
Proof.: First, we note that the Weyl chamber \(W\) is the positive hull of the summing basis
\[Ae_{1}=e_{1},Ae_{2}=e_{1}+e_{2},\ldots,Ae_{n}=e_{1}+\cdots+e_{n}\quad\text{ with}\quad A=\begin{pmatrix}1&1&\cdots&1\\ 0&1&\cdots&1\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&1\end{pmatrix},\]
Figure 1: Simulation of solutions of the differential equation in Conjecture 1 with different values of \(G^{\prime}(0)\).
because every \(x=(x_{i})_{i=1}^{n}\in W\) can be represented as
\[x=\sum_{i=1}^{n}(x_{i}-x_{i+1})(Ae_{i})=A\Big{(}\sum_{i=1}^{n}(x_{i}-x_{i+1})e_{ i}\Big{)},\]
where we set \(x_{n+1}:=0\). A substitution gives
\[\mathbb{B}_{q,1}^{n}\cap W=\Big{\{}x\in\mathbb{R}^{n}\colon\sum_{i=1}^{n}i^{1/ q-1}x_{i}\leq 1,x=A\Big{(}\sum_{i=1}^{n}y_{i}e_{i}\Big{)},y_{i}\geq 0\Big{\}}=\Big{\{} Ay\in\mathbb{R}^{n}\colon\sum_{i=1}^{n}\kappa_{q}(i)y_{i}\leq 1,y_{i}\geq 0 \Big{\}}.\]
Since \(A\) is linear and bijective, it follows that the extreme points are given by \(0\) and \(A(\kappa_{q}(i)^{-1}e_{i})=Me_{i}\), \(i\in\{1,\ldots,n\}\).
We now use this lemma to prove Theorem A.
Proof of Theorem A.: From Lemma 4, we know the extreme points of \(\mathbb{B}_{q,1}^{n}\cap W\), which are given by the images under \(M\) of \(0\) and the unit vectors \(e_{1},\ldots,e_{n}\); \(M\) is given as in Lemma 4. Thus
\[\mathbb{B}_{q,1}^{n}\cap W=\operatorname{conv}\{0,Me_{1},\ldots,Me_{n}\}=M \big{(}\operatorname{conv}\{0,e_{1},\ldots,e_{n}\}\big{)}. \tag{5}\]
The convex hull \(\operatorname{conv}\{0,e_{1},\ldots,e_{n}\}\) is the projection of the standard \(n\)-dimensional simplex \(\Delta_{n}\) in \(\mathbb{R}^{n+1}\), i.e., of
\[\Delta_{n}:=\bigg{\{}x=(x_{i})_{i=1}^{n+1}\in\mathbb{R}^{n+1}:x_{i}\geq 0, \sum_{i=1}^{n+1}x_{i}=1\bigg{\}}=\operatorname{conv}\{e_{1},\ldots,e_{n+1}\} \subseteq\mathbb{R}^{n+1},\]
onto the first \(n\) coordinates. It is well known (see, e.g., [44]) that
\[Z^{(n)}:=\Big{(}\frac{E_{1}}{\sum_{j=1}^{n+1}E_{j}},\ldots,\frac{E_{n+1}}{ \sum_{j=1}^{n+1}E_{j}}\Big{)}\]
is uniformly distributed on \(\Delta_{n}\), where \(E_{1},\ldots,E_{n+1}\) are i.i.d. standard exponential random variables. Applying the projection and then \(M\) to \(Z^{(n)}\), we see that by (5) the random vector
\[\Big{(}\frac{\sum_{j=1}^{n}\kappa_{q}(j)^{-1}E_{j}}{\sum_{j=1}^{n+1}E_{j}}, \frac{\sum_{j=2}^{n}\kappa_{q}(j)^{-1}E_{j}}{\sum_{j=1}^{n+1}E_{j}},\ldots, \frac{\sum_{j=n}^{n}\kappa_{q}(j)^{-1}E_{j}}{\sum_{j=1}^{n+1}E_{j}}\Big{)}\]
is uniformly distributed on the intersection \(\mathbb{B}_{q,1}^{n}\cap W\). Using the \(1\)-symmetry of \(\mathbb{B}_{q,1}^{n}\), we may change the order of coordinates (introducing a random permutation \(\pi\) of \(\{1,\ldots,n\}\)) as well as the signs of coordinates (introducing a random sign vector \((\varepsilon_{1},\ldots,\varepsilon_{n})\in\{-1,1\}^{n}\)) and obtain the desired probabilistic representation.
We now present the proof of the asymptotics of the volume radius of a Lorentz ball. For this and also other proofs we shall need the following well-known asymptotics, see for example [30, Lemma 3.4].
**Lemma 5**.: _Let \(\alpha>-1\). There exists a constant \(c_{\alpha}\in(0,\infty)\) and an absolute constant \(c\in(0,\infty)\) such that for all \(n\in\mathbb{N}\),_
\[\Big{|}\sum_{i=1}^{n}i^{\alpha}-\frac{1}{\alpha+1}\,n^{\alpha+1}\Big{|}\leq c _{\alpha}\,n^{\max[0,\alpha]}\qquad\text{and}\qquad\Big{|}\sum_{i=1}^{n}i^{-1} -\log n\Big{|}\leq c.\]
Proof of Corollary 1.: From Lemma 4, see Equation (5), we know that \(\mathbb{B}_{q,1}^{n}\cap W=M(\operatorname{conv}\{0,e_{1},\ldots,e_{n}\})\). Together with the \(1\)-symmetry, this leads to
\[\operatorname{vol}_{n}(\mathbb{B}_{q,1}^{n})=2^{n}n!\cdot\operatorname{vol}_{n} \big{(}\mathbb{B}_{q,1}^{n}\cap W\big{)}=2^{n}\det(M)=2^{n}\prod_{i=1}^{n} \kappa_{q}(i)^{-1},\]
where \(W\) is as in (4) and \(M\) as in Lemma 4. Now, observe that by Lemma 5 for a suitable constant \(c_{q}\in(0,\infty)\) just depending on the parameter \(q>1\) the following asymptotics hold:
\[|\kappa_{q}(i)-qi^{1/q}|\leq c_{q}\;\text{ if }q<\infty\quad\text{and}\quad| \kappa_{q}(i)-\log(i+1)|\leq c_{\infty}\;\text{ if }q=\infty. \tag{6}\]
We start with case \(q<\infty\). Here, we have
\[\log\Big{(}\prod_{i=1}^{n}\kappa_{q}(i)^{-1}\Big{)}^{1/n} =-\frac{1}{n}\sum_{i=1}^{n}\log\kappa_{q}(i)=-\frac{1}{n}\sum_{i= 1}^{n}\log\Big{(}\frac{\kappa_{q}(i)\cdot qi^{1/q}}{qi^{1/q}}\Big{)}\] \[=-\log q-\frac{1}{qn}\sum_{i=1}^{n}\log i-\frac{1}{n}\sum_{i=1}^{ n}\log\Big{(}\frac{\kappa_{q}(i)}{qi^{1/q}}\Big{)}. \tag{7}\]
Using \(\log(n!)=n\log n-n+O(\log n)\), we deduce that the first sum on the right-hand side satisfies, as \(n\to\infty\),
\[-\frac{1}{qn}\sum_{i=1}^{n}\log i=-\frac{1}{q}\log(n)+\frac{1}{q}+O\Big{(} \frac{\log n}{n}\Big{)}.\]
By (6) we have
\[\lim_{i\to\infty}\log\Big{(}\frac{\kappa_{q}(i)}{qi^{1/q}}\Big{)}=0.\]
By Cesaro's lemma,
\[\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\log\Big{(}\frac{\kappa_{q}(i)}{qi^ {1/q}}\Big{)}=0.\]
This shows the desired result. If \(q=\infty\), we proceed similarly but now use
\[\frac{1}{n}\sum_{i=3}^{n}\log\log i=\frac{1}{n}\int_{3}^{n}\log\log x\,\mathrm{d}x+O\Big{(}\frac{1}{\log n}\Big{)}=\log\log n-\frac{\operatorname{li}(n)}{n}+O\Big{(}\frac{1}{\log n}\Big{)}=\log\log n+O\Big{(}\frac{1}{\log n}\Big{)},\]
where the asymptotics of the logarithmic integral \(\operatorname{li}(x):=\int_{0}^{x}\frac{\mathrm{d}t}{\log t}\), \(x>1\), are well known ([1, Chapter 5]).
### Proof of Theorem B
We now continue with the proof of Theorem B, i.e., the weak convergence results of the empirical distribution. In what follows, for two sequences \((a_{n})_{n\in\mathbb{N}}\) and \((b_{n})_{n\in\mathbb{N}}\) of non-negative real numbers, we shall write \(a_{n}\lesssim b_{n}\) if there exists \(C\in(0,\infty)\) such that for all \(n\in\mathbb{N}\) it holds that \(a_{n}\leq Cb_{n}\). If in addition \(a_{n}\gtrsim b_{n}\), then we write \(a_{n}\asymp b_{n}\). The following statement will be crucial in the proof.
**Lemma 6**.: _Let \(1<q\leq\infty\) and for \(n\in\mathbb{N}\) assume that \(\widetilde{X}^{(n)}=(\widetilde{X}^{(n)}_{1},\ldots,\widetilde{X}^{(n)}_{n})\) is uniformly distributed in \(\widetilde{\mathbb{D}}_{q,1}^{n}\). Then, for any sequence \((a_{n})_{n\in\mathbb{N}}\) with \(a_{n}\stackrel{{ n\to\infty}}{{\longrightarrow}}\infty\), we have_
\[\mathbb{P}\bigg{[}\sup_{1\leq i\leq n}\big{|}(\widetilde{X}^{(n)}_{i})^{*}-G_{ q}\Big{(}\frac{i-1}{n}\Big{)}\bigg{|}\geq a_{n}\delta_{n}\bigg{]}\stackrel{{ n\to\infty}}{{\longrightarrow}}0,\]
_where_
\[G_{q}\colon[0,1]\to[0,(q-1)^{-1}],\quad x\mapsto G_{q}(x)=\frac{1}{q-1}(1-x^{1-1/q })\quad\text{if }q<\infty\]
_and \(G_{\infty}\colon[0,1]\to[0,1]\), \(x\mapsto 1-x\). The error bounds may be chosen as follows:_
\[\delta_{n}=\begin{cases}n^{-(1-1/q)}&:1<q<2,\\ n^{-1/2}\log n&:q=2,\\ n^{-1/q}&:q\in(2,\infty),\\ (\log n)^{-1}&:q=\infty,\end{cases}\quad\text{ where }n\in\mathbb{N}.\]
Proof.: Let \(1<q<\infty\) and \(i\in\{1,\ldots,n\}\). Using the probabilistic representation from Theorem A, we obtain for the \(i^{th}\)-largest entry \((\widetilde{X}_{i}^{(n)})^{*}\) of \((|\widetilde{X}_{i}^{(n)}|)_{i=1}^{n}\) that
\[(\widetilde{X}_{i}^{(n)})^{*}\overset{\mathrm{d}}{=}Y_{i}^{(n)}\left(\frac{ 1}{n}\sum_{j=1}^{n+1}E_{j}\right)^{-1}\quad\text{with}\quad Y_{i}^{(n)}=\frac {1}{n}\sum_{j=i}^{n}\frac{n^{1/q}}{\kappa_{q}(j)}E_{j},\]
and the same holds in the sense of joint distributions. We will prove the statement for the random variable \(Y_{i}^{(n)}\) instead of \((\widetilde{X}_{i}^{(n)})^{*}\) and then the statement of the lemma easily follows, because as a consequence of the triangle inequality and the sub-additivity of the supremum, we have
\[\mathbb{P}\left[\sup_{1\leq i\leq n}\left|(\widetilde{X}_{i}^{(n) })^{*}-G_{q}\left(\frac{i-1}{n}\right)\right|\geq a_{n}\delta_{n}\right]\] \[\leq\mathbb{P}\left[\sup_{1\leq i\leq n}\left|(\widetilde{X}_{i}^ {(n)})^{*}-Y_{i}^{(n)}\right|\geq\frac{a_{n}}{2}\delta_{n}\right]+\mathbb{P} \left[\sup_{1\leq i\leq n}\left|Y_{i}^{(n)}-G_{q}\left(\frac{i-1}{n}\right) \right|\geq\frac{a_{n}}{2}\delta_{n}\right].\]
In order to estimate the first probability on the right-hand side, we make use of the fact that
\[\sup_{1\leq i\leq n}\left|(\widetilde{X}_{i}^{(n)})^{*}-Y_{i}^{(n)}\right| \overset{\mathrm{d}}{=}\big{(}\sup_{1\leq i\leq n}Y_{i}^{(n)}\big{)}\cdot \left|\left(\frac{1}{n}\sum_{j=1}^{n+1}E_{j}\right)^{-1}-1\right|=\|Y^{(n)}\| _{\infty}\cdot\left|\left(\frac{1}{n}\sum_{j=1}^{n+1}E_{j}\right)^{-1}-1\right|. \tag{8}\]
From the proof below it will follow that the first factor on the right-hand side of (8) is bounded by a suitable constant with probability tending to one. The second factor satisfies (via the central limit theorem)
\[\mathbb{P}\left[\left|\left(\frac{1}{n}\sum_{j=1}^{n+1}E_{j}\right) ^{-1}-1\right|\geq\delta_{n}\right] \leq\mathbb{P}\left[\frac{1}{n}\sum_{j=1}^{n+1}E_{j}\leq(1+ \delta_{n})^{-1}\right]\leq\mathbb{P}\left[\frac{1}{n}\sum_{j=1}^{n}E_{j} \leq(1+\delta_{n})^{-1}\right]\] \[=\mathbb{P}\left[\frac{s_{n}}{n}\sum_{j=1}^{n}(E_{j}-1)\leq-1 \right]=\mathbb{P}\left[\frac{1}{\sqrt{n}}\sum_{j=1}^{n}(E_{j}-1)\leq-\frac{ \sqrt{n}}{s_{n}}\right]\overset{n\to\infty}{\longrightarrow}0,\]
where \(s_{n}:=1+\delta_{n}^{-1}\asymp\delta_{n}^{-1}\) satisfies \(\sqrt{n}/s_{n}\to+\infty\), as a consequence of the definition of \(\delta_{n}\) stated in the lemma. Taking into account (8) and the upper bound on the largest coordinate of \(Y^{(n)}\), one easily derives the statement for \(\widetilde{X}^{(n)}\) instead of \(Y^{(n)}\).
We now start with the proof of the statement for \(Y_{i}^{(n)}\). Consider the decomposition
\[Y_{i}^{(n)}-G_{q}\Big{(}\frac{i-1}{n}\Big{)}=\frac{1}{n}\sum_{j=i}^{n}\frac{n^{ 1/q}}{\kappa_{q}(j)}E_{j}-\frac{1}{q}\int_{(i-1)/n}^{1}x^{-1/q}\,\mathrm{d}x= \frac{1}{n}\sum_{j=i}^{n}A_{j}+\frac{1}{n}\sum_{j=i}^{n}B_{j}+\frac{1}{qn}\sum_ {j=i}^{n}C_{j} \tag{9}\]
with summands
\[A_{j}=\frac{n^{1/q}}{\kappa_{q}(j)}(E_{j}-1),\quad B_{j}=\frac{n^{1/q}}{\kappa_{q }(j)}-\frac{n^{1/q}}{qj^{1/q}},\quad C_{j}=\left(\frac{j}{n}\right)^{-1/q}-\int_ {j-1}^{j}\left(\frac{x}{n}\right)^{-1/q}\mathrm{d}x,\quad j=i,\ldots,n.\]
Here we used \(G_{q}(x)=\frac{1}{q}\int_{x}^{1}y^{-1/q}\,\mathrm{d}y\), \(x\in[0,1]\). Only the first sum on the right-hand side of (9) is random and its variance is
\[\frac{1}{n^{2}}\sum_{j=i}^{n}\mathrm{Var}(A_{j})=\frac{1}{n^{2}}\sum_{j=i}^{n} \left(\frac{n^{1/q}}{\kappa_{q}(j)}\right)^{2}\lesssim_{q}n^{-2(1-1/q)}\sum_{ j=1}^{n}j^{-2/q}\lesssim_{q}\begin{cases}n^{-2(1-1/q)}&\text{if $1<q<2$,}\\ n^{-1}\log n&\text{if $q=2$,}\\ n^{-1}&\text{if $q>2$,}\end{cases}\]
which tends to zero as \(n\to\infty\). Here we used the asymptotics \(\kappa_{q}(j)\asymp_{q}j^{1/q}\) and sum-integral approximation, see (6) and Lemma 5.
For the second sum we have due to (6)
\[|B_{j}|=\frac{n^{1/q}}{qj^{1/q}}\frac{|qj^{1/q}-\kappa_{q}(j)|}{\kappa_{q}(j)} \asymp_{q}\frac{n^{1/q}}{j^{1/q}}\frac{1}{j^{1/q}}=n^{1/q}j^{-2/q},\]
which gives that
\[\frac{1}{n}\sum_{j=i}^{n}|B_{j}|\lesssim_{q}n^{1/q-1}\sum_{j=1}^{n}j^{-2/q} \lesssim_{q}\begin{cases}n^{-(1-1/q)}&\text{if $1<q<2$,}\\ n^{-1/2}\log n&\text{if $q=2$,}\\ n^{-1/q}&\text{if $q>2$.}\end{cases}\]
For the third sum we use that for any \(j\geq 1\), we find \(\xi\in(j-1,j)\) such that
\[0\leq\int_{j-1}^{j}\left(\frac{x}{n}\right)^{-1/q}\mathrm{d}x-\left(\frac{j}{ n}\right)^{-1/q}=\left(\frac{\xi}{n}\right)^{-1/q}-\left(\frac{j}{n}\right)^{-1/q} \lesssim\left(\frac{j}{n}\right)^{-1/q}\frac{1}{j},\]
which follows from the mean value theorem used twice. Thus, the third sum satisfies
\[\frac{1}{n}\sum_{j=i}^{n}|C_{j}|\lesssim_{q}n^{-(1-1/q)}\sum_{j=1}^{n}j^{-(1+ 1/q)}\lesssim_{q}n^{-1+1/q}.\]
Lastly, we note that
\[\mathbb{P}\left[\sup_{1\leq i\leq n}\left|Y_{i}^{(n)}-G_{q}\left( \frac{i-1}{n}\right)\right|\geq a_{n}\delta_{n}\right] \leq\mathbb{P}\left[\sup_{1\leq i\leq n}\left|\frac{1}{n}\sum_{j=i }^{n}A_{j}\right|\geq\frac{a_{n}}{3}\delta_{n}\right]\] \[+\mathbb{P}\left[\sup_{1\leq i\leq n}\left|\frac{1}{n}\sum_{j=i}^ {n}B_{j}\right|\geq\frac{a_{n}}{3}\delta_{n}\right]\] \[+\mathbb{P}\left[\sup_{1\leq i\leq n}\left|\frac{1}{n}\sum_{j=i}^ {n}C_{j}\right|\geq\frac{a_{n}}{3}\delta_{n}\right].\]
For large \(n\) the last two summands vanish because of the previous estimates and the first tends to zero due to Kolmogorov's inequality ([37, Thm. 5.28]). This proves the statement for \(q<\infty\).
If \(q=\infty\) we need to carry out several modifications, in particular adapt the definition of \(Y^{(n)}\) by replacing \(n^{1/q}\) with \(\log(n+1)\). Let us write for \(i\in\{1,\ldots,n\}\) again the coordinate as in equation (9), i.e.,
\[Y_{i}^{(n)}-G_{\infty}\Big{(}\frac{i-1}{n}\Big{)}=\frac{1}{n}\sum_{j=i}^{n} \frac{\log(n+1)}{\kappa_{\infty}(j)}E_{j}-\left(1-\frac{i-1}{n}\right)=\frac{1 }{n}\sum_{j=i}^{n}A_{j}+\frac{1}{n}\sum_{j=i}^{n}B_{j}+\frac{1}{n}\sum_{j=i}^ {n}C_{j}\]
with summands
\[A_{j}=\frac{\log(n+1)}{\kappa_{\infty}(j)}(E_{j}-1),\quad B_{j}=\frac{\log(n+1)}{ \kappa_{\infty}(j)}-\frac{\log(n+1)}{\log(j+1)},\quad C_{j}=\frac{\log(n+1)}{ \log(j+1)}-1.\]
One can estimate the errors similarly after adjusting \(\delta_{n}\). For the first sum we have \(\kappa_{\infty}(j)\asymp\log(j+1)\) using (6) and thus
\[\frac{1}{n^{2}}\sum_{j=i}^{n}\operatorname{Var}(A_{j})=\frac{1}{n^{2}}\sum_{j =i}^{n}\Big{(}\frac{\log(n+1)}{\kappa_{\infty}(j)}\Big{)}^{2}\lesssim\frac{ \log^{2}(n+1)}{n^{2}}\sum_{j=1}^{n}\frac{1}{\log^{2}(j+1)}\lesssim\frac{1}{n},\]
where in the last estimate we used that \(\log^{2}(n+1)\) is slowly varying and thus the arithmetic mean is equivalent to the largest summand, see also [10, Prop. 1.5.8]. Similarly, we have
\[\frac{1}{n}\sum_{j=i}^{n}|B_{j}|\lesssim\frac{\log(n+1)}{n}\sum_{j=1}^{n}\frac {1}{\log^{2}(j+1)}\lesssim\frac{1}{\log(n+1)}.\]
The third sum satisfies
\[\frac{1}{n}\sum_{j=i}^{n}|C_{j}|\leq\frac{1}{n}\sum_{j=1}^{n}|C_{j}|=\frac{ \log(n+1)}{n}\sum_{j=1}^{n}\frac{1}{\log(j+1)}-1\lesssim\frac{1}{\log(n+1)},\]
where we used the approximation
\[\sum_{j=1}^{n}\frac{1}{\log(j+1)}=\int_{2}^{n+1}\frac{1}{\log x}\,\mathrm{d}x+ O(1)=\mathrm{li}(n+1)+O(1)=\frac{n}{\log(n+1)}+O\Big{(}\frac{n}{\log^{2}(n+1)} \Big{)}.\]
The proof is now completed as in the case \(q<\infty\).
For the proof of Theorem B we use a symmetrization trick, which we present in the following lemma; the result may well be known, but we were unable to find it in the literature.
**Lemma 7**.: _For each \(n\in\mathbb{N}\) let \(X^{(n)}=(X^{(n)}_{1},\ldots,X^{(n)}_{n})\) be a random vector in \(\mathbb{R}^{n}\) with identically distributed coordinates such that \((\varepsilon_{1}X^{(n)}_{1},\ldots,\varepsilon_{n}X^{(n)}_{n})\stackrel{{\mathrm{d}}}{{=}}X^{(n)}\) for any choice of signs \((\varepsilon_{1},\ldots,\varepsilon_{n})\in\{-1,1\}^{n}\). If for every bounded continuous function \(f\colon[0,\infty)\to\mathbb{R}\)_
\[\frac{1}{n}\sum_{i=1}^{n}f\big{(}|X^{(n)}_{i}|\big{)}\xrightarrow[n\to\infty ]{\mathbb{P}}\int_{0}^{\infty}f(x)\varphi(x)\,\mathrm{d}x,\]
_holds for a density \(\varphi\colon[0,\infty)\to[0,\infty)\), then for every bounded continuous function \(f\colon\mathbb{R}\to\mathbb{R}\)_
\[\frac{1}{n}\sum_{i=1}^{n}f\big{(}X^{(n)}_{i}\big{)}\xrightarrow[n\to\infty]{ \mathbb{P}}\frac{1}{2}\int_{-\infty}^{\infty}f(x)\varphi(|x|)\,\mathrm{d}x.\]
Proof.: Let \(f\colon\mathbb{R}\to\mathbb{R}\) be a bounded and continuous function. The two mappings
\[f_{1}\colon[0,\infty)\to\mathbb{R},\quad x\mapsto f(x)\quad\text{and}\quad f _{-1}\colon[0,\infty)\to\mathbb{R},\quad x\mapsto f(-x)\]
are again bounded and continuous on \([0,\infty)\). Let \(\varepsilon_{1},\ldots,\varepsilon_{n}\) be independent Rademacher random variables, independent of all other random objects. Then the symmetry of \(X^{(n)}\) implies
\[S_{n}:=\frac{1}{n}\sum_{i=1}^{n}f\big{(}X^{(n)}_{i}\big{)}\stackrel{{\mathrm{d}}}{{=}}\frac{1}{n}\sum_{i=1}^{n}f\big{(}\varepsilon_{i}|X^{(n)}_{i}|\big{)}=\frac{1}{n}\sum_{i=1}^{n}f_{\varepsilon_{i}}\big{(}|X^{(n)}_{i}|\big{)}.\]
By linearity of the conditional expectation, we obtain
\[\mathbb{E}\big{[}S_{n}|X^{(n)}\big{]} =\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}f_{\varepsilon_{i}}\big{(} |X_{i}^{(n)}|\big{)}|X^{(n)}\big{]}\] \[=\frac{1}{n}\sum_{i=1}^{n}\frac{(f_{1}+f_{-1})(|X_{i}^{(n)}|)}{2} \xrightarrow[]{\mathbb{P}}\int_{0}^{\infty}\frac{(f_{1}+f_{-1})(x)}{2}\varphi( x)\,\mathrm{d}x=\frac{1}{2}\int_{-\infty}^{\infty}f(x)\varphi(|x|)\,\mathrm{d}x. \tag{10}\]
Further, the conditional variance satisfies
\[\mathrm{Var}\big{[}S_{n}|X^{(n)}\big{]}=\frac{1}{2n^{2}}\sum_{i=1}^{n}\Big{(}f _{1}\big{(}|X_{i}^{(n)}|\big{)}-\frac{(f_{1}+f_{-1})(|X_{i}^{(n)}|)}{2}\Big{)} ^{2}+\Big{(}f_{-1}\big{(}|X_{i}^{(n)}|\big{)}-\frac{(f_{1}+f_{-1})(|X_{i}^{(n) }|)}{2}\Big{)}^{2}\leq\frac{4\|f\|_{\infty}^{2}}{n}.\]
Conditioned on \(X^{(n)}\), Chebyshev's inequality gives, for every \(\varepsilon>0\),
\[\mathbb{P}\big{[}\big{|}S_{n}-\mathbb{E}[S_{n}\,|\,X^{(n)}]\big{|}>\varepsilon\ \big{|}\ X^{(n)}\big{]}\leq\frac{\mathrm{Var}\big{[}S_{n}\,|\,X^{(n)}\big{]}}{\varepsilon^{2}}.\]
Thus, the law of total expectation yields
\[\mathbb{P}\big{[}\big{|}S_{n}-\mathbb{E}[S_{n}\,|\,X^{(n)}]\big{|}>\varepsilon\big{]}\leq\frac{\mathbb{E}\big{[}\mathrm{Var}[S_{n}\,|\,X^{(n)}]\big{]}}{\varepsilon^{2}}\leq\frac{4\|f\|_{\infty}^{2}}{\varepsilon^{2}n}\xrightarrow[n\to\infty]{}0.\]
This together with (10) implies
\[S_{n}\xrightarrow[n\to\infty]{\mathbb{P}}\frac{1}{2}\int_{-\infty}^{\infty}f(x)\varphi(|x|)\,\mathrm{d}x,\]
which completes the proof.
Proof of Theorem B.: We use Lemma 7, which allows us to pass to the absolute values. Let \(\varepsilon>0\) and \(f\colon[0,\infty)\to[0,\infty)\) be bounded and continuous. We have
\[\mathbb{P}\left[\Big{|}\frac{1}{n}\sum_{i=1}^{n}f\big{(}|\widetilde {X}_{i}^{(n)}|\big{)}-\int_{\mathbb{R}_{+}}f(x)\,\nu_{q,1}(\mathrm{d}x)\Big{|} >\varepsilon\right] \leq\mathbb{P}\left[\Big{|}\frac{1}{n}\sum_{i=1}^{n}f\big{(}| \widetilde{X}_{i}^{(n)}|\big{)}-\frac{1}{n}\sum_{i=1}^{n}f\Big{(}G_{q}\Big{(} \frac{i-1}{n}\Big{)}\Big{)}\Big{|}>\frac{\varepsilon}{2}\right]\] \[+\mathbb{P}\left[\Big{|}\frac{1}{n}\sum_{i=1}^{n}f\big{(}G_{q} \Big{(}\frac{i-1}{n}\Big{)}\Big{)}-\int_{\mathbb{R}_{+}}f(x)\,\nu_{q,1}( \mathrm{d}x)\Big{|}>\frac{\varepsilon}{2}\right]. \tag{11}\]
For the first summand on the right-hand side, we note that
\[\Big{|}\frac{1}{n}\sum_{i=1}^{n}f\big{(}|\widetilde{X}_{i}^{(n)}|\big{)}-\frac {1}{n}\sum_{i=1}^{n}f\Big{(}G_{q}\Big{(}\frac{i-1}{n}\Big{)}\Big{)}\Big{|} \leq\sup_{1\leq i\leq n}\Big{|}f\big{(}(\widetilde{X}_{i}^{(n)})^{*}\big{)}-f \Big{(}G_{q}\Big{(}\frac{i-1}{n}\Big{)}\Big{)}\Big{|},\]
since we may arrange the absolute values in any order without changing the average. Let \((a_{n})_{n\in\mathbb{N}}\) be a sequence with \(a_{n}\xrightarrow[]{n\to\infty}\infty\) such that \(a_{n}\delta_{n}\xrightarrow[]{n\to\infty}0\) and \(a_{n}\delta_{n}\leq 1\) for all \(n\in\mathbb{N}\). For \(n\in\mathbb{N}\), we define the event
\[A_{n}:=\bigg{\{}\sup_{1\leq i\leq n}\Big{|}(\widetilde{X}_{i}^{(n)})^{*}-G_{q} \Big{(}\frac{i-1}{n}\Big{)}\Big{|}\leq a_{n}\delta_{n}\bigg{\}}.\]
Then on \(A_{n}\) it holds that: \((\widetilde{X}_{i}^{(n)})^{*}\leq(\widetilde{X}_{1}^{(n)})^{*}\leq G_{q}(0)+1\) for all \(i\in\{1,\ldots,n\}\) and
\[\sup_{1\leq i\leq n}\Big{|}f\big{(}(\widetilde{X}_{i}^{(n)})^{*}\big{)}-f\Big{(} G_{q}\Big{(}\frac{i-1}{n}\Big{)}\Big{)}\Big{|}\leq\sup_{\begin{subarray}{c}x,y\in[0,G_{q} (0)+1]\\ |x-y|\leq a_{n}\delta_{n}\end{subarray}}|f(x)-f(y)|,\]
which tends to zero because \(f\) is uniformly continuous on \([0,G_{q}(0)+1]\). By Lemma 6 it holds that \(\mathbb{P}[A_{n}]\stackrel{{ n\to\infty}}{{\longrightarrow}}1\), and thus the first summand in (11) tends to zero as well. For the second summand in (11), we note that the continuity of \(f\) and \(G_{q}\colon[0,1]\to[0,\infty)\) imply via a substitution that
\[\frac{1}{n}\sum_{i=1}^{n}f\Big{(}G_{q}\Big{(}\frac{i-1}{n}\Big{)}\Big{)} \stackrel{{ n\to\infty}}{{\longrightarrow}}\int_{0}^{1}f(G_{q}( x))\,\mathrm{d}x=\int_{0}^{G_{q}(0)}f(x)(-G_{q}^{-1})^{\prime}(x)\,\mathrm{d}x,\]
where
\[G_{q}^{-1}\colon[0,G_{q}(0)]\to[0,1],\quad x\mapsto(1-(q-1)x)^{q/(q-1)},\]
if \(q<\infty\) and \(G_{\infty}^{-1}=G_{\infty}\), such that \((-G_{q}^{-1})^{\prime}\) is the density of \(\nu_{q,1}\).
With Theorem B at our disposal, we are able to deduce Corollary 2 on the asymptotic distribution of a single coordinate.
Proof of Corollary 2.: The result essentially follows from the exchangeability of the coordinates of \(\widetilde{X}^{(n)}=(\widetilde{X}^{(n)}_{1},\ldots,\widetilde{X}^{(n)}_{n})\) and is known in the mathematical physics literature as propagation of chaos, see, e.g., the work [63] of Sznitman; recall that exchangeability means that any permutation of coordinates has the same joint distribution as the original one. For convenience of the reader, we adapt an argument from [15, p. 326].
Let \(k\in\mathbb{N}\). We show for any bounded and continuous function \(f\colon\mathbb{R}^{k}\to\mathbb{R}\) that
\[\mathbb{E}\big{[}f(\widetilde{X}^{(n)}_{1},\ldots,\widetilde{X}^{(n)}_{k})\big{]}\stackrel{{ n\to\infty}}{{\longrightarrow}}\mathbb{E}\big{[}f(Y_{1},\ldots,Y_{k})\big{]},\]
where \(Y_{1},\ldots,Y_{k}\) are independent real random variables, each distributed according to the Lebesgue density \(f_{q,1}\). It is sufficient to assume that \(f=\prod_{i=1}^{k}f_{i}\) for \(f_{i}\colon\mathbb{R}\to\mathbb{R}\) bounded and continuous (see [15, Appendix D, p. 356]). Due to exchangeability and linearity, we have
\[\mathbb{E}\Big{[}\prod_{i=1}^{k}f_{i}(\widetilde{X}^{(n)}_{i})\Big{]}=\mathbb{E}\left[\frac{(n-k)!}{n!}\sum_{\substack{i_{1},\ldots,i_{k}\in\{1,\ldots,n\}\\ \text{pairwise distinct}}}\prod_{j=1}^{k}f_{j}(\widetilde{X}^{(n)}_{i_{j}})\right].\]
Let \(L_{\widetilde{X}^{(n)}}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\widetilde{X}^{(n)}_{ i}}\). Then we also have that
\[\mathbb{E}\bigg{[}\int\prod_{i=1}^{k}f_{i}\,\mathrm{d}L_{\widetilde{X}^{(n)}}^{\otimes k}\bigg{]}=\mathbb{E}\bigg{[}\prod_{i=1}^{k}\frac{1}{n}\sum_{j=1}^{n}f_{i}(\widetilde{X}^{(n)}_{j})\bigg{]}=\frac{1}{n^{k}}\mathbb{E}\bigg{[}\sum_{i_{1},\ldots,i_{k}=1}^{n}\prod_{j=1}^{k}f_{j}(\widetilde{X}^{(n)}_{i_{j}})\bigg{]}.\]
Therefore, it holds that
\[\Big{|}\mathbb{E}\prod_{i=1}^{k}f_{i}(\widetilde{X}^{(n)}_{i})-\mathbb{E}\int\prod_{i=1}^{k}f_{i}\,\mathrm{d}L_{\widetilde{X}^{(n)}}^{\otimes k}\Big{|}\leq 2\Big{(}1-\frac{n!}{(n-k)!n^{k}}\Big{)}\max_{1\leq i\leq k}\|f_{i}\|_{\infty}^{k}\stackrel{{ n\to\infty}}{{\longrightarrow}}0.\]
Applied to the \(f_{i}\)'s, Theorem B and the continuous mapping theorem give the convergence
\[\int\prod_{i=1}^{k}f_{i}\,\mathrm{d}L_{\widetilde{X}^{(n)}}^{\otimes k}=\prod_{i=1}^{k}\int f_{i}\,\mathrm{d}L_{\widetilde{X}^{(n)}}\stackrel{{\mathbb{P}}}{{\longrightarrow}}\prod_{i=1}^{k}\mathbb{E}[f_{i}(Y_{1})].\]
By uniform boundedness we can take expectations on both sides and this completes the proof.
### Proof of Theorem C
We now prove the central limit theorem for the maximum norm of a random vector in a Lorentz ball.
Proof of Theorem C.: We have
\[\|\widetilde{X}^{(n)}\|_{\infty}\,{\buildrel\mathrm{d}\over{=}}\,\frac{Y_{n}}{Z_ {n}}\quad\text{with}\quad Y_{n}:=\frac{1}{n}\sum_{j=1}^{n}\frac{n^{1/q}}{ \kappa_{q}(j)}E_{j}\quad\text{and}\quad Z_{n}:=\frac{1}{n}\sum_{j=1}^{n+1}E_{j}.\]
We first prove (i). Let \(1\leq q<2\) and write
\[Y_{n}=n^{1/q-1}\sum_{j=1}^{n}\frac{E_{j}-1}{\kappa_{q}(j)}+\mu_{q,n}\quad\text {with}\quad\mu_{q,n}=\frac{1}{n}\sum_{j=1}^{n}\frac{n^{1/q}}{\kappa_{q}(j)}\]
as in the statement of the theorem. Note that
\[\mathbb{E}\Big{[}\frac{E_{j}-1}{\kappa_{q}(j)}\Big{]}=0\quad\text{and}\quad \text{Var}\Big{[}\frac{E_{j}-1}{\kappa_{q}(j)}\Big{]}=\frac{1}{\kappa_{q}(j) ^{2}}.\]
Since \(\kappa_{q}(j)\asymp_{q}j^{1/q}\), we have
\[\sum_{j=1}^{\infty}\text{Var}\Big{[}\frac{E_{j}-1}{\kappa_{q}(j)}\Big{]}= \sum_{j=1}^{\infty}\frac{1}{\kappa_{q}(j)^{2}}<\infty\]
and thus by the martingale convergence theorem ([37, Thm. 11.4])
\[\widetilde{Y}_{n}:=n^{1-1/q}\,Y_{n}-\sum_{j=1}^{n}\frac{1}{\kappa_{q}(j)}=\sum_{j=1}^{n}\frac{E_{j}-1}{\kappa_{q}(j)}\xrightarrow[n\to\infty]{\text{a.s.}}\sum_{j=1}^{\infty}\frac{E_{j}-1}{\kappa_{q}(j)}=:R_{q}. \tag{12}\]
Write
\[n^{1-1/q}\big{(}\|\widetilde{X}^{(n)}\|_{\infty}-\mu_{q,n}\big{)}\,{\buildrel \mathrm{d}\over{=}}\,n^{1-1/q}\Big{(}\frac{Y_{n}}{Z_{n}}-\mu_{q,n}\Big{)}= \frac{\widetilde{Y}_{n}+\sum_{j=1}^{n}\frac{1}{\kappa_{q}(j)}}{1+n^{-1/2} \widetilde{Z}_{n}+n^{-1}}-\sum_{j=1}^{n}\frac{1}{\kappa_{q}(j)},\]
where \(\widetilde{Z}_{n}:=n^{-1/2}\sum_{j=1}^{n+1}(E_{j}-1)\xrightarrow[n\to\infty]{\mathrm{d}}\mathcal{N}(0,1)\) by the central limit theorem and Slutsky's theorem ([37, Thm. 13.18]), since \(E_{n+1}/\sqrt{n}\stackrel{{\mathrm{p}}}{{\longrightarrow}}0\). Simplifying, by Slutsky's theorem,
\[n^{1-1/q}\big{(}\|\widetilde{X}^{(n)}\|_{\infty}-\mu_{q,n}\big{)}\stackrel{{\mathrm{d}}}{{=}}\frac{\widetilde{Y}_{n}-n^{-1/2}\Big{(}\sum_{j=1}^{n}\frac{1}{\kappa_{q}(j)}\Big{)}(\widetilde{Z}_{n}+n^{-1/2})}{1+n^{-1/2}\widetilde{Z}_{n}+n^{-1}}\xrightarrow[n\to\infty]{\mathrm{d}}R_{q},\]
since \(n^{-1/2}\Big{(}\sum_{j=1}^{n}\frac{1}{\kappa_{q}(j)}\Big{)}\stackrel{{ n\to\infty}}{{\longrightarrow}}0\).
For convenience of the reader we prove that \(R_{1}\) is Gumbel distributed (see, e.g., [32, Theorem 1.1 (c)]). It is well-known that \(\sum_{j=1}^{n}\frac{1}{j}=\log n+\gamma+o(1)\) and from Remark 1, we deduce that
\[\sum_{j=1}^{n}\frac{E_{j}-1}{j}\,{\buildrel\mathrm{d}\over{=}}\,\max_{1\leq j \leq n}E_{j}-\log n-\gamma+e_{n},\]
where \(e_{n}\stackrel{{ n\to\infty}}{{\longrightarrow}}0\). Now the well-known fact that \(\max_{1\leq j\leq n}E_{j}-\log n\xrightarrow[n\to\infty]{\mathrm{d}}G\), where \(G\) is standard Gumbel distributed, Slutsky's theorem and uniqueness of the limit imply that \(R_{1}\,{\buildrel\mathrm{d}\over{=}}\,G-\gamma\).
For the proof of (ii) let \(q=2\). Defining \(\tilde{Y}_{n}\) as on the left-hand side of (12), by a version of the Lindeberg central limit theorem from [13, Thm. 5.3], we have
\[\frac{1}{\sqrt{\log n}}\tilde{Y}_{n}=\frac{1}{\sqrt{\log n}}\sum_{j=1}^{n}\frac{E_{j}-1}{\kappa_{2}(j)}\xrightarrow[n\to\infty]{\mathrm{d}}\mathcal{N}(0,1/4),\]
since \(\sum_{j=1}^{n}\frac{1}{\kappa_{2}(j)^{2}}\sim\frac{1}{4}\log n\stackrel{{ n\to\infty}}{{\longrightarrow}}\infty\).
A similar rewriting and again Slutsky's theorem give
\[\frac{\sqrt{n}}{\sqrt{\log n}}\big{(}\|\widetilde{X}^{(n)}\|_{\infty}-\mu_{2,n}\big{)}\stackrel{{\mathrm{d}}}{{=}}\frac{\frac{\widetilde{Y}_{n}}{\sqrt{\log n}}-\frac{n^{-1/2}}{\sqrt{\log n}}\Big{(}\sum_{j=1}^{n}\frac{1}{\kappa_{2}(j)}\Big{)}(\widetilde{Z}_{n}+n^{-1/2})}{1+n^{-1/2}\widetilde{Z}_{n}+n^{-1}}\xrightarrow[n\to\infty]{\mathrm{d}}\mathcal{N}(0,1/4),\]
since \(\frac{n^{-1/2}}{\sqrt{\log n}}\Big{(}\sum_{j=1}^{n}\frac{1}{\kappa_{2}(j)} \Big{)}\asymp\frac{1}{\sqrt{\log n}}\stackrel{{ n\to\infty}}{{ \longrightarrow}}0\).
In order to prove (iii), let now \(q>2\). We will show that the vector \(\sqrt{n}(Y_{n}-\mu_{q,n},Z_{n}-1)\) tends to a Gaussian random vector and then use Taylor's theorem to conclude the proof. We have
\[\sqrt{n}(Y_{n}-\mu_{q,n},Z_{n}-1)=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\Big{(} \frac{n^{1/q}}{\kappa_{q}(j)}(E_{j}-1),E_{j}-1\Big{)}+\Big{(}0,\frac{E_{n+1}} {\sqrt{n}}\Big{)}.\]
Note that, since \(\frac{E_{n+1}}{\sqrt{n}}\) tends to zero in probability, by Slutsky's theorem we can omit the last summand as we are dealing with convergence in distribution. For the sum of random vectors on the right-hand side we check Lyapunov's condition for the multivariate central limit theorem (see, e.g., [45, Thm. 3.2.2]). To this end, let \(\delta>0\) with \(q>2+\delta\) and compute
\[n^{-(2+\delta)/2}\sum_{j=1}^{n}\mathbb{E}\Big{|}\frac{n^{1/q}}{\kappa_{q}(j)}E_{j}\Big{|}^{2+\delta}\lesssim n^{-(2+\delta)/2}\sum_{j=1}^{n}\Big{|}\frac{n^{1/q}}{\kappa_{q}(j)}\Big{|}^{2+\delta}\lesssim_{q}n^{(2+\delta)(-1/2+1/q)}\sum_{j=1}^{n}j^{-(2+\delta)/q}\lesssim_{q}n^{1-(2+\delta)/2}\stackrel{{ n\to\infty}}{{\longrightarrow}}0.\]
It remains to compute the limit of the covariance matrix. We have \(\mathrm{Var}\Big{[}\frac{1}{\sqrt{n}}\sum_{j=1}^{n}E_{j}\Big{]}=1\) for all \(n\in\mathbb{N}\) as well as
\[\lim_{n\to\infty}\mathrm{Var}\Big{[}\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\frac{n^{1 /q}}{\kappa_{q}(j)}E_{j}\Big{]}=\lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^{n} \frac{n^{2/q}}{\kappa_{q}(j)^{2}}=\frac{1}{q^{2}}\int_{0}^{1}x^{-2/q}\mathrm{ d}x=\frac{1}{q(q-2)}=:s_{q}^{2},\]
where we used that \(\kappa_{q}(j)\sim qj^{1/q}\) as \(j\to\infty\). Similarly,
\[\lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^{n}\mathrm{Cov}\Big{[}\frac{n^{1/q}}{ \kappa_{q}(j)}E_{j},E_{j}\Big{]}=\lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^{n} \frac{n^{1/q}}{\kappa_{q}(j)}=\frac{1}{q}\int_{0}^{1}x^{-1/q}\mathrm{d}x=\frac {1}{q-1}=:\mu_{q,\infty}.\]
Therefore, the limiting covariance matrix is
\[\Sigma:=\lim_{n\to\infty}\mathrm{Cov}\big{[}\sqrt{n}(Y_{n}-\mu_{q,n},Z_{n}-1) \big{]}=\begin{pmatrix}s_{q}^{2}&\mu_{q,\infty}\\ \mu_{q,\infty}&1\end{pmatrix}\]
and we derive the central limit theorem
\[\sqrt{n}(Y_{n}-\mu_{q,n},Z_{n}-1)\ \tfrac{\mathrm{d}}{n\to\infty}\ (Y,Z)\sim \mathcal{N}(0,\Sigma). \tag{13}\]
In order to prove a central limit theorem for \(\frac{Y_{n}}{Z_{n}}\), note that the function \((x,y)\mapsto F(x,y)=\frac{x}{y}\) is continuously differentiable for \(x,y>0\) and \(\nabla F(x,y)=(\frac{1}{y},-\frac{x}{y^{2}})^{\top}\). Hence, by Taylor's theorem,
\[\sqrt{n}\big{(}F(Y_{n},Z_{n})-F(\mu_{q,n},1)\big{)}=\sqrt{n}\big{(}(Y_{n},Z_{n })-(\mu_{q,n},1)\big{)}\nabla F(\mu_{q,n},1)^{\top}+e_{n,q}, \tag{14}\]
where the random error term is
\[e_{n,q}:=\|\sqrt{n}(Y_{n}-\mu_{q,n},Z_{n}-1)\|_{2}h(Y_{n}-\mu_{q,n},Z_{n}-1),\]
where \(h\) is a function tending to zero as its argument approaches zero. Due to the CLT in (13) the random variable \(\|\sqrt{n}(Y_{n}-\mu_{q,n},Z_{n}-1)\|_{2}\) stays bounded in probability and \(h(Y_{n}-\mu_{q,n},Z_{n}-1)\) tends to zero in probability. Thus, the error satisfies \(e_{n,q}\xrightarrow[n\to\infty]{\mathbb{P}}0\). For the first term on the right-hand side of (14), we note that \(\nabla F(\mu_{q,n},1)^{\top}\xrightarrow[n\to\infty]{}\nabla F(\mu_{q,\infty},1)^{\top}\). Therefore, by (13) and (14) together with Slutsky's theorem, we have
\[\sqrt{n}(F(Y_{n},Z_{n})-F(\mu_{q,n},1))\xrightarrow[n\to\infty]{\mathrm{d}}(Y,Z)\nabla F(\mu_{q,\infty},1)^{\top}\sim\mathcal{N}(0,\sigma_{q}^{2})\]
with
\[\sigma_{q}^{2}:=(1,-\mu_{q,\infty})\Sigma(1,-\mu_{q,\infty})^{\top}=\frac{1}{ q(q-1)^{2}(q-2)}.\]
This implies the claimed central limit theorem for \(\|\widetilde{X}^{(n)}\|_{\infty}\).
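As a purely illustrative aside (not part of the original proof), the limiting variance \(\sigma_{q}^{2}=\frac{1}{q(q-1)^{2}(q-2)}\) can be checked by simulation. The Python sketch below builds \(Y_{n}\) and \(Z_{n}\) from the exponential representation displayed above; since the exact definition of \(\kappa_{q}(j)\) appears earlier in the paper, the sketch replaces it by its asymptotic equivalent \(qj^{1/q}\), an assumption that does not affect the limit.

```python
import numpy as np

# Monte Carlo check of the CLT for Y_n / Z_n (q > 2); a sketch only.
# Assumption: kappa_q(j) is replaced by its asymptotic equivalent q * j**(1/q).
q, n, trials = 4.0, 100_000, 1_000
rng = np.random.default_rng(0)

j = np.arange(1, n + 1)
kappa = q * j ** (1.0 / q)
weights = n ** (1.0 / q) / kappa
mu_qn = weights.mean()                      # mu_{q,n} = (1/n) * sum_j n^{1/q} / kappa_q(j)

vals = np.empty(trials)
for t in range(trials):
    E = rng.exponential(size=n + 1)         # i.i.d. standard exponentials
    Y = (weights * E[:n]).mean()            # Y_n
    Z = E[:n].mean() + E[n] / n             # Z_n
    vals[t] = np.sqrt(n) * (Y / Z - mu_qn)

print("empirical variance:", vals.var())
print("sigma_q^2         :", 1.0 / (q * (q - 1) ** 2 * (q - 2)))
```

For \(q=4\) the printed empirical variance should be close to \(1/72\approx 0.0139\).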
### Proof of Theorem D
We now present the proof of Theorem D, i.e., of the weak law of large numbers for the \(\ell_{r}^{n}\) norm of random points in normalized Lorentz balls. A key ingredient is again Lemma 6.
Proof of Theorem D.: Let \(r<\infty\) and assume first that \(\widetilde{X}^{(n)}\) is uniformly distributed on \(\widetilde{\mathbb{D}}_{q,1}^{n}\). First, because of the permutation invariance of the \(\ell_{r}^{n}\) norm, we have
\[n^{-1}\|\widetilde{X}^{(n)}\|_{r}^{r}=\frac{1}{n}\sum_{i=1}^{n}\left(( \widetilde{X}_{i}^{(n)})^{*}\right)^{r}.\]
Hence, by Lemma 6 together with the same arguments as in the proof of Theorem B,
\[\left|n^{-1}\|\widetilde{X}^{(n)}\|_{r}^{r}-\frac{1}{n}\sum_{i=1}^{n}G_{q} \Big{(}\frac{i-1}{n}\Big{)}^{r}\right|\xrightarrow[n\to\infty]{\mathbb{P}}0.\]
It is therefore sufficient to compute the deterministic limit
\[\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}G_{q}\Big{(}\frac{i-1}{n}\Big{)}^{r }=\int_{0}^{1}G_{q}(x)^{r}\,\mathrm{d}x=\begin{cases}\int_{0}^{1}\left(\frac{ 1}{q-1}(1-x^{1-1/q})\right)^{r}\,\mathrm{d}x&:q<\infty,\\ \int_{0}^{1}(1-x)^{r}\,\mathrm{d}x&:q=\infty.\end{cases} \tag{15}\]
The integral can be evaluated using a substitution and the beta function.
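For the reader's convenience we spell this evaluation out (it is a direct computation added here, not part of the original text): for \(q<\infty\), the substitution \(u=x^{1-1/q}\), i.e. \(x=u^{q/(q-1)}\) and \(\mathrm{d}x=\frac{q}{q-1}u^{1/(q-1)}\,\mathrm{d}u\), gives
\[\int_{0}^{1}\left(\frac{1}{q-1}\big(1-x^{1-1/q}\big)\right)^{r}\mathrm{d}x=\frac{1}{(q-1)^{r}}\,\frac{q}{q-1}\int_{0}^{1}(1-u)^{r}u^{\frac{1}{q-1}}\,\mathrm{d}u=\frac{1}{(q-1)^{r}}\,\frac{\Gamma(r+1)\Gamma\big(1+\frac{q}{q-1}\big)}{\Gamma\big(r+1+\frac{q}{q-1}\big)},\]
where we used \(\frac{q}{q-1}B\big(r+1,\frac{q}{q-1}\big)=\Gamma(r+1)\Gamma\big(1+\frac{q}{q-1}\big)/\Gamma\big(r+1+\frac{q}{q-1}\big)\); for \(q=\infty\) the integral is simply \(\frac{1}{r+1}\). Taking the \(r\)-th root and multiplying by the constant \(\frac{q}{2e^{1/q}}\) from Corollary 1 (used below) recovers the expression for \(m_{q,r}\) stated in the proof of Corollary 3.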
The case \(r=\infty\) follows directly from the fact that \(\|\widetilde{X}^{(n)}\|_{\infty}=(\widetilde{X}_{1}^{(n)})^{*}\) and Lemma 6.
By Corollary 1 it remains to multiply the resulting constants by \(\frac{q}{2e^{\nicefrac{{1}}{{q}}}}\) for \(q<\infty\) and \(\frac{1}{2}\) for \(q=\infty\) in order to obtain the result for \(X^{(n)}\) uniformly distributed in \(\mathbb{D}_{q,1}\).
Finally, we present the proof of the threshold result for the asymptotic volume of intersections of normalized Lorentz and \(\ell_{r}^{n}\) balls. The proof is based on the weak law of large numbers for the \(\ell_{r}^{n}\) norm (see Theorem D). Recall that for each \(r\in(0,\infty)\)
\[\operatorname{vol}_{n}(\mathbb{B}_{r}^{n})^{1/n}=\frac{2\Gamma(1+\frac{1}{r})}{ \Gamma\big{(}1+\frac{n}{r}\big{)}^{1/n}}\sim 2\Gamma\big{(}1+\frac{1}{r} \big{)}(er)^{1/r}n^{-1/r},\quad\text{as $n\to\infty$}, \tag{16}\]
which is known at least since the work [17] of Dirichlet.
Proof of Corollary 3.: We first prove the statement for \(q<\infty\). For \(X^{(n)}\) uniformly distributed on \(\mathbb{D}_{q,1}^{n}\), we can write
\[\operatorname{vol}_{n}\big{(}\mathbb{D}_{q,1}^{n}\cap t\mathbb{D}_{r}^{n} \big{)}=\mathbb{P}\bigg{[}\|X^{(n)}\|_{r}\leq\frac{t}{\operatorname{vol}_{n}( \mathbb{B}_{r}^{n})^{1/n}}\bigg{]}.\]
Then, simply rewriting the previous expression, we obtain
\[\operatorname{vol}_{n}\big{(}\mathbb{D}_{q,1}^{n}\cap t\mathbb{D}_{r}^{n} \big{)}=\mathbb{P}\bigg{[}n^{-1/r}\|X^{(n)}\|_{r}-m_{q,r}\leq\frac{t}{n^{1/r} \operatorname{vol}_{n}(\mathbb{B}_{r}^{n})^{1/n}}-m_{q,r}\bigg{]},\]
with \(m_{q,r}\) as in Theorem D, in particular for \(r<\infty\),
\[m_{q,r}=\frac{1}{2e^{1/q}}\frac{q}{q-1}\left(\frac{\Gamma(r+1)\Gamma\Big{(}1+ \frac{q}{q-1}\Big{)}}{\Gamma\Big{(}r+1+\frac{q}{q-1}\Big{)}}\right)^{1/r}.\]
From the stated volume radius asymptotics in (16), we know that
\[\frac{1}{n^{1/r}\operatorname{vol}_{n}(\mathbb{B}_{r}^{n})^{1/n}}\stackrel{{ n\to\infty}}{{\longrightarrow}}c_{q,r}:=\begin{cases}\frac{1}{2(er)^{1/r} \Gamma(1+1/r)}&:r<\infty\\ \frac{1}{2}&:r=\infty.\end{cases}\]
Therefore, in view of the weak law of large numbers in Theorem D, the probability tends to one if \(tc_{q,r}>m_{q,r}\) and to zero if \(tc_{q,r}<m_{q,r}\). Hence, the statement holds with threshold \(A_{q,r}=c_{q,r}(m_{q,r})^{-1}\). If \(q=\infty\), we can proceed analogously using \(m_{\infty,r}=\frac{1}{2}\Big{(}\frac{1}{r+1}\Big{)}^{1/r}\). This completes the proof.
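The threshold \(A_{q,r}=c_{q,r}(m_{q,r})^{-1}\) is fully explicit. The following small Python sketch (added for illustration only; it merely transcribes the formulas above for finite \(q\) and \(r\)) evaluates it for concrete parameters.

```python
from math import exp, gamma

def m_qr(q, r):
    """Limit of n**(-1/r) * ||X^(n)||_r from Theorem D (finite q and r)."""
    return (q / (2.0 * exp(1.0 / q) * (q - 1.0))) * (
        gamma(r + 1.0) * gamma(1.0 + q / (q - 1.0)) / gamma(r + 1.0 + q / (q - 1.0))
    ) ** (1.0 / r)

def c_qr(r):
    """Limit of 1 / (n**(1/r) * vol_n(B_r^n)**(1/n)) for finite r."""
    return 1.0 / (2.0 * (exp(1.0) * r) ** (1.0 / r) * gamma(1.0 + 1.0 / r))

def A_qr(q, r):
    """Volume threshold for the intersection of D_{q,1}^n with t * D_r^n."""
    return c_qr(r) / m_qr(q, r)

print(A_qr(4.0, 2.0))
```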
## 4 The conjecture for general \(p>1\) -- Maximum entropy heuristics
Let us describe the heuristics leading us to Conjecture 1 and specifically to the differential equation that appears in it. Our arguments are based on maximum entropy considerations [52], but are not mathematically rigorous (see [31] for a similar rigorous result in the case of Orlicz balls). We shall denote by \(\mathcal{M}_{1}(\mathbb{R}_{+})\) the set of probability measures on \(\mathbb{R}_{+}\) equipped with the weak topology, which is a Polish space. Since Lorentz balls belong to the class of \(1\)-symmetric convex bodies, we can restrict considerations to the positive orthant (see also Lemma 7) and consider a uniformly distributed random vector \(X^{(n)}\) in
\[\widetilde{\mathbb{D}}_{q,p,+}^{n}=\bigg{\{}x\in\mathbb{R}_{+}^{n}\colon\frac {1}{n}\sum_{i=1}^{n}\Big{(}\frac{i}{n}\Big{)}^{p/q-1}|x_{i}^{*}|^{p}\leq 1 \bigg{\}}.\]
We now consider a sequence of independent and identically distributed "random variables" \(Y_{1},Y_{2},\ldots\) which we assume to be uniformly "distributed" according to the infinite Lebesgue measure \(\lambda\) on \(\mathbb{R}_{+}\).
Conditioned on \(Y^{(n)}:=(Y_{1},\ldots,Y_{n})\in\widetilde{\mathbb{D}}_{q,p,+}^{n}\), which is a so-called energy constraint, the "random vector" \(Y^{(n)}\) is uniformly distributed in \(\widetilde{\mathbb{D}}_{q,p,+}^{n}\). The maximum entropy principle states (under suitable conditions) that the random empirical probability measure \(L_{Y^{(n)}}\) associated to \(Y^{(n)}\), where for \(x\in\mathbb{R}^{n}\)
\[L_{x}:=\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}},\]
converges, when conditioned upon the rare event of remaining in a closed convex set \(K\subset\mathcal{M}_{1}(\mathbb{R}_{+})\), to the measure \(\mu^{*}\) minimizing the relative entropy
\[H(\mu|\lambda):=\begin{cases}\int_{0}^{\infty}f(x)\log f(x)\,\lambda(\mathrm{d}x)&:f=\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\text{ exists},\\ \infty&:\text{otherwise}\end{cases}\]
over \(K\) (c.f. Sanov's theorem [52, Section 5.2] and its version for infinite measures [6]). In order to rewrite the conditioning \(Y^{(n)}\in\tilde{\mathbb{O}}_{q,p,+}^{n}\) in terms of the empirical measure, we use the quantile function (i.e., the inverse cumulative distribution function), which for any probability measure \(\mu\in\mathcal{M}_{1}(\mathbb{R}_{+})\) is given by
\[Q_{\mu}\colon[0,1]\to[0,\infty],\quad Q_{\mu}(t):=\inf\{s\geq 0\colon\mu([0,s]) \geq t\}.\]
The following lemma follows from a simple rewriting which also appears in the analysis of L-statistics, see, e.g., [11].
**Lemma 8**.: _Let \(n\in\mathbb{N}\). For any \(x\in\mathbb{R}_{+}^{n}\), we have_
\[x\in\widetilde{\mathbb{D}}_{q,p,+}^{n}\iff L_{x}\in K_{n}:=\left\{\mu\in\mathcal{M}_{1}(\mathbb{R}_{+})\colon\int_{0}^{1}Q_{\mu}(t)^{p}J_{n}(t)\,\mathrm{d}t\leq 1\right\},\]
_where_
\[J_{n}(t):=\sum_{i=1}^{n}\left(1-\frac{i-1}{n}\right)^{p/q-1}\mathbbm{1}_{\left(\frac{i-1}{n},\frac{i}{n}\right]}(t).\]
Proof.: By definition
\[x\in\widetilde{\mathbb{D}}_{q,p,+}^{n}\iff\frac{1}{n}\sum_{i=1}^{n}\left(\frac{i}{n}\right)^{p/q-1}|x_{i}^{*}|^{p}\leq 1.\]
Moreover, we note that for \(t\in[0,1]\)
\[Q_{L_{x}}(t)=\inf\big{\{}s\geq 0:L_{x}([0,s])\geq t\big{\}}=\inf\Big{\{}s\geq 0 :\,\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}}([0,s])\geq t\Big{\}}\geq 0.\]
The function \(s\mapsto\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}}([0,s])\) returns the proportion of coordinates which are at most \(s\). So the quantile function evaluated at \(t\) is the minimal \(s\) such that this proportion still exceeds \(t\). If \(t\in(\frac{i-1}{n},\frac{i}{n}]\), then this must be the value \(s=x_{n-i+1}^{*}\) (the \(i^{\text{th}}\) smallest coordinate) since after this point at least \(i\) of the \(n\) coordinates are at most \(s\) and this is not true for any smaller value. Therefore, we have
\[Q_{L_{x}}(t)=x_{n-i+1}^{*}\quad\text{for }t\in\Big{(}\frac{i-1}{n},\frac{i}{n} \Big{]},\ i\in\{1,\ldots,n\}.\]
Now we observe further that, for all \(t\in[0,1]\),
\[\frac{1}{n}\sum_{i=1}^{n}\Big{(}\frac{i}{n}\Big{)}^{p/q-1}|x_{i}^{*}|^{p}= \frac{1}{n}\sum_{i=1}^{n}\Big{(}1-\frac{i-1}{n}\Big{)}^{p/q-1}|x_{n-i+1}^{*}|^{p}\]
\[=\frac{1}{n}\sum_{i=1}^{n}J_{n}(t)Q_{L_{x}}(t)^{p}\mathbb{1}_{\left(\frac{i-1}{n},\frac{i}{n}\right]}(t).\]
Since both functions are constant on \(t\in(\frac{i-1}{n},\frac{i}{n}]\), the factor \(1/n\) can be interpreted as the integral over these functions on this interval, i.e.,
\[\frac{1}{n}\sum_{i=1}^{n}J_{n}(t)Q_{L_{x}}(t)^{p}\mathbb{1}_{\left(\frac{i-1}{n},\frac{i}{n}\right]}(t)=\sum_{i=1}^{n}\int_{\frac{i-1}{n}}^{\frac{i}{n}}J_{n}(t)Q_{L_{x}}(t)^{p}\,\mathrm{d}t=\int_{0}^{1}J_{n}(t)Q_{L_{x}}(t)^{p}\,\mathrm{d}t.\]
This immediately proves the lemma.
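As a quick numerical illustration of this rewriting (added here by way of example, not part of the original proof), one can verify on a random point that the defining sum of \(\widetilde{\mathbb{D}}_{q,p,+}^{n}\) coincides with \(\int_{0}^{1}J_{n}(t)Q_{L_{x}}(t)^{p}\,\mathrm{d}t\); the sketch below uses a fine midpoint grid, on which both piecewise constant functions are resolved exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 7, 1.5, 3.0
x = rng.random(n)                               # a point in R_+^n

# Left-hand side: (1/n) * sum_i (i/n)^(p/q - 1) * (x_i^*)^p, decreasing rearrangement
xs_dec = np.sort(x)[::-1]
i = np.arange(1, n + 1)
lhs = np.mean((i / n) ** (p / q - 1.0) * xs_dec ** p)

# Right-hand side: integral of J_n(t) * Q_{L_x}(t)^p over (0, 1]
xs_inc = np.sort(x)
def Q(t):                                        # empirical quantile function of L_x
    idx = np.ceil(t * n).astype(int)             # index i with t in ((i-1)/n, i/n]
    return xs_inc[idx - 1]
def J_n(t):
    idx = np.ceil(t * n).astype(int)
    return (1.0 - (idx - 1) / n) ** (p / q - 1.0)

t = (np.arange(n * 1000) + 0.5) / (n * 1000)     # midpoints, never on a breakpoint
rhs = np.mean(J_n(t) * Q(t) ** p)

print(lhs, rhs)                                  # the two values agree
```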
In view of Lemma 8, we have in our setting
\[Y^{(n)}\in\widetilde{\mathbb{D}}^{n}_{q,p,+}\iff L_{Y^{(n)}}\in K_{n}.\]
To apply variants of the Gibbs conditioning principle as the one used, e.g., in [21] and [36], we have to account for the fact that the set \(K_{n}\) depends on \(n\). For large \(n\) it will be approximately equal to the set
\[K:=\left\{\mu\in\mathcal{M}_{1}(\mathbb{R}_{+})\colon\int_{0}^{1}Q_{\mu}(t)^{ p}J(t)\,\mathrm{d}t\leq 1\right\},\]
where
\[J\colon[0,1]\to\mathbb{R},\quad J(t)=(1-t)^{p/q-1},\quad t\in[0,1],\]
is the pointwise limit of the sequence \(J_{n}\), \(n\in\mathbb{N}\). We ignore the technical details.
In order to minimize the relative entropy over all measures in \(K\), we can obviously ignore non-absolutely continuous measures for which the relative entropy is \(\infty\). Let \(F_{\mu}\) be the distribution function of the minimizer and \(F_{\mu}^{\prime}=\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\) its probability density function as well as \(F_{\mu}^{-1}=Q_{\mu}\) the quantile function. Using the substitution \(x=F_{\mu}^{-1}(y)\) and the fact that \(Q_{\mu}^{\prime}(y)=(F_{\mu}^{-1})^{\prime}(y)=1/F_{\mu}^{\prime}(F_{\mu}^{-1 }(y))\), we can write
\[H(\mu|\lambda)=\int_{0}^{\infty}F_{\mu}^{\prime}(x)\log F_{\mu}^{\prime}(x)\, \lambda(\mathrm{d}x)=\int_{0}^{1}\log F_{\mu}^{\prime}(F_{\mu}^{-1}(y))\, \lambda(\mathrm{d}y)=-\int_{0}^{1}\log Q_{\mu}^{\prime}(y)\,\lambda(\mathrm{d }y).\]
This leads to the following variational problem
\[\max \int_{0}^{1}\log Q^{\prime}(x)\,\mathrm{d}x\] s.t. \[Q\colon[0,1]\to[0,\infty]\] \[Q\] is nondecreasing and differentiable \[\int_{0}^{1}Q(x)^{p}(1-x)^{p/q-1}\,\mathrm{d}x\leq 1.\]
Its solution will be the quantile function of the minimizer \(\mu^{\star}\in K\) of the relative entropy and thus of the supposed limiting distribution. For the sake of convenience, we shall set \(\alpha:=p/q-1\in[-1,0]\). In the following, we heuristically compute a solution to this problem and derive a differential equation for the corresponding distribution function.
Let us compute the variation and set \(P:=Q+\varepsilon g\) for a small \(\varepsilon>0\) and a continuously differentiable function \(g\colon[0,1]\to\mathbb{R}\) such that \(g(0)=0\):
\[\int_{0}^{1}\log(Q+\varepsilon g)^{\prime}\,\mathrm{d}x=\int_{0}^{1}\log Q^{ \prime}(x)\,\mathrm{d}x+\int_{0}^{1}\log\left(1+\varepsilon\frac{g^{\prime}(x )}{Q^{\prime}(x)}\right)\mathrm{d}x.\]
We expect that as \(\varepsilon\) becomes small, the following approximation
\[\int_{0}^{1}\log\Big{(}1+\varepsilon\frac{g^{\prime}(x)}{Q^{\prime}(x)}\Big{)} \,\mathrm{d}x=\varepsilon\int_{0}^{1}\frac{g^{\prime}(x)}{Q^{\prime}(x)}\, \mathrm{d}x+o(\varepsilon)\]
is valid and hence
\[\varepsilon^{-1}\Big{(}\int_{0}^{1}\log(Q+\varepsilon g)^{\prime}\,\mathrm{d}x -\int_{0}^{1}\log Q^{\prime}(x)\,\mathrm{d}x\Big{)}\overset{\varepsilon\to 0}{ \longrightarrow}\int_{0}^{1}\frac{g^{\prime}(x)}{Q^{\prime}(x)}\,\mathrm{d}x.\]
We now look at the constraints and compute
\[\int_{0}^{1}\big{(}Q(x)+\varepsilon g(x)\big{)}^{p}(1-x)^{\alpha}\,\mathrm{d} x=\int_{0}^{1}Q(x)^{p}\Big{(}1+\varepsilon\frac{g(x)}{Q(x)}\Big{)}^{p}(1-x)^{ \alpha}\,\mathrm{d}x.\]
Again, we expect that, as \(\varepsilon\) becomes small, the approximation
\[\int_{0}^{1}Q(x)^{p}\Big{(}1+\varepsilon\frac{g(x)}{Q(x)}\Big{)} ^{p}(1-x)^{\alpha}\,\mathrm{d}x =\int_{0}^{1}Q(x)^{p}\Big{(}1+\varepsilon\frac{pg(x)}{Q(x)}\Big{)} (1-x)^{\alpha}\,\mathrm{d}x+o(\varepsilon)\] \[=\int_{0}^{1}Q(x)^{p}(1-x)^{\alpha}\mathrm{d}x+\varepsilon\int_ {0}^{1}pg(x)Q(x)^{p-1}(1-x)^{\alpha}\,\mathrm{d}x+o(\varepsilon)\]
is valid. Hence,
\[\varepsilon^{-1}\Big{(}\int_{0}^{1}(Q+\varepsilon g)^{p}(1-x)^{\alpha}\, \mathrm{d}x-\int_{0}^{1}Q(x)^{p}(1-x)^{\alpha}\,\mathrm{d}x\Big{)}\overset{ \varepsilon\to 0}{\longrightarrow}\int_{0}^{1}pg(x)Q(x)^{p-1}(1-x)^{ \alpha}\,\mathrm{d}x.\]
Therefore, we only allow functions \(g:[0,1]\to\mathbb{R}\) with \(g(0)=0\) and
\[p\int_{0}^{1}g(x)Q(x)^{p-1}(1-x)^{\alpha}\,\mathrm{d}x=0.\]
A necessary condition for the maximizer is that for every such \(g\),
\[0=\int_{0}^{1}\frac{g^{\prime}(x)}{Q^{\prime}(x)}\,\mathrm{d}x=\frac{g(1)}{Q^ {\prime}(1)}-\int_{0}^{1}g(x)\Big{(}\frac{1}{Q^{\prime}}\Big{)}^{\prime}(x)\, \mathrm{d}x=\int_{0}^{1}g(x)\Big{(}\frac{\delta_{1}(x)}{Q^{\prime}(1)}-\Big{(} \frac{1}{Q^{\prime}}\Big{)}^{\prime}(x)\Big{)}\,\mathrm{d}x,\]
where in the first equality we used integration by parts together with \(g(0)=0\). We now use the following result.
**Lemma 9**.: _Let \(\mu_{1},\mu_{2}\) be signed measures on \([0,1]\) such that if \(\int_{0}^{1}g(x)\mu_{1}(\mathrm{d}x)=0\) for some \(g\in C[0,1]\) with \(g(0)=0\), then also \(\int_{0}^{1}g(x)\mu_{2}(\mathrm{d}x)=0\). Then, \(\mu_{2}=\alpha\mu_{1}+\beta\delta_{0}\) for some constants \(\alpha,\beta\in\mathbb{R}\)._
Applied to our situation above, we conclude that there exists some \(c\neq 0\) such that, for all \(x\in[0,1]\),
\[\Big{(}\frac{1}{Q^{\prime}}\Big{)}^{\prime}(x)=cQ(x)^{p-1}(1-x)^{\alpha}\quad \text{and}\quad\frac{1}{Q^{\prime}(1)}=0.\]
A substitution yields
\[F^{\prime\prime}(x)=cF^{\prime}(x)(1-F(x))^{\alpha}x^{p-1},\quad x\in\mathbb{ R}_{+}.\]
We must have \(F(0)=0\) and there exists \(r>0\) with \(F(x)=1\) for \(x\geq r\) (\(r=+\infty\) is possible). Also, the constraint on \(Q\) is active at the maximizer and thus translates to
\[1=\int_{0}^{r}x^{p}(1-F(x))^{\alpha}F^{\prime}(x)\,\mathrm{d}x=\frac{1}{c}\int _{0}^{r}xF^{\prime\prime}(x)\,\mathrm{d}x.\]
Using that \(F^{\prime}(r)=0\), which follows from the fact \(Q^{\prime}(1)=+\infty\), together with integration by parts, we see that the right-hand side is \(-\frac{1}{c}\). Hence, we conclude \(c=-1\) leading to the desired differential equation. |
2305.11915 | PINNs error estimates for nonlinear equations in $\mathbb{R}$-smooth
Banach spaces | In the paper, we describe in operator form classes of PDEs that admit PINN's
error estimation. Also, for $L^p$ spaces, we obtain a Bramble-Hilbert type
lemma that is a tool for PINN's residuals bounding. | Jiexing Gao, Yurii Zakharian | 2023-05-18T14:51:00Z | http://arxiv.org/abs/2305.11915v3 | # PINNs error estimates for nonlinear equations in \(\mathbb{R}\)-smooth Banach spaces
###### Abstract
In the paper, we describe in operator form classes of PDEs that admit PINN's error estimation. Also, for \(L^{p}\) spaces, we obtain a Bramble-Hilbert type lemma that is a tool for PINN's residuals bounding.
## 1 Introduction
In 2017, M. Raissi, P. Perdikaris, and G. Em Karniadakis introduced the notion of a physics-informed neural network (PINN), that is, a neural network approximating the solution of a nonlinear partial differential equation (PDE) [21, 22]. The approximation is based on minimizing losses that correspond to quadrature rules for the PDE and the boundary/initial conditions.
A natural question arises: how does the total approximation error depend on the residuals/losses? In a paper [17], S. Mishra and R. Molinaro demonstrated a method of total error estimation in terms of the residuals and the training error (the minimized loss). Furthermore, in paper [5], the authors obtained an \(L^{2}\)-bound on the residuals using the Bramble-Hilbert lemma. See also [2, 6, 7]. The authors in [17] gave an operator description of the conditions sufficient for such an estimation to be applied. However, in practice, the verification of the conditions reduces to obtaining the estimation explicitly.
The goal of the paper is to specify classes of PDEs that admit PINNs error estimation due to S. Mishra, R. Molinaro, De Ryck, et al. To be more precise, we consider the following four types of equations.
* (Parabolic-type equation) \[\frac{dw}{dt}=A(t)(w)\] (1)
* (Generalized parabolic-type equation) \[\frac{dw}{dt}+\frac{d}{dt}\sum_{k=1}^{K}U_{k}w=A(t)(w)\] (2)
* (Hyperbolic-type equation) \[\frac{d^{2}w}{dt^{2}}=Uw+F(t)(w)+A(t)\left(\frac{dw}{dt}\right)\] (3)
* (Elliptic-type equation) \[A(y)=f_{pde}\] (4)
Also, we give sufficient conditions for the operators used in (1)-(4). We note that in papers [2, 5, 17], the authors repeated one procedure: they estimated the time derivative of the squared \(L^{2}\)-norm of the total error in terms of the residuals and the squared \(L^{2}\)-norm itself. After that, they applied the Gronwall-Bellman lemma to obtain the desired estimation. For the equations (1)-(4), we extend this idea to a Banach space for which such a derivative exists, and obtain a total error estimation in general (operator) form.
A similar issue was also studied in [12], where the authors presented a PINN error estimation for a homogeneous initial value problem whose operator is the linear generator of a strongly continuous semigroup. Furthermore, they applied this method to a few nonlinear equations. We describe an extension of their method to semi-linear equations and compare the respective estimation with the estimation obtained by differentiation of the \(p\)-powered norm.
Finally, we obtain a Bramble-Hilbert type lemma for \(L^{p}\), and briefly describe how to obtain an \(L^{p}\) bound on PINN's residuals.
## 2 Preliminary
### \(\mathbb{R}\)-smooth Banach space and real \(p\)-form
In what follows, we will use the following notation. Let \(\mathbb{K}\) be either \(\mathbb{C}\) or \(\mathbb{R}\). Let \(X\) be a Banach space over \(\mathbb{K}\), and \(S=\{y\in X\ |\ \|y\|=1\}\) be the unit sphere.
**Definition 1**.: A real-valued function \(\varphi:X\to\mathbb{R}\) is said to be _Gateaux \(\mathbb{R}\)-differentiable at \(y\in X\)_ if for every \(\chi\in X\) there exists
\[\lim_{\mathbb{R}\ni s\to 0}\frac{\varphi(y+s\chi)-\varphi(y)}{s}=:D(\varphi)(y;\chi)\]
that is called _Gateaux \(\mathbb{R}\)-derivative at \(y\) w.r.t. direction \(\chi\)_. If \(\varphi\) is Gateaux \(\mathbb{R}\)-differentiable at any point, then we say that \(\varphi\) is _Gateaux \(\mathbb{R}\)-differentiable_.
**Remark 1**.: We used the Gateaux derivative as in paper [11], since we don't need \(s\) to be complex. Let us note that the \(\mathbb{R}\)-Gateaux derivative still satisfies the chain rule: if \(w:[0;+\infty)\to X\) is differentiable and, for \(\varphi:X\to\mathbb{R}\), the derivative \(D(\varphi)(y;\cdot)\) exists and is continuous for every \(y\in X\), then
\[\frac{d\varphi(w)}{dt}=D(\varphi)\left(w;\frac{dw}{dt}\right) \tag{5}\]
**Definition 2**.: A Banach space \(X\) is said to be \(\mathbb{R}\)-_smooth_ if its norm is \(\mathbb{R}\)-Gateaux differentiable at every \(y\in S\).
**Remark 2**.: If \(X\) is a \(\mathbb{R}\)-smooth Banach space, then for every \(y\in X\setminus\{0\}\),
1. \(D\left(\|\cdot\|\right)(y;\cdot)\) is \(\mathbb{R}\)-linear, that is \[D(\|\cdot\|)(y;\lambda_{1}\chi_{1}+\lambda_{2}\chi_{2})=\lambda_{1}D\left(\|\cdot\|\right)(y;\chi_{1})+\lambda_{2}D\left(\|\cdot\|\right)(y;\chi_{2})\] for every \(\chi_{1},\chi_{2}\in X\) and \(\lambda_{1},\lambda_{2}\in\mathbb{R}\).
2. \(D\left(\|\cdot\|\right)(y;\cdot)\) is bounded and \[\|D\left(\|\cdot\|\right)(y;\cdot)\|=1.\]
3. \(D\left(\|\cdot\|\right)(y;y)=\|y\|\)
Furthermore, \(D\left(\|\cdot\|\right)(y;\cdot)\) is a unique real-valued functional satisfying \((1)-(3)\)[13].
On the other hand, we can define semi-inner-product [15],
1. \([y_{1}+y_{2},y_{3}]=[y_{1},y_{3}]+[y_{2},y_{3}]\), and \([\lambda y_{1},y_{2}]=\lambda[y_{1},y_{2}]\) for \(\lambda\in\mathbb{K}\).
2. \([y,y]>0\), for \(y\neq 0\),
3. \(|[y_{1},y_{2}]|^{2}\leq[y_{1},y_{1}][y_{2},y_{2}]\).
which agrees with the norm, that is \([y,y]=\|y\|^{2}\).
J. R. Giles showed that the semi-inner-product \([\cdot,\cdot]:X\times X\to\mathbb{K}\) satisfies
1. (homogeneity) \([y_{1},\lambda y_{2}]=\overline{\lambda}[y_{1},y_{2}]\), for every \(x,y\in X\) and any \(\lambda\in\mathbb{K}\).
2. (continuity) \(\operatorname{Re}[y_{1},y_{2}+sy_{1}]\to\operatorname{Re}[y_{1},y_{2}]\) for all real \(s\to 0\).
if and only if the respective norm is Gateaux differentiable at every point of the unit sphere [11]. Furthermore,
\[\operatorname{Re}([\chi,y])=\|y\|D(\|\cdot\|)(y;\chi),\ \forall y\in X \tag{6}\]
Finally,
\[[\chi,y]=\|y\|\left(D(\|\cdot\|)(y;\chi)-iD(\|\cdot\|)(y;i\chi)\right)\]
At \(y=0\), \(D(\|\cdot\|)(y;\chi)\) does not exist. Nevertheless, \(\|y\|D(\|\cdot\|)(y;\chi)\to 0\), if \(y\to 0\).
**Remark 3**.: In the paper, we don't need to use the \(\mathbb{R}\)-Gateaux derivative of the norm or even the semi-inner product. Instead, we need \(\|y_{2}\|^{p-2}\mathrm{Re}[y_{1},y_{2}]\), for some \(p>1\). Let us note that \(\|y_{2}\|^{p-2}[y_{1},y_{2}]\) appears in literature as a _semi-inner product of type \(p\)[18]_.
**Definition 3**.: Let \(p>1\), \(X\) be a \(\mathbb{R}\)-smooth Banach space. A form \(\left\langle\cdot,\cdot\right\rangle_{p}:X\times X\to\mathbb{R}\), where \(\left\langle y_{1},y_{2}\right\rangle_{p}:=\|y_{2}\|^{p-2}\mathrm{Re}[y_{1},y_{ 2}]\) we will call a _real \(p\)-form_.
**Lemma 1**.: _A real \(p\)-form on a \(\mathbb{R}\) smooth Banach space satisfies the following properties._
1. \(\left\langle y_{1}+y_{2},y_{3}\right\rangle_{p}=\left\langle y_{1},y_{3} \right\rangle_{p}+\left\langle y_{2},y_{3}\right\rangle_{p}\)_, and_ \(\left\langle\lambda y_{1},y_{2}\right\rangle_{p}=\left(\mathrm{Re}\lambda \right)\left\langle y_{1},y_{2}\right\rangle_{p}\)_, for_ \(\lambda\in\mathbb{K}\)_._
2. \(\left\langle y_{1},\lambda y_{2}\right\rangle_{p}=\left(\mathrm{Re}\lambda \right)|\lambda|^{p-2}\langle y_{1},y_{2}\rangle_{p}\)_, for any_ \(\lambda\in\mathbb{K}\)_._
3. \(\left\langle y,y\right\rangle_{p}=\|y\|^{p}\)_._
4. \(\left|\left\langle y_{1},y_{2}\right\rangle_{p}\right|\leq\|y_{1}\|\|y_{2}\|^{p -1}\)__
5. \(\|\cdot\|^{p}\) _is_ \(\mathbb{R}\)_-Gateaux differentiable and_ \[D(\|\cdot\|^{p})(y_{2};y_{1})=p\langle y_{1},y_{2}\rangle_{p},\] (7) _for every_ \(y_{1},y_{2}\in X\)_._
6. \(\left\langle\chi_{n},y\right\rangle_{p}\to\left\langle\chi,y\right\rangle_{p}\)_, if_ \(\chi_{n}\to\chi\)_._
7. _If_ \(w:[0;T]\to X\) _is differentiable, then_ \[\frac{d}{dt}\|w\|^{p}=p\left\langle\frac{dw}{dt},w\right\rangle_{p}\] (8)
8. _Let_ \(w:[0;T]\to X\) _be differentiable,_ \(\mathcal{D}(U)\subset X\) _be a subspace,_ \(U:\mathcal{D}(U)\to X\) _be a linear operator such that_ \(Uw\) _is differentiable and_ \(\frac{d}{dt}Uw=U\frac{dw}{dt}\)_. Then_ \[\frac{d}{dt}\|Uw\|^{p}=p\left\langle U\frac{dw}{dt},Uw\right\rangle_{p}\] (9)
9. _If_ \(D(\|\cdot\|)(\cdot,\chi)\) _is continuous on the ring_ \(\{y\in X\mid r_{0}<\|y\|<r_{1}\}\) _for every_ \(\chi\in X\) _and_ \(0<r_{0}<r_{1}\)_, then the real_ \(p\)_-form is continuous w.r.t. the second variable, that is,_ \[\left\langle\chi,y_{n}\right\rangle_{p}\to\left\langle\chi,y\right\rangle_{p},\;\forall y\in X,\forall y_{n}\to y.\]
Proof.: Properties 1-6 follow from the definition and semi-inner product properties. Property 7 is a consequence of 5, 6, and (5).
In property 8, we cannot apply the chain rule since \(D(\|U\cdot\|^{p})(y;\chi)=p\langle U\chi,Uy\rangle_{p}\) is not continuous w.r.t. \(\chi\) in general. However,
\[\frac{\|Uw(t+\Delta t)\|^{p}-\|Uw(t)\|^{p}}{\Delta t}=\frac{\|Uw(t)+U\left(w(t+\Delta t)-w(t)\right)\|^{p}-\|Uw(t)\|^{p}}{\Delta t}=\] \[=D(\|\cdot\|^{p})\left(Uw(t);U\frac{w(t+\Delta t)-w(t)}{\Delta t}\right)+\frac{o(\Delta t)}{\Delta t}=\] \[=p\left\langle U\frac{w(t+\Delta t)-w(t)}{\Delta t},Uw(t)\right\rangle_{p}+\frac{o(\Delta t)}{\Delta t}.\]
With \(U\frac{w(t+\Delta t)-w(t)}{\Delta t}\to\frac{d}{dt}Uw=U\frac{dw}{dt}\) and property 6, we obtain (9).
To prove property 9, we need to consider two cases. If \(y\neq 0\), then for all sufficiently large \(n\) the elements \(y_{n}\), together with \(y\), are contained in a ring on which the real \(p\)-form is continuous. If \(y=0\), so that \(y_{n}\to 0\), then \(\left\langle\chi,y_{n}\right\rangle_{p}\to 0=\left\langle\chi,0\right\rangle_{p}\) by property 4.
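The following short numerical sketch (an illustration we add here; it is not part of the proof) checks the differentiation formula of property 7, \(\frac{d}{dt}\|w\|^{p}=p\langle\frac{dw}{dt},w\rangle_{p}\), in the real-valued \(L^{p}((0,1))\) setting of Example 2 below, with the integral discretized by a midpoint rule.

```python
import numpy as np

# Finite-difference check of Lemma 1, property 7, in L^p((0,1)) (real-valued case).
# The real p-form <a, b>_p = int |b|^{p-1} sgn(b) a dx is taken from Example 2 below.
p, m = 2.5, 20_000
x = (np.arange(m) + 0.5) / m                       # midpoint grid on (0,1)

w  = lambda t: np.exp(t) * np.sin(np.pi * x) + t * x ** 2   # a smooth curve t -> w(t)
dw = lambda t: np.exp(t) * np.sin(np.pi * x) + x ** 2       # its time derivative

norm_p_pow = lambda f: np.mean(np.abs(f) ** p)               # ||f||_{L^p}^p
form_p = lambda a, b: np.mean(np.abs(b) ** (p - 1) * np.sign(b) * a)

t0, h = 0.7, 1e-6
lhs = (norm_p_pow(w(t0 + h)) - norm_p_pow(w(t0))) / h        # difference quotient of ||w||^p
rhs = p * form_p(dw(t0), w(t0))
print(lhs, rhs)                                              # the two values agree closely
```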
Let us describe a few examples.
**Example 1**.: Every Hilbert space is a \(\mathbb{R}\)-smooth space. The inner product coincides with the semi-inner product. In this case, the real \(2\)-form is
\[\langle y_{1},y_{2}\rangle_{2}=\mathrm{Re}\langle y_{1},y_{2}\rangle\]
**Example 2**.: Let \(X=L^{p}(\Omega)\) in a case of \(\sigma\)-finite measure, \(p>1\). Then
\[D(\|\cdot\|_{L^{p}(\Omega)})(y;\chi)=\mathrm{Re}\frac{\int_{\Omega}|y|^{p-1} \mathrm{sgn}(y)\overline{\chi}dx}{\|y\|_{L^{p}(\Omega)}^{p-1}},\]
(for real Banach spaces, see, for instance, [8]). Furthermore, \(D(\|\cdot\|)(\cdot,\chi)\) is continuous on the ring \(\{y\in X\mid r_{0}<\|y\|<r_{1}\}\) for every \(\chi\in X\) and \(0<r_{0}<r_{1}\)[24].
Hence,
\[[y_{1},y_{2}]=\|y_{2}\|_{L^{p}(\Omega)}\overline{D(\|\cdot\|_{L^{p}(\Omega)})( y_{2};y_{1})}=\frac{\int_{\Omega}|y_{2}|^{p-1}\mathrm{sgn}(\overline{y_{2}})y_{1 }dx}{\|y_{2}\|_{L^{p}(\Omega)}^{p-2}}\]
Finally, real \(p\)-form is
\[\langle y_{1},y_{2}\rangle_{p}=\mathrm{Re}\int_{\Omega}|y_{2}|^{p-1}\mathrm{ sgn}(\overline{y_{2}})y_{1}dx,\]
and it is continuous w.r.t. the second variable.
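A quick numerical sanity check of the derivative formula above (illustrative only; we discretize \(L^{p}((0,1))\) by a midpoint rule and compare with a one-sided difference quotient):

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 3.0, 10_000
w = 1.0 / m                                          # quadrature weight on (0,1)
y   = rng.standard_normal(m) + 1j * rng.standard_normal(m)
chi = rng.standard_normal(m) + 1j * rng.standard_normal(m)

norm_p = lambda f: (w * np.sum(np.abs(f) ** p)) ** (1.0 / p)

# Closed-form expression from the display above: Re ∫ |y|^{p-1} sgn(y) conj(chi) dx / ||y||^{p-1}
closed = np.real(w * np.sum(np.abs(y) ** (p - 1) * (y / np.abs(y)) * np.conj(chi))) / norm_p(y) ** (p - 1)

s = 1e-7
finite_diff = (norm_p(y + s * chi) - norm_p(y)) / s  # Gateaux difference quotient
print(closed, finite_diff)                           # the two values agree closely
```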
**Example 3**.: If \(X=L^{p}(\Omega\to\mathbb{C}^{n})\) is a space of vector-valued functions with the norm
\[\|\mathbf{y}\|_{L^{p}(\Omega\to\mathbb{C}^{n})}=\left(\sum_{i=1}^{n}\|y_{i}\|_ {L^{p}(\Omega)}^{p}\right)^{\frac{1}{p}}\]
then one can show that
\[[\mathbf{y}_{1},\mathbf{y}_{2}]=\frac{\sum_{i=1}^{n}\int_{\Omega}|y_{2,i}|^{p- 1}\mathrm{sgn}(\overline{y_{2,i}})y_{1,i}dx}{\|\mathbf{y}_{2}\|_{L^{p}(\Omega \to\mathbb{C}^{n})}^{p-2}}\]
and real \(p\)-form is
\[\langle\mathbf{y}_{1},\mathbf{y}_{2}\rangle_{p}=\sum_{i=1}^{n}\int_{\Omega}|y _{2,i}|^{p-1}\mathrm{sgn}(\overline{y_{2,i}})y_{1,i}dx\]
It is also continuous w.r.t. the second variable. We consider again the case of \(\sigma\)-finite measure, \(p>1\).
**Example 4**.: Let us consider \(X=L^{1}(\Omega)\) in the case of \(\sigma\)-finite measure and real-valued functions. Then there exists an equivalent Gateaux differentiable norm \(\|\cdot\|_{\Psi}\) given by the formula
\[\|y\|_{\Psi}=\inf\left\{s>0\mid\int_{\Omega}\Psi\left(\frac{y}{s}\right)dx\leq 1\right\}\]
where \(\Psi(\kappa)=\mathbb{I}_{[-1;1]}(\kappa)\frac{\kappa^{2}}{2}+\left(1-\mathbb{I }_{[-1;1]}(\kappa)\right)\left(|\kappa|-\frac{1}{2}\right)\) (see [25]). Its Gateaux derivative is
\[D(\|\cdot\|_{\Psi})(y;\chi)=\frac{\int_{\Omega}\frac{d\Psi}{d\kappa}\left( \frac{y}{\|y\|_{\Psi}}\right)\chi dx}{\int_{\Omega}\frac{d\Psi}{d\kappa}\left( \frac{y}{\|y\|_{\Psi}}\right)ydx}\|y\|_{\Psi},\]
Hence,
\[[y_{1},y_{2}]=\frac{\int_{\Omega}\frac{d\Psi}{d\kappa}\left(\frac{y_{2}}{\|y_{ 2}\|_{\Psi}}\right)y_{1}dx}{\int_{\Omega}\frac{d\Psi}{d\kappa}\left(\frac{y_{ 2}}{\|y_{2}\|_{\Psi}}\right)y_{2}dx}\|y_{2}\|_{\Psi}^{2}.\]
Let us note that
\[\frac{1}{1+\frac{1}{2}|\Omega|}\|\cdot\|_{L^{1}(\Omega)}\leq\|\cdot\|_{\Psi} \leq\|\cdot\|_{L^{1}(\Omega)}\]
However, such a semi-inner product is not suitable for our construction.
For further examples, see, for instance, [14].
### Submonotone operators
**Definition 4**.: Let \(\mathcal{D}(A)\subset X\) be a subspace. Let \(A:\mathcal{D}(A)\to X\) be an operator on \(X\) and \(\mathcal{M}\subset\mathcal{D}(A)\) be a subspace. We will say that \(A\) is \((p,\psi)\)_-submonotone on \(\mathcal{M}\)_ for some \(\psi:\left(\mathcal{D}(A)\right)^{2}\to\mathbb{R}\) and \(p>1\) if it is true that
\[\langle A(\chi)-A(y),\chi-y\rangle_{p}\geq\psi(\chi,y)-\Lambda(\chi,y)\|\chi-y\|^{p},\]
for every \(\chi\in\mathcal{D}(A)\), \(y\in\mathcal{M}\), and some \(\Lambda(\chi,y)\geq 0\).
**Remark 4**.: We use the name "submonotone" due to a similar term in the paper [23]. We will use the fact that \(-A\) is submonotone, that is,
\[\langle A(\chi)-A(y),\chi-y\rangle_{p}\leq\psi(\chi,y)+\Lambda(\chi,y)\|\chi-y\|^{p}.\]
Here we take \(-\psi\) instead of \(\psi\) since we don't care about its sign.
We need to describe \(\psi\) in terms of boundary condition operator \(B\). In other words, \(\psi(\chi,y)\) must vanish if \(\chi\in B^{-1}(h_{b})\). Furthermore, in the case of solution \(w\) and approximation \(w_{\theta}\), \(\psi(w_{\theta},w)\) must be estimated in terms of respective boundary condition residual.
**Definition 5**.: Let \(\mathcal{D}(B)\subset X\) be a subspace. Let \(B:\mathcal{D}(B)\to Y\) be an operator on \(X\) and \(\psi:\left(\mathcal{D}(\psi)\right)^{2}\to\mathbb{R}\) be a function, where \(\mathcal{D}(\psi)\subset\mathcal{D}(B)\) is a subspace. We say that \(\psi\) is _subordinate to \(B\) on \(\mathcal{M}\)_, for \(\mathcal{M}\subset\mathcal{D}(\psi)\), if there exists \(\rho\in C(Y\to[0;+\infty))\) such that \(s\to 0\;\Rightarrow\;\rho(s)\to 0\) and
\[|\psi(\chi,y)|\leq\gamma(\chi,y)\rho(B(\chi)-B(y))\]
for every \(\chi\in\mathcal{D}(\psi)\), \(y\in\mathcal{M}\), and some \(\gamma(\chi,y)\geq 0\).
**Remark 5**.: In most cases, operator \(A\) is submonotone on \(\mathcal{M}=\mathcal{D}(A)\), and function \(\psi\) is subordinate on \(\mathcal{M}=\mathcal{D}(\psi)\). In this case, we will say \((p,\psi)\)_-submonotone_ and _subordinate_. However, we consider the submonotonicity of Navier-Stokes nonlinearity (example 10) for the fields with vanishing divergence.
Let us consider an important example of \(A\), where \(-A\) is submonotone. Working with semi inner-product G. Lumer and R. S. Phillips [16] gave the following definition.
**Definition 6**.: Let \(\mathcal{D}(A)\subset X\) be a subspace. \(A:\mathcal{D}(A)\to X\) is said to be _dissipative_ if
\[\operatorname{Re}[Ay,y]\leq 0,\]
for every \(y\in\mathcal{D}(A)\).
Real part of the semi-inner product can be replaced with the real \(p\)-form. Instead, we describe an operator which is dissipative on the preimage of a boundary conditions operator.
**Definition 7**.: Let \(A:\mathcal{D}(A)\to X\) be an operator on \(X\). We will say that \(A\) is _\((p,\psi)\)-dissipative_ for some \(\psi:\mathcal{D}(A)\to\mathbb{R}\) and \(p>1\) if it is true that
\[\langle A(y),y\rangle_{p}\leq\psi(y)\]
for every \(y\in\mathcal{D}(A)\).
Hence, let \(A=A_{0}+F\), where \(A_{0}:\mathcal{D}(A_{0})\to X\) is linear and \((p,\psi)\)-dissipative, and \(F:\mathcal{D}(F)\to X\) is conditionally Lipschitz. The last means that
\[\|F(y_{1})-F(y_{2})\|\leq\Lambda(y_{1},y_{2})\|y_{1}-y_{2}\|,\]
for every \(y_{1},y_{2}\in\mathcal{D}(F)\) and some \(\Lambda(y_{1},y_{2})\geq 0\). Then \(-A\) is \((p,\psi)\)-submonotone.
Let us consider a few examples. To keep the exposition simple, we will give examples of operators that are independent of \(t\). Also, we will consider simple derivatives only. The following statement allows us to extend submonotonicity to the general case. The proof is quite standard; the second property follows from the continuity property of the real \(p\)-form.
**Statement 1**.: _Let \(X\) be a \(\mathbb{R}\)-smooth Banach space. The following two properties hold._
1. _If_ \(A_{1,2}(t)\) _are_ \((p,\psi_{1,2}(t))\)_-submonotone on_ \(\mathcal{M}(t)\) _for every_ \(t\in[0;T]\)_, then_ \[A(t):=g_{1}(t)A_{1}(t)+g_{2}(t)A_{2}(t)\] _is_ \((p,\psi(t))\)_-submonotone on_ \(\mathcal{M}(t)\) _for_ \(\psi(t)=g_{1}(t)\psi_{1}(t)+g_{2}(t)\psi_{2}(t)\) _and every_ \(g_{1,2}:[0;T]\to[0;+\infty)\)_._
2. _Moreover, let_ \(D(\|\cdot\|)(\cdot,\chi)\) _be continuous on ring_ \(\{y\in X\mid r_{0}<\|y\|<r_{1}\}\) _for every_ \(0<r_{0}<r_{1}\) _and_ \(\chi\in X\)_. If_ \(A\) _is_ \((p,\psi)\)_-submonotone, there exists a closure_ \(\overline{A}\)_,_ \(\psi\) _is continuous w.r.t._ \(\overline{A}\)_-norm, that is_ \[y_{n}\to y,\;\chi_{n}\to\chi,\;Ay_{n}\to\overline{A}y,\;A\chi_{n}\to\overline{A }\chi\;\Rightarrow\;\psi(\chi_{n},y_{n})\to\psi(\chi,y),\] _and_ \(\Lambda\) _is continuous w.r.t._ \(\overline{A}\)_-norm, then_ \(\overline{A}\) _is_ \((p,\psi)\)_-submonotone._
We start with \((p,\psi)\)-dissipative operators.
**Example 5**.: Let \(\Omega\subset\mathbb{R}^{m}\), \(\Gamma=\partial\Omega\), \(X=L^{p}(\Omega\to\mathbb{C})\), \(1\leq j\leq m\), \(p>1\). Then
\[Ay=s\frac{\partial y}{\partial x_{j}},\]
where \(s\in\mathbb{R}\), is \((p,\psi)\)-dissipative. Indeed, let \(y=\eta+i\zeta\). Then
\[\langle Ay,y\rangle_{p} =s\mathrm{Re}\int_{\Omega}\frac{\partial y}{\partial x_{j}}|y|^{p- 1}\mathrm{sgn}(\overline{y})dx=s\int_{\Omega}|y|^{p-2}\left(\frac{\partial \eta}{\partial x_{j}}\eta+\frac{\partial\zeta}{\partial x_{j}}\zeta\right)dx=\] \[=\frac{s}{2}\int_{\Omega}|y|^{p-2}\frac{\partial|y|^{2}}{\partial x _{j}}dx=\frac{s}{p}\int_{\Gamma}|y|^{p}(\mathbf{e}_{j}\cdot\mathbf{n}_{\Gamma})d\Gamma\]
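For a concrete illustration (added here, not from the original text), the one-dimensional instance of the identity behind this computation, \(\langle y^{\prime},y\rangle_{p}=\frac{1}{p}\big(|y(1)|^{p}-|y(0)|^{p}\big)\) for \(\Omega=(0,1)\), \(j=1\), real-valued \(y\) and \(s=1\), can be verified numerically:

```python
import numpy as np

# Numerical check of <y', y>_p = (1/p) * (|y(1)|^p - |y(0)|^p) on Omega = (0, 1).
p, m = 3.0, 200_000
x = (np.arange(m) + 0.5) / m                 # midpoint grid
y  = lambda t: np.exp(t) * np.cos(2.0 * t)   # a smooth real-valued test function
dy = lambda t: np.exp(t) * (np.cos(2.0 * t) - 2.0 * np.sin(2.0 * t))

form = np.mean(np.abs(y(x)) ** (p - 1) * np.sign(y(x)) * dy(x))   # <y', y>_p
boundary = (np.abs(y(1.0)) ** p - np.abs(y(0.0)) ** p) / p
print(form, boundary)                        # the two values agree up to quadrature error
```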
**Example 6**.: Let \(\Omega\subset\mathbb{R}^{m}\), \(\Gamma=\partial\Omega\), \(X=L^{p}(\Omega\to\mathbb{C})\), \(1\leq j\leq m\), \(p\geq 4\) or \(p=2\). Then
\[Ay=\frac{\partial^{2}y}{\partial x_{j}^{2}}\]
is \((p,\psi)\)-dissipative. Again, let \(y=\eta+i\zeta\). Then
\[\langle Ay,y\rangle_{p}=\mathrm{Re}\int_{\Omega}\frac{\partial^{2 }y}{\partial x_{j}^{2}}|y|^{p-1}\mathrm{sgn}(\overline{y})dx=\int_{\Omega}|y| ^{p-2}\left(\frac{\partial^{2}\eta}{\partial x_{j}^{2}}\eta+\frac{\partial^{2 }\zeta}{\partial x_{j}^{2}}\zeta\right)dx=\] \[=\int_{\Gamma}|y|^{p-2}\left(\eta\frac{\partial\eta}{\partial x_ {j}}+\zeta\frac{\partial\zeta}{\partial x_{j}}\right)(\mathbf{e}_{j}\cdot \mathbf{n}_{\Gamma})d\Gamma-\int_{\Omega}\frac{\partial}{\partial x_{j}}\left( |y|^{p-2}\eta\right)\frac{\partial\eta}{\partial x_{j}}+\frac{\partial}{ \partial x_{j}}\left(|y|^{p-2}\zeta\right)\frac{\partial\zeta}{\partial x_{j} }dx=\] \[=\int_{\Gamma}|y|^{p-2}\left(\eta\frac{\partial\eta}{\partial x_{ j}}+\zeta\frac{\partial\zeta}{\partial x_{j}}\right)(\mathbf{e}_{j}\cdot \mathbf{n}_{\Gamma})d\Gamma-\int_{\Omega}|y|^{p-2}\left(\left(\frac{\partial \eta}{\partial x_{j}}\right)^{2}+\left(\frac{\partial\zeta}{\partial x_{j}} \right)^{2}\right)dx-\] \[-\frac{p-2}{4}\int_{\Omega}|y|^{p-4}\left(\frac{\partial|y|^{2}} {\partial x_{j}}\right)^{2}dx\leq\int_{\Gamma}|y|^{p-2}\left(\eta\frac{ \partial\eta}{\partial x_{j}}+\zeta\frac{\partial\zeta}{\partial x_{j}}\right)( \mathbf{e}_{j}\cdot\mathbf{n}_{\Gamma})d\Gamma\]
**Example 7**.: Let \(\Omega\subset\mathbb{R}^{m}\), \(\Gamma=\partial\Omega\), \(X=L^{2}(\Omega\to\mathbb{C})\), \(1\leq j\leq m\). Then
\[Ay=(s_{1}^{2}+is_{2})\frac{\partial^{2}y}{\partial x_{j}^{2}},\]
where \(s_{1},s_{2}\in\mathbb{R}\), is \((2,\psi)\)-dissipative. Again, let \(y=\eta+i\zeta\). Then
\[\langle Ay,y\rangle_{2}=\mathrm{Re}\int_{\Omega}(s_{1}^{2}+is_{2})\frac{ \partial^{2}y}{\partial x_{j}^{2}}\overline{y}dx=s_{1}^{2}\int_{\Omega}\left( \frac{\partial^{2}\eta}{\partial x_{j}^{2}}\eta+\frac{\partial^{2}\zeta}{ \partial x_{j}^{2}}\zeta\right)dx+s_{2}\int_{\Omega}\left(\frac{\partial^{2} \eta}{\partial x_{j}^{2}}\zeta-\frac{\partial^{2}\zeta}{\partial x_{j}^{2}} \eta\right)=\]
\[=\int_{\Gamma}\left((s_{1}^{2}\eta+s_{2}\zeta)\frac{\partial\eta}{\partial x_{j }}+(s_{1}^{2}\zeta-s_{2}\eta)\frac{\partial\zeta}{\partial x_{j}}\right)( \mathbf{e}_{j}\cdot\mathbf{n}_{\Gamma})d\Gamma-s_{1}^{2}\int_{\Omega}\left( \left(\frac{\partial\eta}{\partial x_{j}}\right)^{2}+\left(\frac{\partial \zeta}{\partial x_{j}}\right)^{2}\right)dx\]
**Example 8**.: Let \(\Omega\subset\mathbb{R}^{m}\), \(\Gamma=\partial\Omega\). \(X=L^{2}(\Omega\to\mathbb{R}^{n})\), \(1\leq j\leq m\). Then
\[A\mathbf{y}=\frac{\partial^{\sigma}}{\partial x_{j}^{\sigma}}Q\mathbf{y}\]
is \((2,\psi)\)-dissipative, where \(Q\in\mathbb{R}^{n\times n}\) is a such matrix that
\[\begin{cases}(-1)^{\frac{\sigma}{2}}Q\leq 0,&\sigma=2l\\ Q=Q^{T},&\sigma=2l+1\end{cases}\]
Indeed, if \(\sigma=2l\),
\[\langle Ay,\mathbf{y}\rangle_{2} =\int_{\Omega}\left(\frac{\partial^{\sigma}}{\partial x_{j}^{ \sigma}}Q\mathbf{y}\cdot\mathbf{y}\right)dx=\sum_{\nu=0}^{l-1}\int_{\Gamma}(-1 )^{\nu}\left(\frac{\partial^{\sigma-1-\nu}}{\partial x_{j}^{\sigma-1-\nu}}Q \mathbf{y}\cdot\frac{\partial^{\nu}}{\partial x_{j}^{\nu}}\mathbf{y}\right)( \mathbf{e}_{j}\cdot\mathbf{n}_{\Gamma})\,d\Gamma+\] \[+\int_{\Omega}\left((-1)^{l}Q\frac{\partial^{l}}{\partial x_{j}^ {l}}\mathbf{y}\cdot\frac{\partial^{l}}{\partial x_{j}^{l}}\mathbf{y}\right)dx \leq\sum_{\nu=0}^{l-1}\int_{\Gamma}(-1)^{\nu}\left(\frac{\partial^{\sigma-1-\nu}}{ \partial x_{j}^{\sigma-1-\nu}}Q\mathbf{y}\cdot\frac{\partial^{\nu}}{\partial x_ {j}^{\nu}}\mathbf{y}\right)(\mathbf{e}_{j}\cdot\mathbf{n}_{\Gamma})\,d\Gamma\]
If \(\sigma=2l+1\),
\[\langle A\mathbf{y},\mathbf{y}\rangle_{2} =\int_{\Omega}\left(\frac{\partial^{\sigma}}{\partial x_{j}^{\sigma }}Q\mathbf{y}\cdot\mathbf{y}\right)dx=\sum_{\nu=0}^{l-1}\int_{\Gamma}(-1)^{\nu} \left(\frac{\partial^{\sigma-1-\nu}}{\partial x_{j}^{\sigma-1-\nu}}Q\mathbf{y} \cdot\frac{\partial^{\nu}}{\partial x_{j}^{\nu}}\mathbf{y}\right)(\mathbf{e}_ {j}\cdot\mathbf{n}_{\Gamma})\,d\Gamma+\] \[+(-1)^{l}\int_{\Omega}\left(\frac{\partial^{l+1}}{\partial x_{j} ^{l}}Q\mathbf{y}\cdot\frac{\partial^{l}}{\partial x_{j}^{l}}\mathbf{y}\right)dx\]
Since
\[\int_{\Omega}\left(\frac{\partial^{l+1}}{\partial x_{j}^{l+1}}Q \mathbf{y}\cdot\frac{\partial^{l}}{\partial x_{j}^{l}}\mathbf{y}\right)dx =\int_{\Gamma}\left(\frac{\partial^{l}}{\partial x_{j}^{l}}Q \mathbf{y}\cdot\frac{\partial^{l}}{\partial x_{j}^{l}}\mathbf{y}\right)( \mathbf{e}_{j}\cdot\mathbf{n}_{\Gamma})\,d\Gamma-\] \[-\int_{\Omega}\left(\frac{\partial^{l}}{\partial x_{j}^{l}}Q \mathbf{y}\cdot\frac{\partial^{l+1}}{\partial x_{j}^{l+1}}\mathbf{y}\right)dx\]
and \(Q=Q^{T}\), then
\[\langle A\mathbf{y},\mathbf{y}\rangle_{2} =\sum_{\nu=0}^{l-1}\int_{\Gamma}(-1)^{\nu}\left(\frac{\partial^{ \sigma-1-\nu}}{\partial x_{j}^{\sigma-1-\nu}}Q\mathbf{y}\cdot\frac{\partial^{ \nu}}{\partial x_{j}^{\nu}}\mathbf{y}\right)(\mathbf{e}_{j}\cdot\mathbf{n}_{ \Gamma})\,d\Gamma+\] \[+\frac{(-1)^{l}}{2}\int_{\Gamma}\left(\frac{\partial^{l}}{ \partial x_{j}^{l}}Q\mathbf{y}\cdot\frac{\partial^{l}}{\partial x_{j}^{l}} \mathbf{y}\right)(\mathbf{e}_{j}\cdot\mathbf{n}_{\Gamma})\,d\Gamma\]
Now, let us consider nonlinear submonotone operators.
**Example 9**.: Let \(X=L^{p}((a;b))\) consist of real-valued functions, \(p>1\). We will consider the following nonlinearity
\[A(y)=syy^{\prime},\]
where \(s\in\mathbb{R}\). If \(\hat{y}=y_{1}-y_{2}\), then we have
\[A(y_{1})-A(y_{2})=s\hat{y}\hat{y}^{\prime}+s\hat{y}y_{2}^{\prime}+sy_{2}\hat{ y}^{\prime}.\]
Hence, the desired estimation is
\[\langle A(y_{1})-A(y_{2}),\hat{y}\rangle_{p}=\int_{a}^{b}(s\hat{y}\hat{y}^{\prime}+s\hat{y}y_{2}^{\prime}+sy_{2}\hat{y}^{\prime})|\hat{y}|^{p-1}\mathrm{sgn}(\hat{y})dx\leq\] \[\leq\frac{s}{p+1}|\hat{y}|^{p+1}\Big{|}_{a}^{b}+|s|\,\|y_{2}\|_{C^{1}([a;b])}\|\hat{y}\|_{L^{p}((a;b))}^{p}+\frac{s}{p}|\hat{y}|^{p}y_{2}\Big{|}_{a}^{b}-\frac{s}{p}\int_{a}^{b}y_{2}^{\prime}|\hat{y}|^{p}dx\leq\] \[\leq\left(\frac{s}{p+1}|\hat{y}|^{p+1}+\frac{s}{p}|\hat{y}|^{p}y_{2}\right)\Big{|}_{a}^{b}+|s|\left(1+\frac{1}{p}\right)\|y_{2}\|_{C^{1}([a;b])}\|\hat{y}\|_{L^{p}((a;b))}^{p}.\]
The following example is a multidimensional case of example 9. However, its negative is submonotone on a subspace.
**Example 10**.: Here we consider the nonlinearity of the Navier-Stokes equation for an incompressible fluid in the case of \(L^{p}(\Omega\to\mathbb{R}^{m})\), \(p>1\). Let \(\Omega\subset\mathbb{R}^{m}\), \(\Gamma=\partial\Omega\), \(X=L^{p}(\Omega\to\mathbb{R}^{m})\), and let
\[A(\mathbf{y})=s(\mathbf{y}\cdot\nabla)\mathbf{y},\]
where \(s\in\mathbb{R}\). Let also \(\mathcal{M}\) contain the fields \(\mathbf{y}\) with \(\nabla\cdot\mathbf{y}=0\).
We put \(\hat{\mathbf{y}}=\chi-\mathbf{y}\), \(\mathbf{y}\in\mathcal{M}\). Additionally, we consider residual \(\mathcal{R}_{div}=\nabla\cdot\hat{\mathbf{y}}\). Then
\[A(\chi)-A(\mathbf{y})=(\hat{\mathbf{y}}\cdot\nabla)\hat{\mathbf{y}}+(\hat{ \mathbf{y}}\cdot\nabla)\mathbf{y}+(\mathbf{y}\cdot\nabla)\hat{\mathbf{y}}\]
For the first term, we have
\[\langle(\hat{\mathbf{y}}\cdot\nabla)\hat{\mathbf{y}},\hat{ \mathbf{y}}\rangle_{p}=\sum_{i=1}^{m}\int_{\Omega}|\hat{y}_{i}|^{p-1}\mathrm{ sgn}(\hat{y}_{i})\sum_{j=1}^{m}\hat{y}_{j}\frac{\partial\hat{y}_{i}}{ \partial x_{j}}dx=\] \[=\frac{1}{p}\int_{\Gamma}|\hat{\mathbf{y}}|^{p}\left(\hat{ \mathbf{y}}\cdot\mathbf{n}_{\Gamma}\right)d\Gamma-\frac{1}{p}\int_{\Omega}|\hat {\mathbf{y}}|^{p}\left(\nabla\cdot\hat{\mathbf{y}}\right)dx\leq\] \[\leq\frac{1}{p}\int_{\Gamma}|\hat{\mathbf{y}}|^{p}\left(\hat{ \mathbf{y}}\cdot\mathbf{n}_{\Gamma}\right)d\Gamma+\frac{\|\chi\|_{C(\overline{ \Omega})}+\|\mathbf{y}\|_{C(\overline{\Omega})}}{p^{2}}\left((p-1)\|\hat{ \mathbf{y}}\|_{L^{p}(\Omega)}^{p}+\|\mathcal{R}_{div}\|_{L^{p}(\Omega)}^{p}\right)\]
For the second term,
\[\langle(\hat{\mathbf{y}}\cdot\nabla)\mathbf{y},\hat{\mathbf{y}}\rangle_ {p}=\sum_{i=1}^{m}\int_{\Omega}|\hat{y}_{i}|^{p-1}\mathrm{sgn}(\hat{y}_{i})\sum_ {j=1}^{m}\hat{y}_{j}\frac{\partial y_{i}}{\partial x_{j}}dx\leq\] \[\leq\sum_{i=1}^{m}\sum_{j=1}^{m}\|\nabla\mathbf{y}\|_{L^{\infty}( \Omega)}\int_{\Omega}|\hat{y}_{i}|^{p-1}|\hat{y}_{j}|dx\leq m\|\nabla\mathbf{y }\|_{L^{\infty}(\Omega)}\|\hat{\mathbf{y}}\|_{L^{p}(\Omega)}^{p}\]
Finally,
\[\langle(\mathbf{y}\cdot\nabla)\hat{\mathbf{y}},\hat{\mathbf{y}}\rangle_{p}=\sum_{i=1}^{m}\int_{\Omega}|\hat{y}_{i}|^{p-1}\mathrm{sgn}(\hat{y}_{i})\sum_{j=1}^{m}y_{j}\frac{\partial\hat{y}_{i}}{\partial x_{j}}dx=\frac{1}{p}\int_{\Gamma}|\hat{\mathbf{y}}|^{p}\left(\mathbf{y}\cdot\mathbf{n}_{\Gamma}\right)d\Gamma-\frac{1}{p}\int_{\Omega}|\hat{\mathbf{y}}|^{p}\left(\nabla\cdot\mathbf{y}\right)dx=\frac{1}{p}\int_{\Gamma}|\hat{\mathbf{y}}|^{p}\left(\mathbf{y}\cdot\mathbf{n}_{\Gamma}\right)d\Gamma,\]
since \(\nabla\cdot\mathbf{y}=0\) for \(\mathbf{y}\in\mathcal{M}\).
Another example of an operator with the submonotone negative is the \(p\)-Laplace operator.
**Example 11**.: Let \(\Omega\subset\mathbb{R}^{m}\), \(\Gamma=\partial\Omega\), \(X=L^{p}(\Omega\rightarrow\mathbb{R})\), \(p\geq 2\), and let
\[A(y)=\nabla\cdot\left(|\nabla y|^{p-2}\nabla y\right).\]
Let again \(\hat{y}=y_{1}-y_{2}\). Then
\[\langle A(y_{1})-A(y_{2}),\hat{y}\rangle_{p}=\int_{\Omega}\nabla \cdot\left(|\nabla y_{1}|^{p-2}\nabla y_{1}-|\nabla y_{2}|^{p-2}\nabla y_{2} \right)|\hat{y}|^{p-1}\mathrm{sgn}(\hat{y})dx=\] \[=\int_{\Gamma}|\hat{y}|^{p-1}\mathrm{sgn}(\hat{y})\left(\left(| \nabla y_{1}|^{p-2}\nabla y_{1}-|\nabla y_{2}|^{p-2}\nabla y_{2}\right)\cdot \mathbf{n}_{\Gamma}\right)d\Gamma-\] \[-(p-1)\int_{\Omega}|\hat{y}|^{p-2}\left(|\nabla y_{1}|^{p-2} \nabla y_{1}-|\nabla y_{2}|^{p-2}\nabla y_{2}\right)\cdot(\nabla y_{1}-\nabla y _{2})dx\]
With Cauchy-Schwartz in \(\mathbb{R}^{m}\) and Young inequality,
\[-\left(|\nabla y_{1}|^{p-2}\nabla y_{1}-|\nabla y_{2}|^{p-2}\nabla y_{2}\right)\cdot(\nabla y_{1}-\nabla y_{2})=\] \[=-|\nabla y_{1}|^{p}-|\nabla y_{2}|^{p}+\left(|\nabla y_{1}|^{p-2}+|\nabla y_{2}|^{p-2}\right)(\nabla y_{1}\cdot\nabla y_{2})\leq\] \[\leq-|\nabla y_{1}|^{p}-|\nabla y_{2}|^{p}+|\nabla y_{1}|^{p-1}|\nabla y_{2}|+|\nabla y_{2}|^{p-1}|\nabla y_{1}|\leq 0\]
Hence,
\[\langle A(y_{1})-A(y_{2}),\hat{y}\rangle_{p}\leq\int_{\Gamma}|\hat{y}|^{p-1} \mathrm{sgn}(\hat{y})\left(\left(|\nabla y_{1}|^{p-2}\nabla y_{1}-|\nabla y_{ 2}|^{p-2}\nabla y_{2}\right)\cdot\mathbf{n}_{\Gamma}\right)d\Gamma\]
Finally, we consider the following operator.
**Example 12**.: Let \(\Omega\subset\mathbb{R}^{m}\), \(\Gamma=\partial\Omega\), \(X=L^{p}(\Omega\rightarrow\mathbb{R})\), \(p>1\), and let
\[A(y)=-y|y|^{p-2}\]
As in example 11, one can show that
\[\langle A(y_{1})-A(y_{2}),y_{1}-y_{2}\rangle_{p}\leq 0\]
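As an illustrative numerical spot-check (added here, not part of the original argument), the pointwise monotonicity of \(t\mapsto t|t|^{p-2}\) behind this inequality can be probed on random discretized functions:

```python
import numpy as np

# Spot-check of Example 12: <A(y1) - A(y2), y1 - y2>_p <= 0 for A(y) = -y|y|^{p-2},
# with L^p(Omega) discretized on a grid (real-valued case).
rng = np.random.default_rng(3)
p, m = 3.5, 5_000

A = lambda y: -y * np.abs(y) ** (p - 2)
form_p = lambda a, b: np.mean(np.abs(b) ** (p - 1) * np.sign(b) * a)   # real p-form

worst = max(
    form_p(A(y1) - A(y2), y1 - y2)
    for y1, y2 in (rng.standard_normal((2, m)) for _ in range(200))
)
print(worst)   # never exceeds 0 (up to floating-point noise)
```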
### Powered operators and submonotone operators w.r.t. norm
In [2], the authors also consider the one-dimensional Camassa-Holm equation that includes the mixed derivative \(\frac{\partial^{3}}{\partial t\partial x^{2}}\). We will briefly describe an extension of the results from the previous section to similar cases. For simplicity, we will consider submonotonicity and subordinancy on the whole domain. Also, we will deal with time-independent operators only.
**Definition 8**.: Let \(X\) be a \(\mathbb{R}\)-smooth Banach space, and \(\mathcal{D}(U)\subset X\) be a subspace. Let \(U:\mathcal{D}(U)\to X\) be a linear operator on \(X\). We will say that \(U\) is _\((p,\psi)\)-powered_, for some \(\psi:\left(\mathcal{D}(U)\right)^{4}\to\mathbb{R}\), if there exists an operator \(U^{\frac{1}{p}}:\mathcal{D}\left(U^{\frac{1}{p}}\right)\to X\) satisfying
\[\chi\in\mathcal{D}(U)\;\Rightarrow\;\chi\in\mathcal{D}\left(U^{ \frac{1}{p}}\right)\] \[\langle U(\chi_{1}-\chi_{2}),y_{1}-y_{2}\rangle_{p}=\psi(\chi_{1},\chi_{2},y_{1},y_{2})+\left\langle U^{\frac{1}{p}}(\chi_{1}-\chi_{2}),U^{ \frac{1}{p}}(y_{1}-y_{2})\right\rangle_{p},\;\chi_{1},\chi_{2},y_{1},y_{2}\in \mathcal{D}(U)\]
With powered operators, we can consider the following extensions of definition 4.
**Definition 9**.: Let \(X\) be a \(\mathbb{R}\)-smooth Banach space, \(\{U_{k}\}_{k=1}^{K}\) be a set of \((p,\psi_{k})\)-powered operators, and \(\mathcal{D}(U_{k})\subset\mathcal{D}(A)\subset\mathcal{D}\left(U_{k}^{\frac{1}{ p}}\right)\) be subspaces. We say that \(A\) is _(p,\(\psi\))-submonotone, w.r.t. norm \(\|\cdot\|_{\sum U_{k}^{\frac{1}{p}}}\)_ if
\[\langle A(\chi_{1}-\chi_{2}),\chi_{1}-\chi_{2}\rangle_{p}\leq\psi(\chi_{1}, \chi_{2})+\Lambda(\chi_{1},\chi_{2})\left(\|\chi_{1}-\chi_{2}\|^{p}+\sum_{k=1}^ {K}\left\|U_{k}^{\frac{1}{p}}\chi_{1}-U_{k}^{\frac{1}{p}}\chi_{2}\right\|^{p} \right),\]
for all \(\chi_{1},\chi_{2}\in\mathcal{D}(A)\) and some \(\Lambda(\chi_{1},\chi_{2})>0\).
Also, we need to describe \(\psi\) and \(\psi_{k}\) functions similarly to definition 5.
**Definition 10**.: Let \(\mathcal{D}(B)\subset X\) be a subspace. Let \(B:\mathcal{D}(B)\to Y\) be an operator on \(X\), \(\psi:(\mathcal{D}(\psi))^{2}\to\mathbb{R}\) be a function, where \(\mathcal{D}(\psi)\subset\mathcal{D}(B)\), \(\{U_{k}\}_{k=1}^{K}\) be a set of \((p,\psi_{k})\)-powered operators, and \(\mathcal{D}(U_{k})\subset\mathcal{D}(\psi)\) be subspaces. We say that \(\psi\) is _subordinate to \(B\) w.r.t. \(U_{1},\ldots,U_{K}\)_ if there exist \(\rho_{0},\ldots,\rho_{K}\in C(Y\to[0;+\infty))\) such that \(s\to 0\ \Rightarrow\ \rho_{k}(s)\to 0\), \(0\leq k\leq K\), and
\[|\psi(\chi,y)|\leq\gamma_{0}(\chi,y)\rho_{0}(B(\chi)-B(y))+\sum_{k=1}^{K} \gamma_{k}(\chi,y)\rho_{k}(B(U_{k}\chi)-B(U_{k}y)),\]
for every \(\chi\in\mathcal{D}(\psi)\), \(y\in\mathcal{M}\), and some \(\gamma_{k}(\chi,y)\geq 0\).
**Definition 11**.: Let \(\mathcal{D}(B)\subset X\) be a subspace. Let \(B:\mathcal{D}(B)\to Y\) be an operator on \(X\), \(\psi:(\mathcal{D}(\psi))^{4}\rightarrow\mathbb{R}\) be a function, where \(\mathcal{D}(\psi)\subset\mathcal{D}(B)\) is a subspace. We say that \(\psi\) is _subordinate to \(B\)_ if there exists \(\rho:C(Y\rightarrow[0;+\infty))\) such that \(s\to 0\ \Rightarrow\ \rho(s)\to 0\) and
\[|\psi(\chi_{1},\chi_{2},y_{1},y_{2})|\leq\gamma(\chi_{1},\chi_{2},y_{1},y_{2}) \rho(B(y_{1})-B(y_{2})),\]
for every \(\chi_{1},\chi_{2},y_{1},y_{2}\in\mathcal{D}(\psi)\), and some \(\gamma(\chi_{1},\chi_{2},y_{1},y_{2})\geq 0\).
Before considering the examples, we again formulate the following property.
**Statement 2**.: _Let \(X\) be a \(\mathbb{R}\)-smooth Banach space. Moreover, let \(D(\|\cdot\|)(\cdot,\chi)\) be continuous on ring \(\{y\in X\mid r_{0}<\|y\|<r_{1}\}\) for every \(0<r_{0}<r_{1}\) and \(\chi\in X\). If \(U\) is \((p,\psi)\)-powered, there exist closures \(\overline{U^{\frac{1}{p}}}\) and \(\overline{U}\) with property_
\[\chi_{n}\rightarrow\chi,\ U\chi_{n}\rightarrow\overline{U}\chi\ \Rightarrow\ U^{\frac{1}{p}}\chi_{n} \rightarrow\overline{U^{\frac{1}{p}}}\chi,\]
_and \(\psi\) is continuous w.r.t. \(\overline{U}\)-norm, then \(\overline{U}\) is \((p,\psi)\)-powered and \(\overline{U}^{\frac{1}{p}}=\overline{U^{\frac{1}{p}}}\)._
**Remark 6**.: One can also consider properties of submonotone w.r.t. norm operator, similar to statement 1.
We start with \((p,\psi)\)-powered operators.
**Example 13**.: Let \(X=L^{p}(\Omega\rightarrow\mathbb{R}^{n})\), and \(U=\operatorname{diag}(\mu_{1},\ldots,\mu_{n})\), \(\mu_{i}>0\), that is
\[U:\mathbf{y}=(y_{1},\ldots,y_{n})^{T}\mapsto(\mu_{1}y_{1},\ldots,\mu_{n}y_{n} )^{T}=U\mathbf{y},\ \forall\mathbf{y}\in L^{p}(\Omega\rightarrow\mathbb{R}^{n})\]
Then \(U^{\frac{1}{p}}=\operatorname{diag}\left(\mu_{1}^{\frac{1}{p}},\ldots,\mu_{n}^{ \frac{1}{p}}\right)\) and \(\psi\equiv 0\). Such operators appear, for instance, in the Maxwell equation.
If \(p=2\), then \(U\) can be any symmetric, positively defined matrix.
**Example 14**.: Let \(X=L^{2}((a;b))\) (for simplicity, \(X\) consists of real-valued functions), \(U=(-1)^{\sigma}\frac{\partial^{2\sigma}}{\partial x^{2\sigma}}\). Then \(U\) is \((2,\psi)\)-powered and \(\sqrt{U}=\frac{\partial^{\sigma}}{\partial x^{\sigma}}\). Indeed,
\[\langle U\chi,y\rangle_{2}=\left\langle(-1)^{\sigma}\chi^{(2\sigma)},y\right\rangle _{2}=\sum_{\nu=1}^{\sigma}(-1)^{\sigma+\nu-1}\chi^{(2\sigma-\nu)}y^{(\nu)} \Big{|}_{a}^{b}+\left\langle\chi^{(\sigma)},y^{(\sigma)}\right\rangle_{2}\]
Let us also consider an operator whose negative is submonotone w.r.t. Sobolev norm.
**Example 15**.: Let \(X=L^{2}((a;b))\) consist of real-valued functions. We consider the following nonlinearity
\[A(y)=\frac{\frac{d}{dx}\left(|y|^{2}y^{(\sigma)}\right)}{y}=2y^{\prime}y^{( \sigma)}+yy^{(\sigma+1)},\]
where \(\sigma\geq 0\). Let us show that
\[\langle A(y_{1})-A(y_{2}),\hat{y}\rangle_{2}\leq\psi(\hat{y})+\Lambda(y_{1},y_{2})\|\hat{y}\|_{H^{\left\lceil\frac{\sigma}{2}\right\rceil}((a;b))}^{2},\;\hat{y}=y_{1}-y_{2} \tag{10}\]
We have
\[A(y_{1})-A(y_{2}) =2\left(\hat{y}^{\prime}\hat{y}^{(\sigma)}+y_{2}^{\prime}\hat{y}^ {(\sigma)}+\hat{y}^{\prime}y_{2}^{(\sigma)}\right)+\hat{y}\hat{y}^{(\sigma+1) }+y_{2}\hat{y}^{(\sigma+1)}+\hat{y}y_{2}^{(\sigma+1)}=\] \[=A(\hat{y})+2\left(y_{2}^{\prime}\hat{y}^{(\sigma)}+\hat{y}^{ \prime}y_{2}^{(\sigma)}\right)+y_{2}\hat{y}^{(\sigma+1)}+\hat{y}y_{2}^{( \sigma+1)}\]
Hence, we need to estimate \(\langle A(\hat{y}),\hat{y}\rangle_{2}\) and four terms of the form \(\langle\chi\hat{y}^{(\tilde{\sigma})},\hat{y}\rangle_{2}\), \(0\leq\tilde{\sigma}\leq\sigma+1\), \(\chi\in C^{\sigma+1-\left[\frac{\tilde{\sigma}}{2}\right]}([a;b])\). First,
\[\langle A(\hat{y}),\hat{y}\rangle_{2}=\hat{y}^{2}\hat{y}^{(\sigma)}\Big{|}_{a }^{b},\]
where \(y^{2}y^{(\sigma)}|_{a}^{b}\) is subordinate, for instance, to the periodic boundary conditions operator \(\left(y\Big{|}_{a}^{b},\ldots,\;y^{(\sigma)}\Big{|}_{a}^{b}\right)^{T}\).
Let us show that
\[\left|\langle\chi\hat{y}^{(\tilde{\sigma})},\hat{y}\rangle_{2}\right|\leq\psi(\hat{y})+\Lambda\big(\|\chi\|_{C^{\sigma+1-\left[\frac{\tilde{\sigma}}{2}\right]}([a;b])}\big)\|\hat{y}\|_{H^{\left[\frac{\tilde{\sigma}}{2}\right]}((a;b))}^{2}\]
where \(\psi(y)\) is subordinate to the periodic boundary conditions operator.
We consider two possible cases. First, let \(\tilde{\sigma}=2l\). Then
\[\left|\langle\chi\hat{y}^{(\tilde{\sigma})},\hat{y}\rangle_{2}\right|=\int_{a }^{b}\chi\hat{y}^{(2l)}\hat{y}dx=\sum_{\nu=1}^{l}(-1)^{\nu-1}\hat{y}^{(2l-\nu )}\left(\chi\hat{y}\right)^{(\nu)}\Big{|}_{a}^{b}+(-1)^{l}\int_{a}^{b}\hat{y} ^{(l)}\left(\chi\hat{y}\right)^{(l)}dx\]
First sum is subordinate to the periodic boundary conditions operator. It remains to consider
\[\left|\int_{a}^{b}\hat{y}^{(l)}\left(\chi\hat{y}\right)^{(l)}dx\right|=\left| \int_{a}^{b}\hat{y}^{(l)}\sum_{\nu=0}^{l}\binom{l}{\nu}\chi^{(l-\nu)}\hat{y}^ {(\nu)}dx\right|\leq 2^{l}\|\chi\|_{C^{l}([a;b])}\|\hat{y}\|_{H^{l}((a;b))}^{2}.\]
Now, let \(\tilde{\sigma}=2l+1\). Then, similarly,
\[\left|\langle\chi\hat{y}^{(\tilde{\sigma})},\hat{y}\rangle_{2}\right| =\int_{a}^{b}\chi\hat{y}^{(2l+1)}\hat{y}dx=\sum_{\nu=1}^{l+1}(-1)^{\nu-1}\hat{y}^{(2l+1-\nu)}\left(\chi\hat{y}\right)^{(\nu)}\Big{|}_{a}^{b}+\] \[+(-1)^{l+1}\int_{a}^{b}\hat{y}^{(l)}\left(\chi\hat{y}\right)^{(l+1)}dx\]
Again, we need to estimate
\[\left|\int_{a}^{b}\hat{y}^{(l)}\left(\chi\hat{y}\right)^{(l+1)}dx\right|= \left|\int_{a}^{b}\hat{y}^{(l)}\sum_{\nu=0}^{l+1}\binom{l+1}{\nu}\chi^{(l+1- \nu)}\hat{y}^{(\nu)}dx\right|.\]
If \(\nu\leq l\), then we estimate the following expression as in the previous case,
\[\left|\int_{a}^{b}\hat{y}^{(l)}\chi^{(l+1-\nu)}\hat{y}^{(\nu)}dx\right|\leq\| \chi\|_{C^{l+1}([a;b])}\|\hat{y}\|_{H^{l}((a;b))}^{2}\]
It remains to estimate
\[\left|\int_{a}^{b}\hat{y}^{(l)}\chi\hat{y}^{(l+1)}dx\right| \leq\left|\chi\frac{\left(\hat{y}^{(l)}\right)^{2}}{2}\right|_{a }^{b}\right|+\left|\int_{a}^{b}\frac{\left(\hat{y}^{(l)}\right)^{2}}{2}\chi^{ \prime}dx\right|\leq\] \[\leq\left|\chi\frac{\left(\hat{y}^{(l)}\right)^{2}}{2}\right|_{a }^{b}\right|+\frac{1}{2}\|\chi\|_{C^{1}([a;b])}\|\hat{y}\|_{H^{l}((a;b))}^{2}\]
Hence, in both cases, we obtain the desired estimation.
**Remark 7**.: One can consider an operator in \(L^{p}((a;b))\), \(p>1\), similar to example 15. Indeed, let
\[A(y)=\frac{\frac{d}{dx}\left(|y|^{p}y^{(\sigma)}\right)}{|y|^{p-1}{\rm sgn}(y)}=py^{\prime}y^{(\sigma)}+yy^{(\sigma+1)},\]
where \(\sigma\geq 0\), and if \(p\) is not an even integer, \(\lceil\frac{\sigma}{2}\rceil\leq p-1\).
With Faa di Bruno's formula, one can show that \(-A(y)\) is \((p,\psi)\) submonotone w.r.t. \(W^{\left\lceil\frac{\sigma}{2}\right\rceil,p}((a;b))\) norm, that is,
\[\langle A(y_{1})-A(y_{2}),\hat{y}\rangle_{p}\leq\psi(\hat{y})+\Lambda(y_{1},y_ {2})\|\hat{y}\|_{W^{\left\lceil\frac{\sigma}{2}\right\rceil,\cdot(a;b))}^{p}},\;\hat{y}=y_{1}-y_{2} \tag{11}\]
However, for the case \(p\neq 2\), the \((p,\psi)\)-power of the differentiation operator (as in example 14) cannot be built. Alternatively, we can try to consider operators \(U_{1}\) and \(U_{2}\) such that
\[\left\langle U_{1}\left(\frac{dU_{2}(y)}{dt}\right),y\right\rangle_{p}=\psi(\dots)+\left\langle\frac{dU^{\frac{1}{p}}(y)}{dt},U^{\frac{1}{p}}(y)\right\rangle_{p},\]
however, such constructions lead us to the nonlinear and artificial operators. Really, one can check that even for an operator \(U^{\frac{1}{p}}=\frac{\partial^{\sigma}}{\partial x^{\sigma}}\), in \(L^{p}((a;b))\) space, we have
\[\left\langle\frac{\frac{d}{dt}\frac{\partial^{\sigma}}{\partial x^{\sigma}}({ \rm sgn}(y^{(\sigma)})|y^{(\sigma)}|^{p-1})}{(-1)^{\sigma}(p-1)|y|^{p-2}},y \right\rangle_{p}=\psi\left(y,\dots,y^{(\sigma)},\frac{dy}{dt},\dots,\frac{dy ^{(\sigma)}}{dt}\right)+\left\langle\frac{dy^{(\sigma)}}{dt},y^{(\sigma)} \right\rangle_{p}\]
### Coercive operators
In [17], the authors considered the conditional Lipschitz condition of the inverse operator. We consider instead the following
**Definition 12**.: Let \(X\) be a \(\mathbb{R}\)-smooth Banach space, let \(\mathcal{D}(A)\subset X\) be a subspace, \(A:\mathcal{D}(A)\to X\), and let \(\mathcal{M}\subset\mathcal{D}(A)\) be a subspace. We will say that \(A\) is \((p,\psi)\)_-coercive on \(\mathcal{M}\)_, for some \(\psi:\left(\mathcal{D}(A)\right)^{2}\to\mathbb{R}\) and \(p>1\), if it is true that
\[\|\chi-y\|^{p}\leq\psi(\chi,y)+\Lambda(\chi,y)\langle A(\chi)-A(y),\chi-y \rangle_{p}\]
for every \(\chi\in\mathcal{D}(A)\), \(y\in\mathcal{M}\), and some \(\Lambda(\chi,y)\geq 0\).
Again, we formulate the following properties
**Statement 3**.: _Let \(X\) be a \(\mathbb{R}\)-smooth Banach space. The following two properties hold_
1. _If_ \(A_{1,2}\) _are_ \((p,\psi_{1,2})\)_-coercive on_ \(\mathcal{M}\)_, then_ \[A:=s_{1}A_{1}+s_{2}A_{2}\] _is_ \((p,\psi)\)_-coercive on_ \(\mathcal{M}\) _for_ \(\psi=s_{1}\psi_{1}+s_{2}\psi_{2}\) _and every_ \(s_{1,2}\geq 0\)_._
2. _Moreover, let_ \(D(\|\cdot\|)(\cdot,\chi)\) _be continuous on ring_ \(\{y\in X\mid r_{0}<\|y\|<r_{1}\}\) _for every_ \(0<r_{0}<r_{1}\) _and_ \(\chi\in X\)_. If_ \(A\) _is_ \((p,\psi)\)_-coercive, there exists a closure_ \(\overline{A}\)_, and_ \(\psi\)_,_ \(\Lambda\) _are continuous w.r.t._ \(\overline{A}\)_-norm, then_ \(\overline{A}\) _is_ \((p,\psi)\)_-coercive._
**Example 16**.: Let \(\Omega\subset\mathbb{R}^{m}\), \(\Gamma=\partial\Omega\), \(X=L^{p}(\Omega)\), \(p\geq 2\). Let
\[A_{0}(y)=-\nabla\cdot\left(|\nabla y|^{p-2}\nabla y\right)\]
be a minus \(p\)-Laplace operator, and
\[A_{1}(y)=|y|^{p-2}y.\]
Then
\[A(y)=A_{0}(y)+q_{1}A_{1}(y)+q_{2}y,\]
is \((p,\psi)\)-coercive, for \(q_{1}\geq 0\) and \(q_{2}>0\). In examples 11 and 12, we showed that
\[\langle A_{0}(y_{1})-A_{0}(y_{2}),y_{1}-y_{2}\rangle_{p}\geq-\int_{\Gamma}| \hat{y}|^{p-1}{\rm sgn}(\hat{y})\left(\left(|\nabla y_{1}|^{p-2}\nabla y_{1}-| \nabla y_{2}|^{p-2}\nabla y_{2}\right)\cdot{\bf n}_{\Gamma}\right)d\Gamma,\]
and
\[\langle A_{1}(y_{1})-A_{1}(y_{2}),y_{1}-y_{2}\rangle_{p}\geq 0,\]
where \(\hat{y}=y_{1}-y_{2}\). Hence,
\[\|y_{1}-y_{2}\|^{p} \leq\frac{1}{q_{2}}\int_{\Gamma}|\hat{y}|^{p-1}{\rm sgn}(\hat{y})\left(\left(|\nabla y_{1}|^{p-2}\nabla y_{1}-|\nabla y_{2}|^{p-2}\nabla y_{2}\right)\cdot{\bf n}_{\Gamma}\right)d\Gamma+\] \[+\frac{1}{q_{2}}\langle A(y_{1})-A(y_{2}),y_{1}-y_{2}\rangle_{p}\]
## 3 PINN's error estimate
We will describe only the parabolic-type equation in detail. In the other cases, for simplicity, we omit some conditions. For instance, we give an estimation in terms of the training error only for the parabolic-type equation.
### Parabolic-type equation
Given the following problem (12)-(14) in a \(\mathbb{R}\)-smooth Banach space \(X\)
\[\frac{dw}{dt}=A(t)(w) \tag{12}\]
with initial condition
\[w\Big{|}_{t=0}=w_{0} \tag{13}\]
and boundary condition
\[B(t)(w)=h_{b}, \tag{14}\]
where \(A(t):\mathcal{D}(A(t))\to X\) and \(B(t):\mathcal{D}(B(t))\to Y\) are nonlinear operators, \(\mathcal{D}(A(t))\subset\mathcal{D}(B(t))\subset X\) are subspaces, \(Y\) is a Banach space. We also consider the following subspace of \(\mathcal{D}(A(t))\)
\[\mathcal{M}(A(t),B(t)):=(B(t))^{-1}(h_{b}(t))\cap\mathcal{D}(A(t)) \tag{15}\]
We assume solution \(w\) of (12)-(14) to be continuous on \([0;T]\), continuously differentiable on \((0;T]\), and \(w(t)\in\mathcal{M}(A(t),B(t))\) for \(0<t\leq T\).
Let us consider the neural network \(w_{\theta}\) with parameter \(\theta\), approximating solution \(w\) of (12)-(14), and the following PINN residuals
\[\mathcal{R}_{pde}=\frac{dw_{\theta}}{dt}-A(t)(w_{\theta}), \tag{16}\] \[\mathcal{R}_{in}=w_{\theta}|_{t=0}-w_{0},\] \[\mathcal{R}_{bnd}=B(t)(w_{\theta})-h_{b}.\]
Also, if we take \(\hat{w}:=w_{\theta}-w\), then
\[\mathcal{R}_{pde}=\frac{d\hat{w}}{dt}-A(t)(w_{\theta})+A(t)(w),\] \[\mathcal{R}_{in}=\hat{w}|_{t=0},\] \[\mathcal{R}_{bnd}=B(t)(w_{\theta})-B(t)(w).\]
We are interested in the following total error
\[\mathcal{E}:=\|\hat{w}\|_{L^{q}((0;T)\to X)}^{q} \tag{17}\]
Also, we need approximation rules for the norms of the residuals (quadrature rules in particular); we need three types of rules, one for each of the residuals.
1. Let \(\mathcal{T}_{pde}\subset[0;T]\times\mathbb{R}^{m}\), and let there exist \(\mathcal{Q}_{N,pde}:L^{p}((0;T)\to X)\times\mathcal{T}_{pde}^{N}\to \mathbb{R}\) such that \[\big{|}\|w\|_{L^{p}((0;T)\to X)}-\mathcal{Q}_{N,pde}(w,(t_{1},x_{1}),\ldots, (t_{N},x_{N}))\big{|}\leq\beta_{quad,pde}(w)N^{-\alpha_{pde}},\] for some \(\beta_{quad,pde}(w)\geq 0\), \(\alpha_{pde}>0\) and every \(N\in\mathbb{N}\), \(w\in L^{p}((0;T)\to X)\), and \(\{(t_{i},x_{i})\}_{i=1}^{N}\subset\mathcal{T}_{pde}\).
2. Let \(\mathcal{T}_{in}\subset\mathbb{R}^{m}\), and let there exist \(\mathcal{Q}_{N,in}:X\times\mathcal{T}_{in}^{N}\to\mathbb{R}\) such that \[\big{|}\|y\|_{X}-\mathcal{Q}_{N,in}(y,x_{1},\ldots,x_{N})\big{|}\leq\beta_{quad,in}(y)N^{-\alpha_{in}}\] for some \(\beta_{quad,in}(y)\geq 0\), \(\alpha_{in}>0\) and every \(N\in\mathbb{N}\), \(y\in X\), and \(\{x_{i}\}_{i=1}^{N}\subset\mathcal{T}_{in}\).
3. Let \(\mathcal{T}_{bnd}\subset[0;T]\), and let there exist \(\mathcal{Q}_{N,bnd}:L^{1}((0;T))\times\mathcal{T}_{bnd}^{N}\to\mathbb{R}\) such that \[\big{|}\|g\|_{L^{1}((0;T))}-\mathcal{Q}_{N,bnd}(g,t_{1},\ldots,t_{N})\big{|} \leq\beta_{quad,bnd}(g)N^{-\alpha_{bnd}}\] for some \(\beta_{quad,bnd}(g)\geq 0\), \(\alpha_{bnd}>0\) and every \(N\in\mathbb{N}\), \(g\in L^{1}((0;T))\), and \(\{t_{i}\}_{i=1}^{N}\subset\mathcal{T}_{bnd}\).
Given training sets \(\{(t_{i},x_{i})\}_{i=1}^{N_{\textit{pde}}}\subset\mathcal{T}_{\textit{pde}}\), \(\{x_{i}\}_{i=1}^{N_{\textit{in}}}\subset\mathcal{T}_{\textit{in}}\), \(\{t_{i}\}_{i=1}^{N_{\textit{bnd}}}\subset\mathcal{T}_{\textit{bnd}}\), we have the following training errors
\[\mathcal{E}_{T,\textit{pde}}=\mathcal{Q}_{N_{\textit{pde}},\textit{pde}}( \mathcal{R}_{\textit{pde}},(t_{1},x_{1}),\ldots,(t_{N_{\textit{pde}}},x_{N_{ \textit{pde}}})),\]
\[\mathcal{E}_{T,in}=\mathcal{Q}_{N_{\textit{in}},in}(\mathcal{R}_{in},x_{1}, \ldots,x_{N_{\textit{in}}}),\]
\[\mathcal{E}_{T,\textit{bnd}}=\mathcal{Q}_{N_{\textit{bnd}},\textit{bnd}}( \rho(\mathcal{R}_{\textit{bnd}}),t_{1},\ldots,t_{N_{\textit{bnd}}})\]
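For illustration, the following Python sketch shows how such training errors can be assembled from sampled residual values. It is not part of the estimates themselves: a plain Monte Carlo rule plays the role of \(\mathcal{Q}_{N}\), the spatial domain is a one-dimensional interval, and the residual callables are placeholders for a trained network.

```python
import numpy as np

def mc_lp_norm(values, volume, p):
    # Monte Carlo estimate of an L^p norm from residual samples on a domain of given volume.
    return (volume * np.mean(np.abs(values) ** p)) ** (1.0 / p)

def training_errors(r_pde, r_in, r_bnd, T, a, b, n_pde=1024, n_in=256, n_bnd=256, p=2.0):
    """r_pde(t, x), r_in(x), r_bnd(t) are callables returning residual values (assumed)."""
    rng = np.random.default_rng(0)
    t = rng.uniform(0.0, T, n_pde)
    x = rng.uniform(a, b, n_pde)
    xi = rng.uniform(a, b, n_in)
    tb = rng.uniform(0.0, T, n_bnd)
    e_pde = mc_lp_norm(r_pde(t, x), T * (b - a), p)   # ~ ||R_pde||_{L^p((0;T)->X)}
    e_in = mc_lp_norm(r_in(xi), b - a, p)             # ~ ||R_in||_X
    e_bnd = T * np.mean(np.abs(r_bnd(tb)))            # ~ ||rho(R_bnd)||_{L^1((0;T))}
    return e_pde, e_in, e_bnd

# Dummy residuals standing in for a trained PINN:
print(training_errors(lambda t, x: 0.01 * np.sin(x + t),
                      lambda x: 0.01 * np.cos(x),
                      lambda t: 0.005 * np.ones_like(t),
                      T=1.0, a=0.0, b=1.0))
```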
**Theorem 1**.: _Let \(w\) be a solution of problem (12)-(14), and let \(w_{\theta}\) be a neural network with residuals (16) and total error (17). Let, moreover, \(-A(t)\) be \((p,\psi(t))\)-submonotone on \(\mathcal{M}(A(t),B(t))\), with \(\psi(t)\) subordinate to \(B(t)\) on \(\mathcal{M}(A(t),B(t))\), for every \(t\in[0;T]\) (\(\mathcal{M}(A(t),B(t))\) is defined in (15)). Furthermore, let the respective \(\gamma(\cdot,w_{\theta}(\cdot),w(\cdot))\in C([0;T])\), \(\rho(t)\equiv\rho\), and \(\Lambda(\cdot,w_{\theta}(\cdot),w(\cdot))\in L^{1}((0;T))\). Then we have the estimate_
\[\mathcal{E}\leq\mathcal{C}^{\frac{q}{p}}\left(\frac{p(e^{\frac{q(p-1)T}{p}}-1) }{q(p-1)}\right)e^{q\|\Lambda(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{L^{1}((0;T ))}},\]
_where_
\[\mathcal{C}=\|\mathcal{R}_{in}\|^{p}+\|\mathcal{R}_{\textit{pde}}\|_{L^{p}((0 ;T)\to X)}^{p}+p\|\gamma(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{C([0;T])}\| \rho(\mathcal{R}_{\textit{bnd}})\|_{L^{1}((0;T))}\]
Proof.: With (8), we have
\[\frac{d}{dt}\|\hat{w}\|^{p}=p\left\langle\frac{d\hat{w}}{dt},\hat{w}\right\rangle _{p}=p\left\langle\mathcal{R}_{\textit{pde}},\hat{w}\right\rangle_{p}+p\left \langle A(t)(w_{\theta})-A(t)(w),\hat{w}\right\rangle_{p}\]
With properties of \(p\)-form and Young inequality,
\[p\left|\langle\mathcal{R}_{\textit{pde}},\hat{w}\rangle_{p}\right|\leq p\| \mathcal{R}_{\textit{pde}}\|\|\hat{w}\|^{p-1}\leq\|\mathcal{R}_{\textit{pde}} \|^{p}+(p-1)\|\hat{w}\|^{p}\]
Since \(-A(t)\) is \((p,\psi(t))\)-submonotone on \(\mathcal{M}(A(t),B(t))\), \(\psi(t)\) is subordinate to \(B(t)\) on \(\mathcal{M}(A(t),B(t))\), then
\[\langle A(t)(w_{\theta})-A(t)(w),\hat{w}\rangle_{p}\leq\psi(t,w(t),w_{ \theta}(t))+\Lambda(t,w_{\theta}(t),w(t))\|\hat{w}\|^{p}\leq\]
\[\leq\gamma(t,w_{\theta}(t),w(t))\rho(B(t)(w_{\theta})-B(t)(w))+\Lambda(t,w_{ \theta}(t),w(t))\|\hat{w}\|^{p}=\]
\[=\gamma(t,w_{\theta}(t),w(t))\rho(\mathcal{R}_{\textit{bnd}})+\Lambda(t,w_{ \theta}(t),w(t))\|\hat{w}\|^{p}\]
Hence,
\[\frac{d}{dt}\|\hat{w}\|^{p}\leq\|\mathcal{R}_{\textit{pde}}\|^{p}+p\gamma(t,w _{\theta}(t),w(t))\rho(\mathcal{R}_{\textit{bnd}})+(p-1+p\Lambda(t,w_{\theta} (t),w(t)))\|\hat{w}\|^{p}\]
Integrating from \(0\) to \(t\leq T\),
\[\|\hat{w}\|^{p} \leq\|\mathcal{R}_{\textit{in}}\|^{p}+\|\mathcal{R}_{\textit{pde}}\|_{L^{p}((0;T)\to X)}^{p}+p\|\gamma(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{C([0;T])}\|\rho(\mathcal{R}_{\textit{bnd}})\|_{L^{1}((0;T))}+\]
\[+\int_{0}^{t}(p-1+p\Lambda(t,w_{\theta}(t),w(t)))\|\hat{w}\|^{p}d\tau\]
With Gronwall inequality,
\[\|\hat{w}\|^{p}\leq\mathcal{C}e^{(p-1)t+p\int_{0}^{t}(\Lambda(t,w_{\theta}(t), w(t)))d\tau},\]
where
\[\mathcal{C}=\|\mathcal{R}_{in}\|^{p}+\|\mathcal{R}_{\textit{pde}}\|_{L^{p}((0;T )\to X)}^{p}+p\|\gamma(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{C([0;T])}\|\rho( \mathcal{R}_{\textit{bnd}})\|_{L^{1}((0;T))}\]
Taking \(q\)-powered \(L^{q}\)-norm on \(t\), \(1\leq q<\infty\),
\[\mathcal{E}=\|\hat{w}\|_{L^{q}((0;T)\to X)}^{q}\leq\mathcal{C}^{\frac{q}{p}} \left(\frac{p(e^{\frac{q(p-1)T}{p}}-1)}{q(p-1)}\right)e^{q\|\Lambda(\cdot,w_{ \theta}(\cdot),w(\cdot))\|_{L^{1}((0;T))}}\]
**Remark 8**.: In Theorem 1, one can take a norm other than that of the \(L^{q}((0;T)\to X)\) space. Also, we assume that the solution \(w\) and the network \(w_{\theta}\) are sufficiently regular to have a finite \(L^{q}\)-norm.
**Corollary 1**.: _Under the conditions of Theorem 1 and with the training errors above, we have_
\[\mathcal{E}\leq\tilde{\mathcal{C}}^{\frac{q}{p}}\left(\frac{p(e^{\frac{q(p-1)T}{p}}-1)}{q(p-1)}\right)e^{q\|\Lambda(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{L^{1}((0;T))}},\]
_where_
\[\mathcal{\tilde{C}} =\left(\mathcal{E}_{T,in}+\beta_{\textit{quad},in}(w,w_{\theta})N_{\textit{in}}^{-\alpha_{in}}\right)^{p}+\left(\mathcal{E}_{T,\textit{pde}}+\beta_{\textit{quad},\textit{pde}}(w,w_{\theta})N_{\textit{pde}}^{-\alpha_{pde}}\right)^{p}+\] \[+p\|\gamma(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{C([0;T])}\left(\mathcal{E}_{T,\textit{bnd}}+\beta_{\textit{quad},\textit{bnd}}(w,w_{\theta})N_{\textit{bnd}}^{-\alpha_{bnd}}\right)\]
**Remark 9**.: Approximation (quadrature) constants actually depend on some norm of residuals.
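Since the right-hand side of Theorem 1 (and of Corollary 1) is a closed-form expression in the residual norms, it can be evaluated directly once those norms are known. A small Python sketch, with hypothetical input values whose names mirror the theorem:

```python
import math

def theorem1_bound(r_in, r_pde_lp, gamma_c, rho_bnd_l1, lam_l1, p, q, T):
    """Right-hand side of Theorem 1: C^{q/p} * p(e^{q(p-1)T/p}-1)/(q(p-1)) * e^{q ||Lambda||_{L^1}}."""
    C = r_in ** p + r_pde_lp ** p + p * gamma_c * rho_bnd_l1
    growth = p * (math.exp(q * (p - 1) * T / p) - 1.0) / (q * (p - 1))
    return C ** (q / p) * growth * math.exp(q * lam_l1)

# Hypothetical residual norms and constants:
print(theorem1_bound(r_in=1e-3, r_pde_lp=5e-3, gamma_c=1.0,
                     rho_bnd_l1=1e-3, lam_l1=0.5, p=2.0, q=2.0, T=1.0))
```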
### Parabolic-type equation, non-smooth Banach space
Let us consider another approach, presented in [12]. In that paper, the authors consider PINN error estimation for the problem (12)-(14) with a time-independent, linear operator \(A\) such that \(A\Big{|}_{\mathrm{Ker}(B)}\) generates a strongly continuous operator semigroup, and with a surjective operator \(B\) such that \(\left(A\Big{|}_{\mathrm{Ker}(B)\cap\mathcal{D}(A)}\right)\Theta\) is linear and bounded (\(\Theta\) is a right inverse of \(B\)). They also applied this estimation method to a few nonlinear equations (for the main results of operator semigroup theory, see, for instance, [20]).
We briefly describe an extension of this approach to the following semilinear problem in an arbitrary Banach space \(X\):
\[\frac{dw}{dt}=Aw+F(t)(w) \tag{18}\]
with initial condition
\[w\Big{|}_{t=0}=w_{0} \tag{19}\]
and boundary condition
\[B(w)=0, \tag{20}\]
where \(A:\mathcal{D}(A)\to X\) and \(B:\mathcal{D}(B)\to Y\) are linear operators, \(\mathcal{D}(A)\subset\mathcal{D}(B)\subset X\) are subspaces, \(Y\) is a Banach space. Again, we consider the following subspace of \(\mathcal{D}(A)\)
\[\mathcal{M}(A,B):=\mathrm{Ker}(B)\cap\mathcal{D}(A). \tag{21}\]
**Remark 10**.: For a generator \(A\) of a strongly continuous semigroup \(V(t)\), \(-A\) is "submonotone" in the following sense. In an \(\mathbb{R}\)-smooth Banach space, an operator of the form \(-(A_{0}+\omega I)\), where \(A_{0}\) is dissipative and \(\omega\in\mathbb{R}\), is submonotone. Since \(A\) is a generator of a strongly continuous semigroup, for some \(M\geq 1\) and \(\omega\in\mathbb{R}\) it is true that
\[\|(sI-A)^{-1}\|\leq\frac{M}{s-\omega},\;\forall s>\omega.\]
Then, for \(A_{0}:=A-\omega I\), we have
\[\|(sI-A_{0})^{-1}\|\leq\frac{M}{s},\;\forall s>0.\]
There exists an equivalent norm \(\|\cdot\|_{V}\) for which
\[\|(sI-A_{0})^{-1}\|_{V}\leq\frac{1}{s},\;\forall s>0,\]
that is, \(A_{0}\) is dissipative (in a meaning similar to the Banach space case).
Returning to the problem (18)-(20), the residuals (16) take the form
\[\begin{split}&\mathcal{R}_{pde}=\frac{dw_{\theta}}{dt}-Aw_{ \theta}-F(t)(w_{\theta}),\\ &\mathcal{R}_{in}=w_{\theta}|_{t=0}-w_{0},\\ &\mathcal{R}_{bnd}=B(w_{\theta}).\end{split} \tag{22}\]
**Theorem 2**.: _Let \(w\) be a solution of problem (18)-(20), and let \(w_{\theta}\) be a neural network with residuals (22) and total error (17). Let \(A\Big{|}_{\mathcal{M}(A,B)}\) be a generator of a semigroup \(V(t)\), \(\|V(t)\|\leq Me^{\omega t}\). Let \(B\) be right invertible, and for a right inverse \(\Theta\), let \(\left(A\Big{|}_{\mathcal{M}(A,B)}\right)\cdot\Theta\) be bounded. Finally, let \(F(t)\) be conditionally Lipschitz for every \(t\in[0;T]\) and \(\Lambda(\cdot,w_{\theta}(\cdot),w(\cdot))\in L^{1}((0;T))\). Then we have the estimate_
\[\mathcal{E}\leq\mathcal{C}\int_{0}^{T}e^{\frac{M\|\Lambda(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{L^{1}((0;T))}e^{\omega t}}{\omega}}dt \tag{23}\]
_where_
\[\begin{split}&\mathcal{C}=\|\Theta\mathcal{R}_{bnd}\|+Me^{ \omega T}\|\mathcal{R}_{in}\|+Me^{\omega T}\|\Theta\mathcal{R}_{bnd}(0)\|+\\ &+Me^{\omega T}\|\mathcal{R}_{pde}\|_{L^{1}((0;T)\to X)}+Me^{ \omega T}\|A\Theta\|\|\mathcal{R}_{bnd}\|_{L^{1}((0;T)\to X)}+Me^{\omega T} \left\|\frac{d\Theta\mathcal{R}_{bnd}}{dt}\right\|_{L^{1}((0;T)\to X)}\end{split}\]
Proof.: Given fixed \(w\) and \(w_{\theta}\), let again \(\hat{w}=w_{\theta}-w\). Then \(\tilde{w}:=\hat{w}-\Theta\mathcal{R}_{bnd}\in\mathcal{M}(A,B)\) is a solution of the following problem
\[\frac{d\tilde{w}}{dt}=A\tilde{w}+z(t),\] \[\tilde{w}|_{t=0}=\mathcal{R}_{in}-\Theta\mathcal{R}_{bnd}(0),\]
where
\[z=\mathcal{R}_{pde}+F(t)(w_{\theta})-F(t)(w)+A\Theta\mathcal{R}_{bnd}-\frac{d \Theta\mathcal{R}_{bnd}}{dt}\]
Then \(\tilde{w}\) is also a "mild" solution, that is, it satisfies the following integral equation
\[\tilde{w}(t)=V(t)\tilde{w}|_{t=0}+\int_{0}^{t}V(t-\tau)z(\tau)d\tau.\]
Hence,
\[\|\hat{w}\| \leq\|\Theta\mathcal{R}_{bnd}\|+Me^{\omega T}\|\mathcal{R}_{in}\| +Me^{\omega T}\|\Theta\mathcal{R}_{bnd}(0)\|+Me^{\omega T}\|\mathcal{R}_{pde} \|_{L^{1}((0;T)\to X)}+\] \[+Me^{\omega T}\|A\Theta\mathcal{R}_{bnd}\|_{L^{1}((0;T)\to X)}+Me^{ \omega T}\left\|\frac{d\Theta\mathcal{R}_{bnd}}{dt}\right\|_{L^{1}((0;T)\to X)}\] \[+M\int_{0}^{t}\Lambda(t,w_{\theta}(t),w(t))e^{\omega(t-\tau)}\| \hat{w}\|d\tau\leq\|\Theta\mathcal{R}_{bnd}\|+Me^{\omega T}\|\mathcal{R}_{in} \|+Me^{\omega T}\|\Theta\mathcal{R}_{bnd}(0)\|+\] \[+Me^{\omega T}\|\mathcal{R}_{pde}\|_{L^{1}((0;T)\to X)}+Me^{ \omega T}\|A\Theta\|\|\mathcal{R}_{bnd}\|_{L^{1}((0;T)\to X)}+Me^{\omega T} \left\|\frac{d\Theta\mathcal{R}_{bnd}}{dt}\right\|_{L^{1}((0;T)\to X)}+\] \[+M\int_{0}^{t}\Lambda(t,w_{\theta}(t),w(t))e^{\omega(t-\tau)}\| \hat{w}\|d\tau=\mathcal{C}+M\int_{0}^{t}\Lambda(t,w_{\theta}(t),w(t))e^{ \omega(t-\tau)}\|\hat{w}\|d\tau,\]
for a.e. \(t\in[0;T]\), where
\[\mathcal{C}=\|\Theta\mathcal{R}_{bnd}\|+Me^{\omega T}\|\mathcal{R }_{in}\|+Me^{\omega T}\|\Theta\mathcal{R}_{bnd}(0)\|+\] \[+Me^{\omega T}\|\mathcal{R}_{pde}\|_{L^{1}((0;T)\to X)}+Me^{ \omega T}\|A\Theta\|\|\mathcal{R}_{bnd}\|_{L^{1}((0;T)\to X)}+Me^{\omega T} \left\|\frac{d\Theta\mathcal{R}_{bnd}}{dt}\right\|_{L^{1}((0;T)\to X)}\]
With Gronwall-Bellman lemma,
\[\|\hat{w}\|\leq\mathcal{C}e^{M\int_{0}^{t}\Lambda(t,w_{\theta}(t),w(t))e^{\omega(t-\tau)}d\tau}\leq\mathcal{C}e^{\frac{M\|\Lambda(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{L^{1}((0;T))}e^{\omega t}}{\omega}}\]
for a.e. \(t\in[0;T]\).
Finally,
\[\mathcal{E}\leq\mathcal{C}\int_{0}^{T}e^{\frac{M\|\Lambda(\cdot,w_{\theta}( \cdot),w(\cdot))\|_{L^{1}(0;T)}e^{\omega t}}{\omega}}dt\]
**Remark 11**.: Let us compare this estimate with Theorem 1. First of all, (23) contains the \(W^{1,1}((0;T)\to Y)\) norm of the boundary-condition residual. Also, it contains a double exponential in \(T\) if the semigroup is not uniformly bounded.
The assumption of right invertibility is natural for "good" boundary conditions, as the following example shows.
**Example 17**.: Let \(y\in C^{1}([a;b])\) and \(By=(y(b)-y(a),y^{\prime}(b)-y^{\prime}(a))^{T}\in\mathbb{R}^{2}\). Let us denote \(f_{1}:=y(b)-y(a)\), \(f_{2}:=y^{\prime}(b)-y^{\prime}(a)\). We need to construct a linear operator \(\Theta:\mathbb{R}^{2}\to\mathcal{D}(B)\) such that \(A\Theta\) is bounded for some \(A\).
We just put \(\Theta\) to be a polynomial of degree \(\leq 2\), and
\[\Theta(f_{1},f_{2})(x):=\frac{f_{2}}{2(b-a)}x^{2}+\frac{2f_{1}-(b+a)f_{2}}{2(b -a)}x.\]
One can show that \(B\Theta(f_{1},f_{2})=(f_{1},f_{2})^{T}\). Since \(\mathrm{ran}(\Theta)\) is finite dimensional, \(A\Theta\) is bounded for every linear operator \(A\).
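The identity \(B\Theta(f_{1},f_{2})=(f_{1},f_{2})^{T}\) can also be verified symbolically. A minimal SymPy sketch (illustration only):

```python
import sympy as sp

x, a, b, f1, f2 = sp.symbols("x a b f1 f2")

# Theta(f1, f2)(x) from Example 17
theta = f2 / (2 * (b - a)) * x**2 + (2 * f1 - (b + a) * f2) / (2 * (b - a)) * x

g1 = sp.simplify(theta.subs(x, b) - theta.subs(x, a))                           # y(b) - y(a)
g2 = sp.simplify(sp.diff(theta, x).subs(x, b) - sp.diff(theta, x).subs(x, a))   # y'(b) - y'(a)

assert sp.simplify(g1 - f1) == 0 and sp.simplify(g2 - f2) == 0  # B Theta (f1, f2) = (f1, f2)^T
print(g1, g2)
```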
### Generalized parabolic-type equation
For simplicity, we consider submonotonicity/subordinancy on the whole domain. Consider the following problem (24)-(26) in an \(\mathbb{R}\)-smooth Banach space \(X\):
\[\frac{dw}{dt}+\frac{d}{dt}\sum_{k=1}^{K}U_{k}w=A(t)(w) \tag{24}\]
with initial condition
\[w\Big{|}_{t=0}=w_{0} \tag{25}\]
and boundary condition
\[B(t)(w)=h_{b,0}, \tag{26}\] \[B(t)(U_{k}w)=h_{k,b},\;1\leq k\leq K,\]
where \(A(t):\mathcal{D}(A(t))\to X\) and \(B(t):\mathcal{D}(B(t))\to Y\) are nonlinear operators, \(\mathcal{D}(A(t))\subset\mathcal{D}(B(t))\subset X\) are subspaces, \(Y\) is a Banach space. Also, for some \((p,\psi_{k})\)-powered \(U_{k}\), \(\mathcal{D}(U_{k})\subset\mathcal{D}(A(t))\subset\mathcal{D}\left(U_{k}^{ \frac{1}{p}}\right)\), \(U_{k}\) and \(U_{k}^{\frac{1}{p}}\) commute with \(\frac{d}{dt}\), \(U_{k}^{\frac{1}{p}}\) commute with \(\Big{|}_{t=0}\), \(1\leq k\leq K\).
We assume solution \(w\) of (24)-(26) to be continuous on \([0;T]\) and continuously differentiable on \((0;T]\).
Let us consider the neural network \(w_{\theta}\) with parameter \(\theta\), approximating solution \(w\) of (24)-(26), and the following PINN residuals
\[\begin{split}&\mathcal{R}_{pde}=\frac{dw_{\theta}}{dt}+\frac{d}{ dt}\sum_{k=1}^{K}U_{k}w_{\theta}-A(t)(w_{\theta}),\\ &\mathcal{R}_{in}=w_{\theta}|_{t=0}-w_{0},\\ &\mathcal{R}_{0,bnd}=B(t)(w_{\theta})-h_{b,0},\\ &\mathcal{R}_{k,bnd}=B(t)(U_{k}w_{\theta})-h_{b,k}\end{split} \tag{27}\]
Also, if we take \(\hat{w}:=w_{\theta}-w\), then
\[\begin{split}&\mathcal{R}_{pde}=\frac{d\hat{w}}{dt}+\frac{d}{ dt}\sum_{k=1}^{K}U_{k}\hat{w}-A(t)(w_{\theta})+A(t)(w),\\ &\mathcal{R}_{in}=\hat{w}|_{t=0},\\ &\mathcal{R}_{0,bnd}=B(t)(w_{\theta})-B(t)(w),\\ &\mathcal{R}_{k,bnd}=B(t)(U_{k}w_{\theta})-B(t)(U_{k}w)\end{split}\]
Again, the total error is given by formula
\[\mathcal{E}:=\|\hat{w}\|_{L^{4}((0;T)\to X)}^{q} \tag{28}\]
**Theorem 3**.: _Let \(w\) be a solution of problem (24)-(26), and let \(w_{\theta}\) be a neural network with residuals (27) and total error (28). Let, moreover, \(-A(t)\) be \((p,\psi(t))\)-submonotone w.r.t. the norm induced by \(U_{1}^{\frac{1}{p}},\ldots,U_{K}^{\frac{1}{p}}\), with \((p,\psi_{k})\)-powered operators \(U_{k}\), where the \(\psi_{k}\) are subordinate to \(B(t)\) and \(\psi(t)\) is subordinate to \(B(t)\) w.r.t. \(U_{1},\ldots,U_{K}\), for every \(t\in[0;T]\). Furthermore, let the respective \(\tilde{\gamma}_{k}(\cdot,w_{\theta}(\cdot),w(\cdot)),\gamma_{k}(\cdot,w_{\theta}(\cdot),w(\cdot))\in C([0;T])\), \(\rho_{k}(t)\equiv\rho_{k}\), \(\tilde{\rho}_{k}(t)\equiv\tilde{\rho}_{k}\), and \(\Lambda(\cdot,w_{\theta}(\cdot),w(\cdot))\in L^{1}((0;T))\). Then we have the estimate_
\[\mathcal{E}\leq\mathcal{C}^{\frac{q}{p}}\left(\frac{p(e^{\frac{q(p-1)T}{p}}-1 )}{q(p-1)}\right)e^{q\|\Lambda(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{L^{1}(0;T )}},\]
_where_
\[\begin{split}&\mathcal{C}=\|\mathcal{R}_{in}\|^{p}+\sum_{k=1}^{K}\|U_{k}^{ \frac{1}{p}}\mathcal{R}_{in}\|^{p}+\|\mathcal{R}_{pde}\|_{L^{p}((0;T)\to X)}^{p} +p\|\gamma_{0}(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{C([0;T])}\|\rho_{0}\left( \mathcal{R}_{0,bnd}\right)\|_{L^{1}((0;T)\to X)}+\\ &\quad+p\sum_{k=1}^{K}\|\gamma_{k}(\cdot,w_{\theta}(\cdot),w( \cdot))\|_{C([0;T])}\|\rho_{k}\left(\mathcal{R}_{k,bnd}\right)\|_{L^{1}((0;T) \to X)}+\\ &\quad+p\left\|\tilde{\gamma}_{k}\left(\frac{dw_{\theta}}{dt}( \cdot),\frac{dw}{dt}(\cdot),w_{\theta}(\cdot),w(\cdot)\right)\right\|_{L^{1}(( 0;T))}\|\tilde{\rho}_{k}\left(\mathcal{R}_{0,bnd}\right)\|_{L^{1}((0;T)\to X)} \end{split}\]
Proof.: With (8) and (9), we have
\[\frac{d}{dt}\|\hat{w}\|^{p}+\sum_{k=1}^{K}\frac{d}{dt}\|U_{k}^{\frac {1}{k}}\hat{w}\|^{p}=p\left\langle\frac{d\hat{w}}{dt},\hat{w}\right\rangle_{p}+p \sum_{k=1}^{K}\left\langle U_{k}^{\frac{1}{k}}\frac{d\hat{w}}{dt},U_{k}^{\frac {1}{p}}\hat{w}\right\rangle_{p}=p\left\langle\frac{d\hat{w}}{dt},\hat{w} \right\rangle_{p}+p\sum_{k=1}^{K}\left\langle U_{k}\frac{d\hat{w}}{dt},\hat{w} \right\rangle_{p}-\] \[-p\sum_{k=1}^{K}\left[\psi_{k}\left(\frac{dw_{\theta}}{dt}(t), \frac{dw}{dt}(t),w_{\theta}(t),w(t)\right)\right]=p\left\langle\mathcal{R}_{ pde},\hat{w}\right\rangle_{p}+p\left\langle A(t)(w_{\theta})-A(t)(w),\hat{w} \right\rangle_{p}-\] \[-p\sum_{k=1}^{K}\left[\psi_{k}\left(\frac{dw_{\theta}}{dt}(t), \frac{dw}{dt}(t),w_{\theta}(t),w(t)\right)\right]\]
Again,
\[p\left|\left\langle\mathcal{R}_{pde},\hat{w}\right\rangle_{p}\right|\leq p\| \mathcal{R}_{pde}\|\|\hat{w}\|^{p-1}\leq\|\mathcal{R}_{pde}\|^{p}+(p-1)\|\hat{ w}\|^{p}\]
Also,
\[\left\langle A(t)(w_{\theta})-A(t)(w),\hat{w}\right\rangle_{p}\leq \psi(t,w(t),w_{\theta}(t))+\Lambda(t,w_{\theta}(t),w(t))\left(\|\hat{w}\|^{p}+ \sum_{k=1}^{K}\left\|U_{k}^{\frac{1}{k}}\hat{w}\right\|^{p}\right)\leq\] \[\leq\gamma_{0}(t,w_{\theta}(t),w(t))\rho_{0}(B(t)(w_{\theta})-B(t )(w))+\sum_{k=1}^{K}\gamma_{k}(t,w_{\theta}(t),w(t))\rho_{k}(B(t)(U_{k}w_{ \theta})-B(t)(U_{k}w))+\] \[+\Lambda(t,w_{\theta}(t),w(t))\left(\|\hat{w}\|^{p}+\sum_{k=1}^{K }\left\|U_{k}^{\frac{1}{k}}\hat{w}\right\|^{p}\right)=\gamma_{0}(t,w_{\theta}( t),w(t))\rho_{0}\left(\mathcal{R}_{0,bnd}\right)+\sum_{k=1}^{K}\gamma_{k}(t,w_{ \theta}(t),w(t))\rho_{k}\left(\mathcal{R}_{k,bnd}\right)+\] \[+\Lambda(t,w_{\theta}(t),w(t))\left(\|\hat{w}\|^{p}+\sum_{k=1}^{K }\left\|U_{k}^{\frac{1}{k}}\hat{w}\right\|^{p}\right)\]
Moreover,
\[\left|\psi_{k}\left(\frac{dw_{\theta}}{dt}(t),\frac{dw}{dt}(t),w_ {\theta}(t),w(t)\right)\right|\leq\tilde{\gamma}_{k}\left(\frac{dw_{\theta}}{ dt}(t),\frac{dw}{dt}(t),w_{\theta}(t),w(t)\right)\tilde{\rho}_{k}(B(t)(w_{ \theta})-B(t)(w))=\] \[=\tilde{\gamma}_{k}\left(\frac{dw_{\theta}}{dt}(t),\frac{dw}{dt}( t),w_{\theta}(t),w(t)\right)\tilde{\rho}_{k}\left(\mathcal{R}_{0,bnd}\right)\]
Hence,
\[\frac{d}{dt}\|\hat{w}\|^{p}+\sum_{k=1}^{K}\frac{d}{dt}\|U_{k}^{ \frac{1}{k}}\hat{w}\|^{p}\leq\|\mathcal{R}_{pde}\|^{p}+p\gamma_{0}(t,w_{\theta} (t),w(t))\rho_{0}\left(\mathcal{R}_{0,bnd}\right)+p\sum_{k=1}^{K}\gamma_{k}(t,w_{\theta}(t),w(t))\rho_{k}\left(\mathcal{R}_{k,bnd}\right)+\] \[+p\tilde{\gamma}_{k}\left(\frac{dw_{\theta}}{dt}(t),\frac{dw}{dt} (t),w_{\theta}(t),w(t)\right)\tilde{\rho}_{k}\left(\mathcal{R}_{0,bnd}\right)+( p-1+p\Lambda(t,w_{\theta}(t),w(t)))\|\hat{w}\|^{p}+p\Lambda(t,w_{\theta}(t),w(t)) \left(\sum_{k=1}^{K}\left\|U_{k}^{\frac{1}{p}}\hat{w}\right\|^{p}\right)\leq\] \[\leq\|\mathcal{R}_{pde}\|^{p}+p\gamma_{0}(t,w_{\theta}(t),w(t)) \rho_{0}\left(\mathcal{R}_{0,bnd}\right)+p\sum_{k=1}^{K}\gamma_{k}(t,w_{ \theta}(t),w(t))\rho_{k}\left(\mathcal{R}_{k,bnd}\right)+\] \[+p\tilde{\gamma}_{k}\left(\frac{dw_{\theta}}{dt}(t),\frac{dw}{dt} (t),w_{\theta}(t),w(t)\right)\tilde{\rho}_{k}\left(\mathcal{R}_{0,bnd}\right)+ (p-1+p\Lambda(t,w_{\theta}(t),w(t)))\left(\|\hat{w}\|^{p}+\sum_{k=1}^{K}\left\|U _{k}^{\frac{1}{p}}\hat{w}\right\|^{p}\right)\]
Integrating from \(0\) to \(t\leq T\),
\[\|\hat{w}\|^{p}+\sum_{k=1}^{K}\|U_{k}^{\frac{1}{p}}\hat{w}\|^{p}\leq\mathcal{C}+ \int_{0}^{t}(p-1+p\Lambda(t,w_{\theta}(t),w(t)))\left(\|\hat{w}\|^{p}+\sum_{k=1}^ {K}\left\|U_{k}^{\frac{1}{p}}\hat{w}\right\|^{p}\right)d\tau,\]
where
\[\mathcal{C} =\|\mathcal{R}_{in}\|^{p}+\sum_{k=1}^{K}\|U_{k}^{\frac{1}{p}}\mathcal{R}_{in}\|^{p}+\|\mathcal{R}_{pde}\|_{L^{p}((0;T)\to X)}^{p}+p\|\gamma_{0}(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{C([0;T])}\|\rho_{0}\left(\mathcal{R}_{0,bnd}\right)\|_{L^{1}((0;T)\to X)}+\] \[+p\sum_{k=1}^{K}\|\gamma_{k}(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{C([0;T])}\|\rho_{k}\left(\mathcal{R}_{k,bnd}\right)\|_{L^{1}((0;T)\to X)}+\] \[+p\left\|\tilde{\gamma}_{k}\left(\frac{dw_{\theta}}{dt}(\cdot),\frac{dw}{dt}(\cdot),w_{\theta}(\cdot),w(\cdot)\right)\right\|_{L^{1}((0;T))}\|\tilde{\rho}_{k}\left(\mathcal{R}_{0,bnd}\right)\|_{L^{1}((0;T)\to X)}\]
The rest of the proof is similar to theorem 1.
### Hyperbolic-type equation
We apply the classical technique of reducing hyperbolic-type equations to parabolic-type ones (see, for instance, [4]). We consider the Hilbert space case only. Also, one can deal with an \(\mathbb{R}\)-smooth Banach space; however, as stated in Remark 7, this seems of little use in practice. Furthermore, for simplicity, we consider the case with exactly one \((2,\psi)\)-powered operator and with submonotonicity/subordinancy on the whole domain. Consider the following problem (29)-(33) in a Hilbert space \(X\):
\[\frac{d^{2}w}{dt^{2}}=Uw+F(t)(w)+A(t)\left(\frac{dw}{dt}\right) \tag{29}\]
with initial conditions
\[w\Big{|}_{t=0}=w_{0}, \tag{30}\]
\[\frac{dw}{dt}\Big{|}_{t=0}=w_{t,0} \tag{31}\]
and boundary conditions
\[B_{1}(t)(w)=h_{b}, \tag{32}\]
\[B_{2}(t)\left(\frac{dw}{dt}\right)=h_{t,b}, \tag{33}\]
where \(A(t):\mathcal{D}(A(t))\to X\) and \(B_{1,2}(t):\mathcal{D}(B_{1,2}(t))\to Y\) are nonlinear operators, \(\mathcal{D}(F(t))\subset\mathcal{D}(B_{1,2}(t))\subset X\) are subspaces, and \(Y\) is a Banach space. Also, for some \((2,\tilde{\psi})\)-powered \(-U\), \(\mathcal{D}(U)\subset\mathcal{D}(F(t))\subset\mathcal{D}\left((-U)^{\frac{1}{2}}\right)\), \(\mathcal{D}(A(t))\subset\mathcal{D}\left((-U)^{\frac{1}{2}}\right)\), \(U\) and \((-U)^{\frac{1}{2}}\) commute with \(\frac{d}{dt}\), and \((-U)^{\frac{1}{2}}\) commutes with \(\Big{|}_{t=0}\).
We assume solution \(w\) of (29)-(33) to be continuously differentiable on \([0;T]\) and twice continuously differentiable on \((0;T]\).
Let us consider the neural network \(w_{\theta}\) with parameter \(\theta\), approximating solution \(w\) of (29)-(33). We consider the following PINN residuals
\[\mathcal{R}_{pde}=\frac{d^{2}w_{\theta}}{dt^{2}}-Uw_{\theta}-F(t) (w_{\theta})-A(t)\left(\frac{dw_{\theta}}{dt}\right)\] \[\mathcal{R}_{in}=w_{\theta}\Big{|}_{t=0}-w_{0}\] \[\mathcal{R}_{in,t}=\frac{dw_{\theta}}{dt}\Big{|}_{t=0}-w_{t,0} \tag{34}\] \[\mathcal{R}_{bn}=B_{1}(t)(w_{\theta})-h_{b}\] \[\mathcal{R}_{bn,t}=B_{2}(t)\left(\frac{dw_{\theta}}{dt}\right)-h _{t,b}\]
or with \(\hat{w}=w_{\theta}-w\),
\[\mathcal{R}_{pde} =\frac{d^{2}\hat{w}}{dt^{2}}-U\hat{w}-F(t)(w_{\theta})+F(t)(w)-A(t) \left(\frac{dw_{\theta}}{dt}\right)+A(t)\left(\frac{dw}{dt}\right)\] \[\mathcal{R}_{in} =\hat{w}\Big{|}_{t=0}\] \[\mathcal{R}_{in,t} =\frac{d\hat{w}}{dt}\Big{|}_{t=0}\] \[\mathcal{R}_{bn} =B_{1}(t)(w_{\theta})-B_{1}(t)(w)\] \[\mathcal{R}_{bn,t} =B_{2}(t)\left(\frac{dw_{\theta}}{dt}\right)-B_{2}(t)\left(\frac {dw}{dt}\right)\]
Again, the total error is given by formula
\[\mathcal{E}:=\|\hat{w}\|_{L^{q}((0;T)\to X)}^{q} \tag{35}\]
**Theorem 4**.: _Let \(w\) be a solution of problem (29)-(33), and let \(w_{\theta}\) be a neural network with residuals (34) and total error (35). Let, moreover, \(-A(t)\) be \((2,\psi(t))\)-submonotone, \(F(t)\) be conditionally Lipschitz w.r.t. the \((-U)^{\frac{1}{2}}\)-norm (see Remark 12), with the \((2,\tilde{\psi})\)-powered operator \(-U\), where \(\tilde{\psi}\), \(\psi(t)\) are subordinate to \(B_{2}(t)\), for every \(t\in[0;T]\). Furthermore, let the respective \(\tilde{\gamma}(\cdot,w_{\theta}(\cdot),w(\cdot)),\gamma(\cdot,w_{\theta}(\cdot),w(\cdot))\in C([0;T])\), \(\rho(t)\equiv\rho\), \(\tilde{\rho}(t)\equiv\tilde{\rho}\), \(\Lambda_{A}(\cdot,w_{\theta}(\cdot),w(\cdot))\in L^{1}((0;T))\) and \(\Lambda_{F}\left(\cdot,w_{\theta}(\cdot),w(\cdot)\right)\in L^{2}((0;T))\). Then we have the estimate_
\[\mathcal{E}\leq\mathcal{C}^{\frac{q}{2}}\frac{(e^{qT}-1)}{q}e^{q\left(\left\|\Lambda_{A}\left(\cdot,\frac{dw_{\theta}}{dt}(\cdot),\frac{dw}{dt}(\cdot)\right)\right\|_{L^{1}((0;T))}+\|\Lambda_{F}(\cdot,w_{\theta}(\cdot),w(\cdot))\|_{L^{2}((0;T))}^{2}\right)},\]
_where_
\[\mathcal{C}=\|\mathcal{R}_{in}\|^{2}+\|\mathcal{R}_{in,t}\|^{2}+\|(-U)^{\frac{1}{2}}\mathcal{R}_{in}\|^{2}+\|\mathcal{R}_{pde}\|_{L^{2}((0;T)\to X)}^{2}+2\left\|\gamma\left(\cdot,\frac{dw_{\theta}}{dt}(\cdot),\frac{dw}{dt}(\cdot)\right)\right\|_{C([0;T])}\|\rho\left(\mathcal{R}_{bnd,t}\right)\|_{L^{1}((0;T)\to X)}+\] \[+2\left\|\tilde{\gamma}\left(w_{\theta}(\cdot),w(\cdot),\frac{dw_{\theta}}{dt}(\cdot),\frac{dw}{dt}(\cdot)\right)\right\|_{C([0;T])}\|\tilde{\rho}\left(\mathcal{R}_{bnd,t}\right)\|_{L^{1}((0;T)\to X)}\]
Proof.: Let us consider \(w_{\theta,1}=w_{\theta}\), \(w_{1}=w\), \(\hat{w}_{1}=\hat{w}\), \(w_{\theta,2}=\frac{dw_{\theta}}{dt}\), \(w_{2}=\frac{dw}{dt}\), \(\hat{w}_{2}=\frac{du}{dt}\). Then we have, as in parabolic-type equation,
\[\left\{\begin{array}{l}\frac{d\hat{w}_{1}}{dt}=\hat{w}_{2}\\ \frac{d\hat{w}_{2}}{dt}=\mathcal{R}_{pde}+U\hat{w}_{1}+F(t)(w_{\theta,1})-F(t )(w_{1})+A(t)(w_{\theta,2})-A(t)(w_{2})\end{array}\right. \tag{36}\]
Hence, one can work in the space \(\tilde{X}=\mathcal{D}((-U)^{\frac{1}{2}})\times X\) endowed with the following norm
\[\left\|\begin{pmatrix}y_{1}\\ y_{2}\end{pmatrix}\right\|_{\tilde{X}}^{2}:=\|y_{1}\|^{2}+\|y_{2}\|^{2}+\|(-U)^ {\frac{1}{2}}y_{1}\|^{2},\]
With (8) and (9), we have
\[\frac{d}{dt}\left\|\begin{pmatrix}\hat{w}_{1}\\ \hat{w}_{2}\end{pmatrix}\right\|_{\tilde{X}}^{2}=\frac{d}{dt}\|\hat{w}_{1}\|^{2 }+\frac{d}{dt}\|\hat{w}_{2}\|^{2}+\frac{d}{dt}\|(-U)^{\frac{1}{2}}\hat{w}_{1} \|^{2}=2\left\langle\frac{d\hat{w}_{1}}{dt},\hat{w}_{1}\right\rangle_{2}+2 \left\langle\frac{d\hat{w}_{2}}{dt},\hat{w}_{2}\right\rangle_{2}+2\left\langle \frac{(-U)^{\frac{1}{2}}d\hat{w}_{1}}{dt},(-U)^{\frac{1}{2}}\hat{w}_{1}\right\rangle _{2}=\] \[=2\langle\hat{w}_{2},\hat{w}_{1}\rangle_{2}+2\langle\mathcal{R}_{ pde},\hat{w}_{2}\rangle_{2}+2\langle U\hat{w}_{1},\hat{w}_{2}\rangle_{2}+2 \langle F(t)(w_{\theta,1})-F(t)(w_{1}),\hat{w}_{2}\rangle_{2}+2\langle A(t)(w_{ \theta,2})-A(t)(w_{2}),\hat{w}_{2}\rangle_{2}+\] \[+2\langle(-U)^{\frac{1}{2}}\hat{w}_{1},(-U)^{\frac{1}{2}}\hat{w}_ {2}\rangle_{2}=2\langle\hat{w}_{2},\hat{w}_{1}\rangle_{2}+2\langle\mathcal{R}_{ pde},\hat{w}_{2}\rangle_{2}+2\langle F(t)(w_{\theta,1})-F(t)(w_{1}),\hat{w}_{2} \rangle_{2}+2\langle A(t)(w_{\theta,2})-A(t)(w_{2}),\hat{w}_{2}\rangle_{2}-\] \[-2\tilde{\psi}(w_{1,\theta}(t),w_{1}(t),w_{2,\theta}(t),w_{2}(t)) \leq\|\hat{w}_{1}\|^{2}+2\|\hat{w}_{2}\|^{2}+\|\mathcal{R}_{pde}\|^{2}+2\Lambda_{F }^{2}(t,w_{\theta,1}(t),w_{1}(t))\left(\|\hat{w}_{1}\|^{2}+\|(-U)^{\frac{1}{2}} \hat{w}_{1}\|^{2}\right)+\] \[+2\psi(t,w_{2,\theta}(t),w_{2}(t))+2\Lambda_{A}(t,w_{2,\theta}(t),w _{2}(t))\|\hat{w}_{2}\|^{2}-2\tilde{\psi}(w_{1,\theta}(t),w_{1}(t),w_{2,\theta}(t ),w_{2}(t))\leq\] \[\leq\|\hat{w}_{1}\|^{2}+2\|\hat{w}_{2}\|^{2}+\|\mathcal{R}_{pde}\|^{ 2}+2\Lambda_{F}^{2}(t,w_{\theta,1}(t),w_{1}(t))\left(\|\hat{w}_{1}\|^{2}+\|(-U)^{ \frac{1}{2}}\hat{w}_{1}\|^{2}\right)\] \[+2\gamma(t,w_{2,\theta}(t),w_{2}(t))\rho(\mathcal{R}_{bnd,t})+2 \Lambda_{A}(t,w_{2,\theta}(t),w_{2}(t))\|\hat{w}_{2}\|^{2}+2\tilde{\gamma}(w_{1, \theta}(t),w_{1}(t),w_{2,\theta}(t),w_{2}(t))\tilde{\rho}(\mathcal{R}_{bnd,t})\]
Hence, after integrating from \(0\) to \(t\leq T\), we obtain
\[\left\|\begin{pmatrix}\hat{w}_{1}\\ \hat{w}_{2}\end{pmatrix}\right\|_{\tilde{X}}^{2}\leq\mathcal{C}+2\int_{0}^{t}(1+ \Lambda_{F}^{2}(t,w_{\theta,1}(t),w_{1}(t))+\Lambda_{A}(t,w_{2,\theta}(t),w_{ 2}(t)))\left\|\begin{pmatrix}\hat{w}_{1}\\ \hat{w}_{2}\end{pmatrix}\right\|_{\tilde{X}}^{2}d\tau\quad,\]
where
\[\mathcal{C}=\|\mathcal{R}_{in}\|^{2}+\|\mathcal{R}_{in,t}\|^{2}+\|(-U)^{\frac{1}{2}}\mathcal{R}_{in}\|^{2}+\|\mathcal{R}_{pde}\|_{L^{2}((0;T)\to X)}^{2}+2\|\gamma(\cdot,w_{2,\theta}(\cdot),w_{2}(\cdot))\|_{C([0;T])}\|\rho\left(\mathcal{R}_{bnd,t}\right)\|_{L^{1}((0;T)\to X)}+\] \[+2\|\tilde{\gamma}(w_{1,\theta}(\cdot),w_{1}(\cdot),w_{2,\theta}(\cdot),w_{2}(\cdot))\|_{C([0;T])}\|\tilde{\rho}\left(\mathcal{R}_{bnd,t}\right)\|_{L^{1}((0;T)\to X)}\]
The rest of the proof is similar to theorem 1.
**Remark 12**.: The conditional Lipschitz continuity of \(F\) w.r.t. the \((-U)^{\frac{1}{2}}\)-norm means that
\[\|F(\chi)-F(y)\|\leq\Lambda_{F}(\chi,y)\left(\|\chi-y\|^{2}+\|(-U)^{\frac{1}{2 }}\chi-(-U)^{\frac{1}{2}}y\|^{2}\right)^{\frac{1}{2}}\]
Moreover, instead of conditional Lipschitz continuity w.r.t. the \((-U)^{\frac{1}{2}}\)-norm, one can consider "weak \((2,\psi_{F})\)-Lipschitz continuity w.r.t. the \((-U)^{\frac{1}{2}}\)-norm", that is,
\[|\langle F(\chi_{1})-F(y_{1}),\chi_{2}-y_{2}\rangle_{2}|\leq\psi_{F}(\chi_{1},y_{1},\chi_{2},y_{2})+\Lambda_{F}^{2}(\chi_{1},y_{1},\chi_{2},y_{2})\left(\|\chi_{1}-y_{1}\|^{2}+\|(-U)^{\frac{1}{2}}(\chi_{1}-y_{1})\|^{2}\right)+\|\chi_{2}-y_{2}\|^{2}\]
**Remark 13**.: Let us note that the obtained estimate does not depend on \(\mathcal{R}_{bn}\). However, in practice, we can observe such a dependence. For instance, let \(U=\frac{\partial^{2}}{\partial x^{2}}\) in \(L^{2}((a;b))\); then
\[\left\langle-Uw,\frac{dw}{dt}\right\rangle_{2}=-\frac{\partial w}{\partial x}\frac{dw}{dt}\Big{|}_{a}^{b}+\left\langle(-U)^{\frac{1}{2}}w,(-U)^{\frac{1}{2}}\frac{dw}{dt}\right\rangle_{2}\]
Hence, if we have Neumann-type boundary conditions, then respective \(\tilde{\psi}\) subordinates to it if we swap \(w\) and \(\frac{dw}{dt}\).
### Elliptic equation
Consider the following problem (37)-(38) in an \(\mathbb{R}\)-smooth Banach space \(X\):
\[A(y)=f_{pde} \tag{37}\]
and boundary condition
\[B(y)=f_{b}, \tag{38}\]
where \(A:\mathcal{D}(A)\to X\) and \(B:\mathcal{D}(B)\to Y\) are nonlinear operators, \(\mathcal{D}(A)\subset\mathcal{D}(B)\subset X\) are subspaces, \(Y\) is a Banach space. Again, let
\[\mathcal{M}(A,B):=B^{-1}(f_{b})\cap\mathcal{D}(A)\]
Let us consider the neural network \(y_{\theta}\) with parameter \(\theta\), approximating solution \(y\) of (37)-(38) and the following PINN residuals
\[\mathcal{R}_{pde} =A(y_{\theta})-f_{pde}, \tag{39}\] \[\mathcal{R}_{bnd} =B(y_{\theta})-f_{b}.\]
or with \(\hat{y}=y_{\theta}-y\),
\[\mathcal{R}_{pde} =A(y_{\theta})-A(y),\] \[\mathcal{R}_{bnd} =B(y_{\theta})-B(y)\]
The total error is given by formula
\[\mathcal{E}:=\|\hat{y}\|^{q},\;q>1 \tag{40}\]
**Theorem 5**.: _Let \(y\) be a solution of problem (37)-(38), and let \(y_{\theta}\) be a neural network with residuals (39) and total error (40). Let, moreover, \(A\) be \((p,\psi)\)-coercive on \(\mathcal{M}(A,B)\), with \(\psi\) subordinate to \(B\) on \(\mathcal{M}(A,B)\). Then we have the estimate_
\[\mathcal{E}\leq p^{\frac{q}{p}}\left(\gamma(y_{\theta},y)\rho(\mathcal{R}_{ bnd})+\frac{1}{p}\Lambda^{p}(y_{\theta},y)\|\mathcal{R}_{pde}\|^{p}\right)^{ \frac{q}{p}}\]
Proof.: With conditions of the theorem,
\[\|\hat{y}\|^{p}\leq\psi(y_{\theta},y)+\Lambda(y_{\theta},y)\langle A (y_{\theta})-A(y),\hat{y}\rangle_{p}=\psi(y_{\theta},y)+\Lambda(y_{\theta},y) \langle\mathcal{R}_{pde},\hat{y}\rangle_{p}\] \[\leq\gamma(y_{\theta},y)\rho(\mathcal{R}_{bnd})+\Lambda(y_{\theta },y)\|\mathcal{R}_{pde}\|\|\hat{y}\|^{p-1}.\]
Then with Young inequality,
\[\|\hat{y}\|^{p}\leq\gamma(y_{\theta},y)\rho(\mathcal{R}_{bnd})+\frac{\Lambda^{ p}(y_{\theta},y)}{p}\|\mathcal{R}_{pde}\|^{p}+\frac{p-1}{p}\|\hat{y}\|^{p}.\]
Hence,
\[\mathcal{E}\leq p^{\frac{q}{p}}\left(\gamma(y_{\theta},y)\rho(\mathcal{R}_{ bnd})+\frac{1}{p}\Lambda^{p}(y_{\theta},y)\|\mathcal{R}_{pde}\|^{p}\right)^{ \frac{q}{p}}.\]
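As in the parabolic case, the elliptic estimate is a closed-form expression in the residual norms and can be evaluated directly. A short Python sketch with hypothetical inputs:

```python
def theorem5_bound(gamma, rho_bnd, lam, r_pde, p, q):
    """Right-hand side of Theorem 5: p^{q/p} (gamma*rho(R_bnd) + Lambda^p ||R_pde||^p / p)^{q/p}."""
    return p ** (q / p) * (gamma * rho_bnd + lam ** p * r_pde ** p / p) ** (q / p)

print(theorem5_bound(gamma=1.0, rho_bnd=1e-3, lam=2.0, r_pde=5e-3, p=2.0, q=2.0))
```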
## 4 PINN's residuals upper bound
In [5], the authors proved the existence of a two-layer tanh neural network approximation for a sufficiently regular solution of the Navier-Stokes equation. Also, they obtained \(L^{2}\) upper bounds for the residuals in terms of the neural network width. The core theorem underlying this result asserts the existence of a tanh neural network approximating a function from \(H^{\sigma}(\Omega)\), where \(\Omega\subset\mathbb{R}^{m}\) is an integer right parallelepiped and \(\sigma\geq 3\). Our goal is to obtain a similar theorem in \(W^{\sigma,p}(\Omega)\).
Let \(\Omega\subset\mathbb{R}^{m}\) be a convex bounded domain, and let \(p\geq 1\). First, we need to obtain the Bramble-Hilbert lemma in \(W^{\sigma,p}(\Omega)\), similar to the result of [26]. R. Verfurth noted in [26] that the key point for such an extension is a Poincare inequality in \(W^{1,p}(\Omega)\) for zero-mean functions with a "good" constant. However, for functions with zero mean value (i.e., \(\frac{1}{|\Omega|}\int_{\Omega}y\,dx=0\)), such a constant is known only in the one-dimensional case [10]. We use a slightly different idea: the mean value is the \(L^{2}(\Omega)\)-projection onto the set of constant functions, and for a function with vanishing \(L^{p}\)-projection there is a "good" Poincare inequality.
**Statement 4**.: _Let \(p\geq 1\), and \(\Omega\subset\mathbb{R}^{m}\) be a convex bounded domain. Then there exists an operator \(\mathcal{J}_{p}:L^{p}(\Omega)\to\mathbb{R}\),_
\[\mathcal{J}_{p}(y):=\begin{cases}\frac{1}{|\Omega|}\int_{\Omega}ydx,&1\leq p< 2,\\ \arg\min_{s\in\mathbb{R}}\|y-s\|_{L^{p}(\Omega)},&p\geq 2\end{cases}\]
_with the following properties:_
\[\mathcal{J}_{p}(-y)=-\mathcal{J}_{p}(y),\] \[\mathcal{J}_{p}(y-\mathcal{J}_{p}(y))=0.\]
Proof.: If \(1\leq p<2\), the result is clear. Let \(p\geq 2\). Given a fixed element \(y\in L^{p}(\Omega)\), let us consider the following function
\[\varphi:\mathbb{R}\to\mathbb{R},\;\varphi(s):=\|y-s\|_{L^{p}(\Omega)}\]
Then \(\varphi\) is a differentiable convex function. Since
\[\varphi(s)\geq|s||\Omega|^{\frac{1}{p}}-\|y\|\]
we have that \(\varphi(+\infty)=\varphi(-\infty)=+\infty\), and, therefore, \(\varphi\) attains a global minimum. We have two possible cases.
If \(y\equiv s_{0}\) for some \(s_{0}\in\mathbb{R}\), then we have exactly one minimum \(\varphi(s_{0})=0\). If \(y\not\equiv\mathrm{const}\), then by the equality condition in the Minkowski inequality, \(\varphi\) is strictly convex. Hence, \(\varphi\) attains its global minimum at exactly one point.
Thus, an operator
\[\mathcal{J}_{p}(y):=\arg\min_{s\in\mathbb{R}}\|y-s\|_{L^{p}(\Omega)}\]
is correctly defined.
Also, we have
\[\|-y-(-\mathcal{J}_{p}(y))\|_{L^{p}(\Omega)}=\|y-\mathcal{J}_{p}(y)\|_{L^{p}( \Omega)}\leq\|-y-s\|_{L^{p}(\Omega)}\]
and
\[\|y-\mathcal{J}_{p}(y)\|_{L^{p}(\Omega)}\leq\|y-\mathcal{J}_{p}(y)-s\|_{L^{p}( \Omega)}\]
for every \(s\in\mathbb{R}\). Hence, \(\mathcal{J}_{p}\) has the desired properties.
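Numerically, \(\mathcal{J}_{p}\) amounts to a one-dimensional convex minimization. A Python sketch (illustration only, with a uniform grid standing in for \(\Omega\)):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def J_p(y, dx, p):
    # J_p(y): mean value for 1 <= p < 2, argmin_s ||y - s||_{L^p} for p >= 2.
    if p < 2:
        return np.mean(y)
    phi = lambda s: (np.sum(np.abs(y - s) ** p) * dx) ** (1.0 / p)
    return minimize_scalar(phi).x

rng = np.random.default_rng(1)
y, dx, p = rng.normal(size=500), 1.0 / 500, 4.0
s = J_p(y, dx, p)
print(s, J_p(y - s, dx, p))             # second value is ~0: J_p(y - J_p(y)) = 0
print(np.isclose(J_p(-y, dx, p), -s))   # oddness: J_p(-y) = -J_p(y)
```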
Now we turn to the Poincare inequality.
**Lemma 2**.: _(Poincare inequality) For every \(y\in W^{1,p}(\Omega)\) with \(\mathcal{J}_{p}(y)=0\), the following inequality holds_
\[\|y\|_{L^{p}(\Omega)}\leq\pi_{p}\mathrm{diam}(\Omega)\|\nabla y\|_{L^{p}(\Omega)},\]
_where_
\[\pi_{p}=\begin{cases}\frac{1}{2},&p=1\\ \pi^{\frac{2}{p}-2}2^{1-\frac{2}{p}},&1<p<2\\ \frac{p\sin\left(\frac{\pi}{p}\right)}{2\pi(p-1)^{\frac{1}{p}}},&p\geq 2\end{cases} \tag{41}\]
Proof.: For \(p=2\), the inequality is a result due to [19] which was correctly proved in [3]. For \(p=1\), it was proved in [1]. In [3], there is a remark that the proof in [1] also contains a similar mistake, but the authors of [1] corrected their text after the paper was posted.
If \(p\geq 2\), then \(\mathcal{J}_{p}(y)=0\) implies \(\int_{\Omega}y|y|^{p-2}dx=0\). Indeed, since \(s=\mathcal{J}_{p}(y)=0\) is a stationary point of \(\varphi\) (equivalently, of \(\varphi^{p}\)), we have
\[0=\frac{d\varphi^{p}}{ds}\Big{|}_{s=0}=-p\int_{\Omega}y|y|^{p-2}dx\]
For such functions, the inequality was proved in [9].
Finally, in the case \(1<p<2\), the constant \(\pi_{p}\) can be obtained by the Riesz-Thorin interpolation theorem between \(L^{1}\) and \(L^{2}\).
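The constant (41) is easy to tabulate. The following Python sketch evaluates \(\pi_{p}\) and checks that the two non-trivial branches agree at \(p=2\) (both give \(1/\pi\)):

```python
import math

def pi_p(p):
    # Poincare constant pi_p from (41).
    if p == 1:
        return 0.5
    if p < 2:
        return math.pi ** (2.0 / p - 2.0) * 2.0 ** (1.0 - 2.0 / p)
    return p * math.sin(math.pi / p) / (2.0 * math.pi * (p - 1.0) ** (1.0 / p))

for p in (1.0, 1.5, 2.0, 3.0, 4.0):
    print(p, pi_p(p))
print(abs(pi_p(2.0) - 1.0 / math.pi) < 1e-15)   # both branches give 1/pi at p = 2
```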
As a result, we have
**Statement 5**.: _(Bramble-Hilbert lemma in \(W^{\sigma+1,p}(\Omega)\)) Let \(\Omega\subset\mathbb{R}^{m}\) be a convex bounded domain, \(p\geq 1\), \(y\in W^{\sigma+1,p}(\Omega)\). Then there exists polynomial \(P_{y}\) in \(m\) variables of degree at most \(\sigma\) such that_
\[|y-P_{y}|_{W^{\nu,p}(\Omega)}\leq c_{\sigma+1,\nu,p}\left(\mathrm{diam}(\Omega)\right)^{\sigma+1-\nu}|y|_{W^{\sigma+1,p}(\Omega)},\;\forall 0\leq\nu\leq\sigma,\]
_where_
\[c_{\sigma,\nu,p}\leq\pi_{p}^{\sigma-\nu}\binom{m+\nu-1}{\nu}\frac{\left(\left[\frac{\sigma-\nu}{m}\right]!\right)^{\frac{m}{p}}}{\left((\sigma-\nu)!\right)^{\frac{1}{p}}} \tag{42}\]
\(\pi_{p}\) _is defined in (41)._
Proof.: As in [26], we define polynomials \(P_{y,\sigma}\),...,\(P_{y,0}=:P_{y}\) recursively
\[P_{y,\sigma}:=\sum_{\iota\in\mathbb{N}^{m},\,|\iota|=\sigma}\frac{1}{\iota!}x ^{\iota}\mathcal{J}_{p}(\partial^{\iota}y)\]
and
\[P_{y,\nu-1}:=P_{y,\nu}+\sum_{\iota\in\mathbb{N}^{m},\,|\iota|=\nu-1}\frac{1}{\iota!}x^{\iota}\mathcal{J}_{p}\left(\partial^{\iota}(y-P_{y,\nu})\right),\;\sigma\geq\nu\geq 1.\]
As in [26], with properties of \(\mathcal{J}_{p}\), one can show that
\[\partial^{\xi}(P_{y,\sigma-\bar{\sigma}})=P_{y,\sigma-\bar{\sigma}-\nu}( \partial^{\xi}y)\]
and
\[\mathcal{J}_{p}\left(\partial^{\xi}(y-P_{y,\sigma-\bar{\sigma}}(y))\right)=0,\]
for all \(y\in W^{\bar{\sigma},p}(\Omega)\), \(0\leq\nu\leq\tilde{\sigma}\), and \(\xi\in\mathbb{N}^{m}\) with \(|\xi|:=\xi_{1}+\cdots+\xi_{m}=\nu\).
The rest of the proof is similar to [26].
As a corollary, arguing exactly as in [5], one can obtain a theorem on the existence of a neural network approximation in \(W^{\sigma,p}(\Omega)\).
**Theorem 6**.: _Let \(p\geq 1\), \(m\geq 2\), \(\sigma\geq 3\), \(\delta>0\), \(a_{j},b_{j}\in\mathbb{Z}\), \(a_{j}<b_{j}\), for \(1\leq j\leq m\), \(\Omega=\prod_{j=1}^{m}[a_{j},b_{j}]\) and \(y\in W^{\sigma,p}(\Omega)\). Then for every \(N\in\mathbb{N}\) with \(N>5\) there exists a tanh neural network \(y_{\theta,N}\) with two hidden layers, one of width at most \(3\left\lceil\frac{\sigma}{2}\right\rceil\binom{\sigma+m-1}{m+1}+\sum_{j=1}^{m}(b_{j}-a_{j})(N-1)\) and another of width at most \(3\left\lceil\frac{m+2}{2}\right\rceil\binom{2m+1}{m+1}N^{m+1}\prod_{j=1}^{m}(b_{j}-a_{j})\), such that for \(\nu\in\{0,1,2\}\) it holds that_
\[\|y-y_{\theta,N}\|_{W^{\nu,p}(\Omega)}\leq 2^{\nu}3^{m}\mathcal{A}_{\nu,\sigma,m,y}(1+\delta)\ln^{\nu}\left(\mathcal{B}_{\nu,\delta,m,y}N^{m+\sigma+2}\right)N^{-\sigma+\nu}, \tag{43}\]
_where_
\[\mathcal{B}_{\nu,\delta,m,y}=\frac{5\cdot 2^{\nu m}\max\left\{\prod_{j=1}^{m }(b_{j}-a_{j}),m\right\}\max\{\|y\|_{W^{\nu,\infty}(\Omega)},1\}}{3^{m}\delta \min\{1,\mathcal{A}_{\nu,\sigma,m,y}\}}, \tag{44}\]
_and_
\[\mathcal{A}_{\nu,\sigma,m,y}=\max_{0\leq l\leq\nu}c_{\sigma,l,p}\left(3\sqrt{ m}\right)^{\sigma-l}|y|_{W^{\sigma,p}(\Omega)}, \tag{45}\]
_where \(c_{\sigma,l,p}\) is defined in (42)._
_Furthermore, the weights of \(y_{\theta,N}\) scale as \(O\left(N^{\max\left\{\frac{\sigma^{2}}{2},m\left(1+\frac{\sigma}{2}+\frac{m}{ 2}\right)\right\}}\right)\)._
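For orientation, the widths of the two hidden layers in Theorem 6 are explicit functions of \(m\), \(\sigma\), \(N\) and the box sides. A small Python sketch evaluating them for hypothetical sizes:

```python
import math

def tanh_widths(m, sigma, N, sides):
    # Widths of the two hidden layers from Theorem 6; `sides` = [b_j - a_j for j = 1..m].
    w1 = 3 * math.ceil(sigma / 2) * math.comb(sigma + m - 1, m + 1) + sum(sides) * (N - 1)
    w2 = 3 * math.ceil((m + 2) / 2) * math.comb(2 * m + 1, m + 1) * N ** (m + 1) * math.prod(sides)
    return w1, w2

print(tanh_widths(m=3, sigma=3, N=10, sides=[1, 1, 1]))
```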
## 5 Conclusion
Hence, the PINN error estimation technique applied in the papers of S. Mishra, R. Molinaro, et al. is a powerful tool that can be used for a wide range of PDEs.
|
2304.06007 | GPr-Net: Geometric Prototypical Network for Point Cloud Few-Shot
Learning | In the realm of 3D-computer vision applications, point cloud few-shot
learning plays a critical role. However, it poses an arduous challenge due to
the sparsity, irregularity, and unordered nature of the data. Current methods
rely on complex local geometric extraction techniques such as convolution,
graph, and attention mechanisms, along with extensive data-driven pre-training
tasks. These approaches contradict the fundamental goal of few-shot learning,
which is to facilitate efficient learning. To address this issue, we propose
GPr-Net (Geometric Prototypical Network), a lightweight and computationally
efficient geometric prototypical network that captures the intrinsic topology
of point clouds and achieves superior performance. Our proposed method, IGI++
(Intrinsic Geometry Interpreter++) employs vector-based hand-crafted intrinsic
geometry interpreters and Laplace vectors to extract and evaluate point cloud
morphology, resulting in improved representations for FSL (Few-Shot Learning).
Additionally, Laplace vectors enable the extraction of valuable features from
point clouds with fewer points. To tackle the distribution drift challenge in
few-shot metric learning, we leverage hyperbolic space and demonstrate that our
approach handles intra and inter-class variance better than existing point
cloud few-shot learning methods. Experimental results on the ModelNet40 dataset
show that GPr-Net outperforms state-of-the-art methods in few-shot learning on
point clouds, achieving utmost computational efficiency that is $170\times$
better than all existing works. The code is publicly available at
https://github.com/TejasAnvekar/GPr-Net. | Tejas Anvekar, Dena Bazazian | 2023-04-12T17:32:18Z | http://arxiv.org/abs/2304.06007v1 | # GPr-Net: Geometric Prototypical Network
###### Abstract
In the realm of 3D-computer vision applications, point cloud few-shot learning plays a critical role. However, it poses an arduous challenge due to the sparsity, irregularity, and unordered nature of the data. Current methods rely on complex local geometric extraction techniques such as convolution, graph, and attention mechanisms, along with extensive data-driven pre-training tasks. These approaches contradict the fundamental goal of few-shot learning, which is to facilitate efficient learning. To address this issue, we propose GPr-Net (Geometric Prototypical Network), a lightweight and computationally efficient geometric prototypical network that captures the intrinsic topology of point clouds and achieves superior performance. Our proposed method, IGI++ (Intrinsic Geometry Interpreter++), employs vector-based hand-crafted intrinsic geometry interpreters and Laplace vectors to extract and evaluate point cloud morphology, resulting in improved representations for FSL (Few-Shot Learning). Additionally, Laplace vectors enable the extraction of valuable features from point clouds with fewer points. To tackle the distribution drift challenge in few-shot metric learning, we leverage hyperbolic space and demonstrate that our approach handles intra and inter-class variance better than existing point cloud few-shot learning methods. Experimental results on the ModelNet40 dataset show that GPr-Net outperforms state-of-the-art methods in few-shot learning on point clouds, achieving utmost computational efficiency that is \(170\times\) better than all existing works. The code is publicly available at [https://github.com/TejasAnvekar/GPr-Net](https://github.com/TejasAnvekar/GPr-Net).
## 1 Introduction
The domain of computer vision has witnessed a remarkable surge in the significance of 3D data processing, with point cloud data emerging as a prominent representation obtained via real-time acquisition using LiDAR scanners. Point cloud object classification plays a critical role in several applications such as indoor SLAM [44], robotics [27], and autonomous vehicles [18], facilitating efficient navigation and decision-making. Deep learning-based techniques [24][22] have revolutionized 3D point cloud classification by enabling the extraction of representative features from shape projections [32] or raw points, thereby enhancing performance compared to traditional handcrafted feature-based methods [26][42].
Despite the recent progress in geometric deep learning, the need for large amounts of labeled training data remains a significant challenge, both in terms of cost and practicality [25]. While self-supervised approaches [1][41], data augmentation [15][29], and regularisation [22] techniques have helped alleviate the aforementioned issue, they may not perform well on new tasks or unseen classes without sufficient labeled training data. This has led to a growing demand for methods that enable geometric deep networks
Figure 1: We demonstrate Laplace vectors, a simple yet effective geometric signature that captures the statistics of group deviations, facilitating the abstraction of edges and corners in point clouds. The efficacy of Laplace vectors is highlighted in the visualization of an airplane and a tetrahedron. In the airplane, Laplace vectors capture the uni-directed high group deviation at the ends of the wings, indicating a sudden change or an edge in local topology. Similarly, the tetrahedron exhibits high and uniform group deviation at its ends, indicating a corner.
to quickly adapt to novel settings with limited labeled data, much like humans who can learn new concepts with only a few examples by drawing on prior knowledge and inductive bias [3].
To address this challenge, few-shot learning (FSL) techniques [30][33][9] have shown remarkable progress in 2D visual understanding tasks such as image classification [12], object detection [11], and semantic segmentation [7]. However, FSL on 3D data is still in its nascent stages and presents unique challenges. Previous approaches to 3D-FSL [28][8][39] have focused on determining the best FSL algorithm, network design, and deep learning methodology, often relying on complex pre-training tasks or intricate deep learning modules. These approaches may not effectively capture the human-inspired characteristics that researchers aim to incorporate, leading to limited generalization.
Towards addressing the aforementioned challenges, we introduce a novel 3D-FSL approach, the Geometric Prototypical Network (Gpr-Net), which leverages geometric priors to achieve fast and efficient point cloud few-shot learning. Unlike conventional approaches that rely on complex pre-training [28] procedures or sophisticated deep learning modules [39], Gpr-Net is engineered to transfer geometric prior knowledge directly to novel tasks with minimal training. To capture these valuable geometric priors for 3D-FSL, we propose Intrinsic Geometry Interpreters++ (IGI++), which efficiently captures the local intrinsic topology of the point cloud using the IGI features inspired by the VG-VAE [2]. Additionally, we propose Laplace vectors to extract abstract information about the edges and corners present in point clouds. The coherently combined intrinsic and Laplace vectors of IGI++ provide a comprehensive representation of the crucial geometric properties for few-shot learning on point clouds as shown in Figure 1. Furthermore, we address the distribution shift in prototypical networks by mapping our geometric priors to the Hyperbolic metric. Extensive experiments on the ModelNet40 dataset demonstrate the superiority of Gpr-Net in few-shot learning on point clouds compared to state-of-the-art methods. GPr-Net achieves up to \(170\times\) fewer parameters that facilitate faster performance and a 5% increase in accuracy compared to related works.
Our contributions can be summarized as:
* We propose Gpr-Net: a lightweight Geometric Prototypical Network designed for fast and efficient point cloud few-shot learning.
* We propose an Intrinsic Geometry Interpreters++ (IGI++) to cohere intrinsic and high-frequency geometric signatures of a point cloud which comprises the following modules: 1) an Intrinsic Geometric Interpreter (IGI) to efficiently capture the local topology of the point cloud; 2) our proposed novel Laplace vectors to capture the abstraction of edges and corners in point clouds.
* We propose employing the Hyperbolic / poincare metric to mitigate the challenge of distribution shift in prototypical networks.
* We demonstrate the impact of our derived geometric signatures on ModelNet40 and outperforms existing state-of-the-art few-shot learning techniques by 5% in accuracy with \(170\times\) fewer parameters.
## 2 Related Works
**Point Cloud Analysis** has been revolutionized by deep learning models for 3D point cloud classification by allowing us to learn more intricate and representative features. Unlike traditionally handcrafted methods [26][42], these models can learn these features without any human intervention. There are two types of deep learning methods: projection-based and point-based networks. Projection-based [32] networks first transform irregular points into a structured representation such as voxel [43] or lattices and then use standard convolution neural networks to extract view-wise or structural features. However, they may encounter explicit information loss or higher memory consumption. Point-based methods have become more popular, exemplified by the likes of PointNet [24] and other approaches like PointNet++ [25] and PointCNN [16], and DGCNN [34], Point-MLP [20], HyCoRe [22], EDC-Net [5], DCG-Net [4], which utilize convolution or graph-based networks to achieve state-of-the-art performance. While these deep learning methods require a significant amount of annotated data, their generalization capabilities on novel classes during training may be limited. This limitation could potentially be a subject of future research.
**Few-shot learning** (FSL) has emerged as a crucial field in machine learning that aims to overcome the limitations of traditional supervised learning methods, which require large labeled datasets to generalize to new tasks. To achieve this, several approaches have been proposed, including Prototypical Networks [30], which introduced the concept of prototypes for few-shot classification, and Relation Networks [33], which proposed a novel architecture that captures relations between different instances to improve accuracy. Model-Agnostic Meta-Learning (MAML) [9] takes a meta-learning approach to few-shot classification, learning an initialization of the model that can be quickly adapted to new tasks with only a few labeled examples. Recent research has tackled the challenge of 3D point cloud learning with limited training data. Sharma et al. [28] explore feature representation through self-supervision, while LSSB [31] aim to
learn a discriminative embedding space for 3D model multi-view images. However, authors in Enrich-Features [8] proposed a novel few-shot point cloud classification paradigm that effectively combines current fully supervised methods. This approach utilizes feature fusion and channel-wise attention to improve feature learning accuracy. These works represent important strides in addressing the challenge of 3D point cloud learning with limited data.
**Hyperbolic Metric Learning** embeds hierarchical structures with low distortion [23]. It has been used for non-Euclidean manifolds in various representation learning frameworks. Early on, it was used for natural language processing. Hyperbolic neural network [10] layers have been shown to be better than Euclidean ones, and hyperbolic variants have been explored for images and graphs [19]. Euclidean embeddings are insufficient for complex visual data. Hyperbolic Image Embeddings [12] address this by capturing hierarchical relationships with negative curvature, improving few-shot classification accuracy on benchmarks like miniImageNet [6] and CUB [35]. HyCoRe [22] introduced a new method for using hyperbolic space embeddings to capture the part-whole hierarchy of 3D objects in point clouds. This approach significantly improves the performance of point cloud classification models. To the best of our knowledge, no research has explored using hyperbolic representations for few-shot learning of point clouds, despite their inherent hierarchical structure and ability to mitigate distribution drift of Prototypical networks. There exists a need for learning hyperbolic embeddings to capture the compositional nature of 3D objects with can facilitate the capture tree-like geometric hierarchy of the data, making them a superior prior for 3D-FSL.
Our method for point cloud few-shot learning introduces a novel approach that utilizes geometric signatures and Hyperbolic space to improve performance. It is distinguished from existing methods by its lightweight, fast, and pragmatic nature, requiring only a few episodes to train the challenging few-shot classification task.
## 3 GPr-Net
We present GPr-Net, a lightweight Geometric Prototypical Network designed for fast and efficient point cloud few-shot learning. By leveraging intrinsic geometric features, GPr-Net captures abstract information necessary for superior few-shot learning. Our proposed Intrinsic Geometry Interpreters++ (IGI++) extracts fundamental features like local topology, edges, and corners, while a single fully connected layer maps the aggregation of geometric features to a higher dimensional point for episodic few-shot classification. Furthermore, we enhance our model's performance by incorporating Hyperbolic space that yields sharp logits for few-shot learning, as depicted in Figure 2. Unlike previous methods, GPr-Net relies on statistical geometric features and is trained using a few-shot paradigm to simulate real-life scenarios.

Figure 2: We present an overview of the proposed GPr-Net framework, which processes point clouds in a few-shot episodic paradigm using the proposed IGI [2] and Laplace vectors to generate geometric feature sets. These features are then mapped to a higher dimensional permutation invariant feature using the symmetric operation \(\mathcal{A}\) and a single Multilayer Perceptron (MLP) \(f_{\theta}\). The Prototypical network \(f_{\theta}\) utilizes the support and query geometric embeddings \(\widetilde{L}(\Psi(x_{s}))=\mathcal{S}_{e}\) and \(\widetilde{L}(\Psi(x_{q}))=\mathcal{Q}_{e}\) to predict few-shot labels. To overcome the distribution drift challenge in Prototypical Networks, we employ the Hyperbolic distance instead of the Euclidean one. For more details, please refer to Section 3.2.2.
### Notations and Strategies
Let \(P\) denote a point cloud such that \(P=\{p_{1},...,p_{n}\}\), where \(p_{i}\in\mathbb{R}^{d}\) and \(n\) is the total number of points. Inspired by the episodic paradigm of the Few-Shot Classification (FSL) task on images [12], we incorporate a similar algorithm with minimal changes for point cloud FSL. The train set \(D_{train}\) and test set \(D_{test}\) are designed such that their categories satisfy \(D_{train}\cap D_{test}=\emptyset\).
FSL is optimized over episodes, each containing a pair of \(K\)-way, \(N\)-shot support and query sets. \(N_{\mathcal{S}}\) samples are drawn from each of \(K\) categories randomly selected from the \(N_{C}\) classes to form the support set \(\mathcal{S}=\{(P_{s}^{1},y_{s}^{1}),...(P_{s}^{N_{\mathcal{S}}\times K},y_{s}^{N_{\mathcal{S}}\times K})\}\). The remaining \(N_{\mathcal{Q}}\) samples per category form the query set \(\mathcal{Q}=\{(P_{q}^{1},y_{q}^{1}),...(P_{q}^{N_{\mathcal{Q}}\times K},y_{q}^{N_{\mathcal{Q}}\times K})\}\). The goal is to predict \(y_{q}^{i}\) via a model \(f_{\theta}(\mathcal{S},\mathcal{Q})\) by utilizing only the support set labels \(y_{s}^{i}\).
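A minimal Python sketch of this episodic sampling (an illustration with a simple in-memory list of (point cloud, label) pairs; the names are ours, not the authors' code):

```python
import random
from collections import defaultdict

def sample_episode(dataset, K, N_support, N_query):
    """dataset: list of (point_cloud, label). Returns K-way support/query sets."""
    by_class = defaultdict(list)
    for pc, y in dataset:
        by_class[y].append(pc)
    ways = random.sample(sorted(by_class), K)          # K randomly selected categories
    support, query = [], []
    for episode_label, c in enumerate(ways):           # relabel classes 0..K-1 per episode
        clouds = random.sample(by_class[c], N_support + N_query)
        support += [(pc, episode_label) for pc in clouds[:N_support]]
        query += [(pc, episode_label) for pc in clouds[N_support:]]
    return support, query
```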
### Network Design
We advocate using Prototypical Networks [30] as the method of choice for point cloud few-shot learning due to their simplicity and the remarkable generalization of metric learning. Prototypical Networks rely largely on network design, as it plays a key role in initializing representation metrics that significantly enhance the performance of few-shot learning. Our network design comprises two fundamental components: geometric feature extraction using the proposed IGI++ and metric learning with a single fully connected MLP, both of which synergistically facilitate point cloud few-shot learning.
#### 3.2.1 Intrinsic Geometry Interpreter++
The FSL hypothesis of our network is notably aided by learning the basic local geometric interpreters of IGI++, which incorporate the Intrinsic Geometry Interpreters \(\Psi\) and Laplace vectors \(\vec{L}\). Following the IGI proposed in VG-VAE [2], we introduce IGI++ and add group deviation vectors to facilitate the extraction of essential topological features, as shown in Figure 3. Laplace vectors play a crucial role in capturing abstract information related to edges and corners of point clouds, which are essential for few-shot learning.
**Intrinsic Geometry Interpreter \(\Psi\)** is a set of basic local geometric features that are fast to compute and capture the local intrinsic topology of the point cloud. \(\Psi\) of a point cloud \(P\) is given by:
\[\Psi=\begin{cases}p_{i}=x,y,z;&p_{i}\in\mathbb{R}^{3},\\ \vec{e_{1}}=\frac{p_{j1}-p_{i}}{||\vec{e_{1}}||_{2}};&\vec{e_{1}}\in\mathbb{R}^{3},\\ \vec{e_{2}}=\frac{p_{j2}-p_{i}}{||\vec{e_{2}}||_{2}};&\vec{e_{2}}\in\mathbb{R}^{3},\\ \hat{n}=\vec{e_{1}}\times\vec{e_{2}};&\hat{n}\in\mathbb{R}^{3},\\ \vec{s}=\mathrm{std}(p_{j});&\vec{s}\in\mathbb{R}^{3},\end{cases} \tag{1}\]
where \(\vec{e_{1}}\) and \(\vec{e_{2}}\) are the edge (relative-position) vectors from a given point (position vector) \(p_{i}\) to its two nearest neighbours \(p_{j1}\) and \(p_{j2}\), \((|\vec{e_{1}}|,|\vec{e_{2}}|)\) are the edge lengths, \(\hat{n}\) is the normal at \(p_{i}\), and \(\vec{s}\) is the group deviation vector, as illustrated in Figure 3. Our proposed \(\Psi\in\mathbb{R}^{15}\) captures superior geometric features thanks to the group deviation vector \(\vec{s}\), unlike Enrich Features [8] where \(\Psi\in\mathbb{R}^{14}\). The relative positions and normals, together with the position vector, capture the local geometric information that aids point cloud FSL. However, to capture abstract information such as edges and corners, we propose Laplace vectors, which extract high-frequency information to further improve our network's performance.
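A rough NumPy sketch of how these per-point features can be assembled is given below; it is our own illustration, and the neighbourhood used for the deviation vector \(\vec{s}\) (here the same two-neighbour group) is an assumption that may differ from the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def igi_features(P):
    """15-dim IGI features per point: [p, e1_hat, e2_hat, n_hat, s] (Eq. 1)."""
    tree = cKDTree(P)
    _, idx = tree.query(P, k=3)                       # each point + its two nearest neighbours
    e1 = P[idx[:, 1]] - P
    e2 = P[idx[:, 2]] - P
    e1 /= np.linalg.norm(e1, axis=1, keepdims=True) + 1e-8
    e2 /= np.linalg.norm(e2, axis=1, keepdims=True) + 1e-8
    n = np.cross(e1, e2)                              # per-point normal estimate
    s = np.std(P[idx], axis=1)                        # deviation over the small neighbourhood
    return np.concatenate([P, e1, e2, n, s], axis=1)  # (n_points, 15)
```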
**Laplace Vectors \(\vec{L}\)** are simple yet effective geometric signatures that capture the distribution, magnitude, and direction of the query-to-group deviation, facilitating the extraction of abstract information about edges and corners in a point cloud, as shown in Figure 2. Laplace vectors are given by:
Footnote 1: We acknowledge Ruben Wiersma for assisting us with these vector renderings.
\[\vec{L}=p_{i}\bigoplus\frac{1}{K}\sum_{j=0}^{K}(p_{j}-p_{i}) \tag{2}\]
Figure 3: Illustration of proposed framework Intrinsic Geometry Interpreters++ (IGI++). **Left**: \(\Psi\) depicts computation of IGI features (normal \(\hat{n}\), std vector \(\vec{s}\), and edge vectors \(\vec{e_{1}},\vec{e_{2}}\)) using query point \(p_{i}\) and two nearest neighbour \(p_{j1},p_{j2}\). **Right**: depicts computation of Laplace vectors \(\vec{L}\) using query point \(p_{i}\) and neighbour points \(p_{j1\to k-1}\).
where \(p_{j}\) are the local neighbours of \(p_{i}\) in a given k-NN (k-nearest-neighbours) grouping, \(p_{i}\in\mathbb{R}^{3}\), \(\vec{L}\in\mathbb{R}^{6}\), and \(\oplus\) is the concatenation operation. The computation of Laplace vectors is depicted in Figure 3. Laplace vectors allow us to operate at lower point density since they capture changes in local neighbourhoods. We propose to extract 30-dimensional Laplace vectors of the 15-dimensional \(\Psi\), i.e. \(\vec{L}(\Psi)\), capturing superior geometric features as shown in Figure 1 and facilitating point cloud FSL.
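The corresponding sketch for Eq. 2 is shown below (again our own illustration): neighbourhoods are found on the raw coordinates, and applying the same concatenation to the 15-dimensional \(\Psi\) yields the 30-dimensional \(\vec{L}(\Psi)\) descriptors.

```python
import numpy as np
from scipy.spatial import cKDTree

def laplace_vectors(F, P, k=40):
    """Concatenate per-point features F with the mean offset of their k nearest neighbours (Eq. 2)."""
    tree = cKDTree(P)
    _, idx = tree.query(P, k=k + 1)                   # index 0 is the point itself
    mean_offset = F[idx[:, 1:]].mean(axis=1) - F      # (1/k) * sum_j (F_j - F_i)
    return np.concatenate([F, mean_offset], axis=1)   # (n_points, 2 * F.shape[1])
```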
#### 3.2.2 Metric Learning
Metric learning is essential for few-shot learning since it enables the computation of similarity metrics with limited labeled data [30][33]. To achieve this, we aim to learn a distance metric that can robustly compare the similarity between examples. In contrast to Euclidean spaces, hyperbolic spaces offer unique properties, such as exponential growth of volume with distance, that allow for more effective modeling of complex hierarchical structures. We therefore use the Poincare ball model to embed features in hyperbolic space and perform efficient computations such as distance measurements and gradient updates. This is particularly useful in few-shot learning scenarios with limited labeled data and complex hierarchical structures [12]. The hyperbolic distance accounts for hierarchical relations between support and query examples, leading to improved discrimination, as shown in Figure 2. The distance between two points \(x,y\) with curvature \(\kappa\) in the hyperbolic (Poincare) manifold \(\mathbb{P}_{\kappa}\) is given by:
\[d_{\mathbb{P}_{\kappa}}(x,y)=\tfrac{1}{\sqrt{-\kappa}}\operatorname{arcosh}\Bigg{(}\tfrac{-2\kappa\|x-y\|_{2}^{2}}{(1+\kappa\|x\|_{2}^{2})(1+\kappa\|y\|_{2}^{2})}+1\Bigg{)} \tag{3}\]
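In practice the paper relies on geoopt for hyperbolic operations; the NumPy transcription of Eq. 3 below is only illustrative and uses the convention \(\kappa<0\), whereas the ablation in Section 4.3 quotes the curvature magnitude.

```python
import numpy as np

def poincare_distance(x, y, kappa=-1.0, eps=1e-7):
    """Hyperbolic distance on the Poincare ball of curvature kappa < 0 (Eq. 3)."""
    c = -kappa
    sq = np.sum((x - y) ** 2, axis=-1)
    denom = (1.0 - c * np.sum(x ** 2, axis=-1)) * (1.0 - c * np.sum(y ** 2, axis=-1))
    arg = 1.0 + 2.0 * c * sq / np.maximum(denom, eps)
    return np.arccosh(np.maximum(arg, 1.0)) / np.sqrt(c)
```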
### Prototypical Classification
We transform support and query point clouds into geometric feature vectors \(\vec{L}(\Psi(p^{i}))\) as given by Equations 1 and 2. The geometric features are aggregated using a symmetric operation \(\mathcal{A}\) (max, mean, or sum) to obtain a permutation-invariant global feature. This invariant global feature is the input to a single fully connected MLP \(f_{\theta}\), which predicts the label of the query point cloud by computing the embedding distance \(d(\cdot,\cdot)\) between the prototypes \(\mu\) and the query embedding \(f_{\theta}\big{(}\vec{L}(\Psi(p^{i}_{q}))\big{)}\), where \(\mu\) is the mean of the support embeddings \(f_{\theta}\big{(}\vec{L}(\Psi(p^{i}_{s}))\big{)}\), as depicted in Algorithm 1. To ensure accurate predictions, we normalize the distances into a probability distribution [40] and compute the cross-entropy loss [14] between this distribution and the ground-truth labels. This facilitates the optimization of the network and improves its ability to perform few-shot learning on point clouds.
```
Input: \(\mathcal{D}=\{(p_{1},y_{1}),\ldots,(p_{N},y_{N})\}\), \(d(\cdot)\)  /* each \(y_{i}\in\{1,\ldots,K\}\); \(d(\cdot)\) is the distance metric (Euclidean or hyperbolic, Eq. (3)) */
Output: the loss \(J\) for a randomly generated training episode
1  \(V\leftarrow\) RANDOMSAMPLE(\(\{1,\ldots,K\}\), \(N_{C}\))
2  for \(k\) in \(V\) do
3      \(\mathcal{S}_{k}\leftarrow\) RANDOMSAMPLE(\(\mathcal{D}_{k}\), \(N_{\mathcal{S}}\))
4      \(\mathcal{Q}_{k}\leftarrow\) RANDOMSAMPLE(\(\mathcal{D}_{k}\setminus\mathcal{S}_{k}\), \(N_{\mathcal{Q}}\))
5      \(\mu_{k}\leftarrow\tfrac{1}{N_{C}}\sum_{(p_{i},y_{i})\in\mathcal{S}_{k}}f_{\phi}(p_{i})\)  /* class prototype; \(f_{\phi}(\cdot)=f_{\phi}(\vec{L}(\Psi))\), Eqs. (1)-(2) */
6  \(J\leftarrow 0\)
7  for \(k\) in \(V\) do
8      for \((p,y)\) in \(\mathcal{Q}_{k}\) do
9          \(J\leftarrow J+\tfrac{1}{N_{C}N_{\mathcal{Q}}}\Big{[}d(f_{\phi}(p),\mu_{k})+\log\sum_{k^{\prime}}\exp\big{(}-d(f_{\phi}(p),\mu_{k^{\prime}})\big{)}\Big{]}\)
```
**Algorithm 1** Training episode loss computation
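A compact PyTorch rendering of the episode loss above is sketched below; it is our own simplification, assumes the embeddings have already been produced by \(f_{\theta}\), and takes the distance function as an argument (`torch.cdist` for the Euclidean variant, or a Poincare distance, e.g. from geoopt, for the hyperbolic one).

```python
import torch

def episode_loss(support_emb, support_y, query_emb, query_y, dist_fn):
    """Prototypical loss: cross-entropy over the softmax of negative prototype distances."""
    classes = torch.unique(support_y)                                              # K episode classes
    protos = torch.stack([support_emb[support_y == c].mean(0) for c in classes])   # (K, D)
    logits = -dist_fn(query_emb, protos)                                           # (K*N_Q, K)
    target = torch.stack([(classes == y).nonzero()[0, 0] for y in query_y])
    return torch.nn.functional.cross_entropy(logits, target)
```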
## 4 Experiments
In this section, we investigate the effectiveness of the topological point embeddings generated by our classifier \(f_{\theta}\) for few-shot 3D object classification on the ModelNet40 dataset [37]. ModelNet40 encompasses 40 object categories with a collection of 12,311 models. Note that our model is trained on an Nvidia GTX 1050ti GPU with PyTorch 1.11, and we use geoopt [13] for the hyperbolic operations.
### Few-Shot 3D object classification
We evaluate the impact of the proposed IGI++ in our network for point cloud few-shot learning. We report the mean and standard deviation of our results with 95% confidence scores across 6 experiments with different seeds for better reproducibility in Table 1. Unlike Enrich Features [8] and SS-FSL [28], our model is trained and tested in a few-shot setting, using only the coordinates (\(x,y,z\)) of each point. To compute Laplace vectors, the number of nearest neighbors is set to k = 40. The 30-dimensional Laplace vector is mapped to a 32-dimensional point using a single MLP \(f_{\theta}\) as explained in Section 3.3. We use the SGD (Stochastic Gradient Descent) optimizer for the Euclidean metric and RSGD [13] for the hyperbolic metric, with a momentum of 0.9 and weight decay of 0.0001, and a learning rate reduced from 0.1 to 0.001 through cosine annealing in Algorithm 1, for 50 epochs of 4 train and 300 test few-shot episodes. We ensure the FSL paradigm by maintaining \(D_{train}\cap D_{test}=\emptyset\): for each experiment, we randomly sample \(\mathcal{T}\) categories of data to form \(D_{train}\), and the remaining categories (without replacement) form \(D_{test}\); this meta-training strategy aids in understanding the true robustness of the proposed method in FSL. For our experiments, we randomly sampled \(\mathcal{T}=24\) categories for training and 16 for testing in ModelNet40, as suggested by [8]. Note that the results of all networks other than Enrich-Features [8] in Table 1 are taken from SS-FSL [28].
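For reference, the Euclidean training configuration stated above (SGD with momentum 0.9, weight decay 0.0001, and a cosine-annealed learning rate from 0.1 to 0.001 over 50 epochs) can be sketched as follows; the hyperbolic variant instead uses RSGD from geoopt, which we do not reproduce here.

```python
import torch

def make_optimizer(model, epochs=50):
    # SGD with momentum 0.9 and weight decay 1e-4, as described in the text
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs, eta_min=0.001)
    return opt, sched
```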
### Comparison with State-of-the-art Methods
Our novel GPr-Net not only outperforms existing methods that rely on data-heavy pre-training tasks like SS-FSL [28] or complex feature extractors [8][39] in terms of accuracy, but it also operates much faster. To demonstrate this, we compared our proposed backbone architecture to several open-source backbones and evaluated parameter counts, few-shot classification accuracy, and forward/backward-pass memory on the ModelNet40 dataset [37], as suggested by PointMLP [20]. For example, SS-FSL (DGCNN) is a cumbersome model that achieves impressive results with 1.82M parameters and a forward/backward pass of 53GB, as shown in Table 1. In contrast, our GPr-Net achieves state-of-the-art FSL accuracy on point clouds while maintaining only 1.24K parameters, which is 280 times fewer than SS-FSL, and a forward/backward pass of 50KB. This is particularly essential for applications like robotics and self-driving cars that require deploying these models efficiently.
Our results with the hyperbolic metric, presented in Table 1, demonstrate that our method surpasses Enrich-Features [8] by 5% for 5-way 10-shots and 3% for 10-way 10-shots. When compared to SS-FSL [28], our method achieves even larger improvements of 18% for 5-way 10-shots, 14% for 5-way 20-shots, 22% for 10-way 10-shots, and 21% for 10-way 20-shots. These results highlight the efficacy of our approach in addressing the challenging problem of few-shot learning for point clouds. _Notice that we achieve this with only 50 epochs, 4 training episodes, and only 512 points_. Remarkably, our method achieves the smallest standard deviation across 6 experiments with different random seeds, indicating its robustness and lack of bias towards a particular category.
### Ablation Studies
This section presents ablation studies to analyse the impact of different designs of our proposed module on few-shot point cloud classification.
**Significance of Laplace Vectors**\(\vec{L}\) is demonstrated on the left of Figure 4. The results show that GPr-Net with \(\vec{L}\) outperforms the variant without it at all point densities, as reported by the mean and standard-deviation accuracy of 6 experiments for 5-way 10-shot tasks on the hyperbolic variant of GPr-Net. The experiments were conducted with 128, 256, 512, and 1024 points, and the superior performance of the classifier with \(\vec{L}\) confirms the significance of extracting geometric features using Laplace vectors for effective few-shot learning on point clouds.
\begin{table}
\begin{tabular}{r c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Num Points**} & \multicolumn{2}{c}{**5-way**} & \multicolumn{2}{c}{**10-way**} & \multirow{2}{*}{**\#Params**} & \multirow{2}{*}{**F/B pass**} \\ & & **10-shots** & **20-shots** & **10-shots** & **20-shots** & & \\ \hline
**3D-GAN**[1] & 1024 & 55.80 \(\pm\) 10.68 & 65.80 \(\pm\) 09.90 & 40.25 \(\pm\) 06.49 & 48.35 \(\pm\) 05.59 & - & - \\
**Latent-GAN**[36] & 1024 & 41.60 \(\pm\) 16.91 & 46.20 \(\pm\) 19.68 & 32.90 \(\pm\) 09.16 & 25.45 \(\pm\) 09.90 & - & - \\
**PointCapsNet**[41] & 1024 & 42.30 \(\pm\) 17.37 & 53.00 \(\pm\) 18.72 & 38.00 \(\pm\) 14.30 & 27.15 \(\pm\) 14.86 & 2.15M & 39GB \\
**FoldingNet**[38] & 1024 & 33.40 \(\pm\) 13.11 & 35.80 \(\pm\) 18.19 & 18.55 \(\pm\) 06.49 & 15.44 \(\pm\) 06.82 & 0.67M & 5.7GB \\
**PointNet++**[25] & 1024 & 38.53 \(\pm\) 15.98 & 42.39 \(\pm\) 14.18 & 23.05 \(\pm\) 06.97 & 18.80 \(\pm\) 05.41 & 1.48M & 149GB \\
**PointCNN**[16] & 1024 & 65.41 \(\pm\) 08.92 & 68.64 \(\pm\) 07.00 & 46.60 \(\pm\) 04.84 & 49.95 \(\pm\) 07.22 & - & - \\
**PointNet**[25] & 1024 & 51.97 \(\pm\) 12.17 & 57.81 \(\pm\) 15.45 & 46.60 \(\pm\) 13.54 & 35.20 \(\pm\) 15.25 & 3.47M & 8.5GB \\
**DGCNN**[34] & 1024 & 31.60 \(\pm\) 08.97 & 40.80 \(\pm\) 14.60 & 19.85 \(\pm\) 06.45 & 16.85 \(\pm\) 04.83 & 1.82M & 53GB \\
**SS-FSL (PointNet)**[28] & 1024 & 63.20 \(\pm\) 10.72 & 68.90 \(\pm\) 09.41 & 49.15 \(\pm\) 06.09 & 50.10 \(\pm\) 05.00 & 3.47M & 8.5GB \\
**SS-FSL (DGCNN)**[28] & 1024 & 60.00 \(\pm\) 08.87 & 65.70 \(\pm\) 08.37 & 48.50 \(\pm\) 05.63 & 53.00 \(\pm\) 04.08 & 1.82M & 53GB \\
**Enrich-Features**[8] & 1024 & 76.69 \(\pm\) NA & **85.76 \(\pm\) NA** & 68.76 \(\pm\) NA & **80.72 \(\pm\) NA** & - & - \\ \hline
**GPr-Net (Euc)** & 1024 & 74.37 \(\pm\) 02.00 & 75.12 \(\pm\) 02.08 & 62.14 \(\pm\) 01.91 & 63.43 \(\pm\) 02.05 & **1.24K** & **50KB** \\
**GPr-Net (Hyp)** & 1024 & **80.40 \(\pm\) 00.55** & 81.99 \(\pm\) 00.91 & **70.42 \(\pm\) 01.80** & 72.83 \(\pm\) 01.78 & **1.24K** & **50KB** \\
**GPr-Net (Euc)** & 512 & 74.04 \(\pm\) 02.33 & 74.98 \(\pm\) 02.42 & 62.31 \(\pm\) 02.01 & 63.33 \(\pm\) 02.21 & **1.24K** & **50KB** \\
**GPr-Net (Hyp)** & 512 & **81.13 \(\pm\) 01.51** & **82.71 \(\pm\) 01.28** & **71.59 \(\pm\) 01.16** & **73.78 \(\pm\) 01.99** & **1.24K** & **50KB** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Few-shot classification results on the ModelNet40 [37] dataset. With only 512 points, our method GPr-Net (Hyp) achieves state-of-the-art accuracy on 5-way 10-shots and 10-way 10-shots, where Hyp represents the hyperbolic and Euc the Euclidean distance metric in Algorithm 1. Additionally, we provide parameter counts and forward/backward-pass evaluations for the methods whose source code was publicly available, using PyTorch-summary for a batch size of 150, corresponding to a 5-way 10-shot 20-query setting. The quantitative results (accuracies in \(\%\)) are highlighted in three styles: (1) **best**, (2) **second best**, (3) **third best**, and \(\pm\) denotes the mean and standard deviation of 6 experiments with different random seeds.
**Local Neighbourhood Size** further justifies the effectiveness of the Laplace vectors in our proposed GPr-Net for few-shot learning on point clouds: we conducted additional experiments to determine the appropriate value of k in k-NN for computing Laplace vectors. We aim to determine the value of k that best describes local topological changes such as edges and corners, which depend strongly on the size of the local neighborhood. In the center of Figure 4, we present our findings on the need for selecting an appropriate value of k. We report the mean and standard-deviation accuracy of 6 experiments for 5-way 10-shot and 5-way 20-shot tasks on the hyperbolic variant of GPr-Net. The experiments were conducted with k = 10, 20, 40, and 80 for 512 points in a point cloud. Our findings suggest that k = 40 is the optimal value for 512-point density in both the 10- and 20-shot cases.
**Influence of Curvature**\(\kappa\) is critical to the performance of hyperbolic metrics for point cloud few-shot learning. The negative curvature allows for more efficient space utilization, increasing the ability to distinguish between points. Our experiments show that increasing the curvature magnitude \(\kappa\) in Eq. 3 improves performance on few-shot learning tasks, as the embeddings better capture the similarities and differences between point clouds. However, as \(\kappa\) approaches zero, the hyperbolic space approaches a Euclidean space and the benefits of negative curvature are lost, as depicted on the right of Figure 4. Therefore, it is essential to find the optimal curvature for a given few-shot learning task to achieve the best results. Our findings indicate that \(\kappa=1.0\) is best suited for 5-way 10-shot and 5-way 20-shot tasks with 512 points, Laplace vectors, and k = 40.
**Performance Efficiency** of the proposed GPr-Net is evaluated by comparing it with CGNN [17] and CIA [39] in a 5-way 1-shot and 5-way 5-shot learning setting. Our method was trained for only 4 episodes, and we observe competitive performance with the other two models, as shown in Table 2. CGNN [17] and CIA [39] also aim to learn representations and relations between prototypes and query features by utilizing feature-matching methods such as graph neural networks or self-attention mechanisms, respectively. Although CIA [39] achieves state-of-the-art accuracy in the 5-way 1-shot and 5-shot settings, we still achieve the second-best result with significantly fewer parameters and faster training. Since the code for CIA [39] is not open-source, we were unable to make a direct comparison in terms of speed.
### Embedding Visualization
In the context of a _5way-10shot-50query_ setting on the ModelNet40 [37] dataset, we present a visualization of the features generated by our proposed GPr-Net. Specifically, we compare the features obtained using the hyperbolic/Poincare metric with those obtained using the Euclidean metric. The left-hand side of Figure 5 corresponds to the former, while the right-hand side depicts the latter.
It is worth noting that there exists a significant distribution shift between the support and query features in this setting.
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**5-way**} \\ \cline{2-3} & **1-shot** & **5-shots** \\ \hline
**CGNN [17]** & - & 76.85 \(\pm\) NA \\
**CIA [39]** & **75.70 \(\pm\) 0.74** & **87.15 \(\pm\) 0.47** \\ \hline
**GPr-Net (Euc)** & 64.12 \(\pm\) 0.73 & 74.56 \(\pm\) 1.03 \\
**GPr-Net (Hyp)** & 67.91 \(\pm\) 1.07 & 79.09 \(\pm\) 0.97 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The comparison of our proposed GPr-Net with state-of-the-art methods CGNN [17] and CIA [39] in a 5-way 1-shot and 5-way 5-shot learning setting. The results demonstrate the competitive performance of our method with significantly fewer parameters and faster training speed. Although CIA [39] achieves state-of-the-art accuracy in both settings, our method achieved the second-best result, trained with only 4 episodes.
Figure 4: The **Left**: compares the few-shot accuracy of GPr-Net with and without Laplace vectors \(\vec{L}\) as the point density varies. The results indicate that incorporating Laplace vectors leads to superior performance. The **Center**: impact of k-Nearest Neighbors (k-NN) is analyzed by varying k on GPr-Net with Laplace vectors. It is observed that k=40 yields the best results for 5-way 10 and 20-shots. Finally, the **Right**: investigates the effect of hyperbolic curvature on few-shot accuracy. GPr-Net with \(\kappa\to 1\) hyperbolic metric outperforms the Euclidean metric with \(\kappa\to 0\) for both 5-way 10 and 20-shots. Further details are provided in Section 4.3.
However, our proposed hyperbolic metric helps to mitigate the challenge of inter-class ambiguity by causing the features to move towards the boundary of the Poincare manifold. This allows for better differentiation between classes, leading to more accurate predicted labels.
### Limitations
Notwithstanding the promising results achieved in our study, we acknowledge certain limitations. First, because our approach incorporates a hyperbolic projection, it is currently hard to perform part segmentation on per-point embeddings. This is a known limitation also faced by other works in the field, such as HyCoRe [22]. As a result, we were unable to apply our method to datasets that require part segmentation, limiting the scope of our study. Another limitation is the method's susceptibility to noise due to the use of k-NN for local grouping. Therefore, we exclude certain real-world datasets, such as the Sydney and ScanObjectNN datasets, which contain large amounts of noise.
Despite these limitations, we believe that our study offers valuable insights into few-shot learning in point cloud settings. We hope that our findings will inspire further research to address these limitations and lead to the development of more robust and effective methods for few-shot learning in point clouds.
## 5 Conclusions
In this work, we have proposed a new perspective on point cloud few-shot learning by challenging the assumption that complex network designs and training strategies are necessary. Our proposed lightweight Geometric Prototypical Network, GPr-Net, leverages simple yet effective geometric signatures, including the Intrinsic Geometry Interpreter and Laplace vectors, to efficiently capture the intrinsic topology of point clouds and achieve superior performance on the ModelNet40 dataset. Additionally, employing a hyperbolic/Poincare metric for learning 3D few-shot features further improves the effectiveness of our approach. Our experimental results demonstrate that GPr-Net outperforms state-of-the-art point cloud few-shot learning techniques with 5% higher accuracy and \(170\times\) fewer parameters than all existing works.
## 6 Broader Impact
The focus of this study is on local geometric features and selecting appropriate metric space that can enhance the performance of few-shot learning in point cloud settings, even when labeled data is limited.
Point cloud few-shot learning has the potential to impact a wide range of fields. For example, in robotics it can accelerate the training of robots to recognize new objects from a small number of labeled examples, resulting in faster and more cost-effective learning. In architecture and engineering, it can facilitate the design of complex structures with limited labeled data by enabling shape analysis and thermal analysis. Additionally, it can assist in environmental studies, such as land surveying, by making it easier to classify and analyze 3D point cloud data with minimal labels.
|
2310.05491 | Complete determination of $SU(3)_F$ amplitudes and strong phase in
$Λ_c^+ \to Ξ^0 K^+$ | The BESIII collaboration has recently reported the first time measurement of
the decay asymmetry $\alpha(\Lambda_c^+ \to \Xi^0 K^+) = 0.01 \pm 0.16(stat.)
\pm 0.03(syst.)$ and also a sizable phase shift of $\delta_P-\delta_S = -1.55
\pm 0.25$ or $1.59\pm 0.25$ between S- and P-wave amplitudes. This implies
significant strong phase shifts in the decay amplitudes. The strong phases
indicate the existence of rescattering or loop effects, which are challenging
to calculate due to non-perturbative effects. By employing the flavor $SU(3)_F$
symmetry and applying the K\"orner-Pati-Woo theorem to reduce the number of
parameters, we find that the current data already allow us to obtain, for the
first time, model-independent decay amplitudes and their strong phases. The
establishment of the existence of sizable strong phases opens a window for
future investigations into CP violation. In our fit, a notable discrepancy
emerges in the branching ratio of $\Xi_c^0 \to \Xi^- \pi^+$. The direct
relationship between $\Gamma (\Lambda_c^+ \to \Lambda e^+\nu_e)$ and $\Gamma
(\Xi_c^0 \to \Xi^- e^+\nu_e)$, along with newly discovered $SU(3)_F$ relations,
collectively suggests an underestimation of $\mathcal{B}(\Xi_c^0 \to \Xi^-
\pi^+)$ in experimental findings. | Chao-Qiang Geng, Xiao-Gang He, Xiang-Nan Jin, Chia-Wei Liu, Chang Yang | 2023-10-09T07:54:45Z | http://arxiv.org/abs/2310.05491v3 | # Strong phase in \(\Lambda_{c}^{+}\rightarrow\Xi^{0}K^{+}\) decay with flavor \(SU(3)_{F}\) symmetry
###### Abstract
The BESIII collaboration has recently reported the first measurement of the decay asymmetry \(\alpha(\Lambda_{c}^{+}\rightarrow\Xi^{0}K^{+})\), with a value of \(0.01\pm 0.16(stat.)\pm 0.03(syst.)\). In contrast, prior global analyses based on the flavor \(SU(3)_{F}\) symmetry with real decay amplitudes predict it to be close to one. The new measurement implies a significant strong phase shift between the S- and P-wave amplitudes. This calls for a new full analysis with complex decay amplitudes to accommodate strong phases. We find that such an analysis is possible at leading order with the help of color symmetry, which reduces the number of independent decay amplitudes. The establishment of sizable strong phases is an indispensable ingredient for future investigations into CP violation. While explaining the small value of \(\alpha(\Lambda_{c}^{+}\rightarrow\Xi^{0}K^{+})\), a notable discrepancy emerges in the branching ratio of \(\Xi_{c}^{0}\rightarrow\Xi^{-}\pi^{+}\) for a good overall fit. The direct relationship between \(\Gamma(\Lambda_{c}^{+}\rightarrow\Lambda e^{+}\nu_{e})\) and \(\Gamma(\Xi_{c}^{0}\rightarrow\Xi^{-}e^{+}\nu_{e})\), as well as newly discovered \(SU(3)_{F}\) relations, collectively indicate an underestimation of \(\mathcal{B}(\Xi_{c}^{0}\rightarrow\Xi^{-}\pi^{+})\) in experimental findings, which needs to be further tested with new data.
Recent results from the BESIII collaboration have reported \(\alpha(\Lambda_{c}^{+}\rightarrow\Xi^{0}K^{+})=0.01\pm 0.16(stat.)\pm 0.03(syst.)\)[1]. This supplements the previously established \(\mathcal{B}(\Lambda_{c}^{+}\rightarrow\Xi^{0}K^{+})=(0.55\pm 0.07)\%\)[2], highlighting the importance of this channel in deepening our understanding of baryon decays. In contrast, prior studies based on a global flavor \(SU(3)_{F}\) analysis predicted a value close to one, assuming real decay amplitudes [3; 4; 5; 6]. Moreover, the BESIII data also indicate a non-zero asymmetry parameter \(\beta(\Lambda_{c}^{+}\rightarrow\Xi^{0}K^{+})\), implying a strong phase shift between the S- and P-waves of \(\delta_{P}-\delta_{S}=-1.55\pm 0.25\)[1]. The large difference in central values is a concern and calls for a new theoretical understanding. In this work, we extend the \(SU(3)_{F}\) treatment with a more complete analysis that does not assume the decay amplitudes to be real.
\(\Lambda_{c}^{+}\rightarrow\Xi^{0}K^{+}\) is one of the weak decays of a charmed anti-triplet baryon \(T_{c\bar{3}}(\Xi_{c}^{0},\ \Xi_{c}^{+},\ \Lambda_{c}^{+})\) to a nonet pseudoscalar \(P(\pi^{0,\pm},\ K^{\pm},\ K^{0},\ \bar{K}^{0},\ \eta_{8};\ \eta_{1})\) and an octet charmless baryon \(\mathbf{B}(p,\ n,\ \Sigma^{0,\pm},\ \Xi^{0,-},\ \Lambda)\), \(T_{c\bar{3}}\rightarrow\mathbf{B}\ P\). First-principles calculations of the decay amplitudes for these decays are extremely difficult due to the low energy scale involved. A complete theoretical understanding of such decays must wait for a full lattice calculation. In the meantime, analyses of low-energy physics have been useful [7] with the help of the flavor \(SU(3)_{F}\) symmetry [8; 9].
The decay amplitude for an initial baryon \(\mathbf{B}_{i}\) to a final baryon \(\mathbf{B}_{f}\) and a meson \(P\), can be written as:
\[\mathcal{M}=\langle\mathbf{B}_{f}P|\mathcal{H}_{\mathrm{eff}}|\mathbf{B}_{i} \rangle=i\overline{u}_{f}\left(F-G\gamma_{5}\right)u_{i}\,, \tag{1}\]
where \(u_{i(f)}\) denotes the Dirac spinor for the initial (final) baryon and \(F\) (\(G\)) is the amplitude which violates (conserves) parity, associated with the S (P) partial wave. The decay width, \(\Gamma\), and the other decay observables are given by:
\[\Gamma =\frac{p_{f}}{8\pi}\frac{(M_{i}+M_{f})^{2}-M_{P}^{2}}{M_{i}^{2}} \left(|F|^{2}+\kappa^{2}|G|^{2}\right),\] \[\alpha =\frac{2\kappa\mathrm{Re}(F^{*}G)}{|F|^{2}+\kappa^{2}|G|^{2}}, \qquad\beta=\frac{2\kappa\mathrm{Im}(F^{*}G)}{|F|^{2}+\kappa^{2}|G|^{2}},\] \[\gamma =\frac{|F|^{2}-\kappa^{2}|G|^{2}}{|F|^{2}+\kappa^{2}|G|^{2}}, \tag{2}\]
where \(M_{i,f}\) and \(M_{P}\) are the respective masses of \(\mathbf{B}_{i,f}\) and \(P\), \(\kappa=p_{f}/(E_{f}+M_{f})\), and \(p_{f}(E_{f})\) is the 3-momentum (energy) of \(\mathbf{B}_{f}\) in the rest frame of \(\mathbf{B}_{i}\). A non-zero value of \(\delta_{P}-\delta_{S}\) as indicated by BESIII data [1], would imply complex decay amplitudes \(F\) and \(G\).
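For concreteness, Eq. (2) can be transcribed directly into code; the sketch below is ours rather than the authors', and simply evaluates the observables from complex \(F\) and \(G\) and the relevant masses, with \(p_{f}\) obtained from two-body kinematics.

```python
import numpy as np

def decay_observables(F, G, M_i, M_f, M_P):
    """Width and asymmetry parameters of Eq. (2) from complex S-/P-wave amplitudes F, G."""
    # final-baryon momentum in the initial-baryon rest frame
    p_f = np.sqrt((M_i**2 - (M_f + M_P)**2) * (M_i**2 - (M_f - M_P)**2)) / (2.0 * M_i)
    E_f = np.sqrt(M_f**2 + p_f**2)
    kappa = p_f / (E_f + M_f)
    den = abs(F)**2 + kappa**2 * abs(G)**2
    Gamma = p_f / (8.0 * np.pi) * ((M_i + M_f)**2 - M_P**2) / M_i**2 * den
    alpha = 2.0 * kappa * (np.conj(F) * G).real / den
    beta = 2.0 * kappa * (np.conj(F) * G).imag / den
    gamma = (abs(F)**2 - kappa**2 * abs(G)**2) / den
    return Gamma, alpha, beta, gamma
```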
The effective Hamiltonian inducing a charmed anti-triplet baryon weak decay is given by [9]
\[\mathcal{H}_{eff}=\frac{G_{F}}{\sqrt{2}}\left[c_{+}\mathcal{H}(\mathbf{15})^{ij}_{k}+c_{-}\mathcal{H}(\overline{\mathbf{6}})^{ij}_{k}\right](\overline{q}_{i}q^{k})_{V-A}(\overline{q}_{j}c)_{V-A}\,, \tag{3}\]
where \(c_{\pm}\) are the Wilson coefficients and \((q_{1},q_{2},q_{3})=(u,d,s)\). The tensors \(\mathcal{H}(\overline{\bf 6})\) and \(\mathcal{H}(\mathbf{15})\) denote the \(SU(3)_{F}\) representations, satisfying \(\sum_{i}H(\mathbf{15})_{i}^{(ij)}=0\). When interchanging \(\overline{q}_{i}\) and \(\overline{q}_{j}\), the first and second terms are symmetric and antisymmetric, respectively. It follows that \(\overline{q}_{i}\) and \(\overline{q}_{j}\) are color-symmetric for the term originating from \(\mathcal{H}(\mathbf{15})\), whereas they are color-antisymmetric for the term from \(\mathcal{H}(\overline{\bf 6})\). The same also applies to \(q_{k}\) and \(c\). Here we have omitted \(\mathcal{H}(\overline{\mathbf{3}})=(0,0,V_{ub}V_{cb}^{*})\), which has a minimal impact on CP-even quantities.
Under the \(SU(3)_{F}\) symmetry, the decay amplitudes
can be decomposed into several invariant amplitudes as [3; 6]
\[F = \tilde{f}^{a}(P^{\dagger})^{l}_{l}{\cal H}(\overline{\bf 6})_{ij}T^{ik}_{c}({\bf B}^{\dagger})^{j}_{k}+\tilde{f}^{b}{\cal H}(\overline{\bf 6})_{ij}T^{ik}_{c}({\bf B}^{\dagger})^{l}_{k}(P^{\dagger})^{j}_{l}+\tilde{f}^{c}{\cal H}(\overline{\bf 6})_{ij}T^{ik}_{c}(P^{\dagger})^{l}_{k}({\bf B}^{\dagger})^{j}_{l}\] \[+\tilde{f}^{d}{\cal H}(\overline{\bf 6})_{ij}({\bf B}^{\dagger})^{i}_{k}(P^{\dagger})^{j}_{l}T^{kl}_{c}+\tilde{f}^{e}({\bf B}^{\dagger})^{j}_{l}{\cal H}({\bf 15})^{\{ik\}}_{j}(P^{\dagger})^{l}_{k}(T_{c3})_{i}\,,\] \[G = F(\tilde{f}^{x}\rightarrow\tilde{g}^{x})\,, \tag{4}\]
with \(T^{ij}_{c}=\epsilon^{ijk}(T_{c3})_{k}\) and \(x\in\{a,b,c,d,e\}\). These amplitudes are expressed as \(\tilde{f}^{x}=f^{x}\exp(i\delta^{x}_{f})\) and \(\tilde{g}^{x}=g^{x}\exp(i\delta^{x}_{g})\), where \(f^{x}\) and \(g^{x}\) are strictly positive. Throughout this work, we adopt the same conventions as Ref. [9] for the \(SU(3)_{F}\) tensors.
Considering only the flavor structure, there are five different ways of contracting the \(SU(3)_{F}\) indices for \({\cal H}({\bf 15})\): \((T_{c3})_{i}{\cal H}({\bf 15})^{\{ik\}}_{j}({\bf B}^{\dagger})^{j}_{k}P^{l}_{l}\), \((T_{c3})_{i}{\cal H}({\bf 15})^{\{ik\}}_{j}({\bf B}^{\dagger})^{l}_{k}\,P^{j}_{l}\), \((T_{c3})_{i}{\cal H}({\bf 15})^{\{ik\}}_{j}({\bf B}^{\dagger})^{j}_{l}\)\(P^{l}_{k}\), \((T_{c3})_{i}{\cal H}({\bf 15})^{\{jk\}}_{l}({\bf B}^{\dagger})^{i}_{j}P^{l}_{k}\) and \((T_{c3})_{i}\)\({\cal H}({\bf 15})^{\{jk\}}_{l}({\bf B}^{\dagger})^{l}_{j}P^{i}_{k}\). However, after taking into account that the color indices of the quarks originating from \({\cal H}({\bf 15})\) (baryons) must be (anti)symmetric, these five terms reduce to a single one proportional to \(({\bf B}^{\dagger})^{j}_{l}{\cal H}({\bf 15})^{\{ik\}}_{j}(P^{\dagger})^{l}_{k}(T_{c3})_{i}\), as implied by the Korner-Pati-Woo theorem [10; 11]. If the initial or final baryons have gluon content, the above argument does not apply and all five terms can in principle contribute, making a full complex-amplitude analysis impossible [4]. In our following analysis such sub-leading contributions will be neglected.
In the following, we elucidate the configuration of the \(SU(3)_{F}\) global fit. Given that \(F\) and \(G\) each encompass five complex amplitudes, and omitting one unphysical overall phase, say \(\delta^{b}_{f}\), we are left with a total of 19 parameters. If one does not consider decays involving \(\eta\) and \(\eta^{\prime}\), one can further neglect \(\tilde{f}^{a}\) and \(\tilde{g}^{a}\); in that case there are only 15 parameters to work with. On the other hand, there are in total 29 experimental data points (23 without the \(\eta\) and \(\eta^{\prime}\) modes) [12; 13]. The \(SU(3)_{F}\) invariant amplitudes can therefore be completely determined from a global fit. The experimental data are listed in TABLE 1. Note that had one kept the sub-leading terms in \({\cal H}({\bf 15})\), a global analysis would be impossible at present.
We determine the best fit values for the decay amplitudes \(\tilde{f}^{x}\) and \(\tilde{g}^{x}\) by minimizing the \(\chi^{2}\) function defined as
\[\chi^{2}(\tilde{f}^{x},\tilde{g}^{x})=\sum_{\rm exp}\left(\frac{O_{\rm th}( \tilde{f}^{x},\tilde{g}^{x})-O_{\rm exp}}{\sigma_{\rm exp}}\right)^{2}\,, \tag{5}\]
where \(O_{\rm th}\) is the theoretical value of an observable, and \(O_{\rm exp}\) is the experimental value with standard deviation \(\sigma_{\rm exp}\). In conducting the global fit, we incorporate all of the experimental branching ratios and asymmetry parameters \(\alpha_{i}\) available to date. For the decays of \(\Lambda_{c}^{+}\) and \(\Xi_{c}^{+}\), we rely on the absolute branching ratios documented by the Particle Data Group [12]. Conversely, for \(\Xi_{c}^{0}\) decays, we solely employ \({\cal B}(\Xi_{c}^{0}\rightarrow\Xi^{-}\pi^{+})=(1.43\pm 0.32)\%\) as an absolute branching ratio input [12], while for the others the reported ratios \({\cal R}_{X}:={\cal B}(\Xi_{c}^{0}\to X)/{\cal B}(\Xi_{c}^{0}\rightarrow\Xi^{-}\pi^{+})\) are utilized. Employing \({\cal R}_{X}\) rather than \({\cal B}(\Xi_{c}^{0}\to X)\) is crucial, as the former is what has actually been measured at Belle [14; 15]. Although measurements exist for \(\beta_{i}\), their associated uncertainties are substantial [16], making them insignificant for \(\chi^{2}\). Consequently, they are not incorporated into the fit.
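Schematically, the fit amounts to minimizing Eq. (5) over the 19 real parameters (the moduli and phases of the \(SU(3)_{F}\) amplitudes). A generic sketch of such a driver, with a placeholder `predict` function standing in for Eqs. (1)-(4), is:

```python
import numpy as np
from scipy.optimize import minimize

def chi2(params, predict, values, errors):
    """Eq. (5): `predict` maps the amplitude parameters to the predicted observables."""
    return np.sum(((predict(params) - values) / errors) ** 2)

def global_fit(predict, values, errors, x0):
    # x0: initial guess for the moduli f^x, g^x and the phases delta^x_f, delta^x_g
    return minimize(chi2, x0, args=(predict, values, errors), method="Nelder-Mead")
```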
The resultant best fit values of the decay amplitude parameters and the error bars are given as follows:
\[f^{x}=1.80(35),0.91(30),0.96(5),0.31(31),0.55(63),\ g^{x}=6.11(1.67),7.01(29),0.69(43),1.31(39),1.62(1.34)\,, \tag{6}\] \[\delta^{x}_{f}=1.66(31),0,-2.20(39),-0.57(31),-0.58(50)\,,\ \delta^{x}_{g}=-1.77(34),2.60(0.37),2.03(0.43),2.39(0.74),1.98(1.03)\,,\]
in the order of \(x=a,b,c,d,e\), respectively, and both \(f^{x}\) and \(g^{x}\) are in units of \(10^{-2}G_{F}\) GeV\({}^{2}\). We note that without measurement results for \(\beta_{i}\), \(\chi^{2}\) suffers from a \(Z_{2}\) ambiguity of \((\delta^{x}_{f},\delta^{x}_{g})\rightarrow(-\delta^{x}_{f},-\delta^{x}_{g})\). To break this degeneracy, we fix it by using \(\beta(\Lambda_{c}^{+}\rightarrow\Sigma^{0}\pi^{+})>0\) from the experiment [16]. The alternative choice will flip the signs of \(\beta\) but leaves \({\cal B}\), \(\alpha\) and \(\gamma\) unchanged.
The fitted values of \({\cal B}\), \(\alpha\), \(\beta\) and \(\gamma\) are collected in TABLE 1 for the observed decay modes. The presence of empty cells signifies that either the corresponding \(\alpha_{\rm exp}\) values are absent, or the theoretical framework does not impose any constraints on those particular quantities. Asterisks are used to denote the number of standard deviations by which the observed values deviate from the theoretical central values. Predictions for the unobserved decays, comprising 148 decay observables, are collected in TABLE II for future experimental verification.
The global fit yields a \(\chi^{2}\) per degree of freedom (\(\chi^{2}/d.o.f.\)) of 3.7. This cannot be considered a good fit. We find that the largest contributions to \(\chi^{2}\) come from the experimental ratio \({\cal R}^{\rm exp}_{\Xi^{-}K^{+}}=0.275\pm 0.057\) and from \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})=(1.43\pm 0.32)\%\). If one removes these two data points, \(\chi^{2}/d.o.f.\) is reduced to 1.5, which is a much better fit. Meanwhile, in that case the predictions for the other quantities do not change much.
Note that the phases \(\delta^{c}_{f,g}\) are sizable, giving a phase shift to the decay amplitudes \(F(\Lambda^{+}_{c}\to\Xi^{0}K^{+})=-\tilde{f}^{c}\) and \(G(\Lambda^{+}_{c}\to\Xi^{0}K^{+})=-\tilde{g}^{c}\). In particular, we find \(\delta_{P}-\delta_{S}=-2.06\pm 0.50\), consistent with the experimental finding of \(-1.55\pm 0.25\). Since \(\alpha\propto\cos(\delta_{P}-\delta_{S})\), this is crucial in obtaining a small value of \(\alpha(\Lambda^{+}_{c}\to\Xi^{0}K^{+})=-0.15\pm 0.14\), consistent with the BESIII measurement. In addition, in contrast to the previous \(SU(3)_{F}\) literature, which finds \(\alpha(\Lambda^{+}_{c}\to\Sigma^{0}K^{+})\approx-1\)[3; 4; 5; 6], we obtain \(\alpha(\Lambda^{+}_{c}\to\Sigma^{0}K^{+})=-0.52\pm 0.10\), consistent with the current experimental data.
The establishment of sizable strong phases makes the study of CP violation possible [17; 18]. The direct CP asymmetries are expected to be of order \(10^{-3}\), with details to be given elsewhere.
Several direct relations appear when the color symmetry is considered. In particular, \(\Gamma^{\Lambda^{+}_{c}}_{\Sigma^{+}K_{S}}=\Gamma^{\Lambda^{+}_{c}}_{\Sigma^{0}K^{+}}\) is well satisfied by the experimental data [12], which partly justifies our approach in this work. An important new relation is
\[\frac{\tau_{\Lambda^{+}_{c}}}{\tau_{\Xi^{0}_{c}}}{\cal B}(\Xi^{0} _{c}\to\Xi^{-}\pi^{+})={\cal B}(\Lambda^{+}_{c}\to\Sigma^{0}\pi^{+}) \tag{7}\] \[\quad+3{\cal B}(\Lambda^{+}_{c}\to\Lambda\pi^{+})-\frac{1}{s^{2} _{c}}{\cal B}(\Lambda^{+}_{c}\to n\pi^{+})\,.\]
By plugging the measured BESIII data for the \(\Lambda^{+}_{c}\) decays into the right-hand side, we obtain \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})=(2.96\pm 0.31)\%\), which differs significantly from the value \((1.43\pm 0.32)\%\) quoted by the PDG [12]. On the other hand, with the relation \(\Gamma^{\Lambda^{+}_{c}}_{\Xi^{0}K^{+}}=\Gamma^{\Xi^{0}_{c}}_{\Sigma^{+}K^{-}}\), the experimental values of \({\cal B}(\Lambda^{+}_{c}\to\Xi^{0}K^{+})\) and \({\cal R}^{\rm exp}_{\Sigma^{+}K^{-}}=0.123\pm 0.012\)[15] collectively lead to \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})=(3.37\pm 0.52)\%\), echoing the discussion above. Intriguingly, the current-algebra approach, which works well in the \(\Lambda^{+}_{c}\) sector, predicts \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})=6.47\%\)[19].
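For illustration, the arithmetic behind Eq. (7) can be checked with approximate world-average inputs; the numbers below are placeholders quoted only to show the computation and are not the exact values and uncertainties used in the text.

```python
# Approximate, illustrative inputs (not the paper's exact numbers)
tau_Lc, tau_Xic0 = 202e-15, 152e-15            # Lambda_c^+ and Xi_c^0 lifetimes [s]
B_Sigma0pi, B_Lambdapi, B_npi = 1.29e-2, 1.30e-2, 6.6e-4
s_c2 = 0.225 ** 2                               # sine of the Cabibbo angle, squared

rhs = B_Sigma0pi + 3 * B_Lambdapi - B_npi / s_c2
B_Xi_pi = (tau_Xic0 / tau_Lc) * rhs             # Eq. (7) solved for B(Xi_c^0 -> Xi^- pi^+)
print(f"B(Xi_c^0 -> Xi^- pi^+) ~ {B_Xi_pi:.2%}")   # roughly 3%, well above (1.43 +- 0.32)%
```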
The study of semileptonic decays might offer further insights on \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})\). Using the experimental ratio \({\cal R}^{\rm exp}_{\Xi^{-}e^{+}\nu_{e}}=0.730\pm 0.044\)[20] alongside \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})=(2.71\pm 0.08)\%\) from Table 1, we derive \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}e^{+}\nu_{e})=(1.98\pm 0.12)\%\). This aligns with \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}e^{+}\nu_{e})=(2.38\pm 0.44)\%\) from lattice QCD [21] and \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}e^{+}\nu_{e})=(2.17\pm 0.20)\%\) under the exact \(SU(3)_{F}\) symmetry [22]. Conversely, \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})=(1.43\pm 0.32)\%\) implies \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}e^{+}\nu_{e})=(1.04\pm 0.24)\%\), a deviation from the lattice QCD by \(2.6\sigma\). We note that the latest preliminary result of lattice QCD indicates also a larger \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}e^{+}\nu_{e})\)[23].
Alternatively, the combination of \({\cal R}^{\rm exp}_{\Xi^{-}e^{+}\nu_{e}}\) and \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}e^{+}\nu_{e})=(2.38\pm 0.44)\%\) from the lattice QCD leads to \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})=(3.26\pm 0.63)\%\). Using it as an input in the global fit instead would result in \(\chi^{2}/d.o.f=1.6\). Similar outcome is obtained by using \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}e^{+}\nu_{e})=(2.17\pm 0.20)\%\)[22].
The above analysis indicates that the current experimental value of \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})\) might underestimate its true value, which needs to be scrutinized more carefully. Should forthcoming experimental results confirm the diminished magnitude of \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})\), the presence of a substantial gluon component within the \(\Xi^{0}_{c}\) should be considered. In such a scenario, a rigorous examination of the sub-leading terms from \({\cal H}({\bf 15})\) would become imperative in theoretical discussions.
On the other hand, we have shown that \(SU(3)_{F}\) symmetry can accommodate the strong phases needed to explain the recent BESIII data on charmed anti-triplet baryon weak two-body decays. We emphasize that our analysis hints that \({\cal B}(\Xi^{0}_{c}\to\Xi^{-}\pi^{+})=(1.43\pm 0.32)\%\) is inconsistent with the direct relation from the nonleptonic decays and the indirect relation from the semileptonic decays, meriting in-depth exploration both theoretically and experimentally.
Finally, we would also like to point out that the establishment of strong phases in the decay amplitudes has far-reaching implications for finding CP violation in systems involving charmed baryon decays. One expects non-zero CP-violating rate asymmetries for charmed baryon decays. Experimental searches should be carried out. We will present related studies elsewhere.
## Acknowledgments
This work is supported in part by the National Key Research and Development Program of China under Grant No. 2020YFC2201501, by the Fundamental Research Funds for the Central Universities, by National Natural Science Foundation of P.R. China (No.12090064, 11735010, 11985149 and 12205063). XGH is also supported in part by MOST 109-2112-M-002-017-MY3. |
2302.05680 | Deriving a genetic regulatory network from an optimization principle | Many biological systems approach physical limits to their performance,
motivating the idea that their behavior and underlying mechanisms could be
determined by such optimality. Nevertheless, optimization as a predictive
principle has only been applied in very simplified setups. Here, in contrast,
we explore a mechanistically-detailed class of models for the gap gene network
of the Drosophila embryo, and determine its 50+ parameters by optimizing the
information that gene expression levels convey about nuclear positions, subject
to physical constraints on the number of available molecules. Optimal networks
recapitulate the architecture and spatial gene expression profiles of the real
organism. Our framework makes precise the many tradeoffs involved in maximizing
functional performance, and allows us to explore alternative networks to
address the questions of necessity vs contingency. Multiple solutions to the
optimization problem may be realized in closely related organisms. | Thomas R Sokolowski, Thomas Gregor, William Bialek, Gašper Tkačik | 2023-02-11T12:28:12Z | http://arxiv.org/abs/2302.05680v2 | # Deriving a genetic regulatory network from an optimization principle
###### Abstract
Many biological systems approach physical limits to their performance, motivating the idea that their behavior and underlying mechanisms could be determined by such optimality. Nevertheless, optimization as a predictive principle has only been applied in very simplified setups. Here, in contrast, we explore a mechanistically-detailed class of models for the gap gene network of the _Drosophila_ embryo, and determine its 50+ parameters by optimizing the information that gene expression levels convey about nuclear positions, subject to physical constraints on the number of available molecules. Optimal networks recapitulate the architecture and spatial gene expression profiles of the real organism. Our framework makes precise the many tradeoffs involved in maximizing functional performance, and allows us to explore alternative networks to address the questions of necessity vs contingency. Multiple solutions to the optimization problem may be realized in closely related organisms.
Gene regulatory networks | Optimization | Evolution | Drosophila

Optimization is the mathematical language of choice for a number of fundamental problems in physical and statistical sciences. Stochastic optimization likewise constitutes the foundation of evolutionary theory, where selection continually improves organismal fitness by favoring adaptive traits [(1, 2)]. This evolutionary force pushes against quantifiable physical constraints and there are many examples where the organisms we see today operate very close to the physical limit: photon counting in vision [(3)], diffraction limited imaging in insect eyes [(4)], molecule counting in bacterial chemotaxis [(5)], and more. Experimental evidence for optimal performance can be promoted to an optimization principle from which one can derive non-trivial predictions about the functional behavior and underlying mechanisms, sometimes with no free parameters [(6, 7)]. Attempts at such ambitious _ab initio_ predictions include the optimization of coding efficiency in visual and auditory sensory processing [(8, 9, 10, 11)]; growth rates in metabolic networks [(12)]; matter flux in transport networks [(13)]; information transmission in regulatory networks [(14)]; and the design of molecular machines and assemblies [(15)].
We are unaware of any successful optimization predictions for complex, multi-component biological systems whose interactions are described in molecular detail. Whether _any_ first principles prediction is even possible at this level remains unclear. As a consequence, we cannot determine whether the existence of a particular gene, genetic interaction or regulatory logic is an evolutionary necessity or merely a historical contingency [(16)]. This difficulty is not resolved by genetic tests for necessity, since these cannot rule out alternative evolutionary histories that would have unfolded without (or with modified) molecular components.
Here we address these issues during the early stages of development in the _Drosophila_ embryo [(17)]. About two hours post fertilization, the four major gap genes _hunchback_, _Kruppel_, _giant_, and _knirps_ are expressed in an elaborate spatiotemporal pattern along the anterior-posterior (AP) axis of the embryo [(18)]. The gap genes regulate one another, forming a network that responds to the anterior (Bicoid), posterior (Nanos), and terminal (Torso-like) maternal morphogen gradients [(17, 19)]. The states of the gap gene network in turn drive the expression of pair rule genes in striped patterns that
Figure 1: **Deriving a genetic regulatory network from an optimization principle.** We simulate patterning during early fly development in a biologically realistic, spatial–stochastic gap gene expression model (bottom; see Box 1) that accounts for the stochastic gene expression dynamics in individual nuclei along the anterior-posterior (AP) axis of the embryo. Regulatory interactions among four gap genes (arrows between colored circles in each nucleus), their response to three maternal morphogen gradients, and spatial coupling between neighboring nuclei are parameterized by a set of over 50 parameters \(\theta\). For each parameter set, we numerically simulate the resulting noisy gap gene expression patterns, compute the system’s positional information \(I(\mathbf{g};x)\), and adjust \(\theta\) using stochastic optimization to iteratively maximize the encoded \(I\) (top).
presage the segmented body plan of the fully developed organism [(20)]. At readout time, about 40 minutes into nuclear cycle 14 (NC14), the local gap gene expression levels peak and encode \(4.3\pm 0.1\) bits of positional information [(21, 22, 23)]. This information is necessary and sufficient for the specification of downstream pair-rule expression stripes and other positional markers with a positional error as small as \(\sim 1\%\) of the embryo length (EL) [(24)], roughly corresponding to the nuclear spacing. Multiple lines of evidence further suggest that the flow of positional information through this system - comprising both its encoding into gap gene profiles and its readout by the pair-rule genes - is nearly optimal [(22, 24, 25)]. These empirical observations lead us to the hypothesis that the gap gene network itself may be derivable from an optimization principle.
Quantitative experiments, genetic manipulations, and attempts to fit mathematical models of the gap gene network to data have uncovered a wealth of detail about this system [(26, 27, 28, 29, 30, 31, 32, 33, 34, 35)]. These facts are, in part, what an optimization theory for the gap gene network should explain. But there are also major conceptual questions: Is behavior of the network more constrained by its evolutionary history [(36)] or by the developmental constraints and physical limits that arise from the limited numbers of mRNA [(37, 38)] and protein [(39)] molecules? Are all three maternal morphogens and four gap genes necessary? Most importantly for our discussion, are the interactions among gap genes and the resulting expression patterns coincidental, or determined by some underlying theoretical principle [(25)]? In simpler terms, can we derive the behavior of the gap gene network, rather than fitting its parameters to data?
### Optimization in a realistic context
To answer the questions outlined above, we have formulated a detailed and realistic spatial-stochastic model of patterning that encompasses gap gene regulation by maternal morphogens; gap gene cross-regulation; discrete nuclei, including their divisions; transcription, translation, and degradation processes; and diffusion of gap gene products (Fig. 1 and Box 1). Within this class of models, we search for the networks that transmit the maximum positional information given limits on the number of molecules that can be synthesized. Here we give a preliminary account of this work, with subsequent analyses to be reported in a longer paper.
We considered the three maternal morphogens as well as maximal gap gene transcription, translation, and degradation rates to be physical constraints fixed to their measured or estimated values (Box 1). This leaves more than 50 parameters which govern how gap genes integrate transcriptional regulatory signals from other gap genes and from their morphogen inputs; we refer to all these parameters together as \(\mathbf{\theta}\). As an example, for each gene regulated by another, there is a parameter that measures the concentration at which the regulator exerts half-maximal activating or repressive effect on its target, and another parameter that measures the strength of this regulatory interaction. Different points in this 50+ dimensional space describe a wide spectrum of regulatory networks and their diverse expression patterns, most of which are nothing like the real fly embryo but nonetheless are _possible_ networks given the known component parts. For any set of parameters we simulate the time evolution of our model, evaluating the mean spatial pattern of expression for all four gap genes as well as the gap gene (co)variability at every nuclear location along the AP axis. These calculations, carried out in the Langevin formalism, are complex yet numerically tractable; they properly account for maternal morphogen gradient variability and intrinsic biochemical stochasticity.
Positional information \(I(\mathbf{g};x)\) can be formalized as the mutual information between the set of gap gene expression levels \(\mathbf{g}\equiv\{g_{1},g_{2},g_{3},g_{4}\}\) and the AP coordinate \(x\)[(25, 22, 23, 7)]. This quantity can be computed from the means and covariances of gap gene expression, which are the results of our model simulation at fixed \(\mathbf{\theta}\) (see Box 1). If the gap gene system indeed has been strongly selected to maximize positional information at some readout time \(T\), then the real network should be near the optimal setting of parameters, \(\mathbf{\theta}^{*}=\operatorname*{argmax}_{\mathbf{\theta}}I\left(\mathbf{g}(T);x\right)\). This problem is well posed because there are physical limits: the maximal rates of molecular synthesis combine with degradation rates to limit the maximum number of molecules for each species, setting the scale of the noise which in turn limits information transmission. We have previously solved simplified versions of this optimization problem on small subnetworks [(40, 41, 46, 47, 48, 49)], but understanding the whole network at the level where comparisons with data are possible required a new computational strategy (Box 1). This larger scale numerical approach, combining simulation and optimization (Fig. 1), provides a route to derive the first _ab initio_ prediction for a gene network in a realistic context.
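As a simplified illustration of this step, positional information can be evaluated from the simulated means and covariances under a Gaussian model for \(P(\mathbf{g}|x)\); the sketch below is ours and omits the refinements of the actual analysis.

```python
import numpy as np

def positional_information(means, covs, n_mc=20000, seed=0):
    """I(g;x) in bits for P(g|x) = N(means[x], covs[x]) with x uniform over nuclei."""
    rng = np.random.default_rng(seed)
    n_x, K = means.shape
    # average conditional (noise) entropy, closed form for Gaussians
    h_cond = np.mean([0.5 * np.log2((2 * np.pi * np.e) ** K * np.linalg.det(C)) for C in covs])
    # marginal entropy of the mixture P(g), estimated by Monte Carlo sampling
    xs = rng.integers(n_x, size=n_mc)
    g = np.array([rng.multivariate_normal(means[i], covs[i]) for i in xs])
    logp = np.zeros((n_mc, n_x))
    for i in range(n_x):
        d = g - means[i]
        logp[:, i] = -0.5 * (np.einsum('nj,jk,nk->n', d, np.linalg.inv(covs[i]), d)
                             + K * np.log(2 * np.pi) + np.log(np.linalg.det(covs[i])))
    h_marg = -np.mean(np.log2(np.exp(logp).mean(axis=1)))
    return h_marg - h_cond
```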
### Comparing optimal networks with the real network
We used a custom simulated annealing code to optimize the gap gene system for positional information (Fig. 2A). We first biased the search towards solutions that might exist in the proximity of the wild-type (WT) _Drosophila_ gap gene expression pattern using a recently developed statistical methodology [(50)], and then removed the bias to be sure that we have found a true optimum. Figure 2B compares the gap gene expression profiles generated by the optimized network to data. The match in mean expression profiles is very good (Fig. 2C), although not perfect. Mismatches--e.g., double-peaked anterior giant domain, posterior-most hunchback bump, linear anterior ramp of hunchback--likely trace their origin to the fact that the class of models we consider still is a bit too simple; even if we fit the parameters of the model to the data we cannot resolve these discrepancies. The predicted gap gene variability similarly recapitulates the measured behavior.
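The optimization itself can be caricatured as a standard simulated-annealing loop over the parameter vector \(\mathbf{\theta}\); the sketch below (ours) conveys the idea, while the custom code used in the study is substantially more elaborate, with adaptive proposals, parameter bounds, and the bias terms mentioned above.

```python
import numpy as np

def anneal(objective, theta0, steps=100000, T0=1.0, Tmin=1e-4, scale=0.05, seed=0):
    """Maximize `objective` (e.g., positional information) by simulated annealing."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    I = objective(theta)
    best_theta, best_I = theta.copy(), I
    for n in range(steps):
        T = T0 * (Tmin / T0) ** (n / steps)                  # geometric cooling schedule
        prop = theta + scale * rng.standard_normal(theta.shape)
        I_new = objective(prop)
        if I_new >= I or rng.random() < np.exp((I_new - I) / T):
            theta, I = prop, I_new
        if I > best_I:
            best_theta, best_I = theta.copy(), I
    return best_theta, best_I
```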
The mechanistic nature of our model allows us to rationalize how the optimal pattern emerges. For example, the precision of the system output, manifested in the low variability (\(\sim 10\%\)) of gap gene expression levels at fixed position, is achieved through a combination of temporal averaging and spatial averaging via diffusion, which substantially reduces noise components transmitted from upstream regulators and morphogens [(51, 40, 52)]. The spatial patterns of expression in the optimal solution are shaped significantly by mutual repression and self-activation, closely mimicking what had been inferred about the structure of the network from genetic
interventions (Fig. 2D). Optimization correctly predicts strong mutual repression between _hunchback_ and _knirps_, between _giant_ and _Kruppel_, as well as most weak repressive interactions and self-activation of _hunchback_ (52). Together, these factors combine to encode positional information nearly unambiguously, with a median positional error of \(\sim 1.5\%\) (Fig. 2E); even the elevated positional uncertainty around the cephalic furrow and in the far posterior is consistent between the optimal
prediction and the real embryo [24].
_Ab initio_ optimization performed here makes minimal use of empirical data to derive a wide range of predictions, in stark contrast to traditional model fitting [50]. This has three further important consequences. First, when fitting, objective functions are purely statistical (e.g., maximum likelihood, mean-square-error, etc.), lacking any biological interpretation. In contrast, positional information used in optimization constitutes a meaningful and independently measurable phenotype of the patterning system. For example, our optimal solution (Fig. 2A,B) reaches 4.2 bits, to be compared with \(4.2-4.3\) bits estimated directly from data [22, 23]. Second, if fitting is performed instead of optimization, e.g., by minimizing the mean-square-error of the predicted mean gap gene expression, the best fits underestimate the positional information (Fig. 2F). This is because fitting fails to take into account the functional consequences of noise and pattern variability. Third, optimization can identify locally optimal solutions that are qualitatively different from the gene expression patterns observed in the embryo but functionally near degenerate.
Multiple optimization runs indeed produce diverse solutions that locally maximize positional information while not exceeding the resource utilization of the wild type pattern. Together, these solutions constitute the _optimal ensemble_. A natural comparison is provided by the _random ensemble_, where parameters \(\mathbf{\theta}\) are drawn independently and uniformly from broad but realistic intervals. Optimization for positional information automatically leads to significantly lower positional error (Fig. 3A), higher number of boundaries where gap gene expression switches from low to high or vice-versa ("slopes"), more uniform utilization of resources across gap genes, as well a slight but significant over-allocation of resources in the anterior, as can be seen in data as well. Within the optimal ensemble, solutions with higher information tend to be more dynamically stable at readout time, which we quantify by pattern rate-of-change (RoC), i.e., mean temporal derivative of gap gene expression [53]. Low RoC is relevant since pair-rule genes appear to read out gap gene expression via fixed decoding rules [27, 24], implying that temporally varying solutions could cause larger spatial drifts in pair-rule stripes.
Networks in the random ensemble that transmit large amounts of information are exceedingly rare: the probability of drawing a network with positional information of 4 bits or more by chance is \(\ll 10^{-6}\) (Fig. 3B). In contrast, optimization strongly and robustly enriches for solutions above 4 bits (Fig. 3B). In our optimization we have constrained the maximum numbers of molecules, and the real embryo uses \(\sim 20\%\) (\(\text{RU}=0.2\)) of this maximum, on average. This resource
Figure 2: **Networks that maximize information transmission recapitulate the measured gap gene expression patterns and the regulatory network topology: (A) Positional information increases during a single optimization run, starting with the homogeneous profile at 0 bits (1), proceeding through more complex spatial patterns (2-4), to the final solution (5, pattern in panel B) that reaches \(\sim 4.2\) bits (dashed blue line). (B) Predicted optimal (left) vs. measured gap gene expression pattern (right, (45), 40 min into NC14; blue - _hunchback_/Hb; green - _giant_/Gt; yellow - _Kruppel_/Kr; red - _knirps_/Kni; shading - standard deviation in gene expression). Positional information estimate from data is consistent with that reported in [22]. (C) Measured vs. predicted mean expression (top) and variability (bottom) are highly correlated (color code as in B; Pearson \(p<10^{-3}\)). (D) Predicted gap gene regulatory network (left; blunt arrows - repression; circular arrows - self-activation) vs. literature-based reconstruction (right, (18)). (E) Predicted (left) vs. measured (right) decoding map (bottom) shows a nearly unambiguous code (diagonal band) with \(\sim 1.5\%\) median positional error and two outlier regions (top inset) [24]. (F) Fitting the model to mean WT gap gene expression profiles yields a good fit but lower positional information values (black bars - distribution over replicate fits) compared to the optimized solution (blue dashed line).**
utilization appears necessary for high-information solutions, whereas permitting more utilization within the same maximal rate limits does not noticeably increase positional information. In fact, among \(>10^{3}\) optimization runs we never found a solution exceeding 5 bits, indicating that such information values likely cannot be accessed within realistic constraints.
The random and the optimal ensemble are closely related to the evolutionary concepts of the _neutral_ and the _selected_ phenotype distributions (54). The random ensemble delineates what is physically possible in absence of selection for function, while the optimal ensemble delineates solutions that maximize function within fixed physical constraints. How closely natural selection _could_ approach this optimality (as quantified by the selected phenotype distribution), or indeed _has_ approached it (via the actual WT pattern), depends on selection strength and its history, genetic load, linkage disequilibrium, and other limitations that are of negligible concern to _in silico_ optimization. Successful prediction of the pattern in Fig. 2B implies that selection was sufficiently strong to overcome such limitations and push the gap gene system beyond evolutionary tinkering (55) towards optimality (6, 50, 56). Even in strictly _ab initio_ runs with zero bias towards the WT pattern we repeatedly find solutions that closely reproduce the overall size and placement of expression domains in _Drosophila_ (Fig. 3D), the encoded positional information, as well as the regulatory interactions. Tantalizing early experimental work suggests that dipteran
Fig. 3: **Optimal and random gap gene network ensembles.** **(A)** Patterning phenotypes for the optimal ensemble (color, solutions from 'WT RU' in panel C) vs. the random ensemble (gray, including only solutions \(>0.5\) bit that are at or below WT resource utilization, delineated by dashed yellow lines in panel C) reveal that high positional information (leftmost; violet, red, yellow indicate lowest, middle, highest third of the information interval) correlates with low positional error, a high number of gap gene “slopes”, and a more uniform utilization of resources across the four gap genes (red numbers – ensemble medians). (B) Within the optimal ensemble, higher information correlates with higher dynamical stability, i.e., lower pattern rate-of-change (each dot – one optimal solution; red ellipse – 1 SD contour in the I vs. RoC plane; color code and optimal ensemble as in A). (C) Random (gray) and various optimal ensembles (red – resource utilization bounded by _Drosophila_ WT, denoted by dashed horizontal yellow line; magenta – progressively smaller resource utilization; blue – WT resource utilization plus a bound on pattern rate-of-change; green – no resource utilization or rate-of-change bounds) depicted in the information vs. resource utilization plane (each dot – unique parameter combination). Histograms in the margins show the raw counts of evaluated parameter combinations. Inset: Information vs. resource utilization (median and 0.1–0.9 quantile intervals over ensembles in the main panel shown as central white squares and ribbons, respectively). (D) Example optimal solutions (1–3) from panel C optimized at fixed gap product diffusion (\(D=0.5\)\(\mu\)m\({}^{2}\)/s), and an example solution (4), where \(D\) was also optimized, from the ensemble in panel E, qualitatively match WT gap gene expression domains (top) and the regulatory network architecture. (E) Positional information (top) and pattern rate-of-change (bottom) as a function of the gap gene diffusion constant \(D\) (empty circles – mean across optimal ensembles), capped at WT resource utilization (red) or with an additional rate-of-change constraint (blue). Solid circles – mean values for the case where \(D\) itself is also optimized; yellow shade – broad range of \(D\) consistent with literature reports. (F) Two example solutions optimized at lower-than-optimal (top) and higher-than-optimal (bottom) diffusion constant values.
species related to _Drosophila_ may feature a broadly consistent gap gene domain arrangement whose expression domains are, however, shifted [(57, 58)] or swapped [(59)], as we find in our optimal ensembles.
Taken together, our results paint a nuanced picture of the "necessity vs. contingency" dichotomy. In the \(50+\) dimensional parameter space of possible networks, there is a highly non-random, locally optimal solution which produces expression patterns very similar to what we see in real fly embryos, but there are many other local optima that transmit about the same amount of positional information; all of these solutions are rare in the random ensemble. It is an open question whether alternative optima quantitatively recapitulate gap gene patterns seen in other dipterans or whether the degeneracy is removed by selection for additional phenotypes beyond positional information.
### Alternatives to the real network
Our theory provides a framework within which we can explore tradeoffs beyond the structure of the gap gene network. As a first example, we have taken the effective diffusion constant of gap gene products to be a fixed physical property of the cytoplasm, \(D=0.5\)\(\mu\)m\({}^{2}\)/s, in line with existing measurements [(39)]. But we can view \(D\) as one more parameter to be optimized, and remarkably we find that there is a broad optimum at the experimentally estimated value (Fig. 3E). Larger diffusion constants lead to a precipitous drop in information even when all other parameters \(\boldsymbol{\theta}\) are re-optimized, because high diffusion smooths gap gene profiles to the extent that adjacent nuclei can no longer be distinguished reliably (Fig. 3F, bottom). On the other hand, slower diffusion does not serve as effectively to average over local super-Poisson noise sources; the optimization algorithm compensates by finding parameters that generate more and steeper transitions between high and low expression levels (Fig. 3F, top), but even these unrealistic patterns do not transmit quite as much information. Thus, a single parameter displaced away from its optimum causes significant decreases in positional information; to lessen the impact the optimization algorithm adjusts other network parameters, driving the predicted patterns of gene expression away from what we see in the real embryo.
We next address the question of evolutionary necessity and sufficiency. To this end, we make structural changes to the network and then re-optimize all of its parameters to explore "alternative evolutionary histories" that could have unfolded with changed molecular components or mechanisms. As an example, Figure 4A characterizes solutions obtained using 1, 2, \(\cdots\), 5 gap genes, subject to the same _total_ resource utilization as the WT, plotting the positional information vs. the rate at which expression patterns are changing at readout time. Networks that transmit 4 bits or more--as in the real embryo--are completely inaccessible using only one or two gap genes, even though these networks are allowed to utilize the same total number of molecules as in the optimal four gene networks above. With three gap genes the optimized networks can transmit a total information comparable to what is seen experimentally, but detailed analysis reveals that three-gene networks all have local defects where the positional error spikes above \(5-10\%\) of the embryo length, in contrast to the much more uniform distribution of precision along the length of the real embryo [(22)]; we can quantify this by looking at the variations in the positional error along the AP axis (Fig. 4A, inset). This failure of the three gene networks arises because they cannot realize a sufficient number of slopes or switches between high and low expression levels. Four gap genes thus are necessary to ensure that high positional information translates into defect-free patterning not just on average, but across the entire AP axis of every embryo [(22)]. The marginal benefit of the putative fifth gap gene appears small and may not be sufficient to establish the required additional regulatory mechanisms or to maintain them at mutation-selection balance [(60)].
We can explore, in the same spirit, the role of the multiple maternal morphogens. In the fly embryo, the anterior (A, Bicoid), posterior (P, Nanos), and terminal (T, Torso-like) systems jointly regulate gap gene expression [(24)]. In our model, we can remove one or two of these inputs and re-optimize all the parameters of the gap gene network, and find that there are moderate yet statistically significant losses in both positional information and stability (Fig. 4B). The impact of primary morphogen deletions is limited because the optimization algorithm adjusts the gap gene cross-regulation parameters to restore informative spatial patterns. This ability, however, disappears entirely if gap gene cross-regulation is not permitted and the gap gene network is feed forward (FF) only (light gray arrows in Box figure, Fig. 4B); in the absence of feedback, removal of each primary morphogen system is associated with a large decrease in positional information.
Figure 4B suggests that stable, high information patterns could be generated by utilizing all three maternal morphogens even without the ability of gap genes to regulate one another. But in the absence of cross-regulation, the time scale for variations in the pattern is determined solely by the intrinsic lifetime of the most stable species (mRNA). In contrast, feedback in the gap gene network allows for the emergence of longer time scales which both slow the variations and can reduce noise by temporal averaging [(47)]; possible evidence for these effects has been discussed previously [(53)]. Evolutionarily, adding gap gene cross-regulation creates variability in the rate-of-change phenotype that could additionally be selected for. Indeed, the WT-like solution of Fig. 2B falls close to the accessibility frontier of Fig. 4B, suggesting such a preference.
Lastly, we varied the maximal allowed strength of regulatory interactions, \(H_{\mathrm{max}}\) (see Box 1), in our model. This parameter determines how strongly each individual input, either a morphogen or a gap gene acting via self- or cross-regulation, can impact the expression of a target gap gene. In simple microscopic models, this parameter measures the number of transcription factor molecules that bind cooperatively to their target sites as they regulate a single gene, and correlates with the steepness (or sensitivity) of the resulting induction curve. Optimizations presented so far used \(H_{\mathrm{max}}=50\), sufficiently high not to impose any functional constraint. As \(H_{\mathrm{max}}\) is lowered and the constraint kicks in, the optimal feed forward solution of Fig. 4B (dark blue) suffers large drops in encoded positional information (Fig. 4C); optimal feed forward architectures are thus heavily reliant on levels of effective cooperativity that appear unrealistic. Further, one might have been tempted to interpret Fig. 4B by saying that cross-regulation and multiple input morphogens provide alternative or even redundant paths to high information transmission, but we see that this degeneracy is lifted when we limit the effective cooperativity to more realistic levels. From an evolutionary perspective, gap
gene cross-regulation therefore is favourable for two reasons: first, it generates temporally stable phenotypes at the accessibility frontier (as in Fig. 4B); and second, it permits high information solutions also in networks where the strength of individual regulatory interactions is limited (as in Fig. 4C).
## Discussion
The idea that living systems can approach fundamental physical limits to their performance, and hence optimality, goes back at least to explorations of the diffraction limit in insect eyes and the ability of the human visual system to count single photons [(6)]. The specific idea that biological systems optimize information transmission emerged shortly after Shannon's formulation of information theory, in the context of neural coding [(61; 7)]. Despite this long history, most optimality analyses in biological systems have been carried out in very simplified contexts, using functional models with a small number of parameters. Here we have instantiated these ideas in a much more realistic setup, using mechanistic models for genetic regulatory networks that permit direct interpretation in terms of molecular mechanisms and interactions.
We focused on the _Drosophila_ gap gene system, one of the paradigms for developmental biology and for physical precision measurements in living systems [(62)]. Our work extends previous mathematical models of this system [(63; 64; 65; 66)], as well as attempts to predict it _ab initio_ [(67; 68; 69)]. In contrast to previously studied models, we systematically incorporate the unavoidable physical sources of noise, highlighting how patterning precision can emerge from noisy signals by a synergistic combination of multiple mechanisms. This novel contribution addresses a key question in developmental biology and provides a key mathematical ingredient for computing positional information. Crucially, we do not _fit_ the parameters of the model to data, but rather _derive_ them _ab initio_ via optimization. In contrast to previous prediction attempts, our constraints and comparisons to data are not stylized, but fully quantitative and commensurate with the precision of the corresponding experiments.
We have found networks that maximize positional information with a limited number of molecules, and there is at least one local optimum quantitatively matching a large range of observations in the wild-type _Drosophila_ system: its spatial patterns of expression and variability, the resulting decoding map, the molecular architecture of the network, as well as subtler biases in spatial resource utilization. Our optimization framework furthermore provides a platform for exploring the necessity and sufficiency of various network components that ensure maximal information transmission. Using this framework to deliver on our introductory questions, we have established that four gap genes appear necessary for defect-free patterning and that the apparent redundancy between the three maternal morphogens and gap gene cross-regulation is lifted under a developmental constraint on the strength of regulatory interactions.
Numerical optimization clearly is not evolutionary adaptation, yet its results nevertheless provide perspective on evolutionary questions. Discussions about the interplay of evolutionary optimization and developmental constraints, necessity versus contingency, and limits to selection have a venerable history [(70; 36)]. Rather than discussing these questions in qualitative terms, here we explored the role of physical constraints and tradeoffs quantitatively, in the context of an expressive mechanistic model, using the powerful concepts of the random and the optimal ensembles. In the words of Jacob [(55)], the random ensemble delineates the space of the "possible." Within this space, our optimization principle acts as a proxy for strong selection for high positional information, thereby identifying a much more restricted optimal ensemble. It is surprising that this principle alone is sufficient to ensure that the optimal ensemble contains a solution very close to Jacob's "actual", the _Drosophila_ gap gene network that we observe and measure.
###### Acknowledgements.
We thank Nicholas H Barton for his comments on the manuscript, Benjamin Zoller for inspiring discus
Figure 4: **Necessity and sufficiency of gap gene regulatory network mechanisms. (A) Optimal ensembles (transparent symbols = individual optimal solutions; solid symbols = ensemble medians) for networks with \(1,\,2,\cdots,5\) gap genes (legend colors) optimized at the WT resource utilization (for reference, red diamond + red ellipse at 1 SD = WT-like optimal ensemble from Fig. 3B). Solutions delineate the accessibility frontier (dotted black line for visual guidance) in the positional information (I) vs. pattern rate-of-change (RoC) plane. (Inset) While the median positional error (white squares) plateaus for optimal networks with three gap genes or more, the variability in positional error (ribbons denote 0.1 and 0.9 quantiles across AP positions in individual embryos) significantly shrinks only with 4 gap genes or more (red arrow). (B) Optimal ensembles for networks responding to different subsets of the three morphogens (A = anterior; P = posterior; T = terminal; red/yellow circle symbols = ensemble median; red dots, diamond, ellipse = WT-like ensemble as in A). Optimal ensembles for purely feed forward networks (FF only; i.e., no gap gene self- or cross-regulation) denoted in bluish hues (legend). (C) Positional information in optimal ensembles with (red; white squares and ribbons denote median and 0.1–0.9 quantile intervals, respectively) or without (blue; FF networks) gap gene self- and cross-regulation (legend), for different maximal regulatory strength, \(H_{\text{max}}\). Compared to feed forward networks, full regulation supports higher-information solutions, particularly at lower values for \(H_{\text{max}}\).**
sions, and Aleksandra Walczak and Curtis Callan for early discussions that shaped this work. This work was supported in part by the Human Frontiers Science Program, the Austrian Science Fund (FWF P28844), U.S. National Science Foundation, through the Center for the Physics of Biological Function (PHY-1734030); by National Institutes of Health Grants R01GM07275, U01DA047730, and U01DK127429; by the Simons Foundation; and by the John Simon Guggenheim Memorial Foundation.
|
2301.03979 | Apodized photonic crystals: A non-dissipative system hosting multiple
exceptional points | Optical systems obeying non-Hermitian dynamics have been the subject of
intense and concerted investigation over the last two decades owing to their
broad implications in photonics, acoustics, electronics as well as atomic
physics. A vast majority of such investigations rely on a dissipative, balanced
loss-gain system which introduces unavoidable noise and consequently, this
limits the coherent control of propagation dynamics. Here, we show that an
all-dielectric, non-dissipative photonic crystal (PC) could host, at least two
exceptional points in its eigenvalue spectrum. By introducing optimum
apodization in the PC architecture, namely 1D-APC, we show that such a
configuration supports a spectrum of exceptional points which distinctly
demarcates the PT-symmetric region from the region where PT-symmetry is broken
in the parameter space. The analytical framework allows us to estimate the
geometric phase of the reflected beam and derive the constraint that governs
the excitation of topologically-protected optical Tamm-plasmon modes in
1D-APCs. | Abhishek Mondal, Shailja Sharma, Ritwick Das | 2023-01-10T14:19:57Z | http://arxiv.org/abs/2301.03979v1 | # Apodized photonic crystals: A non-dissipative system hosting multiple exceptional points
###### Abstract
Optical systems obeying non-Hermitian dynamics have been the subject of intense and concerted investigation over the last two decades owing to their broad implications in photonics, acoustics, electronics as well as atomic physics. A vast majority of such investigations rely on a dissipative, balanced loss-gain system which introduces unavoidable noise and consequently, this limits the coherent control of propagation dynamics. Here, we show that an all-dielectric, non-dissipative photonic crystal (PC) could host, at least two exceptional points in its eigenvalue spectrum. By introducing optimum apodization in the PC architecture, namely 1D-APC, we show that such a configuration supports a spectrum of exceptional points which distinctly demarcates the \(\mathcal{PT}\)-symmetric region from the region where \(\mathcal{PT}\)-symmetry is broken in the parameter space. The analytical framework allows us to estimate the geometric phase of the reflected beam and derive the constraint that governs the excitation of topologically-protected optical Tamm-plasmon modes in 1D-APCs.
## I Introduction
Optical systems which are governed by non-Hermitian Hamiltonian dynamics through an engineered gain and dissipation mechanism, provide a route to overcome the limitations imposed by closed optical systems that obey the Hermitian-Hamiltonian led dynamics. Such non-Hermitian systems give rise to a real eigenvalue spectrum when the Hamiltonian commutes with the parity-time (\(\mathcal{PT}\)) operator. A continuous change in the parameter governing the Hermiticity (of the Hamiltonian) breaks the \(\mathcal{PT}\) symmetry which manifests in the form of complex eigenvalues for the system. In the phase space, such points where the real and complex eigenvalues coalesce are termed as exceptional points (EPs) [1; 2]. This spontaneous \(\mathcal{PT}\)-symmetry breaking has catalyzed a plethora of non-intuitive outcomes such as directional invisibility [3; 4], coherent perfect lasing and absorption [5; 6; 7; 8; 9], negative refraction [10], single-particle based sensing [11; 12; 13], distortion-free wireless optical power transfer [14] and a few more [15; 16; 17; 18; 19]. It is, however, worth noting that the incommensurate gain and loss distribution in non-Hermitian systems impose the primary limitation on the practical applications due to unpredictable signal-to-noise ratio near EP [20; 21; 22; 23]. In order to circumvent such bottlenecks, a few possibilities have been explored. One such promising route is to create an asymmetric loss in the system (without gain) whose dynamics could be explored using a non-Hermitian Hamiltonian with a uniform background loss [24; 25; 20]. Such a configuration would exhibit \(\mathcal{PT}\)-symmetry which could be broken through scaling up the loss asymmetry. In a different scheme, a pseudo-Hermitian system was explored which allowed strong coupling between a large number of modes via manipulation of the parameters governing the Hamiltonian [24]. This led to the existence of EPs of multiple order and the interaction of eigenvalues around each EP provides a robust control on the propagation dynamics [26; 27]. In spite of the aforementioned developments, a useful and practical proposition would be to devise a configuration hosting a multitude of EPs with the constraint that the electromagnetic (\(EM\)) energy lost due to the non-Hermitian dynamics is stored in a reservoir. This essentially implies that the dissipative channel associated with a non-Hermitian system drives a separate Hermitian system which could allow reverse flow of \(EM\) energy by virtue of cyclical dynamics. Such systems have been explored in the area of parametric frequency conversion processes where the \(EM\) energy lost in one of the parametric processes (obeying non-Hermitian dynamics) is coherently added to the other parametric process that follows a Hermitian dynamics [28]. A plausible translation of such an idea in the non-absorptive linear systems would be to introduce a _virtual_ loss in an intermodal interaction process thereby generating multiple EPs in the parameter space. One of the simplest configurations imitating such a process is a multimodal interaction in an all-dielectric one-dimensional (1D) photonic-crystal (PC) with a gradually varying duty cycle (for each unit cell). In such an apodized 1D-PC, the forward (source) to backward (sink) mode-coupling dynamics is essentially governed by a pseudo-Hermitian Hamiltonian whose Hermiticity is determined by the apodization along the propagation direction. 
In the present work, we show the existence of multiple EPs in an apodized 1D-PC and develop an analytical framework for ascertaining the possibility of exciting topologically-protected optical edge modes in such aperiodically stratified configurations.
## II Theoretical Framework and Coupled-Mode Formalism
We consider a 1D-PC comprised of periodic bilayers with refractive indices \(n_{1}\) and \(n_{2}\) with thicknesses \(d_{1}\) and \(d_{2}\). Such conventional 1D-PCs or alternatively, distributed Bragg reflectors (DBRs) are usually characterized by photonic bandgaps (PBGs) which are separated from each other by high transmission (or pass) bands. In order to appreciate the \(EM\) wave propagation dynamics, we consider the coupling between \(p^{th}\)-mode (\(|p\rangle\)) with \(q^{th}\)-mode (\(|q\rangle\)) which could be represented employing the coupled-amplitude equations given by [29]
\[\frac{dA_{q}}{dz}=-i\frac{\beta_{q}}{|\beta_{q}|}\sum_{p}\sum_{m}\tilde{\kappa }_{qp}^{(m)}A_{p}e^{-i(\beta_{q}-\beta_{p}-m\frac{2\pi}{\Lambda})z} \tag{1}\]
where \(\beta_{p}\) and \(\beta_{q}\) are the longitudinal (\(z\)) components of the wavevectors \(k_{p}\) and \(k_{q}\) respectively. \(\tilde{\kappa}_{qp}^{(m)}\) defines the strength of coupling (or coupling coefficient) between the \(p^{th}\) and \(q^{th}\) mode that is coupled through the \(m^{th}\) Fourier component of the periodic dielectric distribution (\(\Lambda=d_{1}+d_{2}\)). The factor \(\Delta\beta=\beta_{q}-\beta_{p}-m\frac{2\pi}{\Lambda}\) (known as the phase-mismatch) is one of the dynamical variables (along with \(\kappa_{qp}\)) which dictate the measure of optical power transferred from one mode to the other. For the present work, we consider a contra-directional coupling set-up where a forward (along \(+z\)) propagating mode (\(|p\rangle\equiv|f\rangle\)) is coupled to a backward (along \(-z\)) propagating mode (\(|q\rangle\equiv|b\rangle\)). Accordingly, it could be asserted that \(\beta_{b}=-\beta_{f}\) or alternatively \(\Delta\beta=2\beta_{f}-\frac{2\pi}{\Lambda}\) and therefore, Eq. (1) could be simplified to [29]
\[\frac{dA_{b}}{dz}=i\tilde{\kappa}A_{f}e^{-i\Delta\beta z} \tag{2}\]
\[\frac{dA_{f}}{dz}=-i\tilde{\kappa}^{*}A_{b}e^{i\Delta\beta z} \tag{3}\]
where \(\tilde{\kappa}=\frac{i(1-\cos 2\pi\zeta)\,(n_{1}^{2}-n_{2}^{2})}{\bar{n}}=i\kappa\) and \(\zeta\) is the dielectric filling fraction of the layer with refractive index \(n_{1}\) in the unit cell, _i.e._, \(\zeta=\frac{d_{1}}{\Lambda}\). The mean refractive index for a unit cell of thickness \(\Lambda\) is \(\bar{n}=\sqrt{\frac{d_{1}n_{1}^{2}+d_{2}n_{2}^{2}}{\Lambda}}\). By using a gauge transformation given by \([A_{f},A_{b}]\rightarrow[\tilde{A}_{f},\tilde{A}_{b}]\,e^{\pm\frac{i}{2}\left(\Delta\beta_{0}z-\int_{0}^{z}q(z^{\prime})dz^{\prime}\right)}\), we obtain [30]
\[i\frac{d}{dz}\begin{pmatrix}\tilde{A}_{b}\\ \tilde{A}_{f}\end{pmatrix}=\begin{pmatrix}-\Delta k&-\tilde{\kappa}\\ \tilde{\kappa}^{*}&\Delta k\end{pmatrix}\begin{pmatrix}\tilde{A}_{b}\\ \tilde{A}_{f}\end{pmatrix} \tag{4}\]
Equation (4) is analogous to the time-dependent Schrodinger equation with the \(t\)-coordinate replaced by the \(z\)-coordinate. Here, \(\Delta k\) (\(=\frac{\Delta\beta}{2}\)) and \(q(z)=0\) remain constant (for a given frequency) across the 1D-PC which has a fixed duty cycle. The autonomous Hamiltonian is \(\hat{H}=-\vec{\sigma}\cdot\vec{B}\), where \(\vec{\sigma}\equiv[\sigma_{x},\sigma_{y},\sigma_{z}]\) are the Pauli spin matrices and \(\vec{B}\equiv[0,\kappa,\Delta k]\) (the magnetic-field analog), and it represents a pseudo-Hermitian evolution dynamics. In order to appreciate this point, we note that the eigenvalues of \(\hat{H}\) are given by \(e_{1,2}=\pm\sqrt{\Delta k^{2}-\kappa^{2}}\), whereas the eigenfunctions are \(|\psi_{1}\rangle=\begin{pmatrix}-i\frac{\Delta k+\sqrt{\Delta k^{2}-\kappa^{2}}}{\kappa}\\ 1\end{pmatrix}\) and \(|\psi_{2}\rangle=\begin{pmatrix}+i\frac{-\Delta k+\sqrt{\Delta k^{2}-\kappa^{2}}}{\kappa}\\ 1\end{pmatrix}\). Here, \(\tilde{\kappa}=i\kappa\). A closer look into the eigenvectors reveals that the equality \(\kappa=\pm\Delta k\) manifests as a coalescing of the eigenvectors accompanied by vanishing eigenvalues. Such points in parameter space where \(\kappa\) equals \(\pm\Delta k\) are termed exceptional points (EPs), and they distinctly demarcate the regions exhibiting Hermitian (\(\mathcal{PT}\)-symmetric phase) and non-Hermitian (\(\mathcal{PT}\)-broken phase) dynamical evolution of states (or modes).
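A few lines of numerics make the coalescence at the EPs explicit. The sketch below builds the 2×2 coupling matrix of Eq. (4) (with \(\tilde{\kappa}=i\kappa\)), confirms \(e_{1,2}=\pm\sqrt{\Delta k^{2}-\kappa^{2}}\), and tracks the overlap of the two eigenvectors as \(\kappa\) is swept through \(\Delta k\); the parameter values are illustrative.

```python
import numpy as np

def H(dk, kappa):
    """2x2 coupling matrix of Eq. (4) with kappa_tilde = i*kappa (non-Hermitian)."""
    kt = 1j * kappa
    return np.array([[-dk, -kt], [np.conj(kt), dk]])

dk = 1.0                                            # phase mismatch, arbitrary units
for kappa in [0.2, 0.9, 0.999, 1.0, 1.5]:           # sweep the coupling through the EP at kappa = dk
    e, v = np.linalg.eig(H(dk, kappa))
    # prediction: e = +/- sqrt(dk^2 - kappa^2), real for kappa < dk (PT-symmetric phase),
    # purely imaginary for kappa > dk (PT-broken phase), zero at the EP
    v1 = v[:, 0] / np.linalg.norm(v[:, 0])
    v2 = v[:, 1] / np.linalg.norm(v[:, 1])
    overlap = abs(np.vdot(v1, v2))                  # -> 1 as the eigenvectors coalesce at the EP
    print(f"kappa={kappa:5.3f}  eigenvalues={np.round(e, 4)}  |<v1|v2>|={overlap:.3f}")
```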
In order to appreciate the aforementioned idea, we consider a practical 1D-PC with \(n_{1}~{}\equiv~{}TiO_{2}\) layer and \(n_{2}~{}\equiv~{}SiO_{2}\) layer. The layer thicknesses are \(d_{1}=d_{2}=150~{}nm\). The reflection spectrum for \(N=20\) unit cells is plotted in Fig. 1(a) which exhibits a high reflection band (or PBG) spreading over a \(75~{}THz\) bandwidth. In order to obtain the reflection spectrum, finite element method (FEM) based simulations were carried out using the commercially available computational tool (COMSOL Multiphysics). In the simulations, the periodic boundary condition is imposed along the transverse direction and a mesh size of \(5~{}nm\) is considered. We ignore the material dispersion for the simulations and assume \(n_{1}=2.5\) (\(\equiv TiO_{2}\)) and \(n_{2}=1.5\) (\(\equiv SiO_{2}\)) across the entire spectrum. For this 1D-PC, we also plotted the eigenvalues \(e_{1}\) and \(e_{2}\) (see Fig. 1(b)) as a function of the frequency of the incident electromagnetic wave. It is apparent that the eigenvalues vanish at \(\nu_{1}\approx 210~{}THz\) and \(\nu_{2}\approx 285~{}THz\). These two frequencies (\(\nu_{1}\) and \(\nu_{2}\)) define the EPs (\(\kappa~{}=~{}+\Delta k\) and \(\kappa~{}=~{}-\Delta k\)) for the periodic 1D-PC. A closer look would also reveal that the eigenvalues are purely imaginary within the PBG and the band edges (Fig. 1 (a)) coincide with \(\nu_{1}\) and \(\nu_{2}\). The mode fields for frequencies lying inside the PBG (\(240~{}THz\)) and outside the PBG (\(310~{}THz\)) are presented in Figs. 1(c) and (d) respectively. It is worth noting that the investigations on systems exhibiting \(\mathcal{PT}\)-symmetry (or \(\mathcal{PT}\)-broken symmetry) led dynamics in photonics essentially involve optimally balanced gain-loss architectures such as segmented waveguides and photonic crystals. In such systems, a complex relative permittivity in different sections depicting _actual_ gain or loss for the propagating light beam gives rise to the \(\mathcal{PT}\)-symmetry (or \(\mathcal{PT}\)-broken symmetry). The present configuration involving 1D-PC does not include an _actual_ dissipative component for achieving the \(\mathcal{PT}\)-symmetric to \(\mathcal{PT}\)-symmetry broken phase transition. Alternatively, the coupling of optical power to the backscattered mode \(|b\rangle\) is analogous to a _virtual_ loss for a forward propagating \(|f\rangle\) mode. When this coupling is relatively weak _i.e._\(\Delta k>\tilde{\kappa}\), \(|f\rangle\) and \(|b\rangle\) exhibits cyclic exchange of optical power (as a function of \(z\)) which is a primitive outcome for a \(\mathcal{PT}\)-symmetric dynamics. On the other hand, a strong coupling regime
where \(\Delta k<\tilde{\kappa}\) manifests through a monotonic growth of backscattered mode (\(|b\rangle\)) that is a signature of \(\mathcal{PT}\)-symmetry broken phase. It is worthwhile to reiterate the point that the two regimes depicted by the inequality of \(\Delta k\) and \(\tilde{\kappa}\) (in the parameter space) could be mapped onto the PBG and pass or transmission band (s) in the reflected spectrum. Subsequently, each PBG is necessarily bounded by two EPs in this framework. Additionally, these two EPs are fixed and could not be tailored for a given 1D-PC with a fixed duty cycle and fixed period. Also, the conventional 1D-PC geometry excludes the possibility of realizing higher-order exceptional points [31]. Taking a cue from this critical viewpoint, we note that a small apodization or gradual change in dielectric filling fraction (\(\zeta\)) of each unit cell of the 1D-PC would allow us to realize discretely spaced (multiple) EPs at different optical frequencies (or wavelengths). In order to elucidate this point, we recall that \(\Delta k\) as well as \(\tilde{\kappa}\) is a function of \(\zeta\). An optimum spatial variation in \(\zeta\) could essentially give rise to the possibility of EPs at different physical locations (along \(z\)) in a 1D-PC. As an example, we show below that an optimally apodized 1D-PC (1D-APC) which satisfies the adiabatic constraints enables us to observe EPs at discretely separated points along \(z\).
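The two regimes can also be visualized by directly propagating Eq. (4) for constant \(\Delta k\) and \(\tilde{\kappa}\). The initial-value sketch below (parameter values illustrative) shows the cyclic exchange of power when \(\Delta k>\kappa\) and the monotonic growth of the backscattered amplitude once \(\Delta k<\kappa\); note that this reduced two-mode picture is not the physical boundary-value reflection problem, whose reflectivity saturates at unity.

```python
import numpy as np
from scipy.linalg import expm

def propagate(dk, kappa, z):
    """Evolve [A_b, A_f] under i d/dz psi = H psi (Eq. 4) for z-independent dk and kappa."""
    kt = 1j * kappa
    H = np.array([[-dk, -kt], [np.conj(kt), dk]])
    psi0 = np.array([0.0, 1.0], dtype=complex)      # launch all power in the forward mode
    return np.array([expm(-1j * H * zi) @ psi0 for zi in z])

z = np.linspace(0.0, 3.0, 300)                      # propagation distance, arbitrary units
weak = propagate(dk=1.0, kappa=0.3, z=z)            # Delta k > kappa: PT-symmetric phase
strong = propagate(dk=0.3, kappa=1.0, z=z)          # Delta k < kappa: PT-broken phase
Pb_weak = np.abs(weak[:, 0]) ** 2                   # backscattered "power" |A_b|^2
Pb_strong = np.abs(strong[:, 0]) ** 2
print("weak coupling:   |A_b|^2 oscillates, max =", round(float(Pb_weak.max()), 3))
print("strong coupling: |A_b|^2 grows monotonically:", bool(np.all(np.diff(Pb_strong) >= 0)))
# In this reduced two-mode picture |A_b|^2 is not bounded by 1 in the broken phase ("virtual
# gain"); the physical contra-directional boundary-value problem saturates the reflectivity at unity.
```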
### Design of a 1D apodized PC and intermodal coupling
We consider a 1D-PC configuration that exhibits a varying dielectric filling fraction (\(\zeta\)) in each unit cell. This variation is essentially dictated through the relation \(d_{1M}=d_{1}-M\delta\) and \(d_{2M}=\Lambda-d_{1M}\). Here, \(d_{1M}\) and \(d_{2M}\) are the thicknesses of the \(TiO_{2}\) and \(SiO_{2}\) layers respectively in the \(M^{th}\) unit cell (\(M=0,1,2,3,...,(N-1)\) for \(N\) unit cells). The unit cell period, however, remains unchanged, _i.e._, \(\Lambda=d_{1M}+d_{2M}=d_{1}+d_{2}\). This apodization in the 1D-PC could be visualized through a longitudinal variation in \(\Delta k\) as well as \(\tilde{\kappa}\) by virtue of a monotonic change in the average refractive index (\(\bar{n}\)) of a _unit cell_. This variation in \(\Delta k\) and \(\tilde{\kappa}\) in a 1D-APC geometry leads to an adiabatic evolution of the Stokes vector along the propagation direction and manifests through a broader PBG (\(\approx 140\)\(THz\)) in comparison with a conventional (periodic) 1D-PC [30]. This is presented in Fig. 2(a), which shows a broader reflection spectrum for the 1D-APC in comparison with the conventional 1D-PC (Fig. 1(a)). In addition, a flat transmission band and the absence of sharp transmission resonances are distinct features of the 1D-APC. The mode-propagation characteristics for the frequencies within the PBG (of the 1D-APC) are explored by drawing a comparison with the mode-field distributions for the equivalent modes within the PBG of a conventional 1D-PC. Figures 2(b) and (c) show the mode-field distribution for two frequencies \(\nu_{a}=250\)\(THz\) and \(\nu_{b}=300\)\(THz\) which are within the PBG of the 1D-APC. In comparison with the mode-field distribution shown in Fig. 1(c), it could be observed that different modes are reflected from spatially separated \(z\) values. The smaller frequency (\(\nu_{a}=250\)\(THz\)) is reflected from regions which are closer to the \(z=0\) edge of the 1D-APC in comparison to that for \(\nu_{b}=300\)\(THz\). This variation is indicative of the fact that the field becomes instantaneously localized in different sections of the 1D-APC. From a different perspective, it is apparent that the variation in dielectric filling fraction (\(\zeta\)) would result in different eigenvalues (and corresponding eigenvectors) for each unit cell. Accordingly, we plot the eigenvalues \(e_{1}\) and \(e_{2}\) as a function of \(d_{1M}\) for two frequencies \(\nu_{a}=250\)\(THz\) (Fig. 2(d)) and \(\nu_{b}=300\)\(THz\) (Fig. 2(e)) which are within the PBG of the 1D-APC. Each one of the figures shows that the eigen
Figure 1: a) Shows the reflection spectrum of a conventional (periodic) 1D-PC. b) Shows the variation in \(Re(e_{1})\) (dotted black curve), \(Im(e_{1})\) (dotted maroon curve), \(Re(e_{2})\) (solid black curve) and \(Im(e_{2})\) (solid maroon curve) as a function of frequency (\(\nu\)). c) and d) Show the mode-field intensity for frequencies within the PBG (\(240\)\(THz\)) and outside the PBG (\(310\)\(THz\)) respectively. The solid red arrow represents the direction of incidence of light.
Figure 2: a) Shows the reflection spectrum for the designed 1D-APC. (b) and (c) Show the mode-field intensities for two different frequencies \(\nu_{a}=250\)\(THz\) and \(\nu_{b}=300\)\(THz\) which are within the PBG of the 1D-APC. (d) and (e) Show the variation in \(Re(e_{1})\) (dotted black curve), \(Im(e_{1})\) (dotted maroon curve), \(Re(e_{2})\) (solid black curve) and \(Im(e_{2})\) (solid maroon curve) as a function of the \(TiO_{2}\) layer thickness for each unit cell (_i.e._\(d_{1M}\)) at frequencies \(\nu_{a}=250\)\(THz\) and \(\nu_{b}=300\)\(THz\) respectively.
values (\(e_{1}\) and \(e_{2}\)) vanish at two different values of \(d_{1M}\), _i.e._, at the location of two different unit cells. Therefore, the 1D-APC geometry hosts two EPs for every \(d_{1M}\). Consequently, for a multitude of \(\zeta\), there would be multiple EPs in the 1D-APC for a forward-propagating mode to backscattered mode-coupling process. As discussed before, the regions where \(\Re e_{1}\) and \(\Re e_{2}\) are non-zero in Figs. 2(d) and 2(e) exhibit a \(\mathcal{PT}\)-symmetric coupling dynamics between the forward-propagating and backscattered modes. On the other hand, in the regions where \(e_{1}\) and \(e_{2}\) are purely imaginary, the mode-coupling process exhibits \(\mathcal{PT}\)-symmetry broken manifolds. The illustrations presented in Figs. 2(d) and 2(e) show that for each frequency within the PBG, the 1D-APC hosts two EPs at two different \(d_{1M}\). This essentially implies that there exists one or more than one EP hosted by each unit cell of the 1D-APC. Therefore, a 1D-APC is expected to host multiple EPs which are spectrally as well as spatially separated from each other. In order to ascertain the spectral location of EPs in the 1D-APC, we plot the evolution of \(\vec{B}\) in the parameter space for three different frequencies \(\nu_{1}~{}=~{}400~{}THz\), \(\nu_{2}~{}=~{}250~{}THz\), and \(\nu_{3}~{}=~{}160~{}THz\) as shown in Fig. 3(a). It could be noted that \(\nu_{1}\) and \(\nu_{3}\) are situated outside the PBG of the 1D-APC (see Fig. 2(a)). Since the EPs are depicted by the condition \(\Delta k=|\kappa|\), Fig. 3(a) also contains the curves \(\Delta k=\pm\kappa\) (solid blue and green curves). It is apparent that the \(\Delta k=\pm\kappa\) curve intersects \(\vec{B}_{\nu_{2}}\) at two points and it does not intersect the \(\vec{B}_{\nu_{1}}\) curve as well as the \(\vec{B}_{\nu_{3}}\) curve in the parameter space. For frequencies close to the band-edge of the 1D-APC (say \(200~{}THz\) or \(350~{}THz\)), it could be ascertained that there exists only one EP in the eigenvalue spectrum. This is primarily due to the adiabatic constraints followed by the 1D-APC design. In other words, for the band-edge frequencies, the forward and backward propagating modes are decoupled (\(\tilde{\kappa}\approx 0\)) at the entry (\(z=0\)) and exit (\(z=L\)) face of the crystal. Additionally, \(d_{1M}=\Lambda\) for \(M=0\) (or \(d_{2M}=\Lambda\) for \(M=N\)) in case of band-edge frequencies, which leads to \(\Delta k=0\) for \(\zeta=0\) (or \(\zeta=1\)). Therefore, \(\tilde{\kappa}=\Delta k=0\) depicts the only EP for the band-edge frequencies.
In order to elucidate the aforementioned point, we present the spectral location of EPs as a function of dielectric filling fraction (\(\zeta\)) or propagation direction (\(z\)) in Fig. 3(b). It could be observed that there exist two EPs (at different \(\zeta\) or \(z\)) for all the frequencies well within the PBG of the 1D-APC. However, for the band-edge frequencies (\(\nu_{l}=200~{}THz\) and \(\nu_{h}=330~{}THz\)), the 1D-APC hosts one EP only. Nevertheless, the area enclosed by the EPs in Fig. 3(b) represents the region of \(\mathcal{PT}\)-symmetry broken phase for the 1D-APC. It is interesting to note that the separation between the two EPs for frequencies closer to the band-edges (say \(\nu\leq 210~{}THz\) or \(\nu\geq 310~{}THz\)) is very small, and they tend to overlap at the same filling fraction. It is important to note that these EPs are physically positioned _close to_ the entry (\(z=0\)) and exit (\(z=L\)) face of the 1D-APC where \(\tilde{\kappa}\) is very small. By virtue of this, the PBG corresponding to that unit cell of the 1D-APC is relatively smaller in comparison with the PBG for a unit cell close to the center (\(z\approx\frac{L}{2}\)) of the 1D-APC. Due to the fact that the EPs exist at the band-edges of the PBG for each unit cell of the APC, a smaller PBG would essentially imply closely spaced EPs near the band-edges (see Fig. 3(b)).
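A map such as Fig. 3(b) can be sketched numerically by evaluating \(\Delta k\) and \(\kappa\) cell by cell and recording where \(e^{2}=\Delta k^{2}-\kappa^{2}\) changes sign. In the sketch below the apodization step \(\delta\) and the prefactor of \(\kappa\) are simplified stand-ins (only the \((1-\cos 2\pi\zeta)\) dependence quoted earlier is retained), so the absolute EP frequencies differ from the text; the qualitative picture of two EPs per in-gap frequency that merge towards the band edges is reproduced.

```python
import numpy as np

c = 3e8                                   # speed of light (m/s)
n1, n2 = 2.5, 1.5                         # TiO2 / SiO2, dispersionless as in the text
d1, d2 = 150e-9, 150e-9
Lam = d1 + d2                             # unit-cell period (kept fixed)
N, delta = 20, 7e-9                       # number of cells; apodization step delta is illustrative

def cell_quantities(nu, zeta):
    """Phase mismatch and coupling strength for a unit cell with filling fraction zeta."""
    nbar = np.sqrt(zeta * n1**2 + (1.0 - zeta) * n2**2)
    dk = 2.0 * np.pi * nu * nbar / c - np.pi / Lam                    # Delta k = beta_f - pi/Lambda
    # hedged stand-in for |kappa|: the (1 - cos 2*pi*zeta) dependence from the text,
    # with a simplified prefactor chosen only for a sensible order of magnitude
    kappa = (1.0 - np.cos(2.0 * np.pi * zeta)) * (n1**2 - n2**2) / (2.0 * np.pi * nbar * Lam)
    return dk, kappa

nus = np.linspace(150e12, 400e12, 600)
for M in range(N):
    zeta = (d1 - M * delta) / Lam
    dk, kappa = cell_quantities(nus, zeta)
    ep = np.where(np.diff(np.sign(dk**2 - kappa**2)) != 0)[0]         # sign changes of e^2
    print(f"cell {M:2d} (zeta = {zeta:.2f}): EPs near", np.round(nus[ep] / 1e12, 1), "THz")
```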
### Geometric phase estimation of reflection band
It is well known that the geometric phase of a passband (or transmission band) for a one-dimensional conventional photonic crystal is quantized (0 or \(\pi\)) and it is known as the 'Zak' phase. However, the geometric interpretation of backscattered (or reflection) phase from a 1D-PC remains irrelevant. However, in case of 1D-APC, the reflection of different spectral components (within the PBG) takes place from different unit cells (or \(z\)) along the propagation direction [30]. For example, the adiabatic following constraint leads to conversion of optical power from the forward-propagating to the backscattered mode predominantly towards the exit face of 1D-APC for frequency \(\nu=250~{}THz\) which could be seen in Fig. 4(a). Through a similar route, it could be shown that different spectral components within the PBG are reflected strongly from different unit cells of 1D-APC [30]. The primary underlying reason could be traced to the variation in \(\tilde{\kappa}\) and \(\Delta k\) for each spectral component in the PBG which are non-identical. Consequently, the estimation of geometric phase acquired by different backscattered modes is expected to be different and must play a crucial role in establishing the _bulk-boundary_ correspondence in case of 1D-APC. In order to obtain the geometric phase \(\gamma\), we consider a triad defining the state vector \(\vec{S}\) (\(\equiv[u,v,w]\)) where \(u=\tilde{A}_{i}\tilde{A}_{r}^{*}+\tilde{A}_{r}\tilde{A}_{i}^{*}\), \(v=-i[\tilde{A}_{i}\tilde{A}_{r}^{*}-\tilde{A}_{r}\tilde{A}_{i}^{*}]\) and \(w=\left|\tilde{A}_{r}\right|^{2}-\left|\tilde{A}_{i}\right|^{2}\)[30]. The \(z\)-component of the state-vector (\(w\)) represents the conversion efficiency of optical power from a forward-propagating to a backscattered mode [30]. It is also worth
Figure 3: (a) Shows the variation of \(\vec{B}\) in parameter space (spanned by \(\kappa\) and \(\Delta k\)) at different operating frequencies (\(\nu_{1}~{}=~{}400~{}THz\), \(\nu_{2}~{}=~{}250~{}THz\), \(\nu_{3}~{}=~{}160~{}THz\)) for the designed 1D-APC. The blue and green solid lines represent the \(\Delta k=\kappa\) and \(\Delta k=-\kappa\) curves. (b) Shows the location of EPs in different unit cells (with different filling fraction \(\zeta\)) as a function of frequency (\(\nu\)).
noting that the trajectory of state-vector (\(\vec{S}\)) corresponding to the frequencies within the PBG is non-closed. Alternatively, the geometric phase is not a conserved quantity during the dynamical evolution of states owing to the \(\mathcal{PT}\)-symmetry broken phase. In general, the solid angle subtended by the state-vector trajectory at the center of the Bloch sphere is used for computing the geometric phase. However, in case of an adiabatic evolution, the state-vector trajectory could be very complicated. In Fig. 4(b), we have plotted such a state-vector trajectory (on the Bloch sphere) corresponding to a frequency \(\nu=250\ THz\) (which is within the PBG of 1D-APC). It is important to note that \(\vec{S}=[0,0,-1]\) and \(\vec{S}=[0,0,1]\) represent states in which all the optical power (\(\propto|\tilde{A}_{f,b}|^{2}\)) is present in the forward-propagating and backscattered mode respectively. Although, the adiabatic evolution of state-vector results in complete optical power transfer from the forward to backward-propagating mode _i.e._\(w=-1\) to \(w=1\), the estimation of acquired geometric phase is quite complicated owing to the spiralling trajectory of \(\vec{S}\) on the Bloch-sphere. However, it is interesting to note that \(\vec{S}\) goes from \([0,0,-1]\) to \([0,0,1]\) for all the frequencies within the PBG of 1D-APC by virtue of satisfying the adiabatic following constraints. The most important point is to note that the conversion efficiency (or reflectivity) is 'unity' for all the frequencies within the PBG of 1D-APC [30]. In other words, \(\vec{B}\) goes from \([0,0,-\Delta k]\) to \([0,0,\Delta k]\) in the parameter space for all the PBG frequencies (through any trajectory) when the adiabatic following constraints are satisfied [30].
By virtue of the fact that the state-vector \(\vec{S}\) adiabatically follows \(\vec{B}\) (as per the Bloch equation), the initial and the final value of \(\vec{B}\) could also yield the geometric phase (\(\gamma\)). It is known that \(\gamma\) is estimated from angle \(\phi\) (subtended by \(\vec{B}\) at the origin \(\Delta k=\tilde{\kappa}=0\)) through the relation \(\gamma=\frac{\phi}{2}\). In that case, the geometric phase for each spectral component within the PBG is \(\frac{\pi}{2}\). In order to elucidate this point, we plot \(\vec{B}\) at different \(z\) of 1D-APC in the parameter space for \(\nu=250\ THz\) as shown in Fig. 5(a). At the entry face of 1D-APC (\(z=0\)), \(\vec{B}(z=0)=[0,0,-2.7\ \mu m^{-1}]\) (black arrow) and gradually goes to \(\vec{B}(z=L)=[0,0,+2.7\ \mu m^{-1}]\) (red arrow) at \(z=L\). At \(z=\frac{L}{2}\), \(\Delta k=0\) and \(\tilde{\kappa}\) is maximum (green arrow in Fig. 5(a)) The evolution of \(\vec{B}\) in Fig. 5(a) yields \(\phi=\pi\) and consequently, \(\gamma=\frac{\pi}{2}\). In a similar manner, \(\gamma\) for all the frequencies within the PBG would be \(\frac{\pi}{2}\) by virtue of adhering to the constraints imposed by adiabatic following. Hence, it could be asserted that a geometric phase of \(\frac{\pi}{2}\) is acquired by a reflected beam in a 1D-APC for the values of parameters which results in \(\mathcal{PT}\)-symmetry broken phase. On the contrary, the variation in \(\vec{B}\) is plotted as a function of \(z\) for \(\nu=180\ THz\) which is outside the PBG of 1D-APC (see Fig. 5(b)). \(\vec{B}(z=0)\) (black arrow) and \(\vec{B}(z=L)\) (red dashed arrow) are both negative as well as co-parallel in this case. Consequently, the geometric phase \(\gamma=\frac{\phi}{2}=0\) for \(\nu=180\ THz\). In addition, it is apparent that \(\Delta k\neq 0\) at any point (or any \(z\)) in the 1D-APC.
### Tamm-plasmon excitations in 1D-APC and topological connection
The presence of a plasmon-active layer adjacent to the all-dielectric 1D-APC results in excitation of multiple Tamm-plasmon modes which are non-degenerate. As an example, we consider a thin (\(d_{Au}=30\ nm\)) layer of gold placed in contact with high index layer (\(TiO_{2}\)) of 1D-APC (see Fig.6(a)). The simulated reflection spectrum (using transfer matrix method) exhibits a sharp resonance within the PBG as shown in Fig.6(b). These resonances are essentially due to Tamm-plasmon mode excitations which are highly localized electromagnetic states. Figure 6(b) depicts the existence of 10 Tamm-plasmon modes within the PBG of 1D-APC. Although there are a few sharp resonances outside the PBG, their mode-field signatures do not resemble that for a Tamm-plasmon mode [32]. In general, the existence of Tamm-plasmon modes is governed by the condition \(\phi_{APC}+\phi_{Au}\ =\ 2s\pi\) where \(s\ =\ 0,\ 1,\ 2,\ 3....\) is an integer [33; 34; 35]. Here, \(\phi_{APC}\) is the total phase acquired by the reflected beam from the 1D-APC (light incident from \(Au\) side), and \(\phi_{Au}\)
Figure 4: a) Shows the variation in conversion efficiency (\(\frac{w+1}{2}\)) for optical power transfer between a forward-propagating mode to a backscattered mode as a function of 1D-APC length (\(z\)) for a frequency \(\nu_{2}=250\ THz\) which is within the PBG. (b) Presents the state-vector (\(\vec{S}=[u,\ v,\ w]\)) trajectory on the Bloch sphere for \(\nu_{2}\ =\ 250\ THz\).
Figure 5: Represents the evolution of \(\vec{B}\) as a function of length (\(L\)) of 1D-APC in parameter (\(\Delta k-\kappa\)) space for a) \(\nu_{2}=250\ THz\) and b) \(\nu_{4}=180\ THz\). \(\phi\) represents the angle subtended by curve \(\vec{B}\) at the origin.
is the phase acquired by reflected beam at the \(Au-TiO_{2}\) interface. It is worthwhile to reiterate that the dielectric layer (of 1D-APC) adjacent to the \(Au\)-film is \(TiO_{2}\) which is the high index layer. In the present context \(\phi_{APC}=\gamma+\alpha\), where \(\alpha\) is the dynamic phase acquired by the reflected beam [30]. This could be estimated by noting the fact that the EPs (for a given frequency) are situated in different unit cells (or \(\zeta\)) of the 1D-APC. For a frequency \(\nu\), if the nearest EP (with respect to \(z=0\)) is present in the \(p^{th}\)-unit cell of 1D-APC, then \(\alpha\) could be determined using
\[\alpha=\frac{2\pi\nu}{c}\sum_{M=0}^{p}[n_{1}d_{1M}+n_{2}d_{2M}] \tag{5}\]
The knowledge of location for EPs in the 1D-APC (obtained from the eigenvalue spectrum of \(\hat{H}\)) would accurately yield the dynamic phase (\(\alpha\)) for any frequency of operation (\(\nu\)). In conjunction with the estimate of \(\gamma\), this information would allow us to determine the Tamm-plasmon mode resonance frequencies (\(\nu_{r}\)). This recipe provides a flexibility in terms of designing an 1D-APC which would facilitate excitation of Tamm-plasmon mode at a target (desirable) frequency (or wavelength) of operation. One such application could be the generation of higher harmonics or frequency downconversion using optical surface states [36]. In this case, the 1D-APC could be designed such that the Tamm-plasmon modes (localized modes) have resonance frequencies that are governed by the energy conservation and phase-matching constraints imposed by the frequency conversion process.
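As a rough illustration of this design recipe, the sketch below combines Eq. (5) with \(\gamma=\frac{\pi}{2}\) and scans for frequencies where \(\phi_{APC}+\phi_{Au}\) crosses a multiple of \(2\pi\). The apodization step, the simplified \(\kappa\) prefactor, and the constant placeholder value for \(\phi_{Au}\) are all assumptions made only for illustration; a quantitative design would take \(\phi_{Au}\) from the measured dispersion of the metal.

```python
import numpy as np

c = 3e8
n1, n2, d1, d2 = 2.5, 1.5, 150e-9, 150e-9
Lam, N, delta = d1 + d2, 20, 7e-9
d1M = d1 - delta * np.arange(N)                    # per-cell layer thicknesses (delta is illustrative)
d2M = Lam - d1M
gamma = np.pi / 2.0                                # geometric phase of the reflected beam inside the PBG
phi_Au = -2.5                                      # placeholder metal reflection phase (rad), assumed

def first_broken_cell(nu):
    """Index of the first unit cell (from z=0) whose local PBG contains nu, used here as a
    proxy for the cell hosting the nearest EP; returns None if nu lies outside all local PBGs."""
    for M in range(N):
        zeta = d1M[M] / Lam
        nbar = np.sqrt(zeta * n1**2 + (1.0 - zeta) * n2**2)
        dk = 2.0 * np.pi * nu * nbar / c - np.pi / Lam
        kappa = (1.0 - np.cos(2.0 * np.pi * zeta)) * (n1**2 - n2**2) / (2.0 * np.pi * nbar * Lam)
        if dk**2 - kappa**2 <= 0.0:
            return M
    return None

def phi_APC(nu):
    p = first_broken_cell(nu)
    if p is None:
        return np.nan
    alpha = (2.0 * np.pi * nu / c) * np.sum(n1 * d1M[:p + 1] + n2 * d2M[:p + 1])   # Eq. (5)
    return gamma + alpha

nus = np.linspace(200e12, 330e12, 2000)
total = np.array([phi_APC(nu) + phi_Au for nu in nus])
crossings = np.where(np.abs(np.diff(np.floor(total / (2.0 * np.pi)))) >= 1)[0]      # 2*pi*s crossings
print("candidate Tamm-plasmon resonances (THz):", np.round(nus[crossings] / 1e12, 1))
```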
## III Conclusions
In conclusion, we presented an all-dielectric 1D-APC design which hosts multiple exceptional points in its eigenvalue spectrum by virtue of exhibiting a non-Hermitian dynamics for a mode-coupling process between a forward-propagating mode to its backscattered counterpart. Although, the 1D-APC does not include any dissipative component, the intermodal coupling mechanism could be classified in terms of \(\mathcal{PT}\)-symmetric and \(\mathcal{PT}\)-broken phases which are connected through a quantum phase-transition. We also showed that the reflected beam (within the PBG) acquires a geometric phase of \(\frac{\pi}{2}\) in the \(\mathcal{PT}\)-symmetry broken phase. As a consequence of this outcome, the 1D-APC could be designed for exciting the optical Tamm-plasmon modes at any desirable frequency within the PBG. This design flexibility allows us to employ such architectures for quite a few applications such as efficiently carrying out optical frequency conversion using surface states [36].
## IV Disclosures
The authors declare that there are no conflicts of interest related to this article.
|
2305.18110 | State preparation in quantum algorithms for fragment-based quantum
chemistry | State preparation for quantum algorithms is crucial for achieving high
accuracy in quantum chemistry and competing with classical algorithms. The
localized active space unitary coupled cluster (LAS-UCC) algorithm iteratively
loads a fragment-based multireference wave function onto a quantum computer. In
this study, we compare two state preparation methods, quantum phase estimation
(QPE) and direct initialization (DI), for each fragment. We analyze the impact
of QPE parameters, such as the number of ancilla qubits and Trotter steps, on
the prepared state. We find a trade-off between the methods, where DI requires
fewer resources for smaller fragments, while QPE is more efficient for larger
fragments. Our resource estimates highlight the benefits of system
fragmentation in state preparation for subsequent quantum chemical
calculations. These findings have broad applications for preparing
multireference quantum chemical wave functions on quantum circuits,
particularly via QPE circuits. | Ruhee D'Cunha, Matthew Otten, Matthew R. Hermes, Laura Gagliardi, Stephen K. Gray | 2023-05-29T14:25:15Z | http://arxiv.org/abs/2305.18110v2 | # State preparation in quantum algorithms for fragment-based quantum chemistry
###### Abstract
State preparation for quantum algorithms is crucial for achieving high accuracy in quantum chemistry and competing with classical algorithms. The localized active space unitary coupled cluster (LAS-UCC) algorithm iteratively loads a fragment-based multireference wave function onto a quantum computer. In this study, we compare two state preparation methods, quantum phase estimation (QPE) and direct initialization (DI), for each fragment. We analyze the impact of QPE parameters, such as the number of ancilla qubits and Trotter steps, on the prepared state. We find a trade-off between the methods, where DI requires fewer resources for smaller fragments, while QPE is more efficient for larger fragments. Our resource estimates highlight the benefits of system fragmentation in state preparation for subsequent quantum chemical calculations. These findings have broad applications for preparing multireference quantum chemical wave functions on quantum circuits, particularly via QPE circuits.
## 1 Introduction
In recent years, quantum chemistry has witnessed remarkable progress in quantum computing, driven by advancements in hardware and algorithms [1, 2, 3]. Quantum computers offer a notable advantage by leveraging the exponential reduction in required qubits compared to classical bits for storage and manipulation of quantum information, thanks to the inherent quantum mechanical characteristics of qubits. This potential enables the simulation of complex chemical systems that may be impractical to compute on classical computers, at least in theory.
In many cases, the description of complex systems with numerous degenerate electronic states requires the use of multireference methods based on a multiconfigurational wave function. Examples of such methods are multireference configuration interaction (MRCI) [4, 5], multireference perturbation theory [6, 7], and the complete active space self-consistent field method (CASSCF) [8]. The computational cost of these methods scales exponentially with the number of electrons and orbitals in the active space, making accurate calculations for large systems intractable.
When dealing with chemical systems that consist of multiple fragments exhibiting local strong correlation, while being surrounded by a weakly correlated environment, active space-based fragmentation methods can serve as an alternative to CASSCF. These methods can help reduce the computational cost of the calculation, while still maintaining an accurate description of the individual fragments. One such method is the localized active-space self-consistent field (LASSCF) method [9, 10]. One of the limitations of fragment-based methods like LASSCF is the inability to recover correlation between fragments. This drawback can be addressed by introducing entanglement between fragments, as demonstrated in the localized active space state interaction (LASSI) method [11], which however,
reintroduces the factorial scaling of CASSCF. A more efficient way to improve the LASSCF description is to use an inter-fragment correlator implemented on a quantum computer. Quantum computers are particularly suitable for simulating multireference wave functions due to the compact representation and manipulation of vectors containing multiple electronic configurations on a quantum register. To accurately represent complex chemical systems, multireference states must first be prepared on a quantum computer. Current methods do not rely on classical algorithms to improve the prepared state; instead, they utilize chemical insights to select crucial configurations and simplify the state preparation step [12, 13, 14].
To leverage the capabilities of quantum computers in capturing fragment correlation following a LASSCF calculation, we have developed the localized active-space unitary coupled cluster (LAS-UCC) method [15]. The original LAS-UCC method incorporates quantum phase estimation (QPE) [16, 17] circuits to load multireference fragment wave functions, which are subsequently coupled with a unitary coupled cluster (UCC) ansatz [18, 19, 20]. The variational quantum eigensolver (VQE) [21] is used to iteratively determine the UCC parameters and minimize the energy. This involves loading all fragment wave functions at the beginning of each VQE iteration containing a UCC circuit. In principle, QPE can be utilized for high-fidelity state preparation by employing additional (ancilla) qubits beyond those required to represent the wave function. The measurement of the ancilla qubits induces a collapse of the system register to an eigenstate of the relevant Hamiltonian applied to the system via controlled unitary operations [22]. The loading of a converged multireference wave function utilizes state-of-the-art classical methods to capture strong correlation within a fragment, which is challenging to achieve on a quantum computer with an ansatz due to gate depth and optimization issues. Consequently, the reliable preparation of a state that accurately reproduces the LASSCF wave function on a quantum circuit is an important consideration in refining the LAS-UCC algorithm.
In this study, we present comprehensive resource and error estimates for LAS-UCC by directly compiling quantum circuits for noise-free quantum devices. We investigate two distinct schemes of state preparation to load the LASSCF wave function onto the quantum circuit prior to conducting a VQE iteration. First, we use QPE in the fragments to prepare the ground state using a fragment Hamiltonian derived from the LASSCF calculation (defined in Section 2.2 below). Second, we directly initialize (DI) the circuit with the converged LASSCF wave function using one- and two-qubit gates. We validate our code using real chemical systems that demonstrate the impact of increasing fragment numbers and the level of strong correlation within and between fragments. Additionally, we define a threshold number of qubits that distinguishes regions where it is more cost-effective on a quantum computer to perform initial state loading through DI versus fragmented QPE. While our primary focus is state preparation for the LAS-UCC algorithm, our results offer insights into any QPE-based algorithm, as effective state preparation techniques are vital for the success of QPE [22].
The paper is structured as follows:
Section 2 provides the theoretical background of the LAS-UCC algorithm, along with a description of the state preparation circuits and computational details.
Section 3 presents LAS-UCC results using both methods of state preparation, an analysis of the QPE-based state preparation, resource estimates, and a study on spin states of a transition metal complex using LAS-UCC.
Lastly, Section 4 includes a discussion of the obtained results and concluding remarks.
## 2 Theoretical Background
### Lasscf
The LASSCF [9, 10] method is a classical fragment-based approach that incorporates user-selected fragment active spaces, along with treating the inactive space and inter-fragment interactions at a mean-field level, such as the restricted Hartree-Fock method (RHF) [23].
The wave function is thus an anti-symmetrized product of the \(K\)-fragment CAS wave functions \(|\Psi_{A_{K}}\rangle\) and the inactive mean-field wave function \(|\Phi_{D}\rangle\):
\[|\text{LAS}\rangle=(\bigwedge_{K}|\Psi_{A_{K}}\rangle)\wedge|\Phi_{D}\rangle \tag{1}\]
It is variationally optimized to obtain the LASSCF energy.
LASSCF is a more advantageous starting point for hybrid quantum-classical methods compared to the commonly used Hartree-Fock wave function, as it begins from a mean-field reference and incorporates intra-fragment correlation, with the size of the active space and choice of fragments as tunable parameters. As a fragmentation method, it provides a clear advantage over other classical ways of state preparation for a quantum algorithm, allowing the fragment-wise loading of the multireference wave function obtained.
### Las-Ucc
The LAS-UCC algorithm combines LASSCF with a whole-system VQE enabling the inclusion of both intra- and inter-fragment correlation effects.
The algorithm currently begins by performing a LASSCF calculation to convergence on a classical computer. The LASSCF wave function thus obtained can be written as in Eq. (1) as a product of fragment wave functions. The fragment wave functions are individually loaded onto a quantum computer and measurements are made of a circuit comprising the fragment circuits as well as a parameterized ansatz, such as the UCC ansatz [18, 19, 20].
The localized active space of a single fragment \(K\), with spin orbitals denoted by \((i,j,k,l)\), together with the active spaces of the other fragments \(L\), with spin orbitals denoted by \((m,n)\), is used to construct the fragment Hamiltonian \(H_{K}\):
\[H_{K}=\sum_{ij}\Big(h_{ij}+\sum_{u}g_{iu}^{ju}+\sum_{mn}g_{im}^{jn}\gamma_{m}^{n}\Big)\,\hat{a}_{j}^{\dagger}\hat{a}_{i}+\frac{1}{4}\sum_{ijkl}g_{ij}^{kl}\hat{a}_{k}^{\dagger}\hat{a}_{l}^{\dagger}\hat{a}_{j}\hat{a}_{i} \tag{2}\]
where \(h_{ij}\) and \(g_{ij}^{kl}\) represent the one- and two-particle components of the Hamiltonian, \(u\) represents the set of inactive orbitals, and \(\hat{a}_{j}^{\dagger}\) and \(\hat{a}_{i}\) are fermionic second-quantized creation and annihilation operators. The qubit Hamiltonian \(\tilde{H}_{K}\) is created by mapping the fermionic Hamiltonian \(H_{K}\) to the qubit space via a fermion-to-spin transformation, such as the Jordan-Wigner transformation [24].
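As an illustration of this construction, the following sketch (not the implementation used in this work) assembles \(H_{K}\) from effective one- and two-electron integrals and maps it to a qubit operator with OpenFermion's Jordan-Wigner transformation. The arrays `h`, `g`, and `gamma` and the index lists `frag`, `others`, and `inactive` are hypothetical inputs, and the two-electron index convention is an assumption of the sketch.

```python
import numpy as np
from openfermion import FermionOperator, jordan_wigner

def fragment_hamiltonian(h, g, gamma, frag, others, inactive):
    """Assemble H_K of Eq. (2) for one fragment and return the qubit operator."""
    H = FermionOperator()  # zero operator, accumulated term by term
    for i in frag:
        for j in frag:
            coeff = h[i, j]
            coeff += sum(g[i, u, j, u] for u in inactive)               # mean field of the inactive space
            coeff += sum(g[i, m, j, n] * gamma[m, n]                    # mean field of the other fragments
                         for m in others for n in others)
            H += FermionOperator(((j, 1), (i, 0)), coeff)               # a_j^dag a_i
    for i in frag:
        for j in frag:
            for k in frag:
                for l in frag:
                    H += 0.25 * g[i, j, k, l] * FermionOperator(
                        ((k, 1), (l, 1), (j, 0), (i, 0)))               # two-body term within the fragment
    return jordan_wigner(H)                                             # sum of Pauli strings
```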
Figure 1 provides a flowchart for the quantum portion of the algorithm, once the LASSCF is converged and all conversions to the qubit basis have taken place. The fragment Hamiltonians \(\tilde{H}_{K}\) are used to load the fragment wave functions via QPE circuits, while the total system Hamiltonian \(\tilde{H}\) and a generalized UCC ansatz \(|\Psi(\theta)\rangle\) are used to compute the VQE energy during the optimization process. A classical optimizer is used to generate new ansatz parameters \(\theta^{\prime}\) to improve the energy measured at each iteration. In this flowchart, the state preparation can be done via either a QPE procedure for each fragment (Scheme 1 in Fig. 1) as originally suggested [15], or a direct initialization (DI) of fragment state vectors (Scheme 2), with both sets of circuits depicted as implemented in this work. Other methods may also be used to load the wave function, such as loading individual Slater determinants [25], or a state containing a few Slater determinants, which may be chosen via chemical intuition or an efficient selected configuration interaction algorithm [14, 26].
### State Preparation
We explore the challenges of achieving both high accuracy in wave function parameters and energies, as well as minimizing the number of qubits and circuit depth needed when using a multireference wave function for state preparation. We investigate two distinct methods of state preparation, which are described below, placing particular emphasis on the necessary resources and the resulting error in the obtained LAS-UCC energies.
#### 2.3.1 Scheme 1: QPE-based State Preparation
State preparation begins with a QPE circuit performed on each individual fragment. The unitary operator for the QPE is given by:
\[\hat{U}_{K}=e^{i\tilde{H}_{K}b}\quad. \tag{3}\]
A series of gates controlled by the ancilla qubits and incorporating this unitary operator is applied on the fragment qubits in order to retrieve the phase. After an inverse quantum Fourier transform and measuring the ancilla qubits, the phases \(\phi_{k}\) are obtained as values between 0 and 1 by phase kickback. The eigenvalues \(E_{k}\) of the fragment Hamiltonian \(\tilde{H}_{K}\) can then be obtained as:
\[E_{k}=\frac{2\pi\phi_{k}}{b}\quad. \tag{4}\]
The scaling parameter \(b\) must be estimated to lead to a 1:1 mapping of phase and energy eigenvalues. Measurement of the ancilla qubits leads to the collapse of the system qubits into one of the eigenstates of the fragment Hamiltonian, with a probability dependent on the overlap of the initial state with the specific eigenstate.
Because the initial state is generally not an eigenstate of the fragment Hamiltonian, the circuit must be run enough times to obtain the ground state energy (and collapse the system qubits into the ground state with measurement) with high probability. The ancilla qubit phase corresponding to the ground state is stored. Finally, for the execution of the VQE iteration, the QPE circuit for each fragment is run using the fragment Hamiltonians \(\tilde{H}_{K}\) until the ancilla phase corresponding to the ground state of the fragment Hamiltonian is reproduced.
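The classical post-processing of the ancilla measurements amounts to reading a binary fraction and inverting Eq. (4). A minimal sketch is given below; the bit ordering and the handling of negative eigenvalues (which wrap modulo \(2\pi/b\) unless \(b\) and any energy shift are chosen for a 1:1 mapping, as discussed above) are assumptions of the sketch.

```python
import math

def phase_from_bits(bits: str) -> float:
    """Binary fraction 0.b1b2...bt read off the ancilla register (most significant bit first)."""
    return sum(int(b) / 2 ** (k + 1) for k, b in enumerate(bits))

def energy_from_phase(phi: float, b: float) -> float:
    """Invert Eq. (4): E_k = 2*pi*phi_k / b, with b the scale factor of Eq. (3)."""
    return 2.0 * math.pi * phi / b

# Usage: the most frequently observed bitstring over repeated shots is taken as
# the ground-state phase, e.g. energy_from_phase(phase_from_bits("0110"), b=0.5).
```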
#### 2.3.2 Scheme 2: DI-based State Preparation
DI is a more straightforward method of state preparation, which entails loading the CI vectors of each individual fragment onto fragment circuits of size \(N_{\text{frag}}\), where \(N_{\text{frag}}\) represents the number of spin orbitals in each fragment's active space. This process involves resetting the fragment qubits to \(\ket{0}\) and subsequently applying combinations of one- and two-qubit gates. The angles of these gates are determined classically through a recursive algorithm, allowing for precise setup of the desired state vector on the specified qubits [27].
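A minimal sketch of this scheme, assuming Qiskit and a classically computed, normalized fragment CI vector expanded in the qubit occupation basis (a hypothetical input named `ci_vec`), is:

```python
import numpy as np
from qiskit import QuantumCircuit

def di_fragment_circuit(ci_vec: np.ndarray) -> QuantumCircuit:
    """Prepare the fragment state on N_frag qubits; len(ci_vec) must equal 2**N_frag."""
    n_frag = int(np.log2(len(ci_vec)))
    qc = QuantumCircuit(n_frag)
    # Qiskit's initialize decomposes into one- and two-qubit gates whose angles
    # are computed classically by a recursive scheme, as in Ref. [27].
    qc.initialize(ci_vec, range(n_frag))
    return qc
```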
DI of a fragment wave function offers the advantage of entangling only the fragment qubits during state preparation, eliminating the need for ancilla qubits. Unlike the QPE-based method, it does not necessitate running the circuit multiple times to achieve the ground state. However, one drawback of initialization circuits is the exponential number of CNOT gates required. Additionally, DI relies on performing the LASSCF calculation on a classical computer.
Figure 1: Flowchart describing the LAS-UCC algorithm, with example state preparation and measurement circuits containing two 2-qubit fragments, using a single ancilla qubit each for the QPEs. Scheme 1 represents the LAS-UCC algorithm from Ref [15] and is referred to within as QPE-based LAS-UCC (QPE-LAS-UCC), and Scheme 2 as direct initialization LAS-UCC (DI-LAS-UCC). The boxes marked ‘QPE’, ‘Initialize’ and ‘UCC’ represent circuits containing one- and two-qubit gates that perform the respective operations. The fragment Hamiltonians \(\tilde{H}_{K}\) and the total system Hamiltonian \(\tilde{H}\), both in the qubit basis, as well as the parameterized wave function \(\ket{\Psi(\theta)}\), are inputs to the UCC circuit. \(\ket{a_{i}}\) are ancilla qubits, \(\ket{x_{i}}\) and \(\ket{y_{i}}\) are qubits belonging to fragment 1 and fragment 2 respectively. The classical optimizer suggests new parameters for the UCC circuits and the overall VQE procedure to minimize the coupled fragments’ energy.
This approach may face limitations if the fragments are too large to be calculated classically, thus posing a challenge in utilizing DI effectively.
### Computational Details
Restricted Hartree-Fock (RHF) calculations are run to obtain reference wave functions for LASSCF calculations using the PySCF program [28]. The LASSCF wave function is obtained as described in Section 2.1 using the MRH code [29]. Complete active space configuration interaction (CASCI) reference values were generated utilizing the same localized orbital space as LASSCF [30]. The active space for the CASCI included all fragment active spaces. The reference curves are used to benchmark the new methods presented in this work. Noise-free simulations of the state preparation and measurement circuits were carried out using the Qiskit framework and the Aer state vector simulator [31]. The matrix exponentials \(e^{i\tilde{H}_{K}b}\) for the fragment QPEs were approximated by replacing the exponentiated sums of Pauli operators by a transformation into the computational basis, a rotation around the z axis, and a back transformation, combined with a Trotterization with \(n\) steps, referred to here as Trotter steps [32].
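To make the Trotter approximation explicit, the sketch below builds a dense matrix for the Trotterized propagator from a list of Pauli terms. It is a sketch under stated assumptions, not the circuit-level implementation described above (which replaces each exponential by a basis change and a z rotation); the `pauli_terms` list of (coefficient, matrix) pairs is a hypothetical input.

```python
import numpy as np
from scipy.linalg import expm

def trotterized_unitary(pauli_terms, b, n_steps):
    """First-order Trotter approximation of exp(i * H_K * b) with n_steps repetitions."""
    dim = pauli_terms[0][1].shape[0]
    step = np.eye(dim, dtype=complex)
    for coeff, pauli in pauli_terms:
        step = step @ expm(1j * coeff * b / n_steps * pauli)   # exponential of one Pauli term
    return np.linalg.matrix_power(step, n_steps)

# The Trotter error can be gauged, as in Fig. 6, by the overlap between the
# ground state of the exact propagator and that of the approximate one.
```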
The systems studied using LAS-UCC in this work are shown in Figure 2 and include (a) a set of interacting hydrogen molecules, in order to study an ideal fragment system with increasing numbers of fragments and the effect of moving the molecules nearer or further away and thus increasing the amount of entanglement between fragments, (b) the trans-butadiene molecule, which is a model system of increasingly stronger correlation within each fragment as the C-C double bonds are simultaneously broken, and (c) a bimetallic system containing copper and manganese, [Mn(NH\({}_{3}\))\({}_{4}\)]oxamide[Cu(NH\({}_{3}\))\({}_{2}\)]\({}^{2+}\), a transition metal complex with two chemically logical fragments whose spin states are relatively close in energy. An [Fe(H\({}_{2}\)O)\({}_{4}\)]\({}_{2}\)bypm\({}^{+4}\) (bypm = 2,2'-bipyrimidine) system was used for resource estimations in Section 3.4. Molecular coordinates are provided in the SI.
The fermionic Hamiltonian operator for the VQE in the active space is given by
\[H_{\text{eff}}=\sum_{ij}(h_{ij}+\sum_{u}g_{iu}^{ju})\;\hat{a}_{j}^{\dagger} \hat{a}_{i}+\frac{1}{4}\sum_{ijkl}g_{ij}^{kl}\hat{a}_{k}^{\dagger}\hat{a}_{l} ^{\dagger}\hat{a}_{j}\hat{a}_{i}. \tag{5}\]
This fermionic Hamiltonian \(H_{\text{eff}}\) is then mapped to a qubit Hamiltonian \(\tilde{H}\) using the Jordan-Wigner mapping [33].
The VQE is performed using the circuit needed to load the state vectors as the initial state for each Hamiltonian measurement. The VQE energy is the total (coupled fragments) energy of the system.
## 3 Results
### Hydrogen systems
The potential energy curves for the H\({}_{2}\) dimer and trimer were calculated by varying the separation distance \(R\) between the individual H\({}_{2}\) molecules at their center of mass. At each geometry, the ground state energy was computed to obtain the respective potential energy curves.
The energy curves for (H\({}_{2}\))\({}_{2}\) obtained using CASCI, LASSCF, the numerically simulated LAS-UCC code, and the QPE-based code (labeled as QPE-LAS-UCC) are depicted in Figure 3. Here, by "numerically simulated", we refer to the code that classically minimizes the LAS-UCC energy value with respect to all parameters rather than mapping to a quantum circuit.
Figure 2: Systems studied in this work: (a) a set of interacting hydrogen molecules, (b) the trans-butadiene molecule, and (c) a bimetallic complex containing Cu and Mn (Orange: Cu, Purple: Mn, Blue: N, Gray: C). Shaded boxes indicate the fragments used for each system. Arrows represent intermolecular and interatomic distances used to increase or decrease inter- and intra-fragment correlation for the hydrogen and trans-butadiene systems respectively.
This approach is a direct measure of the quality of the LAS-UCC algorithm, with no state preparation errors. The QPE-LAS-UCC results were obtained using six Trotter steps and eight ancilla qubits in each fragment. A similar comparison for the (H\({}_{2}\))\({}_{3}\) system can be found in the SI.
Based on Figure 3, it is observed that the numerically simulated LAS-UCC method (blue curve) accurately reproduces the reference curve with high fidelity. On the other hand, the QPE-LAS-UCC method (teal curve) introduces a systematic error across the potential energy curve. However, by utilizing 6 Trotter steps in the QPE-LAS-UCC code, it is possible to achieve an error below chemical accuracy, specifically 1.6 mEh, at all points on the potential energy curve.
The error in the numerically simulated LAS-UCC method roughly follows the error pattern observed in the LASSCF method. It is more significant when the hydrogen molecules are closer together, indicating a less accurate representation of the CASSCF wave function due to stronger interactions between the fragments. As the fragments are pulled apart, the error decreases in magnitude, suggesting an improved accuracy of the LASSCF wave function.
Similarly, the LAS-UCC method, which builds upon the LASSCF wave function, exhibits a similar trend of increased accuracy as the distance between fragments increases. This indicates that the accuracy of the LAS-UCC method also improves with greater separation between the fragments.
The error in the gate-based QPE-LAS-UCC method does not show this trend, as it is induced even at 6 Trotter steps by Trotterization. The magnitude of this error and its dependence on the number of Trotter steps are discussed in Section 3.3.2 below.
### State Preparation Method Comparison
Figure 4(a) presents the effects of differing methods of state preparation on the error in the VQE energy for the same potential energy curve as Figure 3. Reproduced here is the error with respect to CASCI for classical LASSCF and QPE-LAS-UCC (Inset Fig. 3), with the addition of the DI-LAS-UCC and HF-UCC methods. Here, HF-UCC uses a simple Hartree-Fock wave function and the same generalized UCC ansatz as all LAS-UCC methods. Using DI to prepare the state lowers the error significantly as compared to the systematic error shown by the QPE-LAS-UCC method previously. The energy errors from DI track with the error in the LASSCF energies with respect to the CASCI reference, as expected from a method that eliminates the systematic error associated with Trotterization. The error with respect to the reference using HF-UCC is also low at most points on the potential energy curve for this small system. However, Figure 4(b) shows that the number of VQE function evaluations (equivalent to VQE iterations, multiplied by a constant system- and optimizer-dependent factor) required for convergence using the Hartree-Fock reference is high for all points on the potential energy curve, while those required for convergence using the DI-LAS-UCC method reduce as the LASSCF wave function becomes a better approximation, beginning at R=1.8 A, and remaining low until R=4.5 A, the largest separation we studied.
Figure 4(b) also compares the number of function evaluations required during the VQE optimization for QPE-LAS-UCC and DI-LAS-UCC. DI-LAS-UCC requires fewer function evaluations beginning at \(R=1.8\) A, while QPE-LAS-UCC requires a higher number until \(R=3.0\) A. This suggests that the Trotterized state introduces added difficulty to the convergence problem of the VQE that is unrelated to the quality of the LASSCF wave function.
Figure 3: VQE energy in Hartrees (\(E_{h}\)) as a function of the distance between molecular centers of mass for (H\({}_{2}\))\({}_{2}\) as the molecules are moved apart.
### QPE-based State Preparation
The trans-butadiene and interacting H\({}_{2}\) model systems have been studied to obtain empirical evidence of the number of qubits and Trotter steps required in fragment QPE calculations for a desired level of accuracy in subsequent QPE-LAS-UCC calculations. The ancilla qubits play an essential role in the identification of the ground state of each fragment, and the Trotter steps affect the fidelity of the loading of the LASSCF wave function onto the circuit, one fragment at a time.
#### 3.3.1 Number of Ancilla Qubits
Figure 5 presents the errors in the QPE energy values of a single fragment for our trans-butadiene system at two different geometries, with R being the C-C double bond distance. This error is calculated with respect to exact diagonalization of the fragment Hamiltonian and is affected by both the number of ancilla qubits as well as the Trotter error. Since our chosen fragments have identical geometries and electronic structure, the behavior of the error is identical across fragments, and therefore only the energies corresponding to a single fragment are shown. As an error threshold we choose the gap between the ground and first excited states, represented by the dotted lines for each geometry. For R=1.0, this threshold is 28.58 m\(E_{h}\), while for R=3.0, it is 0.0071 m\(E_{h}\).
At least six ancilla qubits are required for the error to be smaller than the threshold at R=1.0. However, in the strongly-correlated regime where the C-C double bonds of the trans-butadiene molecule are stretched at R=3.0, while the energy error drops significantly at two ancilla qubits, it remains larger than the excitation energy gap threshold. While we cannot distinguish between the ground and first excited states of the trans-butadiene system at R=3.0 even with nine ancilla qubits, these states are very close in energy, as can be seen by the purple horizontal dotted line at \(10^{-5}\) in the inset of Fig. 5. We note that nine qubits are enough to obtain a sufficiently small energy error, within 1.6 m\(E_{h}\) of the reference energy. Thus, when simulating systems with highly degenerate states, while we cannot guarantee collapse into the ground state with unlimited precision, we can achieve exponential precision with the number of ancilla qubits, though the error must still be minimized with respect to the Trotter steps.
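For orientation, the standard QPE resolution argument relates the ancilla count to the attainable energy precision: \(t\) ancilla qubits resolve the phase to roughly \(2^{-t}\), i.e. the energy to roughly \(2\pi\,2^{-t}/b\) via Eq. (4). The sketch below applies this generic estimate; it is not necessarily the estimation procedure detailed in the SI.

```python
import math

def ancilla_qubits_for_precision(delta_e: float, b: float) -> int:
    """Smallest t with 2*pi / (b * 2**t) <= delta_e (energies in Hartree)."""
    return math.ceil(math.log2(2.0 * math.pi / (b * delta_e)))

# Example: ancilla_qubits_for_precision(1.6e-3, b=1.0) returns 12, of the same
# order as the per-fragment estimates in Table 1, which additionally depend on
# the chosen b and on the fragment spectrum.
```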
Further, we observed that in the fragment QPE calculations for trans-butadiene at R=3.0, the most likely eigenvalue for each fragment is no longer the ground state energy (Table in the SI). Further analysis using a Prony-like approach [34] confirms that the Hartree-Fock reference state has the largest overlap not with the ground state but with an excited state with energy eigenvalue \(-2.788E_{h}\).
Another complication of QPE is that the choice of the scale factor, \(b\), in Eq. 3 can affect the precision obtained with a given number of ancillas, and some preliminary analysis with differing \(b\) values can be beneficial, as we discuss in the SI. Thus, some care and preliminary calculations with post-processing analysis are required to be confident of QPE results in these difficult cases.
Figure 4: (a) VQE energy error with respect to the CASCI reference in Hartrees (\(E_{h}\)) and (b) Number of VQE function evaluations, as a function of the distance between molecular centers of mass for (H\({}_{2}\))\({}_{2}\) as the molecules are moved apart.
On the other hand, for a single H\({}_{2}\) molecule in the interacting H\({}_{2}\) systems, our ideal systems, no more than two ancilla qubits are required for the error to lie below the threshold (Figure in the SI).
#### 3.3.2 Number of Trotter Steps
Figure 6 contains information about the fidelity of the Trotterized wave function (red) and the energy error for the interacting H\({}_{2}\) systems (blue) as a function of the number of Trotter steps.
The fidelity of the fragment wave function obtained using the Trotter approximation is estimated by the absolute value of its overlap with the one obtained by exact diagonalization. The overlaps are averaged over the total number of fragments for the system containing four H\({}_{2}\) molecules. The number of ancilla qubits was set to eight (much larger than the two seen to be required in the previous section) to ensure collapse into the ground state, thus allowing us to focus on the effect of increasing Trotter steps only. As we increase the number of Trotter steps, we see the overlap increase in magnitude, asymptotically approaching 1.
The three blue curves represent the total system VQE energy error at iteration 0, also as a function of the number of Trotter steps. For the H\({}_{2}\) tetramer, at two Trotter steps, we see that while the overlap is above 0.995, the VQE energy has an error of 40 m\(E_{h}\) with respect to the LASSCF energy of the total system, taken as a reference. For the H\({}_{2}\) dimer, a minimum of 6 Trotter steps is required to converge to within 1.6 m\(E_{h}\) of the reference. The error scales linearly with the number of fragments; thus, the dimer has the lowest error, followed by the trimer and then the tetramer. The per-fragment error remaining constant implies size-intensivity of the method, which is a desirable property.
As the system size increases, a larger number of Trotter steps is then required to converge the 0-th iteration VQE energy to the corresponding LASSCF reference value. Thus, the fragment QPE wave functions must reproduce the LASSCF wave function with high fidelity in order to minimize the energy error for the whole system.
### Resource Estimation
The cost of the LAS-UCC algorithm depends on both the cost of the VQE circuit itself and the method chosen for the preparation of the LAS state on the VQE circuit. Assuming the cost of the VQE to be constant, we study the cost of state preparation only in terms of number of gates required for a given accuracy.
In the case of the QPE, the total number of qubits and the gate depth of the final state preparation circuit depend on the number of ancilla qubits and Trotter steps required to model the fragment wave functions with accuracy (in our case within an energy error of 1.6 \(mE_{h}\)), which is system-dependent.
Figure 5: Error in the QPE energy in Hartrees (\(E_{h}\)) with respect to the exact diagonalization energy as a function of the number of ancilla qubits used in the QPE for C\({}_{4}\)H\({}_{6}\) with C-C distance = 1.0 (blue) and C\({}_{4}\)H\({}_{6}\) with C-C distance = 3.0 (purple), both with 4 Trotter steps. The corresponding dotted lines represent the gap between the ground and first excited states for each system.
Figure 6: Error in the zeroth-iteration VQE energy with respect to the LASSCF energy in Hartrees (blue) and overlap with the exact diagonalization wave function (red) as a function of the number of Trotter steps used in the QPE for the (H\({}_{2}\))\({}_{2}\) (light blue/light red), (H\({}_{2}\))\({}_{3}\) (blue/red), and (H\({}_{2}\))\({}_{4}\) (dark blue/dark red) systems.
The DI circuit only needs to be run once and does not require ancilla qubits; however, the gate depth scales exponentially with the size of the active space, with the prefactor depending on the algorithm used to initialize the circuit. DI via quantum multiplexors as implemented in Qiskit scales as \(4^{N}-(3/2)2^{N}\), where \(N\) is the number of qubits in the fragment [27].
Table 1 contains information about the number of ancilla qubits required for a precision of 1.6 \(mE_{h}\) in the fragment QPE energies of each system studied above. This precision threshold was chosen as equivalent to the 1 kcal mol\({}^{-1}\) threshold of chemical accuracy. The hydrogen systems have the same fragment type, the H\({}_{2}\) molecule, and so require the same number of ancilla qubits per fragment for the same precision. The trans-butadiene system is a model of weak correlation at R=1.0 A, requiring 15 ancilla qubits, while at R=3.0 A it shows stronger correlation and requires 21 qubits per fragment. The details of the ancilla qubit estimation are presented in the SI. We also report the actual numbers of ancilla qubits used in the simulations for high-accuracy results and the number of CNOT gates required for a single fragment circuit estimated for both QPE- and DI-LAS-UCC. The number of CNOTs required to implement the QPE circuit was computed according to the following equation:
\[N_{\text{CNOT}}=n_{U}n_{\text{Tr}}(2^{n_{\text{an}}}-1) \tag{6}\]
where \(n_{U}\) is the number of CNOTs required to implement a single unitary, \(n_{\text{Tr}}\) is the number of Trotter repetitions, and \(n_{\text{an}}\) the number of ancilla qubits used in the calculations. This estimation is based on the circuit used for the QPE, with the unitary repeated \(n_{\text{Tr}}\) times for the Trotter approximation and \(2^{n_{\text{an}}}-1\) times in a standard QPE circuit.
For the H\({}_{2}\) dimer, \(N_{\text{CNOT}}\) for QPE-LAS-UCC is equal to 38x \(N_{\text{CNOT}}\) for DI-LAS-UCC, while for the tetramer, it is 48x \(N_{\text{CNOT}}\) for DI. The number of CNOT gates scales exponentially with the number of ancilla qubits, thus for trans-butadiene at R=3.0 A, QPE-LAS-UCC requires an order of magnitude more resources than at R=1.0 A. This serves as an example of the system-dependence of the resources required by QPE-LAS-UCC.
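The comparison can be reproduced with a few lines implementing Eq. (6) and the multiplexor scaling quoted above; the per-unitary CNOT count \(n_{U}=84\) used in the example is inferred from Table 1 and should be treated as an assumption of this sketch.

```python
def cnots_qpe(n_u: int, n_trotter: int, n_ancilla: int) -> int:
    """Eq. (6): CNOT count for one fragment QPE circuit."""
    return n_u * n_trotter * (2 ** n_ancilla - 1)

def cnots_di(n_qubits: int) -> int:
    """Multiplexor-based direct initialization of an n-qubit fragment register."""
    return int(4 ** n_qubits - 1.5 * 2 ** n_qubits)

# For a 4-spin-orbital H2 fragment: cnots_di(4) = 232, and with n_u = 84 (assumed),
# cnots_qpe(84, 7, 4) = 8820, reproducing the ~38x ratio quoted for the dimer.
```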
To compare the resources needed for state preparation with the two methods, we explored a more realistic chemical system, [Fe(H\({}_{2}\)O)\({}_{4}\)]\({}_{2}\)bypm\({}^{+4}\) (bypm = 2,2'-bipyrimidine), and studied the effect of increasing the number of active spin orbitals on the total number of CNOT gates required for both methods. Each Fe center was chosen to be a fragment, with the active orbitals being localized on the Fe atoms.
Figure 7 reports the number of CNOT gates estimated for DI-LAS-UCC and QPE-LAS-UCC for different active spaces, ranging from 10 spin orbitals per fragment, with the LAS being ((6,5),(6,5)), to 22 spin orbitals per fragment, with the corresponding LAS as ((12,11),(12,11)). Comparing the DI curve (blue) with the two QPE curves (peach and purple), we find that DI requires fewer CNOT gates than QPE for smaller active spaces and for cases in which larger numbers of ancilla qubits are used for QPE. As the size of the fragment active spaces grows, QPE-LAS-UCC becomes more efficient, especially if smaller numbers of ancilla qubits are needed. The number of ancilla qubits required, as seen from estimations in Table 1, depends on how strongly correlated the individual fragments are and inversely on the gap between the desired ground state and first excited state. (See SI for more details on the ancilla qubit estimates.) For the Fe system, this crossover occurs between 14 and 20 active fragment spin orbitals. Note that we have here only estimated the gate depth. QPE has additional overheads based on the overlap of the target eigenstate with the initial state.
Figure 7: Total estimated gate counts for the state preparation circuits tested as a function of the number of spin orbitals in each fragment active space for the [Fe(H\({}_{2}\)O)\({}_{4}\)]\({}_{2}\)bypm\({}^{+4}\) molecule. QPE-LAS-UCC counts are estimated using 10 Trotter repetitions and 10 (purple) or 20 (peach) ancilla qubits respectively.
This adds additional overheads in time, which were not estimated here.
### Application to a Cu-Mn complex
Both the numerically simulated LAS-UCC and the DI-LAS-UCC methods were employed for the calculation of spin-state energy differences for a more challenging problem than the model systems above, [Mn(NH\({}_{3}\))\({}_{4}\)]oxamide[Cu(NH\({}_{3}\))\({}_{2}\)]\({}^{2+}\) (Figure 2c). The goal is to compute the spin state energy differences with LAS-UCC and DI-LAS-UCC and compare them with LASSCF, HF-UCC and CASSCF (defined in Section 3.2). A minimal (6,6) active space was used for the CASSCF, including 5 \(d\) orbitals on the Mn center and 1 \(d\) orbital on the Cu center. For the LASSCF calculation, the fragment active spaces considered were a (5,5) active space and a (1,1) active space centered on the Mn and Cu atoms respectively. Table 2 contains information about the energy differences between states of different \(m_{s}\) values, with antiferromagnetic local spin orientations for the LASSCF subspaces. The LAS-UCC and DI-LAS-UCC methods provide values within 1 kcal mol\({}^{-1}\) of the CASSCF reference values. However, the HF-UCC method gives an error of close to 10 kcal mol\({}^{-1}\) for the lower \(m_{s}\) states, which are, in general, harder to simulate. These results suggest that the LAS wave function is a better starting point for the VQE than Hartree-Fock for multi spin-center containing systems, and the LAS-UCC method also improves on the classical LASSCF calculation.
## 4 Discussion
We have analyzed two methods of state preparation for the loading of a fragment multireference wave function onto a quantum circuit, to obtain highly accurate ground state energies of systems with strongly correlated subunits. Section 3.1 shows that the error in the QPE-based state preparation (QPE-LAS-UCC) is dominated by Trotter error, which can however be systematically reduced. While both QPE- and DI-LAS-UCC lower the number of VQE iterations as compared to simply using a Hartree-Fock reference (Section 3.2), DI provides an ancilla- and probability-free method to load the state, at the cost of exponential scaling in the number of gates.
The analysis in Sections 3.3.1 and 3.3.2 suggests that the number of ancilla qubits and the number of Trotter steps heavily influence the quality of the fragment QPE wave functions, while the values of these parameters required for a desired accuracy depend on the system size and degree of strong correlation.
Strongly correlated systems, our final target systems, are challenging for the use of QPE for state preparation, requiring careful preliminary calculations and post-processing analysis as well as a large number of ancilla qubits and Trotter steps.
For systems with fragments that can be represented by fewer than 20 qubits, DI-LAS-UCC requires a smaller number of gates than the QPE with 10 Trotter steps. However, for systems whose representation requires more than 20 qubits, there exists a crossover point where the QPE algorithm's polynomial scaling requires fewer gates than DI. Thus, the size of the active space can guide the choice of state preparation method. These results also provide insight into more general QPE algorithms, where effective state preparation is required, pointing to DI of complex classically computed wave functions as a potential technique for small-scale demonstrations of QPE.
\begin{table}
\begin{tabular}{||c c c c c c||} \hline System & \begin{tabular}{c} Est. Num. of \\ ancilla qubits \\ \end{tabular} & \begin{tabular}{c} Actual \\ ancilla qubits \\ \end{tabular} & \begin{tabular}{c} Num. of \\ Trotter steps \\ \end{tabular} & \begin{tabular}{c} CNOT Gates \\ QPE \\ \end{tabular} &
\begin{tabular}{c} CNOT Gates \\ DI \\ \end{tabular} \\ \hline \hline (H\({}_{2}\))\({}_{2}\) & 11 & 4 & 7 & 8820 & 232 \\ \hline (H\({}_{2}\))\({}_{4}\) & 11 & 4 & 9 & 11340 & 232 \\ \hline C\({}_{4}\)H\({}_{6}\); R=1.0 & 15 & 6 & 4 & 2,881,872 & 65,152 \\ \hline C\({}_{4}\)H\({}_{6}\); R=3.0 & 21 & 9 & 4 & 33,231,352 & 65,152 \\ \hline \end{tabular}
\end{table}
Table 1: Number of ancilla qubits per fragment estimated for an energy precision of 1.6 \(mE_{h}\). Actual numbers of ancilla qubits used in this study, as well as actual numbers of Trotter steps, with total CNOT gate counts for QPE- and DI-LAS-UCC respectively for each fragment studied.
We note that the ancilla dependence, post-processing requirement, and other overheads will also apply to a full-system QPE, whereas fragmentation of the active space results in shallower state-preparation circuits; thus, QPE-LAS-UCC and DI-LAS-UCC have a clear advantage over full-system QPE in terms of the number of gates required.
Our results in Section 3.5 for the bimetallic system compare the LAS-UCC method, simulated both numerically and using a noiseless state vector simulator, with a generalized UCC ansatz utilizing an HF reference. The LAS-UCC method replicates the CASSCF reference value with high accuracy, confirming that to obtain accurate energy differences for spin states of multi-centered transition metal complexes, the LASSCF wave function is an ideal starting point.
Future work includes improvements to the LAS-UCC method through exploration of the VQE ansatz and optimization procedure. Currently the VQE step of the algorithm uses a generalized UCC ansatz, which is physically motivated and accurate, but expensive in terms of gate depth. Alternative methods of building an ansatz such as ADAPT-VQE [35], Qubit Coupled Cluster [36] or Unitary Selective Coupled Cluster [37] can be considered in order to reduce the circuit depth. Other more efficient optimization schemes can also be tested [38, 39]. Our ultimate goal is to simulate complex chemical systems via fragment-based methods by leveraging the power of both classical and quantum computers.
## 5 Acknowledgements
This research is based on work supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy (DOE) under Contract no. DE-AC0206CH11357. This work was performed, in part, at the Center for Nanoscale Materials, a U.S. Department of Energy Office of Science User Facility, and supported by the U.S. Department of Energy, Office of Science, under Contract no. DE-AC0206CH11357. M.R.H. and L.G. are partially supported by the U.S. DOE, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences under grant no. USDOE/DE-SC002183. M.J.O. is partially supported by the Defense Advanced Research Projects Agency under Contract No. HR001122C0074. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers. We gratefully acknowledge the computing resources provided by University of Chicago Research Computing Center.
|
2305.17223 | Do We Really Need a Large Number of Visual Prompts? | Due to increasing interest in adapting models on resource-constrained edges,
parameter-efficient transfer learning has been widely explored. Among various
methods, Visual Prompt Tuning (VPT), prepending learnable prompts to input
space, shows competitive fine-tuning performance compared to training of full
network parameters. However, VPT increases the number of input tokens,
resulting in additional computational overhead. In this paper, we analyze the
impact of the number of prompts on fine-tuning performance and self-attention
operation in a vision transformer architecture. Through theoretical and
empirical analysis we show that adding more prompts does not lead to linear
performance improvement. Further, we propose a Prompt Condensation (PC)
technique that aims to prevent performance degradation from using a small
number of prompts. We validate our methods on FGVC and VTAB-1k tasks and show
that our approach reduces the number of prompts by ~70% while maintaining
accuracy. | Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda | 2023-05-26T19:31:57Z | http://arxiv.org/abs/2305.17223v2 | # Do We Really Need a Large Number of Visual Prompts?
###### Abstract
Due to increasing interest in adapting models on resource-constrained edges, parameter-efficient transfer learning has been widely explored. Among various methods, Visual Prompt Tuning (VPT), prepending learnable prompts to input space, shows competitive fine-tuning performance compared to training of full network parameters. However, VPT increases the number of input tokens, resulting in additional computational overhead. In this paper, we analyze the impact of the number of prompts on fine-tuning performance and self-attention operation in a vision transformer architecture. Through theoretical and empirical analysis we show that adding more prompts does not lead to linear performance improvement. Further, we propose a Prompt Condensation (PC) technique that aims to prevent performance degradation from using a small number of prompts. We validate our methods on FGVC and VTAB-1k tasks and show that our approach reduces the number of prompts by \(\sim\)70% while maintaining accuracy.
## 1 Introduction
Parameter-Efficient Transfer Learning (PETL) has become a popular approach in various domains as it enables fine-tuning pre-trained models with minimal memory usage on resource-constrained edge devices [31, 45, 46, 48, 14, 16]. In PETL, a large model with billions of parameters, such as a transformer [8, 38], is first trained on a massive dataset on a cloud server, and then fine-tuned with limited computational/memory resources on edge devices. Among various PETL methods, Visual Prompt Tuning (VPT) [17] is promising due to its ability to update a small subset of parameters while achieving higher accuracy than other methods. Technically, VPT introduces learnable prompt tokens, which are prepended to the input or intermediate image patch tokens.
While VPT can induce memory efficiency, the use of additional prompt tokens leads to increased computational costs from self-attention and linear layers [26, 42, 8]. We report FLOPs with respect to the number of prompts in Table 1, which shows that the computational cost of VPT significantly increases as the number of prompts increases. If 200 prompts are prepended to the input space of ViT-B, the computational overhead (FLOPs) almost doubles compared to the model without any prompts. This indicates there is an inevitable trade-off between the number of prompts and computational cost in VPT.
Given such a trade-off, it is natural to ask: _How does the fine-tuning performance change with respect to the number of prompts?_ To find the answer, we measure the test accuracy with respect to the number of prompts. Interestingly, as shown in Fig. 1, we found that reducing the number of prompts for VPT training by approximately 50% does not lead to a significant drop, and most of the performance drop happens in the \(10\%\sim 40\%\) range. The results imply that the correlation between the number of prompts and fine-tuning accuracy is not linear.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline \# Prompts (ViT-B/16 [8]) & 0 & 50 & 100 & 150 & 200 \\ \hline GFLOPs & 17.6 & 22.2 & 26.9 & 31.8 & 36.7 \\ Computational Overhead & 0\% & 26.16\% & 52.8\% & 80.6\% & 108.5\% \\ \hline \hline \# Prompts (Swin-B [26]) & 0 & 5 & 10 & 25 & 50 \\ \hline GFLOPs & 15.4 & 16.3 & 17.2 & 19.8 & 24.3 \\ Computational Overhead & 0\% & 5.8\% & 11.6\% & 28.5\% & 57.8\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: The increase of floating-point operations (FLOPs) with respect to the number of prompts in VPT [17].
Figure 1: Accuracy depending on the number of prompts used for VPT training. We transfer an ImageNet-22k pre-trained ViT-B/16 [8] to three downstream tasks. The x-axis shows the relative number of prompts compared to the original number reported in [17]. The vertical dotted line indicates the point where there is < 1% drop in accuracy from 100% number of prompts.
To further provide a better understanding of the prompts in VPT, we analyze the impact of the number of prompt tokens on fine-tuning accuracy by addressing several questions: _Why do the number of prompts and the fine-tuning performance have a non-linear correlation? How does the number of prompts affect the self-attention operation? If there is a performance drop with a smaller number of prompts, how can we recover the accuracy drop?_ We provide both empirical and mathematical analysis to answer such questions. This can provide insights into the behavior of the VPT model and its self-attention mechanism, which can help researchers better understand VPT and potentially improve the prompt design. At the same time, it is essential to analyze this impact on the computational cost to ensure that the method remains practical for deployment on extremely resource-constrained edge devices.
A noteworthy observation from Fig. 1 is that the performance degradation in the \(<50\%\) prompts regime is non-trivial. To address this, we propose _Prompt Condensation_ (PC), a technique that reduces the number of prompt tokens with minimal accuracy drop. The PC consists of three steps: (1) Computing the importance score for each prompt. Here, we propose a global metric for measuring the importance score of each prompt, which provides better accuracy compared to the local attention-based metrics [25, 30, 10]. (2) Selecting the top \(k\%\) prompts based on the importance score and discarding the remaining prompts. (3) Fine-tuning the selected prompts while freezing other parameters.
In summary, our contributions can be as follows:
* In a first-of-its-kind study, we analyze the impact of the number of visual prompt tokens on the fine-tuning accuracy and self-attention operation in VPT.
* We find that the number of prompts is not linearly proportional to performance improvement. To support this, we provide empirical and mathematical analysis.
* To recover the performance drop with a small number of prompts, we propose _Prompt Condensation_ (PC). Our method can reduce the number of prompts by \(\sim 70\%\) while maintaining performance.
## 2 Related Work
### Parameter Efficient Transfer Learning (PETL)
Efficient fine-tuning of large pre-trained models on edge devices has become a popular research topic due to its practicality and high performance [31, 45, 46, 48, 14, 16]. Rather than training the entire set of parameters in neural networks, researchers focus on how to use a small percentage of weights to maximize transfer performance. To this end, several approaches [32, 35, 15, 3] insert a lightweight bottleneck module into the transformer model, allowing gradients to be calculated only for a small number of parameters. TinyTL [3] and BitFit [43] propose to update the bias term to fine-tune the model. Other approaches [45, 34] add side networks that can be optimized while keeping the original large model frozen. Another effective method to reduce memory requirements is to sparsify [18] or quantize activation [4, 5, 11, 9] during backward gradient calculation. Recently, VPT [17] prepends trainable parameters to the input space of the pre-trained model, achieving similar (and sometimes even better) accuracy compared to full fine-tuning while optimizing only about \(1\%\) of the parameters. However, adding a large number of prompts can significantly increase the computational overhead of the model. In this work, we analyze how the number of prompts affects fine-tuning performance.
**Importance of our work.** Prompt tuning is one of the major research directions to fine-tune the large-scale pre-trained model. Considering that prompt learning is applied to various applications, we aim to improve the efficiency of the prompt tuning approach. Our objective differentiates from prior works [7, 1, 6, 47, 19], such as adapter-based or partial training methods, which primarily seek to enhance performance on downstream tasks with different approaches. Furthermore, given that our technique does not necessitate any modifications to the model architecture, it offers promising potential for extension in future prompt learning approaches.
### Token Sparsification
The computational cost of ViT [8] increases as the number of tokens given to the model increases [36]. To alleviate this issue, previous works aim to reduce the number of tokens [10, 30, 27, 23, 13, 42, 21, 22, 33]. Liang _et al_. [25] define the importance score of each token based on its similarity to a \([CLS]\) token. Rao _et al_. [30] propose a prediction module with Gumbel-Softmax to sparsify tokens, which is jointly trained with the model parameters. Meng _et al_. [27] propose a decision network that can turn on/off heads and blocks in a transformer architecture. The authors of [41] propose an adaptive halting module that calculates a probability for each token to determine when to halt processing. However, these methods require updating the weight parameters inside the transformer or an additional module, which is challenging to apply to the PETL scenario. Recently, [2] proposed a token merging technique without training, gradually reducing the number of tokens in each block of vision transformers to speed up inference. However, their method will be difficult to apply for prompt tokens because prompt tokens are introduced at every layer.
## 3 Preliminary
**Vision Transformer.** Our work is based on ViT [8] which processes image tokens with multiple attention operations. The input image is sliced into multiple patches (tokens). Then, in each layer, the self-attention operation is applied to the image tokens. Let's assume we have token embedding \(X\in\mathbb{R}^{n\times d}\), Query \(Q=XW_{q}\), Key \(K=XW_{k}\), Value \(V=XW_{v}\) with linear projection. Then, the attention operation can be formulated as follows.
\[Attention(Q,K,V)=\underbrace{Softmax(\frac{QK^{T}}{\sqrt{d}})}_{A}V, \tag{1}\]
where \(A\) is the self-attention matrix after the Softmax function. We utilize Multi-Head Self-Attention (MHSA) in our approach, which takes the outputs of multiple single-head attention blocks, and then projects the combined output using an additional parameter matrix.
\[head_{i}=Attention(XW_{q}^{i},XW_{k}^{i},XW_{v}^{i}). \tag{2}\]
\[MHSA(X)=Concat[head_{1},...,head_{H}]W_{o}+X. \tag{3}\]
The output tokens generated by the MHSA block are fed into a Feed-Forward Network (FFN), which is composed of two fully-connected layers with a GELU activation layer in between. In the last encoder layer of the Transformer, the \([CLS]\) token is extracted from the output tokens and employed to predict the class.
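For concreteness, a minimal PyTorch sketch of Eqs. (1)-(3) is given below. It follows the text with per-head projections written out explicitly (scaling by the per-head dimension) rather than an optimized implementation, and all names are illustrative.

```python
import torch
import torch.nn as nn

class MHSA(nn.Module):
    def __init__(self, d: int, n_heads: int):
        super().__init__()
        self.dh = d // n_heads
        self.W_q = nn.ModuleList([nn.Linear(d, self.dh, bias=False) for _ in range(n_heads)])
        self.W_k = nn.ModuleList([nn.Linear(d, self.dh, bias=False) for _ in range(n_heads)])
        self.W_v = nn.ModuleList([nn.Linear(d, self.dh, bias=False) for _ in range(n_heads)])
        self.W_o = nn.Linear(d, d, bias=False)

    def forward(self, X):                                                  # X: (batch, n, d)
        heads = []
        for wq, wk, wv in zip(self.W_q, self.W_k, self.W_v):
            Q, K, V = wq(X), wk(X), wv(X)
            A = torch.softmax(Q @ K.transpose(-2, -1) / self.dh ** 0.5, dim=-1)  # Eq. (1)
            heads.append(A @ V)                                            # Eq. (2)
        return self.W_o(torch.cat(heads, dim=-1)) + X                      # Eq. (3) with residual
```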
**Visual Prompt Tuning.** Visual Prompt Tuning (VPT) [17] suggests a memory-efficient fine-tuning technique by adding a set of learnable prompts at the input/intermediate layers. Depending on where the prompts are added, VPT has two versions: VPT-Shallow and VPT-Deep.
Let \(X_{i}\in\mathbb{R}^{n\times d}\) be the token embedding at layer \(i\in\{1,2,...,L\}\), and \(F_{i}(\cdot)\) be the operations inside layer \(i\). VPT-Shallow prepends \(m\) prompts \(P_{1}\in\mathbb{R}^{m\times d}\) to the input token embedding \(X_{1}\).
\[[Z_{2},X_{2}]=F_{1}([P_{1};X_{1}]). \tag{4}\]
\[[Z_{i+1},X_{i+1}]=F_{i}([Z_{i};X_{i}])\quad\text{for}\ \ 1<i\leq L. \tag{5}\]
Here, \(Z_{i}\) is the output tokens from the layer \(i\). Note that only \(P_{1}\) and the classification head are trained.
On the other hand, VPT-Deep introduces prompts \(P_{i}\in\mathbb{R}^{m\times d}\) in every layer.
\[[\ \underline{\ },X_{i+1}]=F_{i}([P_{i};X_{i}])\quad\text{for}\ \ 1\leq i\leq L. \tag{6}\]
VPT-Deep shows higher performance than VPT-Shallow by using more prompts. In our work, we focus on VPT-Deep due to its superior performance. Although VPT requires significantly less memory usage for training, the computational overhead increases as the total number of tokens increases.
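A minimal PyTorch sketch of the VPT-Deep insertion in Eq. (6) is shown below: at every layer the \(m\) learnable prompts are prepended to the tokens, the frozen block is applied, and the prompt outputs are discarded. The class and its initialization are illustrative only.

```python
import torch
import torch.nn as nn

class VPTDeep(nn.Module):
    def __init__(self, blocks, n_prompts: int, d: int):
        super().__init__()
        self.blocks = blocks                                   # frozen ViT encoder layers (e.g. an nn.ModuleList)
        self.prompts = nn.Parameter(torch.empty(len(blocks), n_prompts, d))
        nn.init.uniform_(self.prompts, -0.1, 0.1)

    def forward(self, x):                                      # x: (batch, n_tokens, d)
        batch = x.shape[0]
        for i, block in enumerate(self.blocks):
            p = self.prompts[i].expand(batch, -1, -1)          # P_i for every sample
            z = block(torch.cat([p, x], dim=1))                # F_i([P_i ; X_i])
            x = z[:, p.shape[1]:, :]                           # keep only the non-prompt tokens
        return x
```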
## 4 Analysis on the Number of Visual Prompts
In this section, we analyze the impact of prompts on self-attention operation and fine-tuning accuracy. We first demonstrate two observations, and then we provide mathematical support for why the performance does not improve linearly as we use more prompts.
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline \#Prompts\(\backslash\)Data & CUB-200 & NABirds & Stanford Dog & Stanford Car & CIFAR100 & SVHN & EuroSAT & Resisc45 \\ \hline
100\% & \(88.52_{\pm 0.09}\) & \(84.20_{\pm 0.05}\) & \(90.22_{\pm 0.09}\) & \(83.42_{\pm 0.11}\) & \(78.51_{\pm 0.71}\) & \(80.64_{\pm 1.28}\) & \(96.41_{\pm 0.39}\) & \(82.66_{\pm 1.54}\) \\
50\% & \(88.45_{\pm 0.04}\) & \(84.21_{\pm 0.06}\) & \(90.25_{\pm 0.05}\) & \(83.22_{\pm 0.09}\) & \(78.23_{\pm 0.34}\) & \(80.66_{\pm 0.39}\) & \(96.09_{\pm 0.08}\) & \(82.18_{\pm 0.11}\) \\
40\% & \(88.45_{\pm 0.10}\) & \(84.18_{\pm 0.02}\) & \(90.21_{\pm 0.06}\) & \(83.16_{\pm 0.04}\) & \(77.87_{\pm 0.49}\) & \(80.58_{\pm 0.42}\) & \(95.88_{\pm 0.40}\) & \(82.10_{\pm 0.97}\) \\
30\% & \(88.49_{\pm 0.10}\) & \(84.16_{\pm 0.04}\) & \(90.22_{\pm 0.06}\) & \(81.90_{\pm 0.08}\) & \(78.18_{\pm 0.98}\) & \(78.49_{\pm 1.65}\) & \(95.88_{\pm 0.40}\) & \(82.53_{\pm 0.81}\) \\
20\% & \(88.47_{\pm 0.09}\) & \(84.11_{\pm 0.04}\) & \(90.22_{\pm 0.09}\) & \(81.42_{\pm 0.12}\) & \(78.08_{\pm 0.68}\) & \(79.08_{\pm 1.46}\) & \(95.98_{\pm 0.27}\) & \(82.36_{\pm 0.40}\) \\
10\% & \(88.13_{\pm 0.11}\) & \(84.13_{\pm 0.03}\) & \(90.20_{\pm 0.07}\) & \(80.76_{\pm 0.14}\) & \(77.62_{\pm 0.23}\) & \(77.56_{\pm 0.89}\) & \(95.90_{\pm 0.12}\) & \(81.21_{\pm 0.57}\) \\ \hline \hline \#Prompts\(\backslash\)Data & Clevr/count & Clevr/dist & DMLab & KITTI/dist & dSprites/loc & dSprites/ori & SmallNORB/azi & SmallNORB/ele \\ \hline
100\% & \(68.65_{\pm 1.24}\) & \(59.05_{\pm 0.32}\) & \(46.05_{\pm 0.33}\) & \(72.89_{\pm 2.20}\) & \(74.35_{\pm 2.80}\) & \(48.09_{\pm 1.77}\) & \(32.86_{\pm 0.84}\) & \(36.46_{\pm 0.19}\) \\
50\% & \(68.49_{\pm 2.12}\) & \(59.68_{\pm 0.60}\) & \(46.21_{\pm 0.87}\) & \(72.26_{\pm 1.38}\) & \(72.26_{\pm 3.11}\) & \(47.50_{\pm 1.36}\) & \(32.43_{\pm 0.28}\) & \(36.34_{\pm 0.42}\) \\
40\% & \(68.88_{\pm 1.70}\) & \(59.45_{\pm 0.38}\) & \(45.32_{\pm 0.50}\) & \(72.32_{\pm 1.40}\) & \(69.02_{\pm 0.45}\) & \(47.18_{\pm 0.41}\) & \(32.22_{\pm 0.43}\) & \(36.19_{\pm 0.58}\) \\
30\% & \(66.40_{\pm 0.83}\) & \(58.94_{\pm 0.20}\) & \(44.58_{\pm 1.10}\) & \(72.33_{\pm 1.29}\) & \(67.48_{\pm 5.76}\) & \(47.37_{\pm 1.17}\) & \(31.24_{\pm 0.76}\) & \(35.94_{\pm 0.64}\) \\
20\% & \(65.62_{\pm 3.23}\) & \(58.94_{\pm 0.57}\) & \(44.57_{\pm 0.88}\) & \(72.14_{\pm 1.12}\) & \(58.14_{\pm 6.26}\) & \(47.22_{\pm 0.73}\) & \(29.29_{\pm 1.86}\) & \(35.43_{\pm 0.43}\) \\
10\% & \(60.49_{\pm 3.08}\) & \(58.83_{\pm 0.21}\) & \(44.22_{\pm 0.89}\) & \(72.39_{\pm 1.30}\) & \(52.26_{\pm 6.06}\) & \(44.46_{\pm 2.57}\) & \(29.03_{\pm 1.50}\) & \(32.63_{\pm 0.50}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The test accuracy of VPT-Deep on FGVC and VTAB-1k tasks with respect to the number of prompts. We report 6 different prompt settings, where \(k\%\) represents how many prompts we use for VPT training compared to the original number of prompts reported in [17]. We run the same configuration 3 times and report the mean and standard deviation. We transfer an ImageNet pre-trained ViT-B/16 [8] backbone.
Figure 2: The normalized cumulative eigenvalue of self-attention matrix \(A\) in Eq. 1 on Stanford Cars and DMLab. We report the mean and standard deviation across all layers.
**Observation 1**: _Reducing the number of prompts does not linearly decrease the accuracy._ In addition to Fig. 1, we provide further empirical evidence on the correlation between the number of prompts and the fine-tuning accuracy. We evaluate the test accuracy of our approach on FGVC and VTAB-1k [44] tasks while varying the number of prompts. It is worth noting that each dataset requires a specific number of prompts to achieve optimal performance, as reported in [17]. We focus on datasets that require more than 10 prompts for both VPT-Shallow and VPT-Deep since using fewer than 10 prompts does not result in significant computational overhead. We present the performance change according to the number of prompts in Table 2. Our analysis shows that for the majority of the datasets, decreasing the number of prompts by about 50% does not result in a significant decline in performance. Additionally, most of the performance decrease occurred in the range of 10% to 40%, indicating that the relation between accuracy and the number of prompts is not linear.
**Observation 2**: _Self-attention matrix is low-rank before/after adding prompts._ The previous work [40] shows that the self-attention matrix in ViT is low-rank. In a similar line of thought, we investigate the rank of the self-attention matrix when we add prompts. In Fig. 2, we compare the cumulative eigenvalue of the self-attention matrix \(A\)_without_ prompts and _with_ prompts. Our results show that the self-attention matrix remains low-rank even when prompts are added. In particular, for the Stanford Cars dataset, we add 200 prompts, which is a larger number of tokens than the original image tokens (_i.e._[19]), but the cumulative eigenvalue trend does not change. Overall, the results imply that only a few prompts affect the self-attention operation.
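The diagnostic behind Fig. 2 can be sketched in a few lines: form the post-softmax attention matrix of Eq. (1) with and without prepended prompts and inspect its normalized cumulative spectrum (singular values are used here as a proxy for the reported eigenvalue spectrum); all names are illustrative.

```python
import torch

def cumulative_spectrum(X, W_q, W_k, prompts=None):
    """X: (n, d) token embeddings; prompts: optional (m, d) prompt tokens."""
    if prompts is not None:
        X = torch.cat([prompts, X], dim=0)                               # prepend prompt tokens
    A = torch.softmax((X @ W_q) @ (X @ W_k).T / X.shape[-1] ** 0.5, dim=-1)  # Eq. (1)
    s = torch.linalg.svdvals(A)
    return torch.cumsum(s, dim=0) / s.sum()                              # saturates quickly if A is low-rank
```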
To understand why the number of prompts is not linearly correlated to the self-attention operation and the accuracy, we provide a mathematical analysis here. We use the rank of the approximated low-rank matrix of the attention matrix as a surrogate metric to evaluate the impact of the prompt on the self-attention operation.
**Theorem 1** (Self-attention is low rank. Proved in [40]).: _Let \(A\in\mathbb{R}^{n\times n}\) be a self-attention matrix, and \(v\in\mathbb{R}^{n}\) be a column vector of value matrix \(V\). Then, there exists a low-rank matrix \(\tilde{A}\in\mathbb{R}^{n\times n}\) satisfying_
\[Pr(\|\tilde{A}v^{T}-Av^{T}\|<\epsilon\|Av^{T}\|)>1-o(1), \tag{7}\]
_where the rank of \(\tilde{A}\) is bounded, i.e., \(rank(\tilde{A})=\Theta(log(n))\)._
**Proposition 1**.: _For any low-rank matrices \(\tilde{A}_{n}\in\mathbb{R}^{n\times n}\) and \(\tilde{A}_{n+m}\in\mathbb{R}^{(n+m)\times(n+m)}\) satisfying \(Pr(\|\tilde{A}v^{T}-Av^{T}\|<\epsilon\|Av^{T}\|)>1-o(1)\), we have_
\[rank(\tilde{A}_{n+m})-rank(\tilde{A}_{n})=O(log(m)), \tag{8}\]
_where \(m\) is the number of prompts._
Proof.: Based on Theorem 1, given a bounded error \(Pr(\|\tilde{A}v^{T}-Av^{T}\|<\epsilon\|Av^{T}\|)>1-o(1)\), the rank of \(\tilde{A}_{n}\) and \(\tilde{A}_{n+m}\) can be:
\[\alpha log(n)\leq rank(\tilde{A}_{n})\leq\beta log(n), \tag{9}\]
\[\alpha log(n+m)\leq rank(\tilde{A}_{n+m})\leq\beta log(n+m), \tag{10}\]
where \(\alpha\) and \(\beta\) are the constants for the lower and upper bound respectively. Then, we have
\[log\left(\frac{(n+m)^{\alpha}}{n^{\beta}}\right)\leq\textit{rank}(\tilde{A}_ {n+m})-\textit{rank}(\tilde{A}_{n})\leq log\left(\frac{(n+m)^{\beta}}{n^{ \alpha}}\right). \tag{11}\]
We obtain Eq. 8 with respect to the variable \(m\). Additional details can be found in the Supplementary.
Proposition 1 demonstrates that the increase in the rank of the low-rank self-attention matrix follows a logarithmic trend. As the logarithmic function is concave, the effect of adding new prompts on the attention operation diminishes as the number of prompts increases. For instance, increasing the number of prompts from \(0\) to \(50\) has a greater impact than increasing the number of prompts from \(150\) to \(200\). This analysis is aligned with our **Observation 1**: reducing the number of prompts by approximately 50% does not lead to a significant performance drop, while most of the performance drop occurs in the \(10\%\sim 40\%\) range.
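This diminishing effect can be illustrated numerically. The short script below evaluates the logarithmic rank bound from Proposition 1; the baseline of 197 tokens (196 image patches plus the [CLS] token) and the constant in front of the logarithm are illustrative assumptions, not quantities estimated in this work.

```python
import math

n = 197   # assumed number of original tokens for ViT-B/16 at 224x224 (196 patches + [CLS])
c = 1.0   # unknown constant in the Theta(log) bound; any positive value gives the same trend

def rank_bound(num_prompts: int) -> float:
    """Surrogate for the rank of the approximated self-attention matrix with num_prompts prompts."""
    return c * math.log(n + num_prompts)

# Adding the first 50 prompts moves the bound far more than adding prompts 150 -> 200.
print(rank_bound(50) - rank_bound(0))     # ~0.23
print(rank_bound(200) - rank_bound(150))  # ~0.13
```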
## 5 Prompt Condensation
Figure 3: Accuracy changes by removing whole prompts in one layer. We report the original accuracy with a dotted line. Each dataset shows a different trend in accuracy degradation.
Although decreasing the number of prompts by up to 50% shows only slight performance degradation, the performance drop is non-trivial in the small-number-of-prompts regime. In Table 2, the major performance drop happens under 40% of prompts on most datasets. To address this, we propose a technique called _Prompt Condensation_ (PC).
**Problem Statement.** Our objective is to minimize the number of prompts while maintaining accuracy. Let \(P=\{p_{1},p_{2},...,p_{N}\}\) be the set of prompts, and \(P^{\prime}\) be the condensed prompt set which has a smaller number of elements. Then our goal can be written as:
\[\min_{P^{\prime}}|\mathcal{L}(\theta,P)-\mathcal{L}(\theta,P^{\prime})|, \tag{12}\]
where \(\mathcal{L}(\cdot)\) is the objective function of a task and \(\theta\) denotes the model parameters. At the same time, we also aim to minimize the number of prompts inside \(P^{\prime}\).
In designing our model for the Parameter Efficient Transfer Learning (PETL) scenario, we consider the following principles: (1) Model parameters cannot be updated due to memory constraints; therefore, only prompts can be trainable. (2) Additional modules such as those proposed in [30, 27, 41] cannot be utilized. Given these constraints, most token sparsification methods are difficult to apply in our case. Instead, our method focuses on identifying important prompts and fine-tuning them without updating or adding any model parameters.
**Are all prompts equally important?** An important design choice for PC is whether to condense the same number of prompts in each layer. To investigate this, we measure the accuracy change with respect to the prompts in each layer. We remove prompts in layer \(l\) while other layers keep the same number of prompts. As shown in Fig. 3, we observe that prompts in different layers have varying contributions to the accuracy, and the trend varies across different datasets. This observation leads us to leverage a global score across all layers, unlike the layer-wise score (_i.e_., using the row similarity in self-attention) widely used in previous work [25, 30, 10].
**Prompt Scoring.** We define the impact of prompt \(p_{i}\) as the change in the objective function of the fine-tuned VPT model when \(p_{i}\) is removed.
\[\|\Delta\mathcal{L}(\theta,p_{i})\|_{2}=\|\mathcal{L}(\theta,P)-\mathcal{L}( \theta,P_{i}^{\prime})\|_{2}, \tag{13}\]
where \(P_{i}^{\prime}\) is the prompt set obtained by zeroing out \(p_{i}\in P\). Using a first-order Taylor approximation, we can approximate \(\mathcal{L}(\theta,P_{i}^{\prime})\) at \(p_{i}=0\) as
\[\mathcal{L}(\theta,P_{i}^{\prime})\approx\mathcal{L}(\theta,P)-\frac{d \mathcal{L}(\theta)}{dp_{i}}p_{i}. \tag{14}\]
We only use the first-order term since second- and higher-order terms require prohibitive memory storage. Substituting Eq. 14 into Eq. 13, we obtain
\[\|\Delta\mathcal{L}(\theta,p_{i})\|_{2}\approx\|\frac{d\mathcal{L}(\theta)}{ dp_{i}}p_{i}\|_{2}. \tag{15}\]
We average Eq. 15 across all data samples to compute the importance score.
\[s_{p_{i}}=\frac{1}{|D|}\sum_{d\in D}\|\frac{d\mathcal{L}(\theta,d)}{dp_{i}}p_ {i}\|_{2}, \tag{16}\]
where \(D\) is the input data set. Note that calculating the importance score does not introduce significant computational overhead, since we only need to compute gradients with respect to the prompts.
Once we calculate the importance score for each prompt, we select the prompts with the highest k% scores across all layers. This global prompt selection method inherently allocates the optimal number of prompts to each layer. In contrast, a local layer-wise prompt selection would enforce top-k% prompt selection uniformly across all layers, which may inhibit the representation power of the model. In our experiments, we show that the global score provides better performance than local layer-wise metrics.
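A minimal sketch of this scoring and global selection procedure is given below, assuming a standard PyTorch setup; `model`, `loader`, and `criterion` are placeholder names, and the prompts are assumed to be exposed as a list of trainable tensors (one per layer), which may differ from a particular VPT implementation.

```python
import torch

def prompt_importance_scores(model, prompts, loader, criterion, device="cuda"):
    """Approximate Eq. 16: average ||dL/dp_i * p_i||_2 over the data for every prompt token."""
    scores = [torch.zeros(p.shape[0], device=device) for p in prompts]
    num_batches = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        loss = criterion(model(images), labels)
        grads = torch.autograd.grad(loss, prompts)            # gradients w.r.t. the prompts only
        for layer, (g, p) in enumerate(zip(grads, prompts)):
            scores[layer] += (g * p).norm(dim=-1)             # first-order Taylor term per prompt token
        num_batches += 1
    return [s / num_batches for s in scores]

def global_topk_mask(scores, keep_ratio=0.3):
    """Keep the top keep_ratio fraction of prompts across *all* layers (global selection)."""
    flat = torch.cat(scores)
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.topk(k).values.min()
    return [s >= threshold for s in scores]                   # boolean keep-mask per layer
```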
Our approach is similar to filter pruning in CNNs [24, 28] in the aspect of utilizing Taylor expansion. However, we have innovatively adapted this concept to the token level, presenting a fundamentally distinct granularity in pruning strategy. To our knowledge, our work is the first to employ gradient information directly for token pruning within the context of Vision Transformer (ViT) architectures. As a result, we believe our research paves the way for the potential application of existing channel pruning techniques to token pruning in ViTs.
**Overall Training Process.** Algorithm 1 illustrates the overall process of Prompt Condensation. We first train the original prompt set \(P\) (Line 1). Then we compute the importance score of each prompt inside \(P\) (Line 2). After that, we sort the importance scores and select the prompts with the top \(k\%\) of scores (Line 3). This provides the condensed prompt set \(P^{\prime}\). We discard the remaining \((100-k)\%\) of prompts. Finally, the prompts within \(P^{\prime}\) are fine-tuned (Line 4). For fine-tuning, we use fewer epochs \(N_{p}\) than the original VPT training epochs \(N_{v}\). We analyze the effect of \(N_{p}\) in Section 6.3. Note that the entire training process freezes weight parameters across the model except for the last classifier.
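The four lines of Algorithm 1 can be summarized in the following high-level sketch; `train_prompts` is a hypothetical helper for standard prompt-only training, and `prompt_importance_scores` and `global_topk_mask` refer to the scoring sketch above.

```python
def prompt_condensation(model, prompts, loader, criterion,
                        keep_ratio=0.3, vpt_epochs=100, pc_epochs=20):
    # Line 1: train the full prompt set with the backbone frozen (except the classifier head).
    train_prompts(model, prompts, loader, criterion, epochs=vpt_epochs)

    # Line 2: score every prompt with the gradient-based importance score (Eq. 16).
    scores = prompt_importance_scores(model, prompts, loader, criterion)

    # Line 3: keep the globally top-scoring k% of prompts and discard the rest.
    keep_masks = global_topk_mask(scores, keep_ratio)
    condensed = [p[mask].detach().clone().requires_grad_()
                 for p, mask in zip(prompts, keep_masks)]

    # Line 4: briefly fine-tune only the condensed prompts.
    train_prompts(model, condensed, loader, criterion, epochs=pc_epochs)
    return condensed
```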
## 6 Experiments
### Experiment Setting
**Architecture.** We conduct experiments using two transformer architectures pre-trained on ImageNet-22k, _i.e_., Vision Transformer (ViT-B/16) [8] and Swin Transformer (Swin-B) [26].
**Dataset.** We use the FGVC and VTAB-1k tasks as our datasets. FGVC consists of 5 datasets, including CUB-200-2011 [39], NABirds [37], Oxford Flowers [29], Stanford Dogs [20], and Stanford Cars [12]. VTAB-1k [44] contains 19 datasets with various visual domains. Following previous work [44, 17], we use the provided 800-200 split of the train set for training and report the average test accuracy over three runs. For both FGVC and VTAB-1k datasets, we select datasets that show a non-trivial (\(\geq 1\%\)) accuracy drop with \(10\%\) prompts compared to the original VPT. As a result, we have 8 datasets for ViT: {Stanford Cars, Clevr-count, DMLab, dSprites-location, dSprites-orientation, smallNORB-azimuth, smallNORB-elevation, SVHN}, and 5 datasets for Swin: {Clevr-count, Clevr-distance, dSprites-location, smallNORB-azimuth, SVHN}. The details of dataset selection are provided in the Supplementary.
We observed that non-trivial performance drops tend to occur in more challenging downstream tasks. To illustrate this, we calculated the mean and standard deviation of test accuracies across downstream tasks, separating them into those with non-trivial (\(\geq 1\%\)) and trivial (\(<1\%\)) performance drops. These tasks were derived from the FGVC and VTAB-1k datasets. Our results show that datasets with non-trivial accuracy drops exhibit an average accuracy of \(58.91_{\pm 20.23}\%\), while those with trivial accuracy drops demonstrate a higher average accuracy of \(81.96_{\pm 11.54}\%\).
**Hyperparameters.** We follow the hyperparameters (_e.g_., weight decay, learning rate) reported in [17]. Each dataset has a different number of prompts, determined by the previous work [17] that reported the best-performing number of prompts for each dataset. During the prompt fine-tuning stage, we turn off dropout and use \(0.1\times\) the original VPT learning rate. For prompt condensation, we retrain the selected prompts for 20 epochs, which is shorter than the original VPT training process. In Algorithm 1, we set the number of epochs \(N_{v}\) for training VPT to 100, following the original paper [17].
### Performance Comparison
Figure 4: The test accuracy of VPT-Deep [17], _PC w/o fine-tuning_, and our proposed _PC_ with respect to the number of prompts. We use the ViT-B/16 backbone. A dotted line represents the accuracy with 100% prompts.
Figure 5: The test accuracy of VPT-Shallow [17], _PC w/o fine-tuning_, and our proposed _PC_ with respect to the number of prompts. We use the ViT-B/16 backbone. A dotted line represents the accuracy with 100% prompts.
We first evaluate the effectiveness of Prompt Condensation (PC) with a limited number of prompts. Specifically, we vary the number of prompts from \(10\%\) to \(50\%\), where the notation \(k\%\) denotes the use of \(k\%\) of the number of prompts reported in [17]. We compare the performance of PC with the following models:
\(\bullet\) VPT (baseline): We train the ImageNet pre-trained model with \(k\%\) of original prompts.
\(\bullet\) PC w/o fine-tuning: From the trained VPT with \(100\%\) of prompts, we compute the importance score (Eq. 16) of each prompt and select top \(k\%\) of prompts based on the score and discard the remainder.
Fig. 4 and 5 present a comparison of the performance of VPT-Deep and VPT-Shallow, respectively. From the results, we make the following observations: (1) For VPT-Deep, PC maintains the performance with only \(20\sim 30\%\) of the prompts, demonstrating its effectiveness compared to the naive VPT baseline. (2) The performance gain achieved by applying PC to VPT-Shallow is comparatively lower than that of VPT-Deep. This can be attributed to VPT-Shallow having a smaller number of original prompts, which results in less room for performance improvement. At the same time, VPT-Deep yields higher performance than the VPT-Shallow model. Therefore, we focus on VPT-Deep in this paper. (3) Interestingly, _PC w/o fine-tuning_ with VPT-Deep does not demonstrate a significant performance drop with \(40\sim 50\%\) of the original prompts. This suggests that our prompt importance score accurately reflects the impact of each prompt on the overall accuracy. (4) For the \(10\sim 30\%\) regime, there is considerable performance degradation without fine-tuning. However, this can be fully recovered by fine-tuning prompts, demonstrating that fine-tuning is an essential stage for PC. (5) The results on Swin show a similar trend to ViT, as shown in Fig. 6.
### Experimental Analysis
**Design Choice for Prompt Scoring.** In our method, we compute the gradient to evaluate the importance of each prompt (Eq. 16). Based on this score, we select the top-\(k\%\) highest-scored prompts across all layers. To investigate the effectiveness of our prompt scoring technique, we compare it with several variants.
\(\bullet\) Global Prompt Condensation (ours-Global): Our proposed method where the top-\(k\%\) highest-scored prompts are selected across all layers.
\(\bullet\) Local Prompt Condensation (ours-Local): Instead of selecting across all layers, we select the top-\(k\%\) scored prompts within each layer. This approach ensures that the number of selected prompts is the same across all layers.
\(\bullet\) [CLS]-Sim: We adopt the self-attention similarity between prompt tokens and a [CLS] token as a scoring technique, inspired by a line of previous works [25, 30, 10]. Here, we also select the top-\(k\%\) highest-scored prompts within each layer.
In Table 3, we present a comparison of the performance achieved using three different prompt scoring techniques. The results demonstrate that our proposed global scoring method, which considers the importance of prompts across all layers, outperforms the other two scoring techniques, particularly for lower percentages of PC (_e.g_. 10%). Therefore, our findings suggest that a global scoring metric is necessary for PC, given that the significance of prompts varies across different layers.
Figure 6: The test accuracy of VPT-Deep [17], _PC w/o fine-tuning_, and our proposed _PC_ with respect to the number of prompts. We use the Swin-B backbone. A dotted line represents the accuracy with 100% prompts.
\begin{table}
\begin{tabular}{l|l|c c c c c} \hline Datasets & Methods & 10\% & 20\% & 30\% & 40\% & 50\% \\ \hline \hline \multirow{3}{*}{StanfordCars} & [CLS]-Sim & 81.67 & 82.63 & 83.35 & 83.95 & 84.08 \\ & Ours-Local & 81.79 & 82.88 & 83.51 & 84.01 & 84.05 \\ & Ours-Global & **82.79** & **84.08** & **84.10** & **84.09** & **84.10** \\ \hline \multirow{3}{*}{Clev-count} & [CLS]-Sim & 57.25 & 63.02 & **67.51** & 67.40 & 67.92 \\ & Ours-Local & 57.95 & **67.09** & 66.04 & 66.58 & 66.75 \\ & Ours-Global & **65.06** & 67.00 & 67.26 & **69.30** & **69.70** \\ \hline \multirow{3}{*}{DMLab} & [CLS]-Sim & 45.44 & 45.42 & 46.25 & **46.81** & 46.74 \\ & Ours-Local & 45.33 & 45.25 & 46.45 & 45.91 & 47.06 \\ & Ours-Global & **45.74** & **45.34** & **46.69** & 46.58 & **47.12** \\ \hline \multirow{3}{*}{dSprites-loc} & [CLS]-Sim & 49.84 & 50.30 & 67.33 & 68.54 & 72.28 \\ & Ours-Local & 57.95 & 60.26 & 63.26 & 71.30 & 72.87 \\ & Ours-Global & **59.62** & **66.25** & **73.23** & **76.99** & **78.88** \\ \hline \multirow{3}{*}{dSprites-ori} & [CLS]-Sim & 40.95 & 45.64 & 46.68 & 46.69 & 46.62 \\ & Ours-Local & 44.09 & 45.08 & 46.71 & 46.33 & 46.76 \\ & Ours-Global & **45.59** & **47.58** & **47.36** & **47.36** & **47.05** \\ \hline \multirow{3}{*}{SmallNORB-azi} & [CLS]-Sim & 28.93 & 29.48 & **32.52** & **31.87** & **33.33** \\ & Ours-Local & 30.29 & 30.60 & 31.24 & 31.80 & 32.29 \\ & Ours-Global & **30.96** & **32.03** & 31.59 & 31.15 & 32.31 \\ \hline \multirow{3}{*}{SmallNORB-ele} & [CLS]-Sim & 35.83 & 36.47 & **37.90** & 37.62 & 37.31 \\ & Ours-Local & 35.66 & 36.22 & 36.04 & **38.41** & **38.66** \\ & Ours-Global & **36.24** & **36.76** & 36.88 & 37.38 & 37.81 \\ \hline \multirow{3}{*}{SVHN} & [CLS]-Sim & 74.26 & 75.87 & 78.06 & 79.63 & 80.00 \\ & Ours-Local & 76.75 & 78.06 & **79.18** & **79.67** & 97.77 \\ & Ours-Global & **78.22** & **78.31** & 78.43 & 80.88 & **80.39** \\ \hline \hline \multirow{3}{*}{Average} & [CLS]-Sim & 51.77 & 53.55 & 57.45 & 57.81 & 58.41 \\ & Ours-Local & 53.72 & 55.68 & 56.55 & 58.00 & 58.52 \\ \cline{1-1} & Ours-Global & **55.52** & **57.16** & **58.19** & **59.27** & **59.67** \\ \hline \end{tabular}
\end{table}
Table 3: Test accuracy is evaluated using different prompt scoring techniques on VPT-Deep with ViT-B/16, with the best performance highlighted in **bold**.
**Layer-wise prompt distribution.** We present the visualization of the layer-wise prompt distribution for PC with different percentages of prompts (50%, 30%, and 10%) in Fig. 7. The average number of prompts in each layer is computed across all datasets, and the mean and standard deviation are reported. The results indicate that prompts in the early layers have a minimal impact on the accuracy for most datasets. Furthermore, reducing the percentage of prompts leads to higher standard deviation, implying that the optimal number of prompts varies across datasets. Therefore, a global PC method is necessary to determine the optimal number of prompts at each layer across different datasets.
**GPU Latency Analysis.** We analyze the practical latency of VPT with PC on GPUs. Theoretically, the complexity of the self-attention operation increases quadratically as the input token length increases. However, this may not hold true in practice due to factors such as hardware specifications [17, 8]. To investigate the advantage of PC on GPU latency, we measure the GPU latency for 64 images in three different GPU environments: Quadro RTX5000, V100, and A100. We perform experiments on three datasets with different original numbers of prompts (the Stanford Cars, DMLab, and SVHN datasets originally had 200, 100, and 50 prompts, respectively). In Table 4, we observe that the proposed PC reduces GPU latency for all configurations. As expected, the effectiveness of PC is higher in the case with a larger number of prompts, such as Stanford Cars. Moreover, the global PC yields similar latency to the local PC. This further corroborates the use of the global importance score, which achieves higher accuracy with negligible computational overhead. We measure the FLOPs of VPT with PC in Table 5 to support our observation of the GPU latency, where the results show a similar trend.
**Analysis on the Number of Fine-tuning Epochs.** One crucial hyperparameter in our method is the number of prompt fine-tuning epochs (\(N_{p}\) in Algorithm 1). However, longer fine-tuning periods come with a higher computational cost, which is incompatible with on-device training scenarios. To determine the optimal number of fine-tuning epochs, we measure the average validation accuracy across all downstream datasets of VTAB-1K with 10% prompts. As shown in Fig. 8(a), the accuracy plateaus around epoch 20. Based on this observation, we set the number of fine-tuning epochs to 20 for all experiments. In addition, Fig. 8(b) illustrates the relative computational time between the original VPT training, prompt scoring, and prompt fine-tuning (line 1, line 2, and line 4 in Algorithm 1, respectively). The results demonstrate that our PC method (prompt scoring + fine-tuning) requires less than 25% of the computational time needed for the original VPT training. These results indicate that our method is well-suited for on-device training scenarios.
**Practical Implementation of Prompt Condensation.** In practical applications, it may not always be evident if there is a non-trivial performance drop with a small number of prompts. In such cases, we can use a relative computational cost metric (i.e., the ratio of [prompt tokens] to [original image tokens]) to decide whether to apply Prompt Condensation (PC). For instance, consider a scenario with 197 original tokens (196 + [CLS] token) and 100 prompt tokens. In this case, the addition of prompts results in a computational cost increase of \(\frac{100}{197}=50.76\%\). If the inclusion of prompts leads to an additional computational cost of \(\geq K\)%, we can opt to implement PC. If not, it would be more beneficial to skip PC.
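This decision rule can be expressed as a small helper; the threshold value below is an illustrative placeholder for \(K\), which is left unspecified above.

```python
def should_condense(num_prompts: int, num_image_tokens: int = 197,
                    threshold_pct: float = 25.0) -> bool:
    """Apply Prompt Condensation only when prompts add at least threshold_pct % extra cost."""
    relative_increase = 100.0 * num_prompts / num_image_tokens
    return relative_increase >= threshold_pct

print(should_condense(100))  # 100 / 197 = 50.76% extra cost -> True for a 25% threshold
```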
## 7 Conclusion
In this study, our aim is to investigate the influence of the number of prompts on VPT and its impact on both computational cost and fine-tuning performance. Our findings show that reducing the number of prompts by approximately 50% does not significantly affect fine-tuned accuracy, with the majority of the performance drop occurring in the 10% to 40% range. Additionally, we demonstrated that increasing the number of prompts does not linearly enhance the maximum rank of approximated self-attention matrices. At the same time, we proposed Prompt Condensation (PC), a condensation technique that can effectively recover the performance degradation caused by using a small number of prompts. Overall, we hope our analysis and observations can provide insight to researchers in designing visual prompts.
Figure 7: Layer-wise prompt distribution with three different prompt condensation levels.
Figure 8: (a) The validation accuracy change with respect to the number of fine-tuning epochs. (b) Relative time of original VPT training, scoring, and prompt fine-tuning.
## Acknowledgement
This work was supported in part by CoCoSys, a JUMP2.0 center sponsored by DARPA and SRC, Google Research Scholar Award, the National Science Foundation CAREER Award, TII (Abu Dhabi), the DARPA AI Exploration (AIE) program, and the DoE MMICC center SEA-CROGS (Award #DE-SC0023198).
|
2306.11302 | A Two-Stage Bayesian Small Area Estimation Approach for Proportions | With the rise in popularity of digital Atlases to communicate spatial
variation, there is an increasing need for robust small-area estimates.
However, current small-area estimation methods suffer from various modeling
problems when data are very sparse or when estimates are required for areas
with very small populations. These issues are particularly heightened when
modeling proportions. Additionally, recent work has shown significant benefits
in modeling at both the individual and area levels. We propose a two-stage
Bayesian hierarchical small area estimation approach for proportions that can:
account for survey design; reduce direct estimate instability; and generate
prevalence estimates for small areas with no survey data. Using a simulation
study we show that, compared with existing Bayesian small area estimation
methods, our approach can provide optimal predictive performance (Bayesian mean
relative root mean squared error, mean absolute relative bias and coverage) of
proportions under a variety of data conditions, including very sparse and
unstable data. To assess the model in practice, we compare modeled estimates of
current smoking prevalence for 1,630 small areas in Australia using the
2017-2018 National Health Survey data combined with 2016 census data. | James Hogg, Jessica Cameron, Susanna Cramb, Peter Baade, Kerrie Mengersen | 2023-06-20T05:38:24Z | http://arxiv.org/abs/2306.11302v3 | # A Two-stage Bayesian Small Area Estimation Method for Proportions
###### Abstract
With the rise in popularity of digital Atlases to communicate spatial variation, there is an increasing need for robust small-area estimates. However, current small-area estimation methods suffer from various modelling problems when data are very sparse or when estimates are required for areas with very small populations. These issues are particularly heightened when modelling proportions. Additionally, recent work has shown significant benefits in modelling at both the individual and area levels. We propose a two-stage Bayesian hierarchical small area estimation model for proportions that can: account for survey design; use both individual-level survey-only covariates and area-level census covariates; reduce direct estimate instability; and generate prevalence estimates for small areas with no survey data. Using a simulation study we show that, compared with existing Bayesian small area estimation methods, our model can provide optimal predictive performance (Bayesian mean relative root mean squared error, mean absolute relative bias and coverage) of proportions under a variety of data conditions, including very sparse and unstable data. To assess the model in practice, we compare modeled estimates of current smoking prevalence for 1,630 small areas in Australia using the 2017-2018 National Health Survey data combined with 2016 census data.
Bayesian statistics, sample surveys, small area estimation, area level model, individual level model
## 1 Introduction
Although the popularity of using digital health Atlases to effectively communicate spatial patterns continues to rise, data for most health outcomes are generally not available for the entire population and therefore researchers must rely on large surveys to generate estimates for small areas. However, this often results in small sample sizes for each small area. A popular remedy is to employ statistical methods for small area estimation (SAE) which use concordance between survey and census data to generate estimates of small area characteristics [1].
Two common frameworks for SAE are direct and model-based estimators. When sample sizes are large, direct estimators which use only sampled individuals can yield estimates with low mean squared error (MSE). However, for smaller regions direct estimators can exhibit very high MSE [2]. Unlike direct estimators, model-based estimators can borrow statistical strength across areas and thus can provide more efficient estimates [3]. Model-based estimators are the pragmatic choice for situations of data sparsity; when area level sample sizes are very small.
Although model-based SAE methods were initially developed for continuous outcomes, proportions of binary or categorical outcomes such as smoking status, are often more useful in health settings [4]. In the past 35 years, model-based SAE for proportions has seen considerable development [4], although several unique methodological challenges arise when the goal is to estimate proportions as opposed to continuous outcomes.
The first difficulty involves the consistency of direct proportion estimates and their sampling variances [5, 6]. Direct proportion estimates for sparse SAE applications can be very unstable, frequently collapsing to zero or one. In this work, we define these sampled areas as _unstable_. As sparsity increases, the likelihood of this instability also dramatically increases. In conjunction with sparsity, instability is exacerbated when the health characteristic is either rare or very common, or when the sample design is very informative. Furthermore, unstable direct estimates give invalid sampling variances, rendering them inapplicable in standard SAE area-level models [7].
The second difficulty is that the bounded nature of proportions violates key distributional assumptions of both the individual level models [8] and area level models [9]. Although these difficulties can be mitigated to some extent by generalised linear models, such as Beta regression [6], or by appropriate transformations [10], there remain some estimation and computational challenges. Even though substantial research has been done to include the sample design in model-based approaches for Gaussian outcomes [2], equivalent methods for proportions have only become a research focus relatively recently [11, 12, 13, 14].
A final methodological challenge is that methods for proportions suffer from stricter data requirements than those for continuous outcomes. For example, individual level logistic models require covariate microdata for the entire population [15, 14], whereas models for continuous outcomes only require covariate summaries [2]. Although some multilevel regression and poststratification (MrP) models [16] relax these requirements, the necessity for concordance between survey and census covariates continues to limit covariate choices to those in the census. This can be a significant limitation because for some outcomes (e.g. chronic disease), survey-only covariates may be more predictive than the demographic and economic factors available in the census [17]. To date, little work has been done to explore the use of survey-only covariates in model-based SAE.
This work is motivated by issues that arise when using SAE in Australia. Australia's population density is highly variable: about 80% of Australians live in east coast cities, and there are huge inland areas that are sparsely populated [18]. It follows that unless a survey has been designed specifically for SAE, sample sizes for small areas can be prohibitively small and remote areas excluded altogether [19, 20]. A review of SAE papers revealed some international studies using area level sample sizes in excess of 25 [13], with most using sample sizes greater than 50 [11, 14]. By contrast, current Australian surveys, such as the 2017-2018 National Health Survey (NHS), have area level sample sizes ranging from 5 to 13 (see Fig. 1). Of course, the extent of this data sparsity renders many areas without survey data, and although model-based methods can be used to estimate proportions for the nonsampled areas, there is a lack of research to determine the best methods for doing so [21, 22].
Data sparsity has historically been remedied by aggregating to a higher administrative level. However, this approach sacrifices resolution. With the growing demand for estimates at a higher resolution and limited funding for larger surveys, new statistical methods must be developed to address the issue of data sparsity.
The aim of this work is to develop small area model-based methods for proportions that can provide superior estimates for sparse survey data. The method will address the above issues by: (1) providing stable direct estimates; (2) accommodating the survey design; (3) making appropriate use of all available data; and (4) providing estimates for nonsampled areas. Our proposed approach, called the two-stage logistic-normal (TSLN) model, has a two-stage structure where stability is gained via an individual level stage 1 prediction model, followed by an area level stage 2 smoothing and imputation model. We take a Bayesian perspective and estimate the model using Markov chain Monte Carlo (MCMC) methods [23]. To further contribute to the literature on two-stage approaches, we posit two simple metrics to determine the level of smoothing induced by our two-stage approach.
This paper is structured as follows. First, we introduce some notation and describe each stage of our proposed approach. Then we provide details on fitting the TSLN model with Bayesian inference before describing alternative two-stage and one-stage models for proportions. Next, we describe the simulation study conducted to assess the performance of the TSLN model compared with four alternatives. Finally, we provide details on a case study where we generated small area level prevalence estimates of current smoking on the east coast of Australia. In the case study, we demonstrate the flexibility of our approach by including complex random effects and accommodating known benchmarks.
### Notation
We define a finite population, \(F\), with \(N\) individuals, where each individual resides in one of \(M\) small geographical areas. Allow \(N_{i}\) individuals in each area \(i=1,\ldots,M\), such that \(\sum_{i=1}^{M}N_{i}=N\). We are interested in a binary characteristic \(y_{ij}\in\{0,1\}\) which is equal to 1 if an individual \(j\) who resides in area \(i\) has the characteristic and \(0\) otherwise. We wish to generate estimates, \(\hat{\boldsymbol{\mu}}=(\hat{\mu}_{1},\ldots,\hat{\mu}_{M})\), of the true proportion of the population with the characteristic, \(\boldsymbol{\mu}=(\mu_{1},\ldots,\mu_{M})\).
Without loss of generality, assume that we have samples from the first \(m\) areas, \(m<M\), and no samples for the remaining \(M-m\) areas. Denote the sampled individuals in area \(i\) by \(j\in r_{i}\) and the nonsampled individuals by \(j\in r_{i}^{C}\) where \(r_{i}^{C}\) is the complement set of \(r_{i}\). Generally, the survey data used in applications of SAE are collected according to a specified survey design. Design-unbiased population estimates are then derived using the sampling weights, \(w_{ij}^{\text{raw}}\), for each sampled individual [2].
Figure 1: Map of 282 small areas in and around Sydney on the east coast of Australia. Each area is coloured according to the sample size (greater or less than 10) or sample status of the area (sampled or nonsampled) in the 2017-2018 National Health Survey.
Similar to Vandendijck _et al._[11], we assume that in secondary analysis we do not have sufficient details or data to create weights and instead rely on the weights provided by the data custodians. Note that this is a restrictive assumption as several popular direct estimators require first- and second-order inclusion probabilities for variance estimation [13]. First-order inclusion probabilities give the probability of a person in the sampling frame being in the sample, while second-order inclusion probabilities give the probability of any pair of persons being in the sample. Without sufficient details on the sampling design, second-order inclusion probabilities cannot be calculated.
Throughout this work, \(\hat{\mu}_{i}^{D},\hat{\mu}_{i},\mu_{i}\) will denote a direct estimate (using the Hajek estimator [24]), a model-based estimate and the unknown true small area proportion, respectively. Hence,
\[\hat{\mu}_{i}^{D}=\frac{\sum_{j\in r_{i}}w_{ij}y_{ij}}{n_{i}}. \tag{1}\]
By assuming \(w_{ij}=w_{ij}^{\text{raw}}\left(\frac{n_{i}}{\sum_{j\in r_{i}}w_{ij}^{\text{raw }}}\right)\) and that all the sampling fractions, \(\frac{n_{i}}{N_{i}}\), are sufficiently small, we use the following approximation to the sampling variance of the direct proportion estimator [11, 25],
\[\psi_{i}^{D}=\widehat{\mathbb{V}}\left(\hat{\mu}_{i}^{D}\right)=\frac{1}{n_{i}}\left(1-\frac{n_{i}}{N_{i}}\right)\left(\frac{1}{n_{i}-1}\right)\sum_{j\in r_{i}}w_{ij}^{2}\left(y_{ij}-\hat{\mu}_{i}^{D}\right)^{2}, \tag{2}\]
which is strictly between \(0\) and \(0.25\). Direct estimators have low variance and are design-unbiased for \(\mu_{i}\) when \(n_{i}\) is large, but high variance when \(n_{i}\) is small [1]. Although model-based methods can be biased, they reduce variance, resulting in lower MSEs [26].
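For concreteness, the direct estimator in (1) and the variance approximation in (2) can be computed for a single area as in the numpy sketch below; the arrays of binary responses and raw weights are assumed inputs.

```python
import numpy as np

def direct_estimate(y, w_raw, N_i):
    """Hajek direct estimate (1) and approximate sampling variance (2) for one area."""
    n_i = len(y)
    w = w_raw * n_i / w_raw.sum()                       # rescale weights to sum to n_i
    mu_hat = np.sum(w * y) / n_i                        # Eq. (1)
    fpc = 1.0 - n_i / N_i                               # finite population correction
    psi = (1.0 / n_i) * fpc * (1.0 / (n_i - 1)) * np.sum(w**2 * (y - mu_hat) ** 2)  # Eq. (2)
    return mu_hat, psi

mu_hat, psi = direct_estimate(np.array([0, 1, 0, 0, 1]),
                              np.array([1.2, 0.8, 1.5, 1.0, 0.5]), N_i=800)
```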
## 2 Proposed method
### TSLN model
The proposed two-stage logistic-normal (TSLN) model involves two stages. First an individual level logistic model is fit to the survey data, where individual level predictions are used to generate area estimates [27]. Second, the area estimates are fed into an area level Fay and Herriot [9] (FH) model for further smoothing and to impute proportion estimates for nonsampled areas [13].
#### 2.1.1 Stage 1
The goal of the first stage model is to stabilise the area level direct estimates and sampling variances whilst simultaneously reducing their bias. This is achieved by fitting a Bayesian pseudo-likelihood logistic mixed model to the individual level binary outcomes, \(y_{ij}\). The Horvitz-Thompson estimator of the population-level log likelihood is used to ensure the predictions, \(p_{ij}\), from the logistic model are unbiased under the sample design [28, 12, 14] (see Supplemental Materials B). The first stage model is,
\[y_{ij} \sim \text{Bernoulli}(p_{ij})^{\tilde{w}_{ij}} \tag{3}\] \[\text{logit}(p_{ij}) = \mathbf{x}_{ij}\boldsymbol{\beta}+e_{i}\] \[e_{i} \sim N(0,\sigma_{e}^{2})\]
where \(j=1,\ldots,n_{i};i=1,\ldots,m\). Above, \(\mathbf{x}\in\mathbb{R}^{n\times(q^{u}+1)}\) is the unit (or individual) level sample design matrix, which includes the \(q^{u}\) fixed effects (individual level survey-only and area level fixed effects), and \(\boldsymbol{\beta}\in\mathbb{R}^{(q^{u}+1)\times 1}\) the corresponding regression coefficients. Following the notation of Parker, Janicki, and Holan [14], we represent the pseudo-likelihood for a probability density function, \(p(.)\), as \(p(y_{ij})^{\tilde{w}_{ij}}\), where \(\tilde{w}_{ij}\) are the sample scaled weights, \(\tilde{w}_{ij}=w_{ij}^{\text{raw}}\left(\frac{n}{\sum_{i=1}^{m}\sum_{j=1}^{n_ {i}}w_{ij}^{\text{raw}}}\right)\). Independent weakly informative priors are placed on the model parameters \(\sigma_{e},\boldsymbol{\beta}\). The first-stage model will be referred to as the TSLN-S1 model hereafter.
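Although the TSLN model was fitted in Stan (see Section 2.2), the pseudo-likelihood construction in (3) can be illustrated with the following PyMC sketch, in which each Bernoulli log-density is scaled by its sample-scaled weight through a potential term; `X`, `y`, `w_tilde`, `area_idx`, and `m` are assumed to be prepared arrays and scalars, and the priors shown are placeholders rather than the exact priors used in this work.

```python
import pymc as pm

# X: (n, q_u + 1) design matrix, y: (n,) binary outcomes, w_tilde: (n,) sample-scaled weights,
# area_idx: (n,) area index of each respondent, m: number of sampled areas
with pm.Model() as tsln_s1:
    beta = pm.Normal("beta", mu=0, sigma=2, shape=X.shape[1])
    sigma_e = pm.HalfNormal("sigma_e", sigma=1)
    e = pm.Normal("e", mu=0, sigma=sigma_e, shape=m)

    p = pm.Deterministic("p", pm.math.invlogit(pm.math.dot(X, beta) + e[area_idx]))

    # Pseudo-likelihood: weight each individual's Bernoulli log-density by its scaled survey weight.
    pm.Potential("weighted_loglik", (w_tilde * pm.logp(pm.Bernoulli.dist(p=p), y)).sum())

    idata = pm.sample(2000, tune=1000)
```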
#### 2.1.2 Stage 1 (S1) estimates
To collapse the individual level data to area level data we calculate S1 proportion estimates, \(\hat{\mu}_{i}^{\text{S1}}\), and their corresponding sampling variances, \(\psi_{i}^{\text{S1}}\), by using the posterior distribution for the individual level probabilities, \(p_{ij}\) instead of the observed binary outcomes, \(y_{ij}\). This smooths the data at the individual level, which, similar to the work by Gao and Wakefield [29], permits one to view the TSLN-S1 model as a model-based smoothing method. Using S1 estimates guarantees that the values are more stable: all \(\hat{\mu}_{i}^{\text{S1}}\in(0,1)\) and \(\psi_{i}^{\text{S1}}\in(0,0.25)\).
The stage 1 estimates for \(i=1,\ldots,m\) are,
\[\hat{\mu}_{i}^{\text{S1}} = \frac{\sum_{j\in r_{i}}w_{ij}p_{ij}}{n_{i}} \tag{4}\] \[\psi_{i}^{\text{S1}} = \widehat{\mathbb{V}}\left(\hat{\mu}_{i}^{\text{S1}}\right)=\widehat{\mathbb{V}}\left(\hat{\mu}_{i}^{D}\right)+\widehat{\mathbb{V}}\left(\hat{B}_{i}\right) \tag{5}\] \[\hat{B}_{i} = \frac{\sum_{j\in r_{i}}w_{ij}\left(p_{ij}-y_{ij}\right)}{n_{i}} \tag{6}\]
where \(\hat{B}_{i}\) quantifies the level of smoothing achieved by using the TSLN-S1 model (see Supplemental Materials A for details). The formula used to calculate both \(\widehat{\mathbb{V}}(.)\) terms is given in (2). Note that unlike Roy _et al._[27], who used the posterior means and standard deviations of \(\hat{\mu}_{i}^{\text{S1}}\) for their final results, we found that these were insufficient when sample sizes were very small.
To accommodate the constraints on the S1 estimates in the stage 2 model, we use the common empirical logistic transformation [10, 25],
\[\hat{\theta}_{i}^{\text{S1}} = \text{logit}\left(\hat{\mu}_{i}^{\text{S1}}\right) \tag{7}\] \[\gamma_{i}^{\text{S1}} = \psi_{i}^{\text{S1}}\left[\hat{\mu}_{i}^{\text{S1}}\left(1-\hat{ \mu}_{i}^{\text{S1}}\right)\right]^{-2} \tag{8}\]
where \(\hat{\theta}_{i}^{\text{S1}}\) is the log-odds of the stage 1 estimate for area \(i\) and \(\gamma_{i}^{\text{S1}}\) the corresponding sampling variance. The logistic transformation permits the use of a Gaussian likelihood in the stage 2 model, which improves computation and avoids the limitations of the Beta distribution (see Supplemental Materials B).
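Given posterior draws of the \(p_{ij}\), the S1 quantities in (4)-(8) can be assembled per posterior draw as in the sketch below; the helper `weighted_var` reuses the variance form of (2) for both \(\widehat{\mathbb{V}}(.)\) terms, as described above, and all arrays refer to a single sampled area.

```python
import numpy as np

def weighted_var(values, w, n_i, N_i, centre):
    """Approximate design-based variance of a weighted mean, following the form of Eq. (2)."""
    return (1 / n_i) * (1 - n_i / N_i) * (1 / (n_i - 1)) * np.sum(w**2 * (values - centre) ** 2)

def s1_quantities(p_draw, y, w, N_i):
    """S1 estimate, S1 sampling variance and empirical logit transform for one posterior draw."""
    n_i = len(y)
    mu_s1 = np.sum(w * p_draw) / n_i                     # Eq. (4)
    mu_d = np.sum(w * y) / n_i                           # direct estimate, Eq. (1)
    B_i = np.sum(w * (p_draw - y)) / n_i                 # Eq. (6)
    psi_s1 = (weighted_var(y, w, n_i, N_i, mu_d)
              + weighted_var(p_draw - y, w, n_i, N_i, B_i))   # Eq. (5)
    theta_s1 = np.log(mu_s1 / (1 - mu_s1))               # Eq. (7)
    gamma_s1 = psi_s1 / (mu_s1 * (1 - mu_s1)) ** 2       # Eq. (8)
    return theta_s1, gamma_s1
```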
#### 2.1.3 Assessing smoothing properties
Predictions from the TSLN-S1 model should provide acceptable approximations to the observed values, whilst still ensuring a level of generalisability. Severely overfit TSLN-S1 models will yield minimal smoothing and stability gains, while poor fitting models can exhibit very high sampling variances, biased S1 estimates and inaccurate estimates for nonsampled areas. Therefore, the stability gained from smoothing the individual level data must be balanced by ensuring adequate agreement (and comparable variability) of the S1 and direct estimates. To address this we propose two metrics. The smoothing ratio (\(SR\)),
\[SR = \frac{\sum_{i=1}^{m}\left|\frac{\sum_{j\in r_{i}}w_{ij}(y_{ij}-p_{ ij})}{n_{i}}\right|}{\sum_{i=1}^{m}\left|\frac{\sum_{j\in r_{i}}w_{ij}(y_{ij}- \hat{\mu}^{D})}{n_{i}}\right|} \tag{9}\] \[\hat{\mu}^{D} = \left(\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}w_{ij}\right)^{-1}\sum_{i=1 }^{m}\sum_{j=1}^{n_{i}}w_{ij}y_{ij},\]
indicates the level of smoothing induced by using the TSLN-S1 model, where a small value indicates undersmoothing and a large value indicates oversmoothing. The \(SR\) benchmarks the first-stage model predictions with a model considered to provide maximum smoothing (i.e. where \(p_{ij}=\hat{\mu}^{D},\forall i,j\)). Small values of the \(SR\) are preferred.
The second metric, denoted as the area linear comparison (\(ALC\)) in this work, gives the level of agreement between the S1 and direct estimates. The \(ALC\) is equal to the regression coefficient when we regress the posterior median of \(\hat{\theta}_{i}^{\text{S1}}\) on \(\hat{\theta}_{i}^{D}\) with weights \(\frac{1}{\psi_{i}^{D}}\). The \(ALC\) assesses the equivalence of the S1 and direct estimates, where a slope of 1 denotes perfect agreement (i.e. no smoothing). By weighting the OLS estimates by \(\frac{1}{\psi_{i}^{D}}\) we ensure that highly certain direct estimates (i.e. small \(\psi_{i}^{D}\)) are given more weight in the OLS. The construction of this metric reflects one of the goals of the TSLN-S1 model: to smooth areas with high uncertainty more than those with low uncertainty. Although there is no theoretical threshold for when the \(ALC\) measure indicates oversmoothing, \(ALC>0.5\) is a realistic suggestion. Note that other metrics produced by the weighted linear regression (such as the \(R^{2}\)) may also be informative.
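Both diagnostics are straightforward to compute once area level summaries are available; below is a numpy sketch in which `theta_s1_med` holds the posterior medians of \(\hat{\theta}_{i}^{\text{S1}}\) and the weighted least-squares slope is obtained in closed form. The function and argument names are illustrative only.

```python
import numpy as np

def smoothing_ratio(y_by_area, w_by_area, p_by_area):
    """SR of Eq. (9): per-area weighted residuals of the S1 predictions relative to maximal smoothing."""
    w_all = np.concatenate(w_by_area)
    y_all = np.concatenate(y_by_area)
    mu_d_overall = np.sum(w_all * y_all) / np.sum(w_all)
    num = sum(abs(np.sum(w * (y - p)) / len(y))
              for y, w, p in zip(y_by_area, w_by_area, p_by_area))
    den = sum(abs(np.sum(w * (y - mu_d_overall)) / len(y))
              for y, w in zip(y_by_area, w_by_area))
    return num / den

def alc(theta_s1_med, theta_d, psi_d):
    """ALC: slope from regressing the S1 log-odds on the direct log-odds with weights 1 / psi_d."""
    wt = 1.0 / psi_d
    X = np.column_stack([np.ones_like(theta_d), theta_d])
    W = np.diag(wt)
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ theta_s1_med)
    return coef[1]
```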
#### 2.1.4 Stage 2
By construction, it is reasonable to assume that \(\hat{\theta}_{i}^{\text{S1}}\) follows an approximate Gaussian distribution. Further smoothing of the S1 estimates is achieved by fitting a Bayesian FH model. The second stage model, referred to as the TSLN-S2 model hereafter, is composed of a sampling model,
\[\hat{\theta}_{i}^{\text{S1}}\sim N(\hat{\theta}_{i},\gamma_{i}^{\text{S1}}), \tag{10}\]
which accommodates the sampling variance of the input data for \(i=1,\ldots,m\), and a linking model,
\[\hat{\theta}_{i} = \mathbf{Z}_{i}\boldsymbol{\lambda}+v_{i} \tag{11}\] \[v_{i} \sim N(0,\sigma_{v}^{2}),\]
which relates the modelled values, \(\hat{\theta}_{i}\), to a series of covariates and area level random effects \(\mathbf{v}=(v_{1},\ldots,v_{M})\) for \(i=1,\ldots,M\). We use independent weakly informative priors on the model parameters \(\sigma_{v},\boldsymbol{\lambda}\). The parameter of interest is the posterior distribution of \(\hat{\mu}_{i}=\text{logit}^{-1}(\hat{\theta}_{i})\), which can be summarized by deriving summary quantities (such as means, variances and highest density intervals) of the posterior draws of \(\hat{\mu}_{i}\)[30, 1, 2]. Although the specification above is generic, model extensions are, of course, possible. For example, we use the spatial BYM2 prior [31] for \(v_{i}\) in our case study in Section 5.
Note that \(\mathbf{Z}\in\mathbb{R}^{M\times(q^{a}+1)}\) is the area level design matrix, which should include \(q^{a}\) area level covariates that are available for all \(M\) areas, and \(\boldsymbol{\lambda}\in\mathbb{R}^{(q^{a}+1)\times 1}\) are the corresponding regression coefficients for the \(q^{a}+1\) covariates. Although S1 estimates are not available for areas \(m+1,\ldots,M\), Tzavidis _et al._[32] show how estimates can be obtained by combining the areas' known covariate values (i.e. \(\mathbf{Z}\)) and random draws from the priors.
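As with stage 1, the TSLN-S2 model was fitted in Stan, but its structure can be conveyed with a short PyMC sketch; the GVF component described next is omitted, `Z`, `theta_s1`, `gamma_s1`, `m`, and `M` are assumed to be available, and the priors are placeholders.

```python
import numpy as np
import pymc as pm

with pm.Model() as tsln_s2:
    lam = pm.Normal("lam", mu=0, sigma=2, shape=Z.shape[1])
    sigma_v = pm.HalfNormal("sigma_v", sigma=2)
    v = pm.Normal("v", mu=0, sigma=sigma_v, shape=M)

    theta = pm.Deterministic("theta", pm.math.dot(Z, lam) + v)     # linking model (11), all M areas
    mu = pm.Deterministic("mu", pm.math.invlogit(theta))           # small area proportions of interest

    # Sampling model (10): only the m sampled areas contribute S1 estimates.
    pm.Normal("theta_s1_obs", mu=theta[:m], sigma=np.sqrt(gamma_s1),
              observed=theta_s1)

    idata = pm.sample(2000, tune=1000)
```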
#### 2.1.5 Generalized variance functions (GVF)
FH models require valid direct estimates and sampling variances for _all_ sampled areas. As discussed previously, for very sparse survey data, many sampled areas may give unstable direct estimates (\(\hat{\mu}_{i}^{D}=0\) or \(1\)) and thus sampling variances that are undefined or exactly zero.
Although S1 estimates do not suffer from the same instabilities as direct estimates, S1 sampling variances do exhibit undesirable limit properties as they are derived from probabilities rather than binary random variables. For example, consider a single area \(i\) with a sample size of 4 and assume that \(w_{ij}=1,\forall j\). While it is possible for \(\hat{\mu}_{i}^{\text{S1}}=0.01\), to obtain \(\hat{\mu}_{i}^{D}=0.01\) one would require 25 times the sample size. In other words, for a fixed sample size S1 estimates are able to be much closer to zero (or one), without being exactly zero (or one), than direct estimates. The result of this phenomenon is unrealistically low S1 sampling variances for unstable areas.
Our solution is to use generalized variance functions (GVF), which relate the sampling variance of a direct estimate to a set of covariates, often including the sample sizes and direct estimates, with necessary transformations [33, 2, 34]. To ensure that S1 sampling variances for unstable areas do not inadvertently affect the fit of the FH model in (10), we impute the \(m-m_{s}\) S1 sampling variances using GVF, where \(m_{s}<m<M\) is the number of stable areas.
In this work, we generalise the approach used by Das _et al._[35] to a fully Bayesian framework by implementing the following GVF within our model. By fitting the GVF and stage 2 model jointly, as in Gao and Wakefield [29], we ensure the uncertainty of the imputations is appropriately taken into account. We use the stable S1 sampling variances to estimate the parameters of the GVF and then use the fitted GVF to impute the S1 sampling variances for the unstable areas.
By letting \(f(.)\) be an appropriate link function, \(\mathbf{L}\) an area level design matrix, and \(\boldsymbol{\omega}\) the corresponding regression coefficients,
\[f\left(\gamma_{i}^{\text{S1}}\right)\sim N\left(\mathbf{L}_{i}\boldsymbol{\omega},\sigma_{\text{gvf}}^{2}\right), \tag{12}\]
where \(i=1,\ldots,m_{s}\). The S1 sampling variances for \(i=m_{s}+1,\ldots,m\) are imputed by
\[\gamma_{i}^{\text{S1}}=f^{-1}\left(\mathbf{L}_{i}\boldsymbol{\omega}\right). \tag{13}\]
The general form specified in (12) can be adapted according to the scale of the sampling variances and covariates. Note that when \(f(x)=\text{log}(x)\), we use the bias correction of Das _et al._[35] when back-transforming, \(\gamma_{i}^{\text{S1}}=\text{exp}\left(\mathbf{L}_{i}\boldsymbol{\omega}+0.5\sigma_{\text{gvf}}^{2}\right)\). For the TSLN-S2 model, we set \(f(x)=\text{log}\left(\sqrt{x}\right)\) and use \(\text{log}(n_{i})\) as the single covariate.
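Outside of the joint Bayesian fit, the GVF step can be approximated with a simple least-squares version, as sketched below; this is a frequentist stand-in for (12)-(13) with \(f(x)=\log(\sqrt{x})\) and \(\log(n_{i})\) as the single covariate, whereas in the TSLN-S2 model the GVF is estimated jointly within the MCMC.

```python
import numpy as np

def impute_s1_variances(gamma_stable, n_stable, n_unstable):
    """Fit the GVF of (12) on the stable areas and impute unstable S1 sampling variances via (13)."""
    # f(gamma) = log(sqrt(gamma)) regressed on log(n_i), matching the TSLN-S2 specification.
    f_gamma = np.log(np.sqrt(gamma_stable))
    L = np.column_stack([np.ones(len(n_stable)), np.log(n_stable)])
    omega, *_ = np.linalg.lstsq(L, f_gamma, rcond=None)

    L_new = np.column_stack([np.ones(len(n_unstable)), np.log(n_unstable)])
    fitted = L_new @ omega
    return np.exp(fitted) ** 2          # invert f: gamma = (exp(f))^2
```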
### Implementation
While the TSLN model is framed as a single Bayesian model, it is computationally necessary to fit the two stages separately. The TSLN model was fitted using Stan, leveraging its efficient Hamiltonian Monte Carlo (HMC) algorithm [36]. We generated \(T\) posterior draws for each parameter in the TSLN-S1 model and then passed (a random subset of) the posterior draws of the relevant parameter estimates as inputs to the TSLN-S2 model. Although we explored more complex approaches to estimating the parameters of the TSLN model, these introduced significant computational and MCMC convergence problems, which may be solved in future work. More details of our inference approach and some alternatives are given in Supplemental Materials A.
## 3 Existing methods
### Alternative two-stage approaches
Two stage approaches have been used in other SAE methodological work. First and foremost is an approach given by Gao and Wakefield [13], who proposed a smoothed model-assisted small area estimation technique (hereafter named the GW model) by leveraging a logistic generalized regression estimator (LGREG) [38] at the first stage, followed by an area level model to further smooth the stage 1 estimates. Another two-stage approach was implemented by Das _et al._[35] (hereafter named the DBBH approach), who used cross-sectional FH models to smooth direct estimates before feeding these into multilevel time-series models. In addition, inspiration is drawn from Honaker and Plutzer [39] who considered MrP as a multiple imputation problem and suggested a two-stage approach where the uncertainty induced during individual level modelling could be incorporated into a second stage area level model. Note that both the GW and DBBH approaches support the utility of multi-stage smoothing in SAE which is a key advantage of the TSLN model.
Although our work has similar intuition to both the GW and DBBH approaches, there are several benefits of our model. Firstly, unlike previous approaches which treat predictions from the first stage as fixed, the TSLN-S2 model accommodates the uncertainty in the model parameters and predictions from the TSLN-S1 model. Gao and Wakefield [13] warn against overconfident estimates from their approach as their first stage LGREG estimator is fit using frequentist inference and ignores model parameter uncertainty. By using Bayesian inference at both the individual and area level, our estimates inherit the uncertainty of all model parameters.
Secondly, our approach relaxes some covariate requirements. By relying on the assumption that the first-stage model fit to the survey data is consistent with the true population model, the GW model estimates "unbiased" predictions for all individuals and thus, all areas. This highlights two important data requirements underlying the work by Gao and Wakefield [13]: individual level covariates must be available for all population individuals and survey-only covariates cannot be used. The TSLN model has no such requirement, permitting the use of individual level survey-only covariates. Note that Gao and Wakefield [13] acknowledge the limitations of requiring access to covariate information for the entire population. Their solution was to redefine individuals as very fine spatial grids, where population data were available.
Next, the GW approach requires access to first and second-order inclusion probabilities in order to calculate the sampling variance of their first stage LGREG estimates. In most cases, first-order inclusion probabilities are not provided by data custodians, and further it is very rare to have access to second-order probabilities. Our TSLN model only assumes access to sample weights, which are commonly provided with survey data. Furthermore, unlike the GW model which does not incorporate area level covariates, because of our second stage FH setup, the TSLN model can naturally include area level fixed effects and accommodate the spatially structured and unstructured priors utilized by Gao and Wakefield [13]. Finally, in contrast to the DBBH approach, which operates at the area level only, the TSLN model can provide more efficient estimates by smoothing the data at both the individual and area levels.
Although not discussed in previous work, we found that the quality of the first stage model can play a critical role in the performance of two-stage approaches. Thus, the final benefit of our research is the proposal of a set of metrics (the \(SR\) and \(ALC\) in Section 2.1.2) and recommendations to ensure the two stages complement each other.
### Models for proportions
As with the TSLN model, the GW and DBBH techniques are composed of basic component models such as the logistic and FH. Table 1 gives an overview of four different model-based techniques for small area estimation of proportions that serve as comparison models in the simulation experiment described in Section 4. However, now we briefly address some of the challenges and limitations of these and other alternative methodologies (see Supplemental Materials B for more details), as well as how the TSLN model offers some solutions. In addition, we justify the four comparison models used in this work.
Despite their common use [2], standard FH models may not be suitable for modelling sparse binary data, which can give unstable direct estimates. While unstable sampling variances can be imputed using GVFs, some solutions to unstable direct estimates include: small conditional or unconditional perturbations prior to modelling, use of zero-or-one inflated models [40, 41], assuming a distributional form for the strata-specific proportions [10, 42] or dropping them from analysis altogether [6].
**BIN model** Instability is reduced by considering an aggregation of the binary outcomes via a binomial model. However, these do not generally accommodate the sample design. In contrast, our approach provides stable direct estimates and accommodates the sample design. The binomial model we use in this study purposely ignores the sample design and was chosen as a baseline (crude) model against which the TSLN model was evaluated in terms of sample design consistency.
**BETA model** Although the Beta distribution is a natural consideration when modelling proportions [43, 5, 6], it has several statistical and computational limitations that are summarized in Table 1 and discussed in detail in Supplemental Materials B. These include: problematic constraints on the \(\hat{\mu}_{i}\)'s which depend on the \(\psi_{i}^{D}\)'s; the necessity to impute sampling variances for non-sampled areas; bimodal behaviour which causes significant MCMC convergence issues; and undefined likelihoods for unstable direct estimates. Nevertheless, the FH Beta model is a popular solution to modelling proportions; thus, it was chosen as one of the comparison models.
**ELN model** Other area-level models for proportions assume Gaussian distributions for the direct estimators, allowing the use of typical FH models [5]. This method is effective for applications with large area sample sizes. In the situation of sparsity, however, the Gaussian approximation is insufficient. Instead, researchers use the empirical logistic transformation before using a FH model. This is the technique used by Mercer _et al._[10] and Cassy _et al._[25], and us. Since the ELN model is most similar to ours, we used it to assess the benefits of using S1 rather than direct estimates as input data.
**LOG model** Individual level models for binary data offer significant advantages over the area level models discussed above. Most notably, working at the individual level eliminates the need for direct estimates, and allows for greater model flexibility. However, individual level logistic models require covariate data for _all_ population units [44, 15]. If all covariates are categorical, multilevel regression with poststratification (MrP) is a favourable solution [45, 46, 44]. However, MrP relies on the process of poststratifying to known population counts which itself creates issues related to covariate choice and data accessibility [16, 17]. The assumption of known individual level covariate data introduces further restrictions on covariate choice due to necessary concordance between survey and census covariates. As mentioned before, covariates collected in a census may be less predictive of health-related outcomes, whilst individual level survey-only covariates may be more useful. The TSLN-S1 model does not predict values for the whole population, hence easing the requirement for survey-census concordance and permitting the inclusion of survey-only covariates.
Standard logistic mixed models (and by extension, MrP) do not accommodate known sample weights and are susceptible to bias under informative sampling. A simple solution is to include the sample weights as covariates, but this requires imputation for nonsampled individuals [11]. Bayesian pseudo-likelihood [28] is an alternative used by Parker, Janicki, and Holan [14] (see Supplemental Materials B). By construction our TSLN model accommodates the sample design at both stages, via pseudo-likelihood in (3) and the weighted estimators in (4). Using the pseudo-likelihood LOG model (a MrP-style) as a comparison model allows us to evaluate the efficacy of employing survey-only covariates and two-stage methods in general.
## 4 Simulation Study
To evaluate the performance of the TSLN model, we undertook a simulation experiment based on the approach used by Buttice and Highton [47]. First we generated individual level census data for \(M=100\) small areas with populations ranging from \(500\) to \(3000\). The simulated census included the binary outcomes, \(\mathbf{y}\), two individual-level covariates (one survey-only categorical covariate, \(\mathbf{x}^{\text{s}}\) with three groups, and one continuous covariate, \(\mathbf{x}^{\text{cs}}\), available in both the census and survey) and a single area level covariate (\(\mathbf{k}^{\text{a}}\)). Keeping the census, and thus the true proportions, fixed, we then drew \(D=100\) unique surveys (repetitions), fitting the TSLN and four comparison models (Table 1) to each. We used a sampling fraction of 0.4% and only sampled \(m=60\) areas each repetition. Following Hidiroglou and You [34], we devised the sampling method so that individuals with a binary value of 0 were more likely to be sampled and constructed sample weights to reflect this. The median sample size, \(n\), population size, \(N\), and area sample size, \(n_{i}\), were 768, 177,071, and 8, respectively. Note that it is relatively rare for simulation experiments in the SAE literature to use such small area level sample sizes; generally \(n_{i}>50\)[13, 48, 14, 49], which is less relevant in the Australian context. Unstable direct estimates for all comparison models were stabilised via a simple perturbation method: adding or subtracting a value of \(0.001\) to any \(\hat{\mu}_{i}^{D}=0\) or \(1\), respectively, prior to modelling. Extensive details of the simulation algorithm can be found in Supplemental Materials C.
Overall, we wished to explore how the TSLN model performed for sparse survey data (e.g. when all \(n_{i}\) are very small). In addition, we wanted to assess performance for varying degrees of prevalence and for the inclusion of more predictive individual level survey-only covariates. These objectives translated to the six simulation scenarios given in Table 2. Note that Sc1, Sc3, and Sc5 ensure that the survey-only covariate is much more predictive of the outcome than the census-available covariate. Given the informative sample design, the rare scenarios (Sc3 and Sc4) will provide a high number of unstable direct estimates (see Table 4).
As described in Section 3 we compared the performance of our novel model to the four model-based SAE methods summarized in Table 1. The TSLN, BETA, LOG and ELN models were fit using rstan in R [36]. The BIN model was fit using JAGS via rjags [50]. The covariates used in the models are given in Table 3. Only models deemed to have converged were used in the simulation results. Convergence was assessed using \(\widehat{R}\)[51]. Any analysis for which at least one parameter of interest had \(\widehat{R}>1.02\) was discarded. We used \(2,000\) post-warmup and \(6,000\) post-burnin draws for each of four chains for the Stan and JAGS models, respectively. R code to run the simulations, with the necessary Stan and JAGS code is available here.
Noninformative and diffuse priors (e.g. \(\boldsymbol{\lambda}\propto 1\) and \(\sigma_{v}\propto 1\)) are common choices in Bayesian SAE [2, 22]. Recently, however, weakly informative priors, which consider the scale of the data, have become preferred to these flat priors [52]. All fixed regression parameters in all models were given a \(N(0,2)\) prior, whilst intercept terms were given a student-\(t(df=3,\mu=0,\sigma=1)\) prior. Standard deviation parameters in the TSLN-S1, LOG and BIN models were given a
\begin{table}
\begin{tabular}{l c c c c} & & \(\alpha^{\text{s}}\) & \(P^{L}\) & \(P^{U}\) \\ \hline
50-50 & Sc1 & 0.8 & 0.35 & 0.65 \\ & Sc2 & 1.5 & & \\ \hline Rare & Sc3 & 0.8 & & \\ & Sc4 & 1.5 & & \\ \hline Common & Sc5 & 0.8 & & \\ & Sc6 & 1.5 & & \\ \end{tabular}
\end{table}
Table 2: Summary of the six simulation scenarios. Note that \(\alpha^{\text{s}}\) mediates the predictive performance of \(\mathbf{x}^{\text{s}}\), with lower values resulting in better prediction of the binary outcome, and \(P^{L}\) and \(P^{U}\) give the lower and upper bounds of the true proportions, respectively (see Supplemental Materials C for more details).
\begin{table}
\begin{tabular}{l l c c c} & & \(\mathbf{x}^{\text{s}}\) & \(\mathbf{x}^{\text{cs}}\) & \(\mathbf{k}^{\text{a}}\) \\ \hline Individual level & TSLN-S1 & ✓ & ✓ & ✓ \\ & LOG & & ✓ & ✓ \\ \hline & TSLN-S2 & & & ✓ \\ Area level & ELN & & & ✓ \\ & BETA & & & ✓ \\ & BIN & & & ✓ \\ \end{tabular}
\end{table}
Table 3: Illustrates which of the simulated covariates, described in Section 4, were used in which models in the simulation study. Ticks denote that the model specified in the row used the covariate (shown in the column) as a fixed effect. The estimated regression coefficients and variance terms varied across models, scenarios and repetitions (see Supplemental Materials C).
weakly informative \(N(0,1)^{+}\) prior, while standard deviation parameters in the BETA, ELN and TSLN-S2 models were given a \(N(0,2)^{+}\) prior. Priors for the GVFs were Cauchy \(\left(0,2\right)^{+}\) for the standard deviations and \(N(0,2)\) for the regression coefficients. Note that the GVF of (12) was necessarily adjusted for the Beta model (see Supplemental Materials B for details). All design matrices were mean centered prior to model fitting and the popular QR decomposition was used [53].
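A minimal sketch of this design-matrix preprocessing, following the thin QR reparameterization described in the Stan User's Guide [53], is given below; `X` is a hypothetical \(n\times q\) design matrix.

```r
# Mean-centre each column, then form the thin QR reparameterisation.
X_c <- scale(X, center = TRUE, scale = FALSE)

qr_X   <- qr(X_c)
Q_star <- qr.Q(qr_X) * sqrt(nrow(X_c) - 1)
R_star <- qr.R(qr_X) / sqrt(nrow(X_c) - 1)

# The model is fit on Q_star with coefficients theta; coefficients on the
# original scale are recovered as beta = solve(R_star) %*% theta.
```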
The simulation results for the comparison models for Sc1 and Sc2, for example, should be very similar, apart from any simulation noise. This is because the only difference between Sc1 and Sc2 is the increased predictive accuracy of the survey-only covariates, which are _only_ included in the TSLN-S1 model (see Table 2). To ensure internal consistency, we refit the comparison models for each of the six scenarios.
Note that we cannot use the GW model as a comparison model as it uses a frequentist approach for the first stage and requires pairwise sampling probabilities, which we assume are inaccessible. Nor can we compare our approach to the DBBH model as it requires temporal data.
### Performance metrics
To compare the models we use Bayesian performance metrics which are calculated separately depending on whether an area was sampled or nonsampled in repetition \(d\). Unlike common frequentist metrics (see Supplemental Materials C), Bayesian metrics use all the appropriate posterior samples during calculation. Thus, they generally favour posteriors with smaller variance, whilst still penalising inaccuracy. While Bayesian coverage is summed over all areas and repetitions, the remaining metrics are calculated independently for each repetition, \(d\). The simulations provide \(100\times T\) posterior draws for each area, model and scenario. Let \(\hat{\mu}_{idt}\) be the \(t\)th posterior draw for repeat \(d\) in area \(i\) and \(\mu_{i}\) be the true proportion in area \(i\). Also let \(\hat{\mu}_{id}^{(\text{L})}\) and \(\hat{\mu}_{id}^{(\text{U})}\) denote the lower and upper bounds, respectively, of the posterior 95% highest density interval (HDI) for area \(i\) and repeat \(d\).
Absolute relative bias: ARB\({}_{id}\)
\[\left|\frac{\frac{1}{T}\sum_{t=1}^{T}\left(\hat{\mu}_{idt}-\mu_{i}\right)}{\mu _{i}}\right| \tag{14}\]
Mean ARB: MARB\({}_{d}\)
\[\frac{1}{M}\sum_{i=1}^{M}\text{ARB}_{id} \tag{15}\]
Relative root mean square error: RRMSE\({}_{id}\)
\[\frac{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(\hat{\mu}_{idt}-\mu_{i}\right)^{2}}} {\mu_{i}} \tag{16}\]
Mean RRMSE: MRRMSE\({}_{d}\)
\[\frac{1}{M}\sum_{i=1}^{M}\text{RRMSE}_{id} \tag{17}\]
Coverage
\[\frac{1}{MD}\sum_{i=1}^{M}\sum_{d=1}^{D}\mathbb{I}\left(\hat{\mu}_{id}^{( \text{L})}<\mu_{i}<\hat{\mu}_{id}^{(\text{U})}\right) \tag{18}\]
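As a minimal sketch of how (14)-(18) can be computed for one repetition, assume `draws` is a hypothetical \(T\times M\) matrix of posterior draws of the area proportions, `mu_true` the length-\(M\) vector of true proportions, and that 95% HDIs are obtained with the HDInterval package.

```r
# ARB_i (14) and RRMSE_i (16) for each area
arb   <- abs(colMeans(draws) - mu_true) / mu_true
rrmse <- sqrt(colMeans(sweep(draws, 2, mu_true)^2)) / mu_true

# MARB (15) and MRRMSE (17) for this repetition
marb   <- mean(arb)
mrrmse <- mean(rrmse)

# Per-area coverage indicators for (18); averaging these over areas and
# repetitions gives the reported Bayesian coverage.
hdi_bounds <- apply(draws, 2, HDInterval::hdi, credMass = 0.95)   # 2 x M (lower, upper)
covered <- hdi_bounds[1, ] < mu_true & mu_true < hdi_bounds[2, ]
```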
The distributions of the \(100\) MRRMSE\({}_{d}\) and MARB\({}_{d}\) values are visualized in Fig. 2 and 3, respectively. To summarize these, we take the median of each.
### Results
#### 4.2.1 Stage 1 results
Table 4 summarises the simulated data and provides details on the smoothing properties of the TSLN-S1 model. As anticipated, with more predictive individual level covariates (e.g. Sc1 versus Sc2), the \(SR\)s are lower and the \(ALC\)s are closer to 1. Interestingly, despite the less predictive scenarios having nearly double the noise on average (see Table 2), there are only subtle differences in the \(ALC\) and \(SR\). The biggest effect of having more predictive individual level covariates is the significantly smaller increase in sampling variance, at the cost of a smaller reduction in MAB. As discussed in previous sections, this pattern is expected because the S1 estimates, by design, will collapse to the direct estimates as the TSLN-S1 model begins to "overfit" the survey data.
#### 4.2.2 Overall modelling results
Fig. 2 and 3 compare the distribution of MRRMSE and MARB for the models across the 100 repetitions, for the six scenarios and sample status (sampled and nonsampled areas). Table 5 summarises the results shown in the figures by providing the median across the 100 repetitions and also reports the median credible interval sizes and Bayesian coverage.
**Accuracy** Overall, the TSLN model provides MRRMSE values that are between 28% and 40% smaller than the next smallest MRRMSE. Across all scenarios and for sampled areas at least, the TSLN model provides a 20% to 35% smaller MARB than the next smallest MARB; a clear pattern in Fig. 3. However, for nonsampled areas, the TSLN model does not outperform the alternative models, with MARB values ranging from 1% to 43% bigger than the best performing approach (often the LOG model). These findings for the MARB (nonsampled) were expected given the additional smoothing enforced under the TSLN model. That said, the between-simulation variability of the MARB estimates (e.g. the sizes of the boxes in Fig. 3) suggests that the MARB for the TSLN model is at least comparable to that of the other models for nonsampled areas.
Supplemental Materials C provides frequentist MSE results [42]. Across all scenarios and for sampled areas we found that the TSLN model had MSE values ranging from 40% to 69% smaller than the next smallest. For nonsampled areas, the LOG model had smaller MSE than that of the TSLN model, apart from in the rare scenarios, where the TSLN model outperformed all comparison models.
**Uncertainty** Similar to Gomez-Rubio _et al._[54], we found that the credible intervals for all comparison models were too wide for nonsampled areas, but generally too narrow for sampled areas. The TSLN model had, in most cases, coverage closer to the nominal 95% than the comparison models.
Excluding the BIN model, which displayed excessive bias, the TSLN model consistently provided the smallest CI widths in our simulation study, with the improvement most notable for nonsampled areas. For sampled areas, posterior credible intervals were consistently between 9% and 33% smaller than those of the other models. For nonsampled areas, the CI widths ranged from 43% to 50% smaller. Since the TSLN-S1 model pre-smooths estimates, the TSLN-S2 model has considerably less variance to accommodate than the ELN and BETA models. This manifests itself in a smaller \(\sigma_{v}\) (see Supplemental Materials C), which results in less posterior variance (i.e. smaller credible intervals). Interestingly, when the perturbation applied to unstable direct estimates is less extreme (e.g. setting \(\hat{\mu}_{i}^{D}=0.01\) instead of \(0.001\)), the size of \(\sigma_{v}\) and thus the uncertainty can be reduced, in some cases even halved (results not shown).
**Covariates** We found little performance improvement when using more predictive individual level covariates in the TSLN-S1 model (e.g. comparing Sc1 to Sc2). We found no differences in MRRMSE, CI width or coverage. That said, there were small improvements in MARB when using the more predictive survey-only covariates. For example, for nonsampled areas, the MARB is 2.07 and 2.12 for Sc3 and Sc4, respectively.
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline & & Percent of sampled areas with unstable direct estimates & Posterior median of \(SR\) & Posterior median of \(ALC\) & Percent increase in sampling variance & Percent reduction in MAB \\ \hline 50-50 & Sc1 & 3.3 (1.7, 5.0) & 0.50 (0.46, 0.54) & 0.70 (0.65, 0.74) & 55 (27, 91) & 25 (22, 29) \\ & Sc2 & 3.3 (1.7, 5.0) & 0.56 (0.51, 0.60) & 0.64 (0.59, 0.68) & 72 (40, 106) & 32 (28, 35) \\ \hline Rare & Sc3 & 25.0 (21.7, 30.0) & 0.46 (0.42, 0.51) & 0.79 (0.72, 0.83) & 78 (36, 140) & 24 (19, 28) \\ & Sc4 & 25.0 (21.7, 30.0) & 0.49 (0.46, 0.54) & 0.75 (0.71, 0.81) & 100 (52, 161) & 27 (22, 32) \\ \hline Common & Sc5 & 1.7 (1.7, 3.3) & 0.55 (0.51, 0.58) & 0.66 (0.62, 0.71) & 66 (28, 107) & 29 (26, 32) \\ & Sc6 & 1.7 (1.7, 3.3) & 0.61 (0.57, 0.64) & 0.62 (0.56, 0.65) & 81 (38, 125) & 37 (34, 41) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Overview of the simulation and smoothing properties of the TSLN-S1 model (see Supplemental Materials C), including smoothing ratio, \(SR\) (9), weighted OLS, \(ALC\) (Section 2.1.2), and mean absolute bias (MAB) along with the interquartile bounds in brackets. Column 1 is derived by calculating the percentage of sampled areas with unstable direct estimates for each repetition, before taking the median (and IQ bounds) of these 100 percentages. Column 2 is derived by calculating the posterior median of the \(SR\) for each repetition, before taking the median (and IQ bounds) of these 100 values. Column 3 is derived by taking the median (and IQ bounds) of the 100 \(ALC\) values. The fourth column is derived by first calculating the median of the area-specific ratios of the stage 1 sampling variances to the direct sampling variances for each repetition, before taking the median (and IQ bounds) of the 100 values (details are given in Supplemental Materials C). Large values indicate large increases in the S1 sampling variances above that of the direct, which is undesirable. The final column is derived by first calculating the ratio between the MABs for the direct and S1 estimates for each repetition, before taking the median (and IQ bounds) of these 100 ratios (details are given in Supplemental Materials C). Large values indicate larger reductions in MAB when using the S1 estimates, which is preferable.
**Rare outcomes** Given our interest in small area estimation applications under heavy instability, the simulation results for the rare scenarios (Sc3 and Sc4) are particularly important. Although the ELN and TSLN models are relatively similar in construction, our approach gave superior results under heavy instability. The TSLN model had lower median MRRMSE, smaller average CI widths, and better coverage than the ELN model. We believe this is primarily due to the substantially larger random effect variance (see Supplemental Materials C); in Sc3, \(\sigma_{v}=2.14\) and \(0.17\) for the ELN and TSLN models, respectively. Although the alternative area level model, the BETA model, provided better coverage (97%) for nonsampled areas, it gave very poor coverage (67%) for sampled areas. Although individual level models like the LOG model should provide optimal prediction in the unstable setting, we found that the TSLN model remained superior to the LOG model in terms of median MRRMSE, average CI width and coverage. For the rare scenarios and for nonsampled areas, the LOG model gave the lowest bias; approximately 41% smaller than that of the TSLN model. However, the between-simulation variability of the MARB (see Fig. 3) suggests this difference is not significant.
## 5 Application: Prevalence of current smoking on the east coast of Australia
To illustrate the benefits of the two-stage logistic normal (TSLN) model in practice, we generate small area estimates of current smoking prevalence in Australia using the TSLN, ELN and LOG models. Neither the BIN model (which performed poorly in the simulation study) nor the BETA model (which has various limitations; see Supplemental Materials B) was considered. Unless otherwise stated, the model specifications used in this application are identical to those described in the preceding sections.
Figure 2: Boxplots of MRRMSE\({}_{d}\) across all \(D=100\) repetitions. The medians of each boxplot are available in Table 5.
### Data
The individual level survey data were obtained from the 2017-18 National Health Survey (NHS), which is an Australia-wide population-level health survey conducted every 3-4 years by the Australian Bureau of Statistics (ABS) [55, 56]. The survey aimed to collect a variety of health data on one adult and one child (where possible) in each selected household. Households were selected using a complex multistage design [55]. Trained ABS interviewers conducted personal interviews with selected persons in each of the sampled households. To allow researchers to accommodate the complex sample design, the ABS provides survey weights, which we use in this analysis after applying the necessary rescalings (see Section 1.1). For the area level auxiliary data, we use data from the 2016 Australian census, represented as proportions. We obtained the Estimated Resident Population (ERP) stratified by age (15 years and above), sex and small area for both 2017 and 2018. In this study the population counts were derived by averaging across the two years. Although the 2017-18 NHS collected data across Australia, we reduce our analysis to just those states on the east coast, both to ease the interpretation of visualizations and reduce the computation time.
We classify current smoking as daily, weekly, or less-than-weekly smokers, like the Social Health Atlas [19]. We additionally enforce that all current smokers must have smoked at least 100 cigarettes in their lifetime. After exclusions, we have data for 10,918 respondents aged 15 years and above. The overall weighted prevalence of current smoking in our study region is 14.7%.
Figure 3: Boxplots of MARB\({}_{d}\) across all \(D=100\) repetitions. The medians of each boxplot are available in Table 5. We have omitted the BIN model; its very large bias distorts the plot.
The goal of this analysis is to generate prevalence estimates for 1,630 small areas, of which 1,262 (77%) were sampled. Of the sampled areas, 781 (62%) gave stable direct estimates. Area level sample sizes range from 1 to 140, with a median of 7. Given that around 50% of the small areas are either missing or unstable (see Fig. 4), it is clear how this SAE application illustrates the sparsity issues we've addressed in this work.
The small areas we use are derived from the 2016 Australia Statistical Geography Standard (ASGS), which is the geographic standard maintained by the ABS [57]. The ASGS splits Australia into a hierarchical structure of areas that completely cover the country. We generate prevalence estimates at the statistical areas level 2 (SA2), which is the lowest level of the ASGS hierarchy for which detailed census population characteristics are publicly available.
### Model details
Model selection is an intricate step of our approach in practice. Not only do we have two models for which variable selection must be performed, but variable selection decisions made at the second stage are dependent on the model fit of the first stage; an issue we do not tackle here. To reduce the computational burden of variable selection, we follow the advice of Goldstein [58] and initially use frequentist AIC, BIC and the \(SR\) and \(ALC\) (where applicable) to select fixed effects. Final covariate and random effect decisions were made using Bayesian leave-one-out cross-validation (LOOCV) via Pareto-smoothed importance sampling [59]. Lower values of AIC and BIC are preferred, while higher values of LOOCV are preferred. Where possible, the fixed and random effects for all the models were chosen according to the TSLN model. That is, to enable fair comparisons, fixed and random effects for all models were as similar as possible. See Supplemental Materials B for further details.
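A minimal sketch of the PSIS-LOO computation with the loo package is given below, assuming the Stan program stores pointwise log-likelihoods in a generated quantity named `log_lik` (the parameter name is an assumption for illustration).

```r
library(loo)

# Extract the pointwise log-likelihood from a stanfit object and compute LOOCV.
ll    <- extract_log_lik(fit, parameter_name = "log_lik", merge_chains = FALSE)
r_eff <- relative_eff(exp(ll))
loo_fit <- loo(ll, r_eff = r_eff)
print(loo_fit)   # higher elpd_loo indicates better estimated out-of-sample fit
```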
#### 5.2.1 Individual level models
We used the following individual level categorical covariates in the TSLN-S1 model: age, sex and their interaction; registered marital status; high school completion status; Kessler psychological distress score; educational qualifications;
Figure 4: Map of the 1630 statistical area level 2 (SA2s) on the east coast of Australia. Each SA2 is coloured according to the sample size (greater or less than 10) or sample status of the SA2 (sampled or nonsampled).
self-assessed health; and labor force status. We also used the following household level categorical covariates: number of daily smokers, tenure type, and whether there were Indigenous Australian household members. Along with the individual and household level covariates, we used some SA2-level contextual covariates including state, the Index of Relative Socio-Economic Disadvantage (IRSD) from the Socio-Economic Indexes for Areas (SEIFA) [60], and the following SA2-level demographic variables as proportions: occupation, Indigenous Australian status, income, unemployment and household composition.
In addition to the individual and area level fixed effects, we added two further random effects. Borrowing ideas from MrP [61], we added a hierarchical prior on a risk factor categorical covariate in the TSLN-S1 model. This risk factor categorical covariate was constructed from every unique combination of sex, age and four binary-coded individual level risk factors, resulting in 274 categories. The number of participants in each category ranged from 1 to 299, with a median of 20. Finally, to further improve the predictive accuracy of the TSLN-S1 model and reduce the \(SR\), we also included an individual level residual error term with a fixed standard deviation of 0.5. The chosen TSLN-S1 model had \(SR=0.59\) and \(ALC=0.67\).
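A minimal sketch of how such a grouping variable could be constructed in R is shown below; the data frame `svy` and all column names are hypothetical.

```r
# One category per observed combination of sex, age group and four binary
# risk factors; drop = TRUE keeps only combinations present in the data.
svy$risk_cat <- interaction(svy$sex, svy$age_group,
                            svy$rf1, svy$rf2, svy$rf3, svy$rf4,
                            drop = TRUE)

nlevels(svy$risk_cat)                       # 274 categories in our survey data
summary(as.integer(table(svy$risk_cat)))    # category sizes: 1 to 299, median 20
```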
Since census microdata was unavailable, we could not mirror the TSLN-S1 model complexity in the LOG model. We omitted the risk factor categorical random effect and were restricted to just three individual level covariates: age, sex and marital status. These three covariates gave a poststratification dataset with 146,700 rows. Note that we also omitted the individual level residual error term from the LOG model as, unlike the TSLN-S1 model, the LOG model must prioritise generalisability in order to perform well.
Further details and definitions for the covariates used in the individual level models can be found in Supplemental Materials B.
#### 5.2.2 Area level models
For both the TSLN-S2 and ELN models, we utilized the following SA2-level covariates: IRSD, state and the first six principal components derived from SA2-level census proportion data. Similar to Section 2.1.5 we use GVFs where the log of the SA2-level sample sizes were the only predictor. Details on variable selection and the covariates used in the area level models can be found in Supplemental Materials D.
#### 5.2.3 Spatial random effects
Unlike in the simulation study, where data were not generated with any spatial autocorrelation, we expect smoking prevalence to exhibit spatial clustering, as smoking is generally higher in areas of lower socioeconomic status, which tend to be geographical neighbors [62]. It has been shown that accommodating spatial structure in model-based SAE methods can provide considerable efficiency gains [63, 11, 64, 13, 7]. To adjust for the spatial autocorrelation between areas and enforce global _and_ local smoothing, we use the BYM2 prior [31] at the SA2-level for the TSLN-S2, ELN and LOG models (see Supplemental Materials D for details). Although others have used conditional autoregressive (CAR) or simultaneous autoregressive (SAR) priors only, Gomez-Rubio _et al._ [54] conclude that models with just the CAR prior generally over-estimate the small area estimates and argue that including both a structured and an unstructured random effect provides a useful compromise between producing accurate small area estimates and their corresponding variances.
#### 5.2.4 Priors and computation
Most of the priors used in this case study mirror those specified in Section 4. We utilised a relatively informative prior for \(\rho\), the mixing parameter of the BYM2 spatial prior, which controls the amount of spatially structured as opposed to unstructured variation; \(\rho=1\) gives a scaled intrinsic CAR prior [31]. We use \(\rho\sim\text{Beta}(3.05,1.65)\), a prior which places roughly 45% of its density above \(\rho=0.7\). In this application, areas with survey data may have many neighbors but few of those neighbors with survey data; thus this informative prior "encouraged" the models to borrow information locally. The median number of neighbors is 7, whereas the median number with sampled data is 5. This informative prior on \(\rho\) slightly improved both model fit and predictive accuracy. The posterior median of \(\rho\) was 0.89 under this informative prior, but 0.5 under a \(\text{Uniform}(0,1)\) prior.
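As a quick sanity check of this prior choice, the tail mass above 0.7 can be computed directly in R; per the text, this should return approximately 0.45.

```r
# Prior mass above rho = 0.7 under Beta(3.05, 1.65)
1 - pbeta(0.7, shape1 = 3.05, shape2 = 1.65)

# Implied prior median of rho, for reference
qbeta(0.5, shape1 = 3.05, shape2 = 1.65)
```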
We used \(5000\) post-warmup draws for each of four chains in Stan [36], feeding a random subset of 500 posterior draws from the TSLN-S1 model to the TSLN-S2 model. For storage reasons we thinned the draws by four, resulting in \(5000\) useable posterior draws. Convergence of the models was assessed using \(\widehat{R}\) [51], where \(\widehat{R}<1.02\) was used as the cutoff for convergence for the parameters of interest, namely \(\mathbf{\mu}\). We also explored trace plots and autocorrelation plots to verify convergence. All proportion parameters, \(\mathbf{\mu}\), had effective sample sizes \(>\)500.
#### 5.2.5 Benchmarking
To help validate our small area estimates, we utilised state-level estimates as internal benchmarks [65]. There are four states on the east coast of Australia, with a median sample and population size of 3100 and 4.6 million, respectively. At this level of aggregation, the direct estimates are reliable. We employed inexact fully Bayesian benchmarking [66], which acts as a soft constraint on the model by penalizing discrepancies between the modeled state estimates and the direct state estimates. Unlike previous approaches that use posterior point estimates [1], Bayesian benchmarking directly includes the benchmarks in the joint posterior distribution, which accounts for benchmarking-induced uncertainty. We use Bayesian benchmarking in the TSLN-S2 and ELN models and exact benchmarking for the LOG model. Full details and a performance comparison with and without benchmarking is given in Supplemental Materials D.
#### 5.2.6 Visualisations
We map both absolute and relative measures of current smoking. The absolute measures are the posterior medians and highest density intervals (HDIs) of the estimated prevalence from the models, while the relative measures rely on odds ratios (ORs). We derive ORs as follows,
\[\widehat{\text{OR}}_{i}=\frac{\hat{\mu}_{i}/(1-\hat{\mu}_{i})}{\hat{\mu}^{D}/ (1-\hat{\mu}^{D})}, \tag{19}\]
where \(\hat{\mu}^{D}\) is the overall direct estimate of current smoking prevalence. By deriving \(\widehat{\text{OR}}_{i}\) for all posterior draws, we can map their posterior medians and HDIs. To quantify whether an OR is significantly different from 1, we use the exceedance probability (EP) [67, 37], where \(t=1,\ldots,T\) indexes the MCMC draws.
Figure 5: Comparison of the modeled SA2 level current smoking prevalence estimates from the three models. The caterpillar plots displays the posterior medians and 95% highest density intervals (grey bars) for all 1630 SA2s, ordered by their magnitude. The point colors mirror the \(y\)-axis and the vertical black line is the overall current smoking prevalence.
\[EP_{i}=\left(\frac{1}{T}\sum_{t}\mathbb{I}\left(\widehat{\text{OR}}_{it}>1\right)\right) \tag{20}\]
A high (low) \(EP_{i}\) is interpreted as a high level of evidence that the OR for SA2 \(i\) is significantly higher (lower) than 1. Generally an \(EP_{i}\) above 0.8 or below 0.2 is considered significant [68].
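A minimal sketch of the OR (19) and EP (20) calculations is given below; `draws` is a hypothetical \(T\times M\) matrix of posterior draws of the SA2 proportions and `mu_D` the overall direct estimate of prevalence.

```r
odds <- function(p) p / (1 - p)

or_draws <- odds(draws) / odds(mu_D)       # T x M matrix of OR draws, as in (19)

or_median <- apply(or_draws, 2, median)    # posterior median OR per SA2
ep <- colMeans(or_draws > 1)               # exceedance probability per SA2, as in (20)

# EPs above 0.8 (below 0.2) flag SA2s with ORs credibly above (below) 1.
flagged <- ep > 0.8 | ep < 0.2
```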
### Results
Fig. 5 gives separate caterpillar plots for the SA2-level prevalence estimates and HDIs for the three models. Supporting this plot, Fig. 6 compares the estimates from the TSLN model to the ELN and LOG models. Both figures show the similarities in the modeled estimates from the two area level models and the superior interval sizes of the TSLN model. The LOG model provides estimates with little correspondence to those from the TSLN or the ELN models.
For areas with high prevalence, the TSLN model provides more conservative estimates than those from the ELN model; a result of the two stages of smoothing applied when using the TSLN model. Given that the sparsity of the survey data results in very noisy and unstable direct estimates, in this case study we prefer estimates that are slightly oversmoothed rather than undersmoothed.
The six outlying points visible in (a) of Fig. 6 are SA2s in the Cape York (Northern section) region of Queensland. The high level of uncertainty for these SA2s is reasonable as they are remote, have small populations and are far from areas with survey data.
In Fig. 7 we map the relative measures for all 1630 SA2s, including those without survey data. The figure displays posterior median ORs, HDIs and corresponding \(EP\)s for the three models. Observe that because the area level models (TSLN and ELN) generally provide more certainty in the estimates, the \(EP\)s are more extreme than those for the LOG model. The prevalence of current smoking is significantly higher than the overall prevalence in the west and northern parts of the region, while significantly lower in urban centres, such as Sydney (the most populous city on the east coast of Australia), Melbourne and Brisbane. Fig. 8 gives the absolute prevalence estimates for these three cities. See Supplemental Materials D for a plot stratifying the modeled estimates by socioeconomic status and maps of the absolute prevalence estimates from the models.
By treating the direct estimates at a higher aggregation level as the truth, we compare the direct and modelled estimates at the statistical area level 4 using RRMSE, ARB, coverage and interval overlap. The details and possible limitations of these comparative performance results are given in Supplemental Materials D. We found that the TSLN model provided superior MRRMSE and interval overlap. Furthermore, the performance metrics for the TSLN model show the smallest changes when we benchmark, providing more support for the validity of the TSLN model estimates. Prevalence estimates from the LOG model, on the other hand, saw considerable changes when benchmarking was applied. We posit that the LOG model is providing poor estimates given the restricted set of individual level census covariates available to it.
Figure 6: Comparison of the modeled estimates from the TSLN, LOG and ELN models. The scatter plots display the posterior medians (black points) and 95% highest density intervals (gray bars) of the modeled SA2 level current smoking prevalence estimates, with the red lines denoting equivalence. The vertical and horizontal black lines are the overall current smoking prevalence.
Figure 7: Choropleth maps displaying the modeled ORs for smoking prevalence in 1630 SA2s on the east coast of Australia. For each model, this figure displays the posterior medians (top row), exceedance probabilities (\(EP\)s) (middle row) and width of the 95% HDI (bottom row) for the ORs. Note that some values are lower than the range of the color scales shown; for these values, the lowest color is shown. Gray areas were excluded from estimation due to having 2016-2017 ERPs smaller than or equal to 10. Black lines represent the boundaries of the four states on the east coast of Australia.
Figure 8: Choropleth inset maps displaying the modeled estimates of the proportion of current smokers in and around Sydney, Melbourne and Brisbane; three capital cities on the east coast of Australia. Gray areas were excluded from estimation due to having 2016-2017 ERPs smaller than or equal to 10. Most of these excluded areas are national parks, industrial areas, airports or cemeteries.
## 6 Discussion
We have proposed a new method using a Bayesian two-stage model to estimate proportions for small areas from sparse survey data. The TSLN model is able to model these data by reducing instability, accommodating the survey design, relaxing restrictions on the covariates that can be included and generating estimates for nonsampled areas. We have shown that the TSLN model can provide superior proportion estimates for sampled and nonsampled areas compared to some alternatives, with similar or slightly more bias but much smaller variance, resulting in consistently smaller MRRMSEs and credible intervals. Compared with other available approaches, the TSLN model appears to be the best option in sparse settings, such as when using the NHS.
We have demonstrated, along with others [13, 35], that the smoothing properties of modelling at both the individual and area level are beneficial across a broad range of SAE applications. That said, although it has not been explicitly acknowledged in earlier research on two-stage approaches, we found that the quality of the first stage model can have a significant impact on the performance of two-stage methods. Thus, in addition to the modeling approach, we have contributed more widely to the two-stage literature by proposing simple measures (e.g., the \(SR\) and the \(ALC\)) and recommendations to aid practitioners to ensure that the two-stages work with, rather than against, one another.
Although the simulation results and case study illustrate the potential benefits of the TSLN model, some limitations motivate future theoretical and applied research in two-stage SAE. First, there would be benefits in the development of rigorous statistical theory to motivate the use of S1 estimates over direct estimates. Moreover, there is opportunity for theoretical determination of optimal values for the \(SR\) and \(ALC\) in particular two-stage SAE applications. Another avenue of research is the development or application of model-specific MCMC algorithms capable of fitting the TSLN model in one single step. Alternatively, the TSLN model could be redefined using ideas from modular models that leverage Bayesian cut distributions [69]. There is also scope to develop non-GVF solutions to the undesirable properties mentioned in Section 2.1.5 and more formally compare methods to adjust for instability. Although coverage was relatively stable for the TSLN model, one could explore methods of conformal prediction [70], which can be used to guarantee frequentist coverage in small area estimation [71].
The TSLN model is generic in its component models, which within an applied context allows researchers to extend the TSLN model by using more flexible classes of models (such as semi or non-parametric models or even machine learning algorithms), as long as uncertainty can be captured. Moreover, further work is required in developing model selection tools for both stages of the TSLN model, with a focus on the flow-on effect of variable choices in the first stage to the second.
## 7 Conclusion
Given that the need for higher resolution area level estimates is increasing faster than funding for larger surveys, methods of small area estimation must be capable of tackling sparsity issues. In this work, we have developed a solution to small area estimation for severely sparse data by leveraging both area and individual level models and utilising more auxiliary data (namely, survey-only covariates). Similar to other work [13, 35], this research represents another important step in continuing the positive narrative surrounding two-stage approaches and highlights their benefits and future avenues of research in small area estimation. As expressed by Fuglstad, Li, and Wakefield [4], "\(\ldots\) the goal of the analysis should determine the approach, and different goals may call for different approaches." We hope our approach will allow practitioners to set more ambitious goals for their small area estimates, such as lower levels of aggregation or deriving estimates by area _and_ sex or age.
#### Acknowledgments
This study has received ethical approval from the Queensland University of Technology Human Research Ethics Committee (Project ID: 4609) for the project entitled "Statistical methods for small area estimation of cancer risk factors and their associations with cancer incidence".
We thank the Australian Bureau of Statistics (ABS) for designing and collecting the National Health Survey data and making it available for analysis in the DataLab. The views expressed in this paper are those of the authors and do not necessarily reflect the policy of QUT, CCQ or the ABS.
#### Competing interests
The authors declare that they have no competing interests.
## Funding
JH was supported by the Queensland University of Technology (QUT) Centre for Data Science and Cancer Council QLD (CCQ) Scholarship. SC receives salary and research support from a National Health and Medical Research Council Investigator Grant (#2008313).
## Supplemental materials
The supplemental material mentioned throughout this work can be found at the end of this document. This additional material includes further details, plots and results to accompany Sections 2-5 of this paper.
## References
* Pfeffermann [2013] Danny Pfeffermann. New important developments in small area estimation. _Statistical Science_, 28(1):40-68, 2013.
* Rao and Molina [2015] J.N.K. Rao and Isabel Molina. _Small Area Estimation_. Wiley Series in Survey Methodology, Hoboken, New Jersey, 2nd edition, 2015.
* Moretti and Whitworth [2021] A. Moretti and A. Whitworth. Estimating the uncertainty of a small area estimator based on a microsimulation approach. _Sociological Methods and Research_, 2021. doi: 10.1177/0049124120986199.
* Fuglstad et al. [2021] Geir-Arne Fuglstad, Zehang Richard Li, and Jon Wakefield. The two cultures for prevalence mapping: Small area estimation and spatial statistics. _arXiv preprint arXiv:2110.09576_, 2021. doi: arXiv:2110.09576.
* Liu et al. [2007] Benmei Liu, Partha Lahiri, and Graham Kalton. Hierarchical bayes modeling of survey-weighted small area proportions. _Proceedings of the American Statistical Association, Survey Research Section_, pages 3181-3186, 2007.
* Theory and Methods_, 49(9):2264-2284, 2020. doi: 10.1080/03610926.2019.1570266.
* Paige et al. [2022] John Paige, Geir-Arne Fuglstad, Andrea Riebler, and Jon Wakefield. Design-and model-based approaches to small-area estimation in a low-and middle-income country context: comparisons and recommendations. _Journal of Survey Statistics and Methodology_, 10(1):50-80, 2022. ISSN 2325-0984.
* Battese et al. [1988] George E. Battese, Rachel M. Harter, and Wayne A. Fuller. An error-components model for prediction of county crop areas using survey and satellite data. _Journal of the American Statistical Association_, 83(401):28-36, 1988. ISSN 0162-1459. doi: 10.1080/01621459.1988.10478561.
* Fay and Herriot [1979] Robert E. Fay and Roger A. Herriot. Estimates of income for small places: An application of james-stein procedures to census data. _Journal of the American Statistical Association_, 74(366):269-277, 1979. ISSN 01621459. doi: 10.2307/2286322.
* Mercer et al. [2014] Laina Mercer, Jon Wakefield, Cici Chen, and Thomas Lumley. A comparison of spatial smoothing methods for small area estimation with sampling weights. _Spatial Statistics_, 8(1):69-85, 2014. ISSN 2211-6753. doi: [https://doi.org/10.1016/j.spasta.2013.12.001](https://doi.org/10.1016/j.spasta.2013.12.001).
* Vandendijck et al. [2016] Y. Vandendijck, C. Faes, R. S. Kirby, A. Lawson, and N. Hens. Model-based inference for small area estimation with sampling weights. _Spatial Statistics_, 18(1):455-473, 2016. doi: 10.1016/j.spasta.2016.09.004.
* Savitsky and Toth [2016] Terrance D Savitsky and Daniell Toth. Bayesian estimation under informative sampling. _Electronic Journal of Statistics_, 10(1):1677-1708, 2016. ISSN 1935-7524.
* Gao and Wakefield [2022] Peter A. Gao and Jonathan Wakefield. Smoothed model-assisted small area estimation. _arXiv preprint arXiv:2201.08775_, 2022. doi: arXiv:2201.08775.
* Parker et al. [2019] Paul A. Parker, Ryan Janicki, and Scott H. Holan. Unit level modeling of survey data for small area estimation under informative sampling: A comprehensive overview with extensions. _arXiv preprint arXiv:1908.10488_, 2019. doi: arXiv:1908.10488.
* Moura and Migon [2002] F. A. S. Moura and H. S. Migon. Bayesian spatial models for small area estimation of proportions. _Statistical Modeling_, 2(3):183-201, 2002. doi: 10.1191/1471082x02st0320a.
* Leemann and Wasserfallen [2017] Lucas Leemann and Fabio Wasserfallen. Extending the use and prediction precision of subnational public opinion estimation. _American Journal of Political Science_, 61(4):1003-1022, 2017. ISSN 0092-5853. doi: [https://doi.org/10.1111/ajps.12319](https://doi.org/10.1111/ajps.12319).
* Kuriwaki and Yamauchi [2021] Shiro Kuriwaki and Soichiro Yamauchi. Synthetic area weighting for measuring public opinion in small areas. _arXiv preprint arXiv:2105.05829_, 2021. doi:arXiv:2105.05829.
* Baffour et al. [2019] B. Baffour, H. Chandra, and A. Martinez. Localised estimates of dynamics of multi-dimensional disadvantage: An application of the small area estimation technique using australian survey and census data. _International Statistical Review_, 87(1):1-23, 2019.
* [19] PHIDU. Social health atlases, 2018. [https://phidu.torrens.edu.au/social-health-atlases](https://phidu.torrens.edu.au/social-health-atlases).
* ABS [2019] ABS. Modelled estimates for small areas based on the 2017-18 National Health Survey. Report, Australian Bureau of Statistics, 2019.
* Chakraborty et al. [2018] Adrijo Chakraborty, Gauri Sankar Datta, and Abhyuday Mandal. A two-component normal mixture alternative to the Fay-herriot model. _arXiv preprint arXiv:1510.04482_, 2018. doi:arXiv:1510.04482.
* You and Rao [2000] Yong You and JNK Rao. Hierarchical bayes estimation of small area means using multi-level models. _Survey Methodology_, 26(2):173-181, 2000.
* Rao [2011] J. N. K. Rao. Impact of frequentist and bayesian methods on survey sampling practice: A selective appraisal. _Statistical Science_, 26(2):240-256, 2011. doi:10.1214/10-STS346. URL [https://doi.org/10.1214/10-STS346](https://doi.org/10.1214/10-STS346).
* Hajek [1971] J. Hajek. Comment on "an essay on the logical foundations of survey sampling, part one". _The Foundations of Survey Sampling_, 1971.
* Cassy et al. [2022] S. R. Cassy, S. Manda, F. Marques, and Mdro Martins. Accounting for sampling weights in the analysis of spatial distributions of disease using health survey data, with an application to mapping child health in malawi and mozambique. _International Journal of Environmental Research and Public Health_, 19(10), 2022. ISSN 1661-7827 (Print) 1660-4601. doi:10.3390/ijerph19106319.
* Iriondo-Perez et al. [2018] Jennifer Iriondo-Perez, Amang Sukasih, and Rachel Harter. Comparing direct survey and small area estimates of health care coverage in new york. Report, American Statistical Association, 2018.
* Roy et al. [2019] Paritosh K. Roy, Md Hasinur R. Khan, Tahmina Akter, and M. Shafiqur Rahman. Exploring socio-demographic- and geographical-variations in prevalence of diabetes and hypertension in bangladesh: Bayesian spatial analysis of national health survey data. _Spatial and Spatio-temporal Epidemiology_, 29:71-83, 2019. ISSN 1877-5845. doi:[https://doi.org/10.1016/j.sste.2019.03.003](https://doi.org/10.1016/j.sste.2019.03.003).
* Binder [1983] David A Binder. On the variances of asymptotically normal estimators from complex surveys. _International Statistical Review_, pages 279-292, 1983. ISSN 0306-7734.
* Gao and Wakefield [2022] Peter A. Gao and Jon Wakefield. A spatial variance-smoothing area level model for small area estimation of demographic rates. _arXiv preprint arXiv:2209.02602_, 2022. doi:arXiv:2209.02602.
* Best et al. [2008] N. Best, S. Richardson, and P. Clarke. A comparison of model-based methods for small area estimation. Report, Department of Epidemiology and Public Health, Imperial College London, 2008.
* Riebler et al. [2016] Andrea Riebler, Sigrun H. Sorbye, Daniel Simpson, and Havard Rue. An intuitive bayesian spatial model for disease mapping that accounts for scaling. _arXiv preprint arXiv:1601.01180_, 2016. doi:arXiv:1601.01180.
* Tzavidis et al. [2018] N. Tzavidis, L. C. Zhang, A. Luna, T. Schmid, and N. Rojas-Perilla. From start to finish: a framework for the production of small area official statistics. _Journal of the Royal Statistical Society_, 181(4):927-979, 2018. ISSN 0964-1998. doi: 10.1111/rssa.12364.
* Wolter [2007] Kirk M. Wolter. _Introduction to Variance Estimation_. Springer, New York, NY, 2007. doi:[https://doi.org/10.1007/978-0-387-35099-8](https://doi.org/10.1007/978-0-387-35099-8).
* Hidiroglou and You [2016] M. Hidiroglou and Y. You. Comparison of unit level and area level small area estimators. _Survey Methodology_, 42(1):41-61, 2016.
* Das et al. [2021] Sumonkanti Das, Jan A van den Brakel, Harm Jan Boonstra, and Stephen Haslett. _Multilevel Time Series Modelling of Antenatal Care Coverage in Bangladesh at Disaggregated Administrative Levels_. Statistics Netherlands, 2021.
* Stan [2022] Stan Development Team. Stan, 2022. [https://mc-stan.org](https://mc-stan.org).
* Dong and Wakefield [2021] Tracy Qi Dong and Jon Wakefield. Modeling and presentation of vaccination coverage estimates using data from household surveys. _Vaccine_, 39(18):2584-2594, 2021. ISSN 0264-410X.
* Lehtonen and Veijanen [1998] Risto Lehtonen and Ari Veijanen. Logistic generalized regression estimators. _Survey Methodology_, 24(1):51-55, 1998.
* Honaker and Plutzer [2011] James Honaker and Eric Plutzer. Small area estimation with multiple overimputation. _Midwest Political Science Association, Chicago_, 2011.
* Ospina and Ferrari [2012] Raydonal Ospina and Silvia L. P. Ferrari. A general class of zero-or-one inflated beta regression models. _Computational Statistics and Data Analysis_, 56(6):1609-1623, 2012. ISSN 0167-9473. doi:[https://doi.org/10.1016/j.csda.2011.10.005](https://doi.org/10.1016/j.csda.2011.10.005). URL [https://www.sciencedirect.com/science/article/pii/S0167947311003628](https://www.sciencedirect.com/science/article/pii/S0167947311003628).
* De Nicolo and Gardini [2022] Silvia De Nicolo and Aldo Gardini. The r package tipsae: Tools for mapping proportions and indicators on the unit interval. Report, 2022.
* Chen et al. [2014] Cici Chen, Jon Wakefield, and Thomas Lumely. The use of sampling weights in bayesian hierarchical models for small area estimation. _Spatial and Spatio-temporal Epidemiology_, 11:33-43, 2014. ISSN 1877-5845. doi:[https://doi.org/10.1016/j.sste.2014.07.002](https://doi.org/10.1016/j.sste.2014.07.002). URL [https://www.sciencedirect.com/science/article/pii/S1877584514000367](https://www.sciencedirect.com/science/article/pii/S1877584514000367).
* Aitkin et al. [2009] Murray Aitkin, Charles C. Liu, and Tom Chadwick. Bayesian model comparison and model averaging for small-area estimation. _The Annals of Applied Statistics_, 3(1):199-221, 2009. URL [https://doi.org/10.1214/08-AOAS205](https://doi.org/10.1214/08-AOAS205).
* Malec et al. [1997] Donald Malec, J. Sedransk, Christopher L. Moriarity, and Felicia B. Leclere. Small area inference for binary variables in the national health interview survey. _Journal of the American Statistical Association_, 92(439):815-826, 1997. ISSN 0162-1459. doi:10.1080/01621459.1997.10474037. URL [https://doi.org/10.1080/01621459.1997.10474037](https://doi.org/10.1080/01621459.1997.10474037).
* Gelman [2007] Andrew Gelman. Struggles with survey weighting and regression modeling. _Statistical Science_, 22(2):153-164, 2007. URL [https://doi.org/10.1214/088342306000000691](https://doi.org/10.1214/088342306000000691).
* Gelman and Little [1997] Andrew Gelman and Thomas C Little. Poststratification into many categories using hierarchical logistic regression. _Survey Methodology_, pages 23:127-135, 1997.
* Buttice and Highton [2013] Matthew K. Buttice and Benjamin Highton. How does multilevel regression and poststratification perform with conventional national surveys? _Political Analysis_, 21(4):449-467, 2013. ISSN 10471987, 14764989.
* Guadarrama et al. [2021] Maria Guadarrama, Domingo Morales, and Isabel Molina. Time stable empirical best predictors under a unit-level model. _Computational Statistics and Data Analysis_, 160:107226, 2021. ISSN 0167-9473. doi:[https://doi.org/10.1016/j.csda.2021.107226](https://doi.org/10.1016/j.csda.2021.107226).
* Molina and Rao [2010] Isabel Molina and J. N. K. Rao. Small area estimation of poverty indicators. _The Canadian Journal of Statistics_, 38(3):369-385, 2010. ISSN 03195724. URL [http://www.jstor.org/stable/27896031](http://www.jstor.org/stable/27896031).
* Plummer [2003] Martyn Plummer. Jags: A program for analysis of bayesian graphical models using gibbs sampling. In _3rd International Workshop on Distributed Statistical Computing_, 2003.
* Vehtari et al. [2021] Aki Vehtari, Andrew Gelman, Daniel Simpson, Bob Carpenter, and Paul-Christian Burkner. Rank-normalization, folding, and localization: An improved \(\widehat{R}\) for assessing convergence of mcmc (with discussion). _Bayesian Analysis_, 16(2):667-718, 2021. doi:10.1214/20-BA1221. URL [https://doi.org/10.1214/20-BA1221](https://doi.org/10.1214/20-BA1221).
* Gelman et al. [2020] Andrew Gelman, Aki Vehtari, Daniel Simpson, Charles C. Margossian, Bob Carpenter, Yuling Yao, Lauren Kennedy, Jonah Gabry, Paul-Christian Burkner, and Martin Modrak. Bayesian workflow. _arXiv preprint arXiv:2011.01808v1_, 2020. doi:arXiv:2011.01808v1.
* Team [2022] Stan Development Team. The qr reparameterization. In _Stan User's Guide_. 2022. URL [https://mc-stan.org/docs/stan-users-guide/QR-reparameterization.html](https://mc-stan.org/docs/stan-users-guide/QR-reparameterization.html).
* Gomez-Rubio et al. [2008] V. Gomez-Rubio, Nicky Best, Sylvia Richardson, Guangquan Li, and Philip Clarke. Bayesian statistics small area estimation. Report, Office for National Statistics, 2008.
* ABS [2018] ABS. National Health Survey: First results methodology. Report, Australian Bureau of Statistics, 2018. URL [https://www.abs.gov.au/methodologies/national-health-survey-first-results-methodology/2017-18](https://www.abs.gov.au/methodologies/national-health-survey-first-results-methodology/2017-18).
* ABS [2017] ABS. Microdata: National Health Survey [DataLab], 2017.
* ABS [2011] ABS. Australian Statistical Geography Standard (ASGS), 2011. URL [https://www.abs.gov.au/websitedbs/d3310114.nsf/home/australian+statistical+geography+standard+](https://www.abs.gov.au/websitedbs/d3310114.nsf/home/australian+statistical+geography+standard+)(ass).
* Goldstein [2011] Harvey Goldstein. _Multilevel statistical models_. John Wiley and Sons, 2011. ISBN 111995682X.
* Vehtari et al. [2017] A. Vehtari, A. Gelman, and J. Gabry. Practical bayesian model evaluation using leave-one-out cross-validation and waic. _Statistics and Computing_, 27(5):1413-1432, 2017. doi:10.1007/s11222-016-9696-4. URL [https://www.scopus.com/inward/record.uri?eid=2-s2.0-85026299835&doi=10.1007%2fs11222-016-9696-4&partnerID=40&md5=d4d48dc9b435386f7e4be2c5cb580f4b](https://www.scopus.com/inward/record.uri?eid=2-s2.0-85026299835&doi=10.1007%2fs11222-016-9696-4&partnerID=40&md5=d4d48dc9b435386f7e4be2c5cb580f4b).
* ABS [2016] ABS. Technical paper: Socio-economic indexes for areas (SEIFA). Report, Australian Bureau of Statistics,, 2016.
* Ghitza and Gelman [2013] Yair Ghitza and Andrew Gelman. Deep interactions with mrp: Election turnout and voting patterns among small electoral subgroups. _American Journal of Political Science_, 57(3):762-776, 2013. ISSN 0092-5853. doi:[https://doi.org/10.1111/ajps.12004](https://doi.org/10.1111/ajps.12004). URL [https://doi.org/10.1111/ajps.12004](https://doi.org/10.1111/ajps.12004).
* Patterson et al. [2014] K. A. E. Patterson, V. Cleland, A. Venn, L. Blizzard, and S. Gall. A cross-sectional study of geographic differences in health risk factors among young australian adults: The role of socioeconomic position. _BMC Public Health_, 14(1):1-10, 2014. doi: 10.1186/1471-2458-14-1278. URL [https://www.scopus.com/inward/record.uri?eid=2-s2.0-84924288776&doi=10.1186%2f1471-2458-14-1278&partnerID=40&md5=775e49b9ba2c4478ae77d0e0c641ce77](https://www.scopus.com/inward/record.uri?eid=2-s2.0-84924288776&doi=10.1186%2f1471-2458-14-1278&partnerID=40&md5=775e49b9ba2c4478ae77d0e0c641ce77).
* Chung and Datta [2020] Hee Cheol Chung and Gauri Sankar Datta. Bayesian hierarchical spatial models for small area estimation. Report, Center for Statistical Research and Methodology, 2020.
* Donegan et al. [2021] Connor Donegan, Yongwan Chun, and Daniel A. Griffith. Modeling community health with areal data: Bayesian inference with survey standard errors and spatial structure. _International Journal of Environmental Research and Public Health_, 18(13):6856, 2021. ISSN 1660-4601. URL [https://www.mdpi.com/1660-4601/18/13/6856](https://www.mdpi.com/1660-4601/18/13/6856).
* Bell et al. [2013] W. R. Bell, G. S. Datta, and M. Ghosh. Benchmarking small area estimators. _Biometrika_, 100(1):189-202, 2013. ISSN 0006-3444. doi: 10.1093/biomet/ass063.
* Zhang and Bryant [2020] J. L. Zhang and J. Bryant. Fully bayesian benchmarking of small area estimation models. _Journal of Official Statistics_, 36(1):197-223, 2020. ISSN 0282-423X. doi: 10.2478/jos-2020-0010.
* Duncan et al. [2019] Earl W. Duncan, Susanna M. Cramb, Joanne F. Aitken, Kerrie L. Mengersen, and Peter D. Baade. Development of the australian cancer atlas: spatial modelling, visualisation, and reporting of estimates. _International Journal of Health Geographics_, 18(1):21, 2019. ISSN 1476-072X. doi: 10.1186/s12942-019-0185-9. URL [https://doi.org/10.1186/s12942-019-0185-9](https://doi.org/10.1186/s12942-019-0185-9).
* Richardson et al. [2004] Sylvia Richardson, Andrew Thomson, Nicky Best, and Paul Elliott. Interpreting posterior relative risk estimates in disease-mapping studies. _Environmental Health Perspectives_, 112(9):1016-1025, 2004. ISSN 0091-6765-1552-9924. doi: 10.1289/ehp.6740.
* Jacob et al. [2017] Pierre E Jacob, Lawrence M Murray, Chris C Holmes, and Christian P Robert. Better together? statistical learning in models made of modules. _arXiv preprint arXiv:1708.08719_, 2017. doi: arXiv:1708.08719.
* Shafer and Vovk [2008] Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. _Journal of Machine Learning Research_, 9(3), 2008. ISSN 1532-4435.
* Bersson and Hoff [2022] Elizabeth Bersson and Peter D Hoff. Optimal conformal prediction for small areas. _arXiv preprint arXiv:2204.08122_, 2022.
# Supplemental Materials for "A Two-stage Bayesian Small Area Estimation Method for Proportions"
James Hogg
Centre for Data Science, Queensland University of Technology
Jessica Cameron
Susanna Cramb
Australian Centre for Health Services Innovation, School of Public Health and Social Work, Queensland University of Technology
Peter Baade
Kerrie Mengersen
Centre for Data Science, Queensland University of Technology
## Appendix A Proposed method: further details
### Definition for S1 sampling variance
Here we show that the sampling variance of an S1 estimate is bounded below by the sampling variance of the corresponding direct estimate. We begin with the identity,
\[\frac{\sum_{j\in r_{i}}w_{ij}p_{ij}}{n_{i}}=\frac{\sum_{j\in r_{i}}w_{ij}y_{ij}}{n_{i}}+\frac{\sum_{j\in r_{i}}w_{ij}p_{ij}}{n_{i}}-\frac{\sum_{j\in r_{i}}w_{ij}y_{ij}}{n_{i}},\]
which can be rewritten as,
\[\hat{\mu}_{i}^{\text{S1}}=\hat{\mu}_{i}^{D}+\hat{B}_{i},\]
where
\[\hat{B}_{i}=\frac{\sum_{j\in r_{i}}w_{ij}(p_{ij}-y_{ij})}{n_{i}},\]
is the difference between \(\hat{\mu}_{i}^{\text{S1}}\) and \(\hat{\mu}_{i}^{D}\). Thus, by treating both \(y_{ij}\) and \(w_{ij}\) as fixed quantities and assuming, by definition, that \(\text{cov}\left(\hat{\mu}_{i}^{D},\hat{B}_{i}\right)=0\), the sampling variance of the S1 estimator satisfies,
\[\widehat{\mathbf{v}}\left(\hat{\mu}_{i}^{\text{S1}}\right)\leq \widehat{\mathbf{v}}\left(\hat{\mu}_{i}^{D}\right)+\widehat{\mathbf{v}}\left( \hat{B}_{i}\right),\]
where the \(\leq\) is used because simple empirical results show that, without treating \(y_{ij}\) as fixed, \(\text{cov}\left(\hat{\mu}_{i}^{D},\hat{B}_{i}\right)<0\).
The specification for \(\widehat{\mathbf{v}}\left(\hat{\mu}_{i}^{\text{S1}}\right)\) ensures that poorly specified logistic models will give large sampling variances, because \(\widehat{\mathbf{v}}\left(\hat{B}_{i}\right)\) will be large. Furthermore, as \(p_{ij}\) tends to \(y_{ij}\), \(\widehat{\mathbf{v}}\left(\hat{\mu}_{i}^{\text{S1}}\right)\approx\widehat{ \mathbf{v}}\left(\hat{\mu}_{i}^{D}\right)\).
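A minimal worked sketch of the decomposition above for a single area is given below; `w`, `y` and `p` are hypothetical vectors of rescaled weights, binary outcomes and fitted probabilities for the \(n_i\) sampled individuals in area \(i\).

```r
n_i <- length(y)

mu_D  <- sum(w * y) / n_i        # direct estimate, as defined above
mu_S1 <- sum(w * p) / n_i        # S1 estimate
B     <- sum(w * (p - y)) / n_i  # difference term B_i

all.equal(mu_S1, mu_D + B)       # TRUE, by construction
```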
### Inference methods for the TSLN model
As mentioned in Section 2.2, we fit the TSLN model in two stages. The steps are threefold:
1. Fit the individual level logistic model, and collect all \(T\) posterior draws of \(\hat{\theta}_{i}^{\text{S1}}\) and \(\gamma_{i}^{\text{S1}}\).
2. Calculate the posterior mean of \(\gamma_{i}^{\text{S1}}\), defined as \(\tilde{\gamma}_{i}^{\text{S1}}\), and the posterior variance of \(\hat{\theta}_{i}^{\text{S1}}\), denoted \(\widehat{\mathbf{v}}\left(\hat{\theta}_{i}^{\text{S1}}\right)\).
3. Take a random sample of posterior draws for each \(\hat{\theta}_{i}^{\text{S1}}\), to create a matrix of draws \(\hat{\mathbf{\theta}}^{\text{S1},\text{its}}\in\mathbb{R}^{\widehat{T}\times m}\), where \(\widetilde{T}<T\) is the number of posterior draws randomly sampled. In this work, we set \(\widetilde{T}=\frac{T}{2}\).
By letting \(\hat{\mathbf{\theta}}_{i}^{\text{S1},\text{its}}=\left(\hat{\theta}_{i1}^{\text{S1 },\text{its}},\ldots,\hat{\theta}_{i\widetilde{T}}^{\text{S1},\text{its}}\right)\) be the vector of randomly selected posterior draws for area \(i\), the TSLN-S2 model is specified as,
\[\hat{\mathbf{\theta}}_{i}^{\text{S1},\text{its}} \sim N\left(\hat{\bar{\theta}}_{i},\widehat{\mathbf{v}}\left(\hat{\theta}_{i}^{\text{S1}}\right)\right)^{1/\widetilde{T}}\qquad i=1,\ldots,m\] (A.2.1) \[\hat{\bar{\theta}}_{i} \sim N\left(\hat{\theta}_{i},\tilde{\gamma}_{i}^{\text{S1}}\right) \qquad i=1,\ldots,m,\] (A.2.2)
where \(\hat{\theta}_{i}\) follows (11) in Section 2.1.4. To ensure that the uncertainty of the parameters of the TSLN-S2 model are not mediated by the arbitrary choice of \(\widetilde{T}\), we scale the likelihood contributions by \(1/\widetilde{T}\). This approach follows similar intuition to pseudo-likelihood [1, 2].
The specification above can be viewed as a type of FH model where the variance of the sampling model is an independent combination of sampling and model variance. Thus, Eq. A.2.1 and Eq. A.2.2 can be collapsed into \(\hat{\mathbf{\theta}}_{i}^{\text{S1},\text{its}}\sim N\left(\hat{\theta}_{i}, \tilde{\gamma}_{i}^{\text{S1}}+\widehat{\mathbf{v}}\left(\hat{\theta}_{i}^{ \text{S1}}\right)\right)\).
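A minimal sketch of the three-step handoff described above is given below, assuming the TSLN-S1 stanfit `fit_s1` contains parameters named `theta_s1` (area-level logits) and `gamma_s1` (their sampling variances); both names are assumptions for illustration only.

```r
library(rstan)

theta_draws <- rstan::extract(fit_s1, pars = "theta_s1")$theta_s1   # T x m matrix (step 1)
gamma_draws <- rstan::extract(fit_s1, pars = "gamma_s1")$gamma_s1   # T x m matrix (step 1)

gamma_bar <- colMeans(gamma_draws)        # posterior means of gamma (step 2)
v_theta   <- apply(theta_draws, 2, var)   # posterior variances of theta (step 2)

T_full  <- nrow(theta_draws)
T_tilde <- T_full / 2                     # step 3: subsample half of the draws
theta_its <- theta_draws[sample(T_full, T_tilde), , drop = FALSE]

# theta_its, gamma_bar and v_theta are then passed as data to the TSLN-S2 model,
# whose likelihood contributions are scaled by 1 / T_tilde.
```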
There are simpler and also more complex approaches to estimating the parameters of the TSLN model. The approach of Das et al. [3] and Gao and Wakefield [4] was to treat the posterior means for \(\hat{\theta}_{i}^{\text{S1}}\) and \(\gamma_{i}^{\text{S1}}\) from the stage 1 model as known quantities in the stage 2 model. However, this approach neglects the uncertainty in the first-stage logistic model. In a more complex model setup, we would also add \(\mathbf{\gamma}_{i}^{\text{S1},\text{its}}\sim N\left(\tilde{\gamma}_{i}^{\text{S1 }},\widehat{\mathbf{v}}\left(\gamma_{i}^{\text{S1}}\right)\right)\), rather than treating \(\tilde{\gamma}_{i}^{\text{S1}}\) as a fixed quantity.
Further extensions could incorporate the correlation structure between posterior draws for \(\hat{\theta}_{i}^{\text{S1}}\) and \(\gamma_{i}^{\text{S1}}\). One could initially treat the diagonal elements of the variance-covariance matrices, \(\Sigma\in\mathbb{R}^{m\times m}\), as fixed, but estimate a constant correlation between all areas (i.e. use an exchangeable correlation structure). A further step would be to consider an unstructured correlation matrix, by using the empirical variance-covariance matrices, \(\widetilde{\Sigma}\in\mathbb{R}^{m\times m}\), and a multivariate normal distribution. In this case, \(\hat{\mathbf{\theta}}_{t}^{\text{S1},\text{its}}\sim\text{MVN}\left(\hat{\bar{\mathbf{\theta}}},\widetilde{\Sigma}\right)\), where \(\hat{\bar{\mathbf{\theta}}}=\left(\hat{\bar{\theta}}_{1},\ldots,\hat{\bar{\theta}}_{m}\right)\). Note that Das _et al._ [3] found that incorporating the covariances between area predictions did little to improve the bias or variance of their final estimates. In our simulation study, we found little correlation between the posterior draws for different areas.
## Existing methods: further details
### Seminal model-based methods
**Fay-Herriot (FH) model.** Let \(\hat{\theta}_{i}^{D}\) and \(\gamma_{i}^{D}\) be the known (and fixed) area level direct estimates and sampling variances for areas \(i=1,\ldots,m\) and \(\mathbf{Z}\in\mathbb{R}^{M\times(q^{a}+1)}\) be the area level design matrix with corresponding regression coefficients, \(\mathbf{\lambda}\in\mathbb{R}^{(q^{a}+1)\times 1}\), for the \(q^{a}\) area level covariates. A fully Bayesian FH model is given by,
\[\hat{\theta}_{i}^{D} \sim N(\theta_{i},\gamma_{i}^{D})\qquad i=1,\ldots,m \tag{1}\] \[\theta_{i} = \mathbf{Z}_{i}\mathbf{\lambda}+v_{i}\qquad i=1,\ldots,M\] (2) \[v_{i} \sim N(0,\sigma_{v}^{2})\qquad i=1,\ldots,M\]
where independent priors are used for the other model parameters (\(\mathbf{\lambda},\sigma_{v}\)). The small area estimate for area \(i\) is given by the marginal posterior of \(\theta_{i}\). By construction, the FH model gives \(\hat{\theta}_{i}\), the synthetic estimate in Eq. 2, when \(\gamma_{i}^{D}\) is very large, but \(\hat{\theta}_{i}^{D}\) when \(\gamma_{i}^{D}\) is very small.
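The limiting behaviour just described can be illustrated with the classical (non-Bayesian) FH composite estimator. The sketch below is a minimal illustration that assumes the variance components and regression fit are known; it is not the fully Bayesian model fitted in this work.

```python
def fh_composite(theta_direct, gamma_direct, synthetic, sigma2_v):
    """Classical Fay-Herriot composite estimator: a precision-weighted
    average of the direct estimate and the synthetic (regression) estimate.
    As gamma_direct -> 0 the weight w -> 1, so the direct estimate dominates;
    as gamma_direct grows, w -> 0 and the synthetic estimate dominates."""
    w = sigma2_v / (sigma2_v + gamma_direct)
    return w * theta_direct + (1.0 - w) * synthetic

print(fh_composite(0.20, gamma_direct=1e-4, synthetic=0.15, sigma2_v=0.01))  # close to 0.20
print(fh_composite(0.20, gamma_direct=10.0, synthetic=0.15, sigma2_v=0.01))  # close to 0.15
```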
**Nested error (NER) model.** With access to detailed individual level continuous outcome data, \(\delta_{ij}\), individual level models become another avenue for SAE [5]. The Bayesian nested error (NER) model is [6],
\[\delta_{ij} \sim N\left(\mathbf{x}_{ij}\mathbf{\beta}+e_{i},\sigma_{r}^{2}\right) \qquad j=1,\ldots,n_{i};i=1,\ldots,m \tag{3}\] \[e_{i} \sim N\left(0,\sigma_{e}^{2}\right)\qquad i=1,\ldots,M.\]
Independent priors are used for the model parameters. By assuming that \(N_{i}\) is large [7], the empirical best linear unbiased predictor (EBLUP) is calculated as,
\[\hat{\theta}_{i}=\bar{\mathbf{X}}_{i}\mathbf{\beta}+e_{i},\qquad i=1,\ldots,M\]
where we distinguish between the design matrix, \(\mathbf{x}\in\mathbb{R}^{n\times(q^{u}+1)}\), for the \(q^{u}\) individual level covariates and the area level means for the same covariates, \(\bar{\mathbf{X}}\in\mathbb{R}^{M\times(q^{u}+1)}\).
Unlike the Bayesian FH model (Eq. 1), the NER model (Eq. 3) does not incorporate the individual level sampling weights and is thus not design unbiased [8]. Although individual level models have been shown to outperform area level models [9], the core limitation of Eq. 3 is that one must have access to the individual level data and known population means for all \(q^{u}\) covariates, restricting one to census covariates only.
### Area level models for proportions
**Normal-logit model.** A simple adaptation of the FH model (Eq. 1) for proportions is achieved by accommodating the bounds of the direct proportion estimates in the link function. Liu, Lahiri, and Kalton [10] describe this approach via a normal-logit model.
\[\hat{\mu}_{i}^{D} \sim N(\mu_{i},\psi_{i}^{D})\qquad i=1,\ldots,M \tag{4}\] \[\text{logit}(\mu_{i}) = \mathbf{Z}_{i}\mathbf{\lambda}+v_{i}\qquad i=1,\ldots,M\] (5) \[v_{i} \sim N(0,\sigma_{v}^{2})\qquad i=1,\ldots,M\]
Liu, Lahiri, and Kalton [10] argue that this normal approximation is applicable only when \(n_{i}\) is sufficiently large, resulting in small sampling variances, \(\psi_{i}^{D}\), and thus approximate normality of \(\hat{\mu}_{i}^{D}\).
**Binomial model.** An alternative approach to the normal-logit model is to model the area level sample counts, \(\widehat{Y}_{i}=\sum_{j\in s_{i}}y_{ij}\), with a binomial distribution [11, 12, 13], \(\widehat{Y}_{i}\sim\text{Binomial}(n_{i},\mu_{i})\), where the linear predictor and random effects are the same as those in the normal-logit model above.
Although the binomial model does not automatically accommodate the sample design, interested readers are referred to Vandendijck _et al._ [14] and Chen, Wakefield, and Lumley [15] for binomial models that do. These authors adjust both
\(\widehat{Y}_{i}\) and \(n_{i}\) according to the sample design to derive the effective number of counts, \(\tilde{Y}_{i}\), and effective sample size, \(\tilde{n}_{i}\). In practice, their method results in non-integer values for \(\tilde{Y}_{i}\) and \(\tilde{n}_{i}\), which requires generalizing the discrete binomial distribution when fitting Bayesian models in probabilistic programming languages such as Stan.
**Beta model.** The Beta distribution, which is naturally bounded between 0 and 1, is a favourable choice for modelling small area proportions [16, 10, 17]. Although formally parameterized by two parameters, \(\kappa^{(1)}\) and \(\kappa^{(2)}\) say, that control the shape and scale, it is common to adopt the following mean-precision parameterization [18] where \(\kappa^{(1)}=\phi\mu\) and \(\kappa^{(2)}=\phi-\phi\mu=\phi-\kappa^{(1)}\). The parameter \(\mu\in(0,1)\) is the mean of the bounded outcome, whilst \(\phi>0\) can be interpreted as a precision parameter. Hence for \(Z\in(0,1)\), note that
\[\text{E}[Z] = \frac{\kappa^{(1)}}{\kappa^{(1)}+\kappa^{(2)}}=\mu\] \[\text{Var}[Z] = \frac{\kappa^{(1)}\kappa^{(2)}}{(\kappa^{(1)}+\kappa^{(2)})^{2} (\kappa^{(1)}+\kappa^{(2)}+1)}=\frac{\mu(1-\mu)}{\phi+1}.\]
Following the specification of the FH model [19], \(\phi_{i}\) is considered known and is calculated as
\[\phi_{i}=\frac{\hat{\mu}_{i}^{D}\left(1-\hat{\mu}_{i}^{D}\right)}{\psi_{i}^{D}}-1.\] (B.2.2)
The Bayesian area level FH Beta model follows directly,
\[\hat{\mu}_{i}^{D} \sim \text{Beta}\left(\kappa_{i}^{(1)}=\mu_{i}\phi_{i},\kappa_{i}^{(2)}=\phi_{i}-\kappa_{i}^{(1)}\right)\qquad i=1,\dots,m\] \[\text{logit}(\mu_{i}) = \mathbf{Z}_{i}\boldsymbol{\lambda}+v_{i}\qquad i=1,\dots,M\] \[v_{i} \sim N(0,\sigma_{v}^{2})\qquad i=1,\dots,M,\]
where the proportion estimate for area \(i\) is given by \(\mu_{i}\).
Unlike other implementations of FH Beta models [20, 17], we specify the shape parameters in terms of \(\phi_{i}\) directly rather than pre-calculating the effective sample size, \(\tilde{n}_{i}=\phi_{i}+1\). Below we show how these implementations are equivalent.
\[\hat{v}_{\text{ss}}\left(\hat{\mu}_{i}^{D}\right) = \frac{\hat{\mu}_{i}^{D}\left(1-\hat{\mu}_{i}^{D}\right)}{n_{i}}\] \[\text{deff}_{i} = \frac{\psi_{i}^{D}}{\hat{v}_{\text{ss}}\left(\hat{\mu}_{i}^{D}\right)}\] \[\phi_{i}+1 = \frac{n_{i}}{\text{deff}_{i}}=\frac{n_{i}\left(\frac{\hat{\mu}_{i}^{D}\left(1-\hat{\mu}_{i}^{D}\right)}{n_{i}}\right)}{\psi_{i}^{D}}=\frac{\hat{\mu}_{i}^{D}\left(1-\hat{\mu}_{i}^{D}\right)}{\psi_{i}^{D}}\] \[\phi_{i} = \frac{\hat{\mu}_{i}^{D}\left(1-\hat{\mu}_{i}^{D}\right)}{\psi_{i}^{D}}-1\]
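To make the equivalence concrete, the sketch below computes \(\phi_{i}\) and the implied shape parameters from hypothetical direct estimates and sampling variances; it simply restates Eq. B.2.2 and the mean-precision parameterization, with the direct estimate plugged in for the mean purely for illustration.

```python
import numpy as np

def beta_shapes(mu_direct, psi_direct):
    """phi_i = mu(1 - mu)/psi - 1 (Eq. B.2.2), evaluated at the direct
    estimate, together with the implied mean-precision shape parameters
    kappa1 = mu * phi and kappa2 = phi - kappa1."""
    phi = mu_direct * (1.0 - mu_direct) / psi_direct - 1.0
    kappa1 = mu_direct * phi
    kappa2 = phi - kappa1
    return phi, kappa1, kappa2

mu_d = np.array([0.12, 0.30, 0.45])      # hypothetical direct proportions
psi_d = np.array([0.002, 0.004, 0.010])  # hypothetical sampling variances
phi, k1, k2 = beta_shapes(mu_d, psi_d)
print(phi + 1)   # phi_i + 1 equals the effective sample size n_i / deff_i
print(k1, k2)    # both must stay positive for the model to be defined
```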
Unfortunately, the FH Beta model has several statistical and computational limitations, some of which are summarized in Table 1. The first is a constraint on the mean of the Beta distribution. In order to ensure that the shape parameters remain strictly positive,
\[\hat{\mu}_{i}\in\left(\frac{1-\sqrt{1-4\psi_{i}^{D}}}{2},\frac{1+\sqrt{1-4 \psi_{i}^{D}}}{2}\right).\] (B.2.4)
Very imprecise or unstable direct estimates can give biased posterior distributions for \(\hat{\mu}_{i}\) since its range tends to \(0\) as \(\psi_{i}^{D}\) tends to \(0.25\).
As a result of the first limitation, the bounds on \(\hat{\mu}_{i}\) must be applied to all \(M\) areas to ensure consistency between estimates for both sampled and nonsampled areas. One therefore requires a version of the GVF in (12) that can impute sampling variances for all \(M\) areas, even those with no data. Arguably it makes little sense to estimate sampling variances for areas with no sample data, even if this is only for computational reasons. In this work, we assume that \(n_{i}\propto N_{i}\), and then use \(\text{log}(N_{i})\) as the single covariate in \(\mathbf{L}\). We also set \(f(x)=\text{log}\left(\frac{x}{0.25-x}\right)\), which respects the constraint \(\psi_{i}^{D}\leq 0.25\) imposed by the Beta distribution. In practice, covariate choice for a GVF (of the form in (12)) for use in a FH Beta model is restricted to covariates available for all areas, which makes it impossible to accommodate sample sizes and direct estimates, which are expected _a priori_ to be very predictive of the sampling variances.
A computational limitation of the Beta model is its possible bimodal behaviour; a disastrous affair for standard MCMC methods. A constraint \(\kappa^{(1)},\kappa^{(2)}>1\) must be imposed to ensure the Beta distribution remains unimodal. However, in the case of the FH Beta model, this constraint places a very restrictive upper limit on the values of \(\psi_{i}^{D}\). Any areas with large sampling variances generally produce bimodal Beta distributions and thus cannot be accommodated into the FH Beta model. To improve convergence, we constrain \(\hat{\mu}_{i}\in(0.03,0.97)\). This is a valid computational constraint and is unlikely to affect model performance.
A final limitation is that the FH Beta model is particularly affected by instability of direct estimates because the likelihood for the Beta distribution becomes undefined if \(\hat{\mu}_{i}^{D}\) is exactly equal to \(0\) or \(1\).
**Empirical logistic-normal model.** To address the boundary issues with the normal-logit model (Eq. B.2.1), Mercer _et al._ [21] and then Cassy _et al._ [22] used an empirical logit transformation of the direct estimates.
\[\text{logit}\left(\hat{\mu}_{i}^{D}\right) \sim N\left(\theta_{i},\gamma_{i}^{D}\right)\qquad i=1,\ldots,m \tag{20}\] \[\gamma_{i}^{D} = \psi_{i}^{D}\left[\hat{\mu}_{i}^{D}\left(1-\hat{\mu}_{i}^{D} \right)\right]^{-2}\qquad i=1,\ldots,m\] (21) \[\theta_{i} = \text{logit}\left(\mu_{i}\right)=\mathbf{Z}_{i}\boldsymbol{ \lambda}+v_{i}\qquad i=1,\ldots,M\] (22) \[v_{i} \sim N(0,\sigma_{v}^{2})\qquad i=1,\ldots,M \tag{23}\]
The small area estimate for area \(i\) is given by \(\mu_{i}=\text{logit}^{-1}\left(\theta_{i}\right)\).
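A minimal sketch of the empirical logit transformation and its delta-method sampling variance, computed from hypothetical direct estimates, is given below.

```python
import numpy as np

def empirical_logit(mu_direct, psi_direct):
    """Empirical logit of a direct proportion estimate and its
    delta-method sampling variance: gamma = psi / [mu (1 - mu)]**2."""
    theta = np.log(mu_direct / (1.0 - mu_direct))
    gamma = psi_direct / (mu_direct * (1.0 - mu_direct)) ** 2
    return theta, gamma

theta_d, gamma_d = empirical_logit(np.array([0.08, 0.25]), np.array([0.001, 0.003]))
print(theta_d, gamma_d)
```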
### Individual level models for proportions
**Pseudo-likelihood logistic mixed model.** To extend the NER model to the binary setting and accommodate the sample design, one can use an individual level Bayesian pseudo-likelihood logistic mixed model [23, 2].
\[y_{ij} \sim \text{Bernoulli}(p_{ij})^{\bar{w}_{ij}}\qquad j=1,\ldots,n_{i},i=1,\ldots,m \tag{24}\] \[\text{logit}(p_{ij}) = \mathbf{x}_{ij}\boldsymbol{\beta}+e_{i}\qquad j=1,\ldots,N_{i},i= 1,\ldots,M \tag{25}\]
Because of the nonlinear link function, the area level proportion estimate is derived by aggregating across the observed and unobserved individuals in each small area,
\[\hat{\mu}_{i}=\frac{1}{N_{i}}\left(\sum_{j\in r_{i}}y_{ij}+\sum_{j\in r_{i}^{ C}}\hat{p}_{ij}\right), \tag{26}\]
where \(\hat{p}_{ij}\) is estimated using the posterior distributions of the model parameters, \(\boldsymbol{\beta},\sigma_{e}\), for \(j\in r_{i}^{C}\).
The motivation for Bayesian pseudo-likelihood is to ensure that the posterior distribution is similar (at least asymptotically) with that from the same specified model fit to the entire population [1]. For an arbitrary outcome vector \(\mathbf{y}\), sampling weights, \(\mathbf{w}\) and model parameters \(\boldsymbol{\theta}\), the posterior distribution is specified as
\[p\left(\boldsymbol{\theta}|\mathbf{y}\right)\propto\left[\prod_{i=1}^{N}p\left( y_{i}|\boldsymbol{\theta}\right)^{w_{i}}\right]p\left(\boldsymbol{\theta} \right), \tag{27}\]
where \(\prod_{i=1}^{N}p\left(y_{i}|\mathbf{\theta}\right)^{w_{i}}\) denotes the pseudo likelihood for the sample.
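The following sketch illustrates the two ingredients above for a single area: the weighted Bernoulli log pseudo-likelihood and the poststratified area proportion defined earlier in this section. The inputs are hypothetical and the sketch is not the Stan model used in this work.

```python
import numpy as np

def log_pseudo_likelihood(y, p, w):
    """Weighted Bernoulli log pseudo-likelihood: each sampled unit's
    log-likelihood contribution is multiplied by its sampling weight."""
    return np.sum(w * (y * np.log(p) + (1 - y) * np.log(1 - p)))

def area_proportion(y_sampled, p_nonsampled, N_i):
    """Poststratified area proportion: observed outcomes for the sampled
    units plus model-based probabilities for the nonsampled units."""
    return (np.sum(y_sampled) + np.sum(p_nonsampled)) / N_i

# Hypothetical toy inputs for one small area
y = np.array([1, 0, 0, 1])
w = np.array([2.0, 1.5, 3.0, 1.0])
p = np.array([0.6, 0.2, 0.3, 0.7])
print(log_pseudo_likelihood(y, p, w))
print(area_proportion(y, p_nonsampled=np.full(96, 0.3), N_i=100))
```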
## Simulation study: further details
### Algorithm
Below we give more details on the simulation study described in Section 4. We generated a census using steps 1-4 below, and then drew \(D=100\) unique samples (repetitions) from this census using step 5.
**Step 1**: Create a \(M\)-length vector of area level proportions, \(\mathbf{U}\), with values equally spaced between \(P^{L}\) and \(P^{U}\) (e.g. \(\mathbf{U}=(U_{1}=0.1,\ldots,U_{M}=0.4)\)). Sample a random vector of area specific population sizes, \(\mathbf{N}=(N_{1},\ldots,N_{M})\) from the set \(\{500,3000\}\). Note that \(N=\sum_{i=1}^{M}N_{i}\). Next, using a binomial distribution sample the area counts, \(Y_{i}\sim\text{Binomial}(n=N_{i},p=U_{i})\). Finally, _uncount_\(Y_{i}\) to create the binary outcome \(y_{ij}\in\{0,1\}\) for individual \(j\) in area \(i\). For example, in area 1 the vector \(\mathbf{y}_{1}=(y_{11},\ldots,y_{1N_{1}})\), will be composed of \(Y_{1}\) 1's and \(N_{1}-Y_{1}\) 0's.
**Step 2**: To simulate individual level covariates (one survey-only categorical covariate, \(\mathbf{x}^{\text{s}}\) with three groups, and one continuous covariate, \(\mathbf{x}^{\text{cs}}\) available in both the census and survey), first sample two standard normal vectors of length \(N\), denoted \(\mathbf{e}^{\text{s}},\mathbf{e}^{\text{cs}}\). Then calculate the following two continuous covariates, \(x^{\text{s}}_{*,ij}=y_{ij}+\alpha^{\text{s}}e^{\text{s}}_{ij}\) and \(x^{\text{cs}}_{*,ij}=y_{ij}+\alpha^{\text{cs}}e^{\text{cs}}_{ij}\), where \(\alpha^{\text{s}}\) and \(\alpha^{\text{cs}}\) control the predictive power of the individual level covariates. Small values of \(\alpha^{\text{s}}\) and \(\alpha^{\text{cs}}\) will provide greater correlation between the outcome and the individual level covariates. Finally, convert \(\mathbf{x}^{\text{s}}_{*}\) into a categorical covariate, \(\mathbf{x}^{\text{s}}\), using appropriate quantiles and standardize \(\mathbf{x}^{\text{cs}}_{*}\) to create \(\mathbf{x}^{\text{cs}}\).
**Step 3**: To generate an area level covariate first calculate the true area level proportions, \(\mu_{i}=\frac{1}{N_{i}}\sum_{j=1}^{N_{i}}y_{ij}\). Note that \(\boldsymbol{\mu}\) is constant for all 100 repetitions. Following a similar method to Step 2, simulate a random standard normal vector of length \(M\), denoted \(\mathbf{g}\), and calculate a continuous covariate, \(k^{\text{s}}_{*,i}=\text{logit}(\mu_{i})+ug_{i}\), where \(u\) controls the predictive power of the area level covariate (similar to \(\alpha^{\text{s}}\) and \(\alpha^{\text{cs}}\) above). As before, we standardize \(\mathbf{k}^{\text{a}}_{*}\) to create \(\mathbf{k}^{\text{a}}\), and then expand the vector and include it in the census dataset. The simulated census (of size \(N\times 5\)) has columns \(\mathbf{y},\mathbf{x}^{\text{s}},\mathbf{x}^{\text{cs}},\mathbf{k}^{\text{a}}, \mathbf{I}\), where \(\mathbf{I}\in\{1,\ldots,M\}\) is an integer vector that defines the area for each individual.
**Step 4**: We fix the sampling fraction at 0.4% and \(m=60\), and then calculate the fixed area sample sizes, \(n_{i}=\text{round}\left(\frac{100}{60}\times 0.004\times N_{i}\right)\). Next, by following the simulation method used by Hidiroglou and You [9], we simulate \(z_{ij}=\mathbb{I}\left(y_{ij}=0\right)+0.8h_{ij}\) for all individuals, where \(h_{ij}\) is a random draw from an exponential distribution with rate equal to \(1\). The values of \(z_{ij}\) are used to determine each individual's sampling probability, \(\pi_{ij}=z_{ij}\left(\sum_{j=1}^{N_{i}}z_{ij}\right)^{-1}\), and sampling weight, \(w_{ij}=n_{i}^{-1}\pi_{ij}^{-1}\), based on the fixed area sample size. The calculation of \(\pi_{ij}\) makes individuals with \(y_{ij}=0\) more likely to be sampled (i.e. the sample design is informative; see the sketch after Step 5).
**Step 5**: Select 60 out of the 100 areas proportional to their size (i.e. randomly select areas according to \(\frac{N_{i}}{N}\)). Within each selected area, draw an informative sample of size \(n_{i}\) based on the sampling probabilities, \(\pi_{ij}\). Finally, rescale the sampling weights to ensure that the sum within area \(i\) equals \(N_{i}\).
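The sketch below compresses Steps 4 and 5 for a single hypothetical area: it constructs the size variable \(z_{ij}\), the selection probabilities \(\pi_{ij}\) and weights \(w_{ij}\), draws an informative sample, and rescales the weights to sum to \(N_{i}\). It is an illustrative reimplementation, not the exact simulation code used for the study.

```python
import numpy as np

rng = np.random.default_rng(2023)

# Toy census for a single area: binary outcome for N_i individuals
N_i, n_i = 1000, 7
y = rng.binomial(1, 0.25, size=N_i)

# Step 4: size variable, selection probabilities and sampling weights
z = (y == 0).astype(float) + 0.8 * rng.exponential(scale=1.0, size=N_i)
pi = z / z.sum()              # within-area selection probabilities
w = 1.0 / (n_i * pi)          # w_ij = 1 / (n_i * pi_ij)

# Step 5: draw an informative sample and rescale weights to sum to N_i
sample = rng.choice(N_i, size=n_i, replace=False, p=pi)
w_s = w[sample] * N_i / w[sample].sum()
print(y[sample], np.round(w_s, 1))
```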
The simulation algorithm was purposely stochastic and thus complete control via the various simulation parameters was not possible. This is particularly true for the covariate effect sizes. We conducted an extensive grid search across simulation parameter values (using crude but fast frequentist methods) to determine the values that gave reasonable model coefficients. In this work, we found that \(\alpha^{\text{cs}}=1\) gave reasonable coefficients. Although \(N_{i}\) and \(n_{i}\) were fixed, the areas to be sampled and the individuals sampled within each selected area were stochastic, resulting in a different \(n\) for each repetition \(d\). Note that \(u\), which has an inverse effect on the predictive power of the area level covariate, was set to 0.05 for Sc1-Sc2 and 0.01 for Sc3-Sc6 to ensure \(\mathbf{k}^{\text{a}}\), the area level covariate, was sufficiently predictive.
### Summary
The fourth column in Table 3 is derived as follows. For each repetition, we calculate the median of the per-area ratio of the S1 and direct sampling variances using
\[100\left(\left(\frac{\bar{\gamma}_{id}^{\text{S1}}+\tilde{\text{v}}\left(\hat {\theta}_{id}^{\text{S1}}\right)}{\gamma_{id}^{D}}\right)-1\right),\]
which is expected to be greater than 0 as \(\bar{\gamma}_{id}^{\text{S1}}>\gamma_{id}^{D}\).
The fifth column is derived as follows. For each repetition we derive the ratio of the MAB,
\[\text{MAB}_{d}^{D} = \frac{1}{M}\sum_{i}|\tilde{\mu}_{id}^{D}-\mu_{i}|\] \[\text{MAB}_{d}^{\text{S1}} = \frac{1}{M}\sum_{i}|\tilde{\mu}_{id}^{\text{S1}}-\mu_{i}|\] \[\text{Ratio}_{d} = 100\left(1-\frac{\text{MAB}_{d}^{\text{S1}}}{\text{MAB}_{d}^{D}}\right)\]
for the direct and S1 estimates, where \(\tilde{\mu}_{id}^{\text{S1}}\) is the posterior median of the S1 prevalence estimate for area \(i\) in repetition \(d\). In Table 3 we give the median of Ratio\({}_{d}\) across the 100 repetitions.
### Other performance metrics
Let \(\tilde{\mu}_{id}\) denote the posterior median of a model's parameter of interest for area \(i\) for repeat \(d\) [24]. The following frequentist metrics were used in work by Chen, Wakefield, and Lumley [15], and are summarized in Table 1.
\[\text{Bias} = \frac{1}{M}\sum_{i=1}^{M}\left(\bar{\tilde{\mu}}_{i}-\mu_{i}\right)\text{, where }\bar{\tilde{\mu}}_{i}=\frac{1}{D}\sum_{d=1}^{D}\tilde{\mu}_{id}\] \[\text{Variance} = \frac{1}{M}\sum_{i=1}^{M}\left(\frac{1}{D-1}\sum_{d=1}^{D}\left(\tilde{\mu}_{id}-\bar{\tilde{\mu}}_{i}\right)^{2}\right)\] \[\text{MSE} = \left(\text{Bias}\right)^{2}+\text{Variance}\] Eq. C.3.1
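A small sketch of these repetition-wise metrics, computed from a hypothetical \(D\times M\) array of posterior medians, follows.

```python
import numpy as np

def frequentist_metrics(medians, truth):
    """medians: (D, M) array of posterior medians (repetition x area);
    truth: length-M vector of true proportions. Returns the average
    bias, variance and MSE defined in Eq. C.3.1."""
    mean_over_reps = medians.mean(axis=0)                 # length M
    bias = np.mean(mean_over_reps - truth)
    variance = np.mean(medians.var(axis=0, ddof=1))
    return bias, variance, bias ** 2 + variance

rng = np.random.default_rng(0)
truth = rng.uniform(0.1, 0.4, size=60)
medians = truth + rng.normal(0.0, 0.02, size=(100, 60))   # hypothetical
print(frequentist_metrics(medians, truth))
```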
\begin{table}
\begin{tabular}{c|c|c|c c|c c|} & & \multicolumn{2}{c|}{Sampled areas} & \multicolumn{2}{c|}{Nonsampled areas} \\ \cline{3-6} & & \multicolumn{2}{c|}{MSE} & \multicolumn{2}{c|}{MSE} \\ \hline \multirow{5}{*}{50-50} & \multirow{5}{*}{Sc1} & D & 4.73 & (12.78) & \multirow{5}{*}{0.82} & \multirow{5}{*}{(2.22)} \\ & & BETA & 2.05 & (5.54) & **0.82** & (2.22) \\ & & BIN & 3.55 & (9.59) & **3.55** & (9.59) \\ & & ELN & 1.87 & (5.05) & **0.78** & (2.11) \\ & & LOG & 0.90 & (2.43) & **0.31** & (0.84) \\ & & **TSLN** & **0.37** & (100) & **0.37** & (100) \\ \cline{2-6} & \multirow{5}{*}{Sc2} & D & 4.62 & (12.16) & \multirow{5}{*}{0.83} & \multirow{5}{*}{(2.18)} \\ & & BETA & 1.99 & (5.24) & **0.83** & (2.18) \\ & & BIN & 3.54 & (9.32) & **3.54** & (9.32) \\ & & ELN & 1.85 & (4.87) & **0.83** & (2.18) \\ & & LOG & 0.88 & (2.32) & **0.31** & (0.82) \\ \cline{2-6} & & TSLN & 0.38 & (100) & **0.38** & (100) \\ \hline \multirow{5}{*}{Rare} & \multirow{5}{*}{Sc3} & D & 3.77 & (10.77) & \multirow{5}{*}{0.84} & \multirow{5}{*}{(2.69)} \\ & & BETA & 1.09 & (3.11) & **0.94** & (2.69) \\ & & BIN & 1.49 & (4.26) & **1.48** & (4.23) \\ & & ELN & 2.84 & (8.11) & **2.03** & (5.80) \\ & & LOG & 1.08 & (3.09) & **0.43** & (1.23) \\ \cline{2-6} & & TSLN & **0.35** & (100) & **0.35** & (100) \\ \cline{2-6} & & & 3.77 & (10.19) & (2.95) & **0.94** & (2.47) \\ & & BIN & 1.49 & (4.03) & **1.48** & (3.89) \\ & & ELN & 2.84 & (7.68) & **2.03** & (5.34) \\ & & LOG & 1.08 & (2.92) & **0.43** & (1.13) \\ & & TSLN & **0.37** & (100) & **0.38** & (100) \\ \hline \multirow{5}{*}{Common} & \multirow{5}{*}{Sc5} & D & 3.06 & (14.57) & \multirow{5}{*}{**0.19**} & \multirow{5}{*}{(0.86)} \\ & & BETA & 0.78 & (3.71) & **0.19** & (0.86) \\ & & BIN & 3.00 & (14.29) & **3.00** & (13.64) \\ & & ELN & 0.51 & (2.43) & **0.24** & (1.09) \\ & & LOG & 0.34 & (1.62) & **0.17** & (0.77) \\ & & TSLN & **0.21** & (100) & **0.22** & (1.00) \\ \cline{2-6} & & D & 3.31 & (17.42) & \multirow{5}{*}{**0.16**} & \multirow{5}{*}{(0.84)} \\ & & BETA & 0.92 & (4.84) & **0.16** & (0.84) \\ & & BIN & 2.99 & (15.74) & **2.99** & (15.74) \\ & & ELN & 0.63 & (3.32) & **0.23** & (1.21) \\ & & LOG & 0.35 & (1.84) & **0.17** & (0.89) \\ \cline{2-6} & & TSLN & **0.19** & (100) & **0.19** & (1.00) \\ \hline \end{tabular}
\end{table}
Table 1: Frequentist MSE (\(\times 10^{-2}\)) (Eq. C.3.1). D denotes the Hajek [25] direct estimator given in (1). Bold numbers represent the lowest MSE value in each column and scenario for sampled and nonsampled areas. Gray numbers in brackets give the ratio of the value to that of the TSLN model.
\begin{table}
\begin{tabular}{c|c|c c c c c|c c c|} & & \multicolumn{4}{c|}{Individual level} & \multicolumn{2}{c|}{Area level} \\ \cline{3-10} & & \(\mathbf{x}^{\text{s}}(2)\) & \(\mathbf{x}^{\text{s}}(3)\) & \(\mathbf{x}^{\text{cs}}\) & \(\sigma_{e}\) & \(\mathbf{k}^{\text{a}}\) & \(\sigma_{v}\) \\ \hline \multirow{5}{*}{50-50} & \multirow{5}{*}{Sc1} & BETA & \multirow{5}{*}{3.07} & \multirow{5}{*}{1.20} & \multirow{5}{*}{0.65} & 0.36 & 0.60 \\ & & BIN & & & & 0.35 & 0.14 \\ & & ELN & & & & 0.44 & 0.74 \\ & & LOG & & 1.20 & 0.65 & 0.41 \\ & & TSLN-S1 & 1.49 & 3.07 & 1.22 & 0.72 & 0.39 \\ & & TSLN-S2 & & & & 0.38 & 0.12 \\ \cline{2-10} & \multirow{5}{*}{Sc2} & BETA & \multirow{5}{*}{0.74} & \multirow{5}{*}{1.57} & \multirow{5}{*}{1.20} & 0.37 & 0.58 \\ & & BIN & & & & 0.34 & 0.14 \\ & & ELN & & & & 0.46 & 0.74 \\ & & LOG & & 1.20 & 0.64 & 0.42 \\ & & TSLN-S1 & 0.74 & 1.57 & 1.20 & 0.65 & 0.41 \\ & & TSLN-S2 & & & & 0.40 & 0.11 \\ \hline \multirow{5}{*}{Rare} & \multirow{5}{*}{Sc3} & BETA & \multirow{5}{*}{0.75} & \multirow{5}{*}{1.59} & \multirow{5}{*}{1.22} & 0.47 & 0.65 \\ & & BIN & & & & 0.49 & 0.21 \\ & & ELN & & & & 0.96 & 2.10 \\ & & LOG & & & 1.22 & 0.93 & 0.57 \\ & & TSLN-S1 & 1.35 & 3.17 & 1.27 & 0.97 & 0.59 \\ & & TSLN-S2 & & & & 0.60 & 0.17 \\ \cline{2-10} & \multirow{5}{*}{Sc4} & BETA & \multirow{5}{*}{0.75} & \multirow{5}{*}{1.59} & \multirow{5}{*}{1.25} & 0.56 & 0.55 \\ & & BIN & & & & 0.52 & 0.14 \\ & & ELN & & & & 0.58 & 0.39 \\ & & LOG & & & 1.17 & 0.46 & 0.53 \\ & & TSLN-S1 & 1.65 & 3.01 & 1.19 & 0.55 & 0.55 \\ & & TSLN-S2 & & & & 0.52 & 0.11 \\ \cline{2-10} & \multirow{5}{*}{Sc6} & BETA & \multirow{5}{*}{0.79} & \multirow{5}{*}{1.49} & \multirow{5}{*}{1.17} & 0.57 & 0.58 \\ & & BIN & & & & 0.51 & 0.14 \\ & & ELN & & & & 0.60 & 0.42 \\ & & LOG & & & 1.15 & 0.50 & 0.54 \\ & & TSLN-S1 & 0.79 & 1.49 & 1.17 & 0.51 & 0.53 \\ & & TSLN-S2 & & & & 0.55 & 0.10 \\ \end{tabular}
\end{table}
Table 2: Median of posterior medians of model parameters across all \(D=100\) repetitions by model and scenario. Given the scale of the data is different depending on the model, coefficients cannot be easily compared. The first group of \(\mathbf{x}^{\text{s}}\) is the reference group, \(\mathbf{x}^{\text{s}}(2)\) refers to the regression coefficient for the indicator for the second group of \(\mathbf{x}^{\text{s}}\). By construction the coefficients for levels 2 and 3 of \(\mathbf{x}^{\text{s}}\) are larger for Sc1, Sc3 and Sc5.
## Application: further details
### Covariates
**TSLN-S1.** The breadth of individual level covariates available in the NHS is enormous. We found an initial set of candidate covariates using pseudo-likelihood and lme4 [26]. As mentioned in Section 2.1.2, we preferred models with a lower \(SR\) and an \(ALC\) closer to 1.
On top of those covariates included in the LOG model (sex, age, and marital status), we used a variety of others, listed below. Note that we square root transformed, denoted as sqrt, some of the continuous covariates to reduce their skew.
* Individual level categorical covariates with the number of categories given in brackets. For details see Table 3.
* High school (6)
* Kessler psychological distress score (5)
* Qualifications (9)
* Self-assessed health (5)
* Labor force status (6)
* Number of daily smokers in the household (3)
* Tenure type of household (5)
* Whether indigenous members in household (3)
* Area level
* Index of Relative Socio-Economic Disadvantage (IRSD) from the Socio-Economic Indexes for Areas (SEIFA) by the ABS (ABS 2016): 10 decile groups of increasing socio-economic disadvantage (SA2s in group 1 are classified as the most disadvantaged)
* State: 4 groups
* Occupation: proportion of SA2 who are professionals
* Indigenous status (sqrt): proportion of SA2 who identify as Aboriginal and/or Torres Strait Islander
* Income: proportion of SA2 with a high weekly personal income (specifically between AUD $1,500 and AUD $1,750)
* Unemployment rate (sqrt): proportion of persons in labor force in SA2 who are unemployed
* Household composition: proportion of households in SA2 with four people
We also found a significant improvement in model fit by adding a hierarchical prior on a risk factor categorical covariate constructed from every unique combination of sex and age and the following binary risk factors: insufficient physical activity, insufficient fruit and vegetable consumption, overweight, and risky alcohol consumption. All the binary risk factors were defined according to current Australian health guidelines with definitions given in Table 4.
**TSLN-S2.** Deriving variable selection metrics for the TSLN-S2 model requires additional thought; the input data are vectors of posterior draws, making the definition of the "data" difficult to determine. To align with previous work where the uncertainty inherent in fitting the first stage model is not considered, we use an approximation to the LOOCV. The loo package requires log-likelihood, \(\mathbb{L}\left(.\right)\), evaluations for all data points and posterior draws [34]. Below we restate Eq. A.2.1 and Eq. A.2.2 as a reminder,
\[\hat{\mathbf{\theta}}_{i}^{\text{S1},\text{its}} \sim N\left(\hat{\bar{\theta}}_{i},\widehat{\mathbf{v}}\left(\hat{\theta}_{i}^{\text{S1}}\right)\right)^{1/\widetilde{T}}\qquad i=1,\ldots,m\] \[\hat{\bar{\theta}}_{i} \sim N\left(\theta_{i},\tilde{\gamma}_{i}^{\text{S1}}\right)\qquad i=1,\ldots,m.\]
\begin{table}
\begin{tabular}{l l} & Certificate I/II \\ & Certificate not further defined \\ & No non-school qualification \\ & Level not determined \\ \hline Labor force status & Employed, working full-time \\ & Employed, working part-time \\ & Unemployed, looking for full-time work \\ & Unemployed, looking for part-time work \\ & Unemployed, looking for full-time or part-time work \\ & Not in the labor force \\ \hline High school & Year 12 or equivalent \\ & Year 11 or equivalent \\ & Year 10 or equivalent \\ & Year 9 or equivalent \\ & Year 8 or below \\ & Never attended school \\ \hline Kessler psychological distress score & Low/moderate level of psychological distress (5-11) \\ & High/very high level of psychological distress (12-25) \\ & Not applicable \\ & Not asked \\ & Unable to determine \\ \hline Tenure type of household & Owner without a mortgage \\ & Owner with a mortgage \\ & Renter \\ & Other \\ & Not stated \\ \hline Self-assessed health & Excellent \\ & Very good \\ & Good \\ & Fair \\ & Poor \\ \hline Number of daily smokers in the household & Less than 2 \\ & More than 1 \\ & Not stated \\ \hline Whether indigenous members in household & Non-indigenous only household \\ & Indigenous only household \\ & Mixed household \\ \hline \end{tabular}
\end{table}
Table 3: Categories for the individual level covariates used in the TSLN and LOG models for current smoking prevalence. Most of the categories for these covariates were derived by the Australian Bureau of Statistics. For details of the definitions, we refer the reader to publicly available data dictionaries. An exception is the Number of daily smokers in the household variable, which was collapsed from its original form to ensure the variable did not _perfectly_ predict current smokers.
We take the expectation of \(\hat{\bar{\theta}}_{i}\) and treat it as data. In this way, we use
\[\mathbb{L}\left(\theta_{it};\mathbb{E}\left(\hat{\bar{\theta}}_{i}\right),\tilde{\gamma}_{i}^{\mathrm{S1}}\right)\qquad i=1,\dots,M;t=1,\dots,T,\]
to derive the LOOCV using the loo package. Although this approach to deriving the LOOCV ignores the uncertainty in fitting the first stage model, it aligns with previous two-stage approaches [3, 4].
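The sketch below only builds the \(T\times M\) matrix of pointwise log-likelihood evaluations described above; in this work that matrix is then passed to the R loo package, which is not reproduced here. All inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def pointwise_loglik(theta_draws, theta_bar_mean, gamma_s1):
    """T x M matrix of evaluations L(theta_it; E[theta_bar_i], gamma_i^S1).
    theta_draws: (T, M) posterior draws of theta_i from the TSLN-S2 model;
    theta_bar_mean: length-M posterior means treated as the 'data';
    gamma_s1: length-M posterior means of the stage 1 sampling variances."""
    return norm.logpdf(theta_bar_mean, loc=theta_draws, scale=np.sqrt(gamma_s1))

rng = np.random.default_rng(3)
T, M = 200, 10
draws = rng.normal(-1.0, 0.2, size=(T, M))   # hypothetical posterior draws
data = rng.normal(-1.0, 0.1, size=M)         # hypothetical E[theta_bar_i]
gam = np.full(M, 0.04)
print(pointwise_loglik(draws, data, gam).shape)   # (200, 10)
```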
The 2016 Australian census collects a large amount of demographic data, which can be used as SA2-level covariates in our models. For a single demographic factor, such as qualification, there are several categories, resulting in numerous proportion variables for qualification alone (bachelor degree, postgraduate degree, etc.). Instead of relying on a set of single proportion covariates and risking missing important predictors, we used principal components analysis on 84 SA2-level census covariates. We found that the first six principal components explained 63% of the variation and, when used as covariates, provided superior fit to models using the actual census proportions. Thus, the following eight SA2-level variables were included as fixed effects in the TSLN-S2 and ELN models: IRSD, state, and principal components one to six.
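A sketch of this dimension-reduction step is given below, assuming a hypothetical matrix of SA2-level census proportions; standardizing before the principal components analysis is an assumption of the sketch rather than a statement about the exact preprocessing used.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
census = rng.uniform(0, 1, size=(1630, 84))   # hypothetical SA2 x covariate matrix

# Standardize the census proportions and keep the first six components
X = StandardScaler().fit_transform(census)
pca = PCA(n_components=6).fit(X)
pcs = pca.transform(X)

print(pca.explained_variance_ratio_.sum())  # share of variation retained
print(pcs.shape)                            # (1630, 6) area level covariates
```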
### Spatial prior
The BYM2 prior proposed by Riebler _et al._[35] is a linear combination of an intrinsic CAR prior [36] and an unstructured normal prior. It places a single variance parameter, \(\sigma_{\delta}^{2}\), on the combined components with the help of a mixing parameter, \(\rho\in(0,1)\), that represents the amount of spatially structured as opposed to unstructured residual variation. The BYM2 prior is
\[\delta_{i} = \sigma_{\delta}\left(s_{i}\sqrt{\rho/\kappa}+v_{i}\sqrt{1-\rho} \right)\qquad i=1,\dots,M\] Eq. D.2.1 \[s_{i} \sim N\left(\frac{\sum_{k=1}^{M}W_{ik}s_{k}}{\sum_{k=1}^{M}W_{ik}}, \frac{1}{\sum_{k=1}^{M}W_{ik}}\right)\qquad i=1,\dots,M\] \[v_{i} \sim N(0,1)\qquad i=1,\dots,M,\]
where \(\mathbf{W}\in\mathbb{R}^{M\times M}\) is the spatial weight matrix, which defines the neighborhood structure of the SA2s. As is common in disease mapping [37], we use the binary contiguous specification where \(W_{ik}=1\) if area \(i\) and area \(k\) are neighbors
\begin{table}
\begin{tabular}{l l l} \hline \hline & Definition & Notes \\ \hline Overweight & By using the common cut-offs [27, 28, 29], those with a BMI greater or equal to & This includes those who are obese. \\ & 25 are coded as 1. & \\ Alcohol & Those who did not meet the revised 2020 & The guidelines stipulate that adults \\ & Australian National Health and Medical Research Council (NHMRC) guidelines [30] are coded as 1. & \\ Physical activity & Those who did not meet the 2014 Australian Department of Health Physical & The NHMRC guidelines, which closely \\ & Activity guidelines [31] were coded as 1. & \\ & & \\ & & \\ & & \\ & & \\ & & \\ \end{tabular}
\end{table}
Table 4: Definitions for the binary variables used in the risk factor categorical covariate. The categorical covariate has every unique combination of the four binary variables in this table and age and sex.
and zero otherwise. The parameter \(\kappa\) is a known scaling factor, while \(\rho\) is generally estimated from the data [35]. Following the recommendations by Gomez-Rubio _et al._[38], Mohadjer _et al._[39] and Banerjee, Carlin, and Gelfand [40], the ICAR prior for \(\mathbf{s}=(s_{1},\ldots,s_{M})\) is declared for all areas and thus the \(s_{i}\)'s for non-sampled areas are implicitly imputed during MCMC. Following advice by Gomez-Rubio _et al._[38], we also tried fitting an unstructured, ICAR and BYM2 prior at a higher administrative level; specifically the statistical areas level 3 (SA3s) which are constructed from aggregated SA2s. However, we found no discernible improvement in model fit.
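The following sketch combines draws of the structured and unstructured components into the BYM2 random effect of Eq. D.2.1 for a toy set of four areas; the adjacency matrix, parameter values and the stand-in ICAR draw are all hypothetical.

```python
import numpy as np

def bym2_effect(s, v, rho, sigma_delta, kappa):
    """Combine the scaled structured (s) and unstructured (v) components
    into the BYM2 random effect of Eq. D.2.1."""
    return sigma_delta * (s * np.sqrt(rho / kappa) + v * np.sqrt(1.0 - rho))

# Hypothetical binary contiguity matrix for four areas in a row
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(5)
s = rng.normal(size=4)   # stands in for a draw from the ICAR prior over W
v = rng.normal(size=4)   # unstructured N(0, 1) component
print(bym2_effect(s, v, rho=0.6, sigma_delta=0.5, kappa=0.7))
```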
### Benchmarking
Let \(\widehat{C}^{D}_{k[i]}\) and \(\widehat{\mathbf{v}}\left(\widehat{C}^{D}_{k[i]}\right)\) be the direct Hajek [25] estimate and sampling variance for state \(k=1,\ldots,4\) (see (1) and (2) in Section 1.1). The values of \(\widehat{C}^{D}_{k[i]}\) will be the benchmark values, with the goal that our population-weighted model-based estimate of this quantity
\[\widetilde{C}_{k[i]}=\frac{\sum_{i\in S_{k}}\hat{\mu}_{i}N_{i}}{\sum_{i\in S_ {k}}N_{i}}\qquad k=1,\ldots,4\] (D.3.1)
agrees at least approximately with \(\widehat{C}^{D}_{k[i]}\), where \(\hat{\mu}_{i}\) is the modeled prevalence estimate for SA2 \(i\). Note that \(S_{k}\) is the subset of integers that determines which SA2s are contained within state \(k\) (i.e. \(\mathbb{I}\left(i\in S_{k}\right)=1\) if SA2 \(i\) is in state \(k\) and zero otherwise). Inexact, fully Bayesian benchmarking takes the form,
\[\widetilde{C}_{k[i]}\sim N\left(\widehat{C}^{D}_{k[i]},\left(\epsilon\times \sqrt{\widehat{\mathbf{v}}\left(\widehat{C}^{D}_{k[i]}\right)}\right)^{2} \right)\qquad k=1,\ldots,4,\] (D.3.2)
where \(0<\epsilon<1\) is a discrepancy measure. Setting \(\epsilon=0\) gives exact benchmarking, whilst \(\epsilon=1\) enables the model to accommodate the benchmarks in line with their respective accuracy. We enforced stronger concordance between \(\widetilde{C}_{k[i]}\) and \(\widehat{C}^{D}_{k[i]}\) by fixing \(\epsilon=0.3\). With our SA4 level direct estimates, \(\epsilon=0.3\) gives standard deviations in Eq. D.3.2 that range from 0.002 to 0.003.
Given that SA2-level estimates from the LOG model are derived via poststratification (e.g. a post-model calculation), we cannot use Bayesian benchmarking. Instead, we use an exact ratio-adjusted estimator to benchmark the LOG model estimates [41]. We first derived an adjustment factor, \(R^{B}_{k[i]t}\), for the \(k\)th state and \(t\)th posterior draw,
\[\widetilde{Y}_{k[i]t} = \sum_{i\in S_{k}}\hat{\mu}_{it}N_{i}\] \[\widehat{Y}_{k[i]} = \widehat{C}^{D}_{k[i]}N_{k}\] \[R^{B}_{k[i]t} = \frac{\widetilde{Y}_{k[i]t}}{\widehat{Y}_{k[i]}},\]
where \(\widetilde{Y}_{k[i]t}\) and \(\widehat{Y}_{k[i]}\) are the modeled and direct estimates of the smoking counts in state \(k\), respectively. Note that \(N_{k}\) is the population in state \(k\). Then the benchmarked LOG model estimates are calculated as,
\[\hat{\mu}^{B}_{it}=\frac{\hat{\mu}_{it}}{R^{B}_{k[i]t}},\] (D.3.3)
which, by design, ensures that \(\sum_{i=1}^{M}\mathbb{I}\left(i\in S_{k}\right)\hat{\mu}^{B}_{it}N_{i}/N_{k}= \widehat{C}^{D}_{k[i]}\) for all posterior draws. Finally, posterior summaries are applied to \(\hat{\mu}^{B}_{i}\). Note that Zhang and Bryant [42] found that exact benchmarking provided larger reductions in posterior variance than that of inexact Bayesian benchmarking.
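A compact sketch of this draw-by-draw ratio adjustment for a single state follows; the draws, populations and state-level direct estimate are hypothetical.

```python
import numpy as np

def ratio_benchmark(mu_draws, N, C_direct):
    """Exact ratio-adjusted benchmarking for one state. mu_draws is a
    (T, m_k) array of posterior draws of SA2 proportions in the state,
    N the SA2 populations and C_direct the state-level direct proportion.
    The returned draws have a population-weighted mean equal to C_direct
    for every posterior draw."""
    Y_model = mu_draws @ N          # modeled smoking counts per draw
    Y_direct = C_direct * N.sum()   # direct count for the state
    R = Y_model / Y_direct          # adjustment factor per draw
    return mu_draws / R[:, None]

rng = np.random.default_rng(6)
mu = rng.uniform(0.08, 0.20, size=(50, 5))            # hypothetical draws
N = np.array([3000.0, 5000.0, 2500.0, 4000.0, 6000.0])
bench = ratio_benchmark(mu, N, C_direct=0.13)
print(np.allclose(bench @ N / N.sum(), 0.13))         # True
```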
### Comparative performance
To compare models, we derived modeled and direct estimates at a higher administrative level: statistical areas level 4 (SA4). There are 65 SA4's across the east coast of Australia, with a median sample and population size of 148 and 220,000, respectively.
The motivation for comparing the models at the SA4 rather than SA2 level, is that we can plausibly treat the SA4 level direct estimates as the _truth_; approximately 75% of the direct estimates have coefficients of variation below 25%. Note that although the SA4-level estimates are far more precise and stable than those at the SA2 level, we acknowledge the limitations inherent in treating these as the truth. Thus, these performance results are illustrative at best, but presented here for completeness.
The SA4 level modeled and direct estimates are calculated using population-weighted averages of the model-based SA2-level estimates and (1), respectively. We compare the models using RRMSE, ARB, and coverage (see Section 4.0.1). In addition, given that the SA4-level estimates also have uncertainty, we quantify the overlap of the direct and modeled intervals. Overlap probabilities give the proportion of the modeled interval that is contained within the direct estimate interval. A high probability is preferred and a value of 1 denotes that the modeled interval is entirely within the direct estimate interval. To summarize the overlap probabilities across the 65 SA4s, we take a weighted mean where the weights are the inverse direct estimate standard deviations.
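A minimal sketch of the overlap probability for a single pair of intervals is given below; the interval endpoints are hypothetical.

```python
import numpy as np

def overlap_probability(model_lo, model_hi, direct_lo, direct_hi):
    """Proportion of the modeled interval contained within the direct
    estimate interval; 1 means the modeled interval lies entirely inside."""
    overlap = np.maximum(0.0, np.minimum(model_hi, direct_hi)
                              - np.maximum(model_lo, direct_lo))
    return overlap / (model_hi - model_lo)

print(overlap_probability(0.10, 0.14, 0.09, 0.16))  # 1.0
print(overlap_probability(0.10, 0.14, 0.12, 0.20))  # 0.5
```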
Fig. 1 displays equivalence plots and Table 5 compares the mean RRMSE and ARB across all SA4s and summarises the credible interval sizes, coverage and overlap probabilities for the three models.
Similar to the findings in the simulation study, Table 5 shows that the TSLN model provides smaller MRRMSE and credible interval sizes at the SA4 level. Although the TSLN provides poor Bayesian coverage and higher MARB, our model is preferable in terms of overlap.
### Further plots for case study
Figure 2: Comparison of the modeled SA2 level current smoking prevalence estimates by IRSD categories. Each boxplot summarises the posterior medians of the prevalence for a specific IRSD decile and model.
Figure 3: Comparison of the modeled SA2 level current smoking prevalence estimates by remoteness.
Figure 4: Choropleth maps displaying the modeled estimates of current smoking prevalence for 1630 SA2s on the east coast of Australia. For each model, we mapped the posterior medians and width of the 95% HDI. Note that some values are lower than the range of color scales shown — for these values, the lowest color is used. |
2307.06557 | The kinematics of young stellar population in the W5 region of the
Cassiopeia OB6 association: implication on the formation process of stellar
associations | The star-forming region W5 is a major part of the Cassiopeia OB6 association.
Its internal structure and kinematics may provide hints of the star formation
process in this region. Here, we present a kinematic study of young stars in W5
using the Gaia data and our radial velocity data. A total 490 out of 2,000
young stars are confirmed as members. Their spatial distribution shows that W5
is highly substructured. We identify a total of eight groups using the k-means
clustering algorithm. There are three dense groups in the cavities of H II
bubbles, and the other five sparse groups are distributed at the ridge of the
bubbles. The three dense groups have almost the same ages (5 Myr) and show a
pattern of expansion. The scale of their expansion is not large enough to
account for the overall structure of W5. The three northern groups are, in
fact, 3 Myr younger than the dense groups, which indicates the independent star
formation events. Only one group of them shows the signature of feedback-driven
star formation as its members move away from the eastern dense group. The other
two groups might have formed in a spontaneous way. On the other hand, the
properties of two southern groups are not understood as those of a coeval
population. Their origins can be explained by dynamical ejection of stars and
multiple star formation. Our results suggest that the substructures in W5
formed through multiple star-forming events in a giant molecular cloud. | Beomdu Lim, Jongsuk Hong, Jinhee Lee, Hyeong-Sik Yun, Narae Hwang, Byeong-Gon Park | 2023-07-13T04:54:05Z | http://arxiv.org/abs/2307.06557v1 | The kinematics of young stellar population in the W5 region of the Cassiopeia OB6 association: implication on the formation process of stellar associations
###### Abstract
The star-forming region W5 is a major part of the Cassiopeia OB6 association. Its internal structure and kinematics may provide hints of the star formation process in this region. Here, we present a kinematic study of young stars in W5 using the Gaia data and our radial velocity data. A total 490 out of 2,000 young stars are confirmed as members. Their spatial distribution shows that W5 is highly substructured. We identify a total of eight groups using the k-means clustering algorithm. There are three dense groups in the cavities of H ii bubbles, and the other five sparse groups are distributed at the ridge of the bubbles. The three dense groups have almost the same ages (5 Myr) and show a pattern of expansion. The scale of their expansion is not large enough to account for the overall structure of W5. The three northern groups are, in fact, 3 Myr younger than the dense groups, which indicates the independent star formation events. Only one group of them shows the signature of feedback-driven star formation as its members move away from the eastern dense group. The other two groups might have formed in a spontaneous way. On the other hand, the properties of two southern groups are not understood as those of a coeval population. Their origins can be explained by dynamical ejection of stars and multiple star formation. Our results suggest that the substructures in W5 formed through multiple star-forming events in a giant molecular cloud.
Star formation (1569) - Stellar kinematics (1608) - Stellar associations (1582) - Stellar dynamics (1596) - Open star clusters (1160) +
Footnote †: journal: AJ
Beomdu Lim, Jongsuk Hong, Jinhee Lee, Hyeong-Sik Yun, Narae Hwang, and Byeong-Gon Park
## 1 Introduction
Star formation takes place on scales of a few parsecs to several hundreds of parsecs in a hierarchical way (Elmegreen et al., 2000). Stellar associations are superb laboratories for studying the star formation process on such different spatial scales, as they are the prime star-forming sites distributed along the spiral arm structure of their host galaxies (Battinelli et al., 1996; Lada and Lada, 2003; Gouliermis, 2018). OB associations are particularly interesting stellar systems because they contain a number of massive stars (Ambartsumian, 1947), which are rare in the solar neighborhood. OB associations are, in general, composed of a single or multiple stellar clusters and a distributed stellar population (Blaauw, 1964; Koenig et al., 2008; Lim et al., 2019, 2020). This internal structure may be closely associated with their formation processes.
Expansion of stellar clusters has been steadily detected in many associations (Kuhn et al., 2019; Lim et al., 2019, 2020, 2021). These findings seem to be the key features to understand the unboundedness of associations according to a classical model for the dynamical evolution of embedded clusters after rapid gas expulsion (Tutukov, 1978; Hills, 1980; Lada et al., 1984; Kroupa et al., 2001; Banerjee and Kroupa, 2013, 2015). Based on the
observational data, Lim et al. (2020) suggested that the young stellar population distributed over 20 pc in the W4 region of the Cassiopeia OB6 association originates from escaping stars from the central open cluster IC 1805.
However, cluster expansion alone cannot explain the origin of substructures commonly found in stellar associations. Such substructures are composed of stellar groups (or subclusters) (Kuhn et al., 2014) that are kinematically distinct (Lim et al., 2019, 2021; Lim et al., 2022). The formation of substructures can naturally be explained by star formation along filaments in almost all turbulent clouds (Andre, 2015). A range of gas densities leads to different levels of star formation efficiencies. High-density regions are the sites of cluster formation (Bonnell et al., 2011; Kruijssen, 2012). Gas clumps have different sizes and velocity dispersions depending on their virial states, which is observed as the so-called size-line width relation (Larson, 1981). There are attempts to detect this signature from substructures in stellar associations (Lim et al., 2019; Ward et al., 2020).
Since Elmegreen & Lada (1977) proposed the so-called collect and collapse scenario, a number of observational studies have reported the signatures of feedback-driven star formation, such as the morphological relationship between remaining gas structures and young stellar objects (YSOs), and their age sequences (Fukuda et al., 2002; Sicilia-Aguilar et al., 2004; Zavagno et al., 2007; Koenig et al., 2008; Lim et al., 2014, etc.). Recently, the physical causality between the first and the second generations of stars was assessed by using gas and stellar kinematics (Lim et al., 2018, 2021). Meanwhile, a series of theoretical work showed that feedback from massive stars predominantly suppresses subsequent star formation by dispersing remaining clouds (Dale et al., 2012, 2013; Dale et al., 2015). This result is supported by recent observations (Yi et al., 2018, 2021). The cores in the \(\lambda\) Orionis cloud exposed to a massive O-type star have higher temperatures, lower densities, lower masses, smaller sizes, and lower detection rates of dense gas tracers (N\({}_{2}\)H\({}^{+}\), HCO\({}^{+}\), and H\({}^{13}\)CO\({}^{+}\)) than those in the adjacent star-forming clouds Orion A and B, which implies the former cloud has less favorable conditions for core formation than the others. Therefore, further observational studies are required to test the collect and collapse scenario.
The massive star-forming region (SFR) W5, which is a major part of the Cassiopeia OB6 association, is an ideal target to study the formation process of stellar associations. The previously determined distances to this SFR range from 1.7 kpc to 2.3 kpc (Sharpless, 1955; Johnson et al., 1961; Becker & Fenkart, 1971; Georgelin & Georgelin, 1976; Moffat, 1972; Loktin et al., 2001; Chauhan et al., 2011; Lim et al., 2014). Its age is younger than 5 Myr (Karr & Martin, 2003; Koenig & Allen, 2011; Lim et al., 2014). This SFR is divided into the two regions W5 East and W5 West as it is surrounded by two giant H ii bubbles (Karr & Martin, 2003). The major sources of ionization are four O-type stars, BD +60 586 (O7.5V), HD 17505 (O6.5III((f))), HD 17520 (O9V), and HD 237019 (O8V) (Morgan et al., 1955; Conti & Leep, 1974; Hillwig et al., 2006). The presence of numerous YSOs has also been confirmed using extensive imaging surveys (Carpenter et al., 2000; Koenig et al., 2008). Most YSOs form clusters, while some are spread over several tens of parsecs (Koenig et al., 2008) as seen in many associations (Blaauw, 1964; Koenig & Leisawitz, 2014).
Early studies of the bright-rimmed cloud IC 1848A (W5A/S201) at the border of the giant H ii region suggested that star formation in the cloud had been triggered by the expansion of the H ii region (Loren & Wootten, 1978; Thronson et al., 1980). Wilking et al. (1984) also found another possible site of feedback-driven star formation at the northern cloud (W5NW). The double-peaked \({}^{13}\)CO (\(J=1-0\)) line they observed was interpreted as a result of the passage of a shock driven by the ionization front. Later, it was found that a number of YSOs and cometary nebulae were distributed along the H ii bubble (Karr & Martin, 2003; Koenig et al., 2008, 2008). In addition, YSOs far away from the ionizing sources tend to be at an earlier evolutionary stage of protostars (Koenig et al., 2008). These results were interpreted by feedback-driven star formation models (Elmegreen & Lada, 1977; Sandford et al., 1982).
The presence of multiple clusters, distributed stellar population, and the young stars distributed along the border of H ii regions suggest that this SFR might have been formed through multiple processes. The absence of kinematic information has hindered our understanding of its formation process. However, the parallax and proper motion (PM) data obtained from the Gaia mission (Gaia Collaboration et al., 2016) along with radial velocities (RVs) allow us to evaluate the membership of young stars and further investigate their kinematic properties. In this study, we aim to understand the formation process of this SFR. Data that we used are described in Section 2. In Section 3, the scheme of genuine members is addressed. We present the results of this study in Section 4 and discuss the star formation process within W5 in Section 5. Finally, our results are summarized in Section 6 along with our conclusions.
## 2 Data
### Selection of member candidates
Most OB associations are distributed along the Galactic plane (Wright, 2020), and therefore a large number of field interlopers are observed together in the same field of view. Member selection is a procedure of crucial importance for obtaining reliable results, as emphasized by our previous observational studies (e.g., Lim et al., 2020, 2021; Lim et al., 2022). We selected members through two steps. First, the candidates of young star members were identified using several spectrophotometric criteria. Second, the final members were selected using parallax and PM cuts.
We first gathered four different catalogues. Massive O- and B-type stars found in SFRs are probable member candidates because of their short lifetime, especially for O-type stars. We obtained the lists of such O- and B-type stars from several databases of MK classification (Wenger et al., 2000; Reed, 2003; Skiff, 2009; Maiz Apellaniz et al., 2013). A catalogue of 192 O- and B-type stars in the W5 region was created after some duplicates were removed. Koenig et al. (2008) published a list of 17,771 infrared sources distributed over the W5 region. We took only 2,062 sources showing infrared excess. Later, a catalogue of 408 YSO candidates was released by Koenig & Allen (2011). This catalogue contains the spectral types and \(H\alpha\) equivalent widths of the stars. We considered stars with \(H\alpha\) equivalent widths smaller than \(-10\) Å and \(0\) Å as \(H\alpha\) emission stars and candidates, respectively. The last catalogue contains a total of 567 members in W5 West selected using \(UBVI\) and \(H\alpha\) photometry (Lim et al., 2014).
We cross-matched the four catalogues to create a master catalogue of member candidates. All O- and B-type stars were found in the catalogue of Koenig et al. (2008) except one. A total of 564 out of 567 member candidates from Lim et al. (2014) have infrared counterparts. Among the three candidates without infrared counterparts, two are H\(\alpha\) emission stars, and the other one is an early-type star. Since they are highly probable members, we added these four sources to the master catalogue. All the YSO candidates from Koenig & Allen (2011) were included in the infrared source list of Koenig et al. (2008). The master catalogue contains a total of 2,376 member candidates, of which 2,000 have counterparts in the catalogue of Gaia Early Data Release 3 (EDR3; Gaia Collaboration et al., 2021).
The parallaxes of Gaia EDR3 have zero-point offsets as a function of magnitude, color, and ecliptic latitude (Lindegren et al., 2021). We corrected such offsets for the parallaxes of individual member candidates using the public Python code (Lindegren et al., 2021; [https://gitlab.com/iccub/public/gaiadr3_zeropoint](https://gitlab.com/iccub/public/gaiadr3_zeropoint)). From the catalogue of member candidates, we excluded from the analysis stars with negative parallaxes, stars with a close companion (duplication flag = 1 or RUWE \(>\) 1.4), and stars without astrometric parameters. Figure 1 displays the color-magnitude diagram (CMD) of the member candidates.
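As an illustration, the sketch below applies the astrometric quality cuts described above to a table of Gaia EDR3 columns; the column names follow the Gaia archive convention and are assumptions of this sketch, and the zero-point correction itself is not reproduced here.

```python
import pandas as pd

def quality_cuts(gaia: pd.DataFrame) -> pd.DataFrame:
    """Drop stars with negative parallaxes, likely close companions
    (duplicated source flag or RUWE > 1.4) and missing astrometry.
    Column names are assumed to follow the Gaia archive convention."""
    ok = (
        gaia["parallax"].notna() & (gaia["parallax"] > 0)
        & (~gaia["duplicated_source"]) & (gaia["ruwe"] <= 1.4)
        & gaia["pmra"].notna() & gaia["pmdec"].notna()
    )
    return gaia[ok]

# Hypothetical usage:
# candidates = quality_cuts(pd.read_csv("member_candidates_gaia.csv"))
```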
### Radial velocities
We performed multi-object spectroscopic observations of 273 YSO candidates on September 3, October 30 in 2020, and October 22, 27, and 29 in 2021 using high-resolution (\(R\sim 34,000\)) multi-object spectrograph Hectochelle (Szentgyorgyi et al., 2011) on the 6.5m telescope of the MMT observatory. All the spectra were taken with the RV31 filter that covers a spectral range of 5150 to 5300 A in a \(2\times 2\) binning mode. For one observation setup, several tens of fibers were assigned to the YSO candidates, and the others were directed toward blank sky to obtain sky spectra. The exposure time for each frame was set to 35 minutes. A minimum of three frames were taken for the same observation setup to eliminate cosmic rays and achieve as high a signal-to-noise ratio as possible. For calibration, dome flat and ThAr lamp spectra were also obtained just before and after the target observation.
Figure 1: Color-magnitude diagram of stars in W5 region. Blue dots, black triangles, green squares, black open squares, red dots, and red open circles represent early-type stars, Class I, Class II, YSOs with a transitional disk, H\(\alpha\) emission stars, and H\(\alpha\) emission star candidates, respectively. The photometric data of these stars were taken from Gaia EDR3 (Gaia Collaboration et al., 2021).
We reduced the raw mosaic frames using the IRAF1/MSCRED packages following standard reduction procedures. One-dimensional spectra were subsequently extracted from the reduced frames using the dofiber task in the IRAF/SPECRED package. Target spectra were then flattened using dome flat spectra. The solutions for the wavelength calibration obtained from ThAr spectra were applied to both target and sky spectra.
Footnote 1: Image Reduction and Analysis Facility is developed and distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy under operative agreement with the National Science Foundation.
Some spectra were affected by scattered light because our observations were conducted under bright sky conditions. The scattered light was unevenly distributed over the field of view (1\({}^{\circ}\) in diameter), resulting in a spatial variation of sky levels. Hence, we constructed a map of sky levels for a given setup following the procedure used in our previous study (see Lim et al., 2021 for details). Sky spectra scaled at the target positions were subtracted from the target spectra. The sky-subtracted spectra for the same target were then combined into a single spectrum. Finally, all target spectra were normalized by using continuum levels traced from a cubic spline interpolation. We rejected the spectra of 115 targets from subsequent analysis. Among them, the spectra of 108 targets had signals close to the sky background levels, and therefore these spectra were insufficient to measure RVs. The spectra of six targets were dominated by continuum, and that of the other one was dominated by emission lines.
We measured the RVs of the remaining 158 YSO candidates using a cross-correlation technique. Synthetic stellar spectra for the solar abundance and \(\log g=4\) were generated in a wide temperature range of 3,500 to 10,000 K using SPECTRUM v2.76 (Gray & Corbally, 1994)2 based on a grid of the ODFNEW model atmospheres (Castelli & Kurucz, 2004). These synthetic spectra were used as template spectra. We derived the cross-correlation functions between the synthetic spectra and the observed spectra of the YSO candidates with the xcsao task in the RVSAO package (Kurtz & Mink, 1998). The velocities at the strongest correlation peaks were adopted as the RVs of the given YSO candidates. The errors on RVs were estimated using the equation below (Kurtz & Mink, 1998):
Footnote 2: [http://www.appstate.edu/](http://www.appstate.edu/) grayro/spectrum/spectrum.html
\[\epsilon(\mathrm{RV})=\frac{3w}{8\left(1+h/(\sqrt{2}\sigma_{a})\right)} \tag{1}\]
where \(w\), \(h\), and \(\sigma_{a}\) represent the full widths at half-maximum of cross-correlation functions, their amplitudes, and the root mean square of antisymmetric components, respectively. Rapidly rotating stars, in general, have large uncertainties in RVs because they have large full widths at half-maximum of cross-correlation functions. Also, the RV errors exponentially increase as the r-statistics of cross-correlation functions (\(r=h/(\sqrt{2}\sigma_{a})\); Tonry & Davis, 1979) decrease. Indeed, it was confirmed that cross-correlation functions with r-statistics smaller than 6 yield very large errors of RVs (\(>5\) km s\({}^{-1}\)). We thus excluded RV measurements where the r-statistic of the cross-correlation function is less than 6. The median error of the RVs is about 1.2 km s\({}^{-1}\). The RVs of YSO candidates were then converted to velocities in the local standard of rest frame using the IRAF/RVCORRECT task.
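A short sketch of the uncertainty estimate in Equation (1) and the r-statistic cut is given below; the cross-correlation measurements are hypothetical.

```python
import numpy as np

def rv_error(w, h, sigma_a):
    """RV uncertainty from Equation (1): eps = 3w / (8 (1 + r)), with
    the r-statistic r = h / (sqrt(2) * sigma_a)."""
    r = h / (np.sqrt(2.0) * sigma_a)
    return 3.0 * w / (8.0 * (1.0 + r)), r

# Hypothetical cross-correlation measurements (w in km/s)
w = np.array([25.0, 60.0])
h = np.array([0.9, 0.4])
sig = np.array([0.05, 0.06])
eps, r = rv_error(w, h, sig)
keep = r >= 6.0          # discard measurements with low r-statistics
print(np.round(eps, 2), np.round(r, 1), keep)
```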
## 3 Member Selection
We selected the member candidates based on the spectrophotometric properties of young stars. However, there may be a number of nonmembers in the catalogue of member candidates (see Lim et al., 2020, 2021; Lim et al., 2022). For instance, the two bright infrared sources (\(G_{\mathrm{RP}}<10\) mag and \(G_{\mathrm{BP}}-G_{\mathrm{RP}}>3\)) in Figure 1 are probably asymptotic giant branch stars in the Galactic disk because they are too bright to be the pre-main-sequence members of this SFR. It is, thus, necessary to filter out additional nonmembers using the Gaia parallaxes and PMs of stars.
Figure 2: Parallax (left) and PM (right) distributions of member candidates. The left panel displays the parallaxes and their associated errors. We plot stars that are brighter than 18 mag in the \(G_{\mathrm{RP}}\) band and have parallaxes greater than three times the associated errors. In the left panel, dashed lines indicate the boundary used to search for genuine members between 1.0 and 3.5 kpc. The right panel exhibits the PM distribution of member candidates between 1.0 and 3.5 kpc. Only stars with parallaxes greater than their associated errors are considered for analysis. The ellipse in the right panel shows the region confined within five times the standard deviation from the weighted mean PMs, where the inverse of the squared PM error is used as the weight. The selected members are shown by red dots.
In order to exclude stars with very large measurement errors in parallax and PM, we used stars that are brighter than 18 mag in the \(G_{\rm RP}\) band and have parallaxes greater than three times the associated errors. Note that a total of 863 candidates are fainter than 18 mag. The left panel of Figure 2 displays the parallax distribution of the member candidates. Most member candidates have parallaxes smaller than 1 mas (\(d>1\) kpc). The distances determined in previous studies range from 1.7 to 2.3 kpc (Sharpless, 1955; Johnson et al., 1961; Becker and Fenkart, 1971; Georgelin and Georgelin, 1976; Moffat, 1972; Loktin et al., 2001; Chauhan et al., 2011; Lim et al., 2014). However, we considered candidates between 1.0 and 3.5 kpc in order to include as many probable members as possible. Their PM distribution is shown in the right panel of Figure 2.
The PM distribution shows a strong concentration of member candidates around (\(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)) = (0 mas yr\({}^{-1}\), 0 mas yr\({}^{-1}\)). Most of them may be genuine members. In order to remove PM outliers, the statistical clipping method described in Lim et al. (2022) was applied to the member candidates.
We excluded member candidates with PMs deviating by more than five times the standard deviation (5\(\sigma\)) from the mean PMs. This relatively loose criterion allows us to retain some walkaway stars. The mean and standard deviation were then redetermined from the remaining member candidates, and this process was iterated until the statistics converged.
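The clipping can be written compactly as below; this is a minimal sketch assuming the PM components and their errors are NumPy arrays, and the convergence tolerance is an arbitrary illustrative choice.

```python
import numpy as np

def clip_pm_members(pmra, pmdec, pmra_err, pmdec_err, nsig=5.0, tol=1e-4):
    """Iteratively reject PM outliers beyond nsig-sigma from the
    error-weighted mean PM until the dispersions converge."""
    keep = np.ones(pmra.size, dtype=bool)
    prev = np.inf
    while True:
        wra, wde = 1.0 / pmra_err[keep]**2, 1.0 / pmdec_err[keep]**2
        mra = np.average(pmra[keep], weights=wra)
        mde = np.average(pmdec[keep], weights=wde)
        sra, sde = pmra[keep].std(), pmdec[keep].std()
        # elliptical nsig-sigma boundary around the weighted mean PM
        d2 = ((pmra - mra) / (nsig * sra))**2 + ((pmdec - mde) / (nsig * sde))**2
        keep = d2 <= 1.0
        if abs((sra + sde) - prev) < tol:
            return keep, (mra, mde), (sra, sde)
        prev = sra + sde
```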
Figure 3 displays the CMD of the selected members. However, the bright infrared source (\(G_{\rm RP}\) = 7.1 and \(G_{\rm BP}-G_{\rm RP}\) = 3.2) was still selected as a member. As mentioned above, this star may be an asymptotic giant branch star with kinematics similar to those of the W5 members at almost the same distance. We excluded this star from our member list. A total of 490 candidates were finally selected as members. The members are listed in Table 1.
Member candidates that we selected using optical and infrared data are active stars with warm circumstellar disks. There are, in fact, a number of diskless YSOs in this SFR. Therefore, it is necessary to determine whether or not our member sample is representative for statistical analysis.
Such diskless YSOs without infrared excess emission have been identified using X-ray data in many previous studies (Getman et al., 2005; Flaccomio et al., 2006; Townsley et al., 2011; Caramazza et al., 2012, etc.). However, no extensive X-ray survey has yet been performed for W5.
Figure 4: Spatial distribution of members in W5. The size of dots is proportional to the brightness of individual stars. The positions of stars are relative to the reference coordinate R.A. = 02\({}^{\rm h}\) 54\({}^{\rm m}\) 45\(\fs\)80, decl. = +60\({}^{\circ}\) 22\({}^{\prime}\) 04\(\farcs\)3 (J2000).
Figure 5: Distributions of distances (left) and RVs (right). In order to compute a reliable distance to W5, we used members with parallaxes larger than 10 times the associated errors. Bin sizes of 0.2 kpc and 1.2 km s\({}^{-1}\) were used for the distance and RV distributions, respectively. The red curves represent the best-fit Gaussian distributions.
Figure 3: CMD of the selected members. The symbols are the same as in Figure 1.
Several hundred X-ray sources were detected only at the eastern edge of W5 East (AFGL 4029) (Townsley et al., 2019). We found a total of 257 X-ray counterparts in the Gaia EDR3 catalogue (Gaia Collaboration et al., 2021), of which 56, brighter than 18 mag in \(G_{\rm RP}\), are genuine members according to our member selection criteria. The member catalogue of this study contains 15 of these 56 X-ray sources. The PMs of the members in our catalogue were compared with those of the 56 members with X-ray emission. They have a median PM (\(-0.110\) mas yr\({}^{-1}\), \(-0.055\) mas yr\({}^{-1}\)) similar to that of the X-ray members (\(-0.122\) mas yr\({}^{-1}\), \(-0.092\) mas yr\({}^{-1}\)). Hence, we confirm that the members selected in this study are a representative sample of the young stellar population in W5.
We present the spatial distribution of members in Figure 4. The W5 region shows a high level of substructure: there are several groups of stars with high surface density as well as a distributed stellar population. The identification of the stellar groups is addressed in detail in a later section. Figure 5 shows the distance and RV distributions of the members. The distances of individual members were obtained by inverting the zero-point-corrected Gaia parallaxes (Gaia Collaboration et al., 2021; Lindegren et al., 2021). Each of the two distributions was fit with a Gaussian. From the centers of the best-fit Gaussians we obtained a distance to W5 of \(2.1\pm 0.1\) (s.d.) kpc and a systemic RV of \(-37.8\pm 3.3\) km s\({}^{-1}\). Only RV data within three times the standard deviation of the mean RV were used, to minimize the contribution of close binaries.
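A minimal sketch of the Gaussian fits is given below, assuming the member distances and RVs are stored in arrays named dist_kpc and rv_lsr (hypothetical names); the bin widths follow the caption of Figure 5.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_gaussian_hist(values, bin_width):
    """Bin the values with the given bin width and fit a Gaussian to the
    histogram counts; returns the centre and width of the best-fit Gaussian."""
    bins = np.arange(values.min(), values.max() + bin_width, bin_width)
    counts, edges = np.histogram(values, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), np.median(values), values.std()]
    popt, _ = curve_fit(gaussian, centres, counts, p0=p0)
    return popt[1], abs(popt[2])

# d0, d_sd = fit_gaussian_hist(dist_kpc, 0.2)   # expected ~2.1 +/- 0.1 kpc
# v0, v_sd = fit_gaussian_hist(rv_lsr, 1.2)     # expected ~-37.8 +/- 3.3 km/s
```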
Figure 6: Relation between the number of groups and the inertia values. The number of groups was obtained from the elbow of the inertia curve. The elbow value of eight was determined from the intersection of the two red straight lines. See the main text for details.
## 4 Results
### Substructure
Our previous studies have shown that the stellar groups constituting the substructure of associations are spatially and kinematically distinct (Lim et al., 2019, 2020, 2021; Lim et al., 2022), i.e., they are individually different physical systems. We identified stellar groups in W5 by means of the unsupervised machine-learning algorithm k-means clustering (Lloyd, 1982). For a given number of groups, this algorithm finds the partition that minimizes the within-group variance (i.e., the most compact groups). We used four-dimensional parameters as input data: R.A., decl., \(\mu_{\alpha}\cos\delta\), and \(\mu_{\delta}\). To find the optimal number of groups, we tested numbers of groups from 1 to 20 and computed the inertia, defined as the sum of squared distances of the stars to the centroid of their nearest group.
Figure 6 displays the variation of the inertia with the number of stellar groups. Adopting too small a number of groups prevents real stellar groups from being identified, while adopting too large a number leads to an overestimation of the number of genuine groups. The location where the slope of the inertia curve changes abruptly is referred to as the elbow, and the elbow value is useful for determining the optimal number of groups. We assumed that the inertia curve can be approximated by the combination of two straight lines (red lines in the figure), obtained from least-squares fits over the ranges of 2 to 9 and 10 to 19 groups, respectively. The elbow value was determined as the intersection of the two lines. We finally adopted eight groups constituting the substructure of W5. The identified groups are plotted with different colors in Figure 7 and were named according to R.A. order (A to H).
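The sketch below reproduces this procedure with scikit-learn; X stands for the four-column input array (R.A., decl., \(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)), and whether and how the columns are rescaled before clustering is an implementation choice not specified here.

```python
import numpy as np
from sklearn.cluster import KMeans

def elbow_number_of_groups(X, k_max=20):
    """Inertia curve for k = 1..k_max and the elbow estimated as the
    intersection of two straight lines fitted over k = 2-9 and 10-19."""
    ks = np.arange(1, k_max + 1)
    inertia = np.array([KMeans(n_clusters=k, n_init=10).fit(X).inertia_
                        for k in ks])
    p1 = np.polyfit(ks[1:9], inertia[1:9], 1)     # k = 2..9
    p2 = np.polyfit(ks[9:19], inertia[9:19], 1)   # k = 10..19
    k_elbow = (p2[1] - p1[1]) / (p1[0] - p2[0])   # intersection of the lines
    return k_elbow, inertia

# labels = KMeans(n_clusters=8, n_init=10).fit_predict(X)  # final grouping
```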
The groups C (green), D (blue), and F (purple) are stellar clusters denser than the other groups in W5. The former two are located in W5 West, while the latter is centered on W5 East. Early-type stars in these three groups may be the main ionizing sources of W5. There are five sparse groups of stars around these clusters. The groups A, E, and H are located at the border of the northern H ii bubbles, and the other two groups, B and G, are found in the southern part of the H ii regions. We summarize the properties of the individual groups in Table 2.
In order to quantify the structural properties of the eight groups in W5, the minimum spanning tree (MST) technique was applied to each group. The MST is the set of edges with the minimum total length connecting all data points, and it is often used to measure the degree of mass segregation (Allison et al., 2009). In this work, we measured the dimensionless parameter \(\Lambda\), defined as the ratio of the standard deviation of the edge lengths to the mean edge length (Hong et al., 2017). A larger \(\Lambda\) indicates that a group contains substructure such as a core, clumps, or filamentary structures, while a lower \(\Lambda\) indicates no structural trend in the group; for example, \(\Lambda\sim 0.46\) when the group follows a uniform random distribution. The results of the MST analysis can be found in Table 2. Although the MST results for individual groups depend on the memberships assigned by the clustering algorithm, they clearly show that dense groups have larger \(\Lambda\) and sparse groups have lower \(\Lambda\). In particular, the MST results for three groups (B, E, and G) show no significant structure.
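For reference, the \(\Lambda\) parameter can be computed from the projected member positions as in the sketch below; the use of SciPy's minimum_spanning_tree on the pairwise-distance matrix is our own illustrative implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_lambda(xy):
    """Ratio of the standard deviation to the mean of the MST edge
    lengths for one group; xy is an (N, 2) array of member positions."""
    dist = squareform(pdist(xy))          # pairwise separations
    mst = minimum_spanning_tree(dist)     # sparse matrix holding the N-1 edges
    edges = mst.toarray()
    edges = edges[edges > 0]
    return edges.std() / edges.mean()     # ~0.46 for a uniform random field
```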
We additionally tested three other clustering algorithms: Density-Based Spatial Clustering of Applications with Noise (DBSCAN; Ester et al., 1996), hierarchical DBSCAN (HDBSCAN; Campello et al., 2013), and Agglomerative Clustering. Taking the k-means result as a reference, we tuned the parameters of each algorithm to produce comparable groupings. Scikit-learn (Pedregosa et al., 2011) was used for k-means clustering, DBSCAN, and Agglomerative Clustering, while the software developed by McInnes et al. (2017) was used for HDBSCAN.
DBSCAN finds groups based on the local density of data points. Its main parameter is the radius (\(\epsilon\)) within which neighboring data points are counted. \(\epsilon\) values from 3.5 to 3.0 identify 7 to 10 groups. When \(\epsilon\) is 3.5, DBSCAN identifies two major groups in the east and west (D+C and F+H in Figure 7); as \(\epsilon\) decreases, these groups tend to split. DBSCAN could not identify the sparse groups (A, B, E, and G) in any case; their members were classified as noise.
DBSCAN is limited to identifying groups of similar density, and therefore we also applied HDBSCAN (Campello et al., 2013), which, unlike DBSCAN, can identify clusters of varying density. Its major parameter is the minimum sample size (\(n\)), the number of neighboring points required for a point to be considered a core; larger \(n\) gives more conservative clustering. We tested \(n\) from 5 to 20. Similar to DBSCAN, HDBSCAN identified the denser major groups in the east and west (C, D, and H+F), which split when smaller \(n\) is used. The two sparse groups B and G were identified only with \(n=5\), while the other sparse groups A and E were not identified in any case.
We then tested a hierarchical clustering algorithm, Agglomerative Clustering, whose major parameter is the number of groups. This algorithm was tested for numbers of groups (\(n_{\rm groups}\)) from 4 to 14. When \(n_{\rm groups}=7\), six groups (A, B, C, D, G, and H) were identified, and E+F was identified as a single group. Larger \(n_{\rm groups}\) values split groups such as A, D, and H, while smaller values tend to merge groups; when \(n_{\rm groups}=4\), groups B and G are merged into C and F, respectively. While the details differ, the major results (eastern, western, and southern sparse groups) are similar to those obtained from the k-means clustering.
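The three alternative algorithms can be run on the same input array as in the short sketch below (the hdbscan package of McInnes et al. 2017 is assumed); the quoted \(\epsilon\) and \(n\) values only have the stated meaning for the particular scaling of the input columns used in our analysis.

```python
from sklearn.cluster import DBSCAN, AgglomerativeClustering
import hdbscan

# X: the same four-column (R.A., decl., pmra*cos(dec), pmdec) array as above
labels_dbscan = DBSCAN(eps=3.5).fit_predict(X)                    # -1 = noise
labels_hdbscan = hdbscan.HDBSCAN(min_samples=5).fit_predict(X)
labels_agglo = AgglomerativeClustering(n_clusters=7).fit_predict(X)
```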
All of the clustering algorithms we used properly identified the dense groups, although the memberships of stars at the boundaries of the host groups differ slightly. The sparse groups were regarded as noise by the DBSCAN and HDBSCAN algorithms, while the Agglomerative Clustering and k-means algorithms identified them as real groups. The clustering results should therefore be adopted with caution, because the internal structures of SFRs are in fact very complex and related to their formation processes, and further information is required to determine whether the identified groups are real physical systems. The ages and kinematics of the group members may provide additional constraints on the identities of the individual groups. Since some sparse groups appear to be real groups associated with remaining clouds, adopting the k-means result is suitable for the purpose of this study.
### Relative Ages
The star formation history of W5 can be inferred from the age distribution of the stellar groups. The representative ages of individual groups are, in general, estimated from a comparison of the overall features of their CMDs with stellar evolutionary models. In particular, the luminosity of the main-sequence turn-on (MSTO) point (pre-main-sequence to main-sequence) is sensitive to the age of a given stellar group.
Figure 8 displays the distance-corrected CMDs of the individual groups with theoretical isochrones. The age of the group C (the open cluster IC 1848, green dots) is known to be about 1–5 Myr (Moffat, 1972; Karr & Martin, 2003; Koenig & Allen, 2011; Lim et al., 2014). We estimated the age of this cluster by fitting the MESA isochrones, which account for the effects of stellar rotation (Choi et al., 2016; Dotter, 2016), to its CMD. A distance modulus (DM) of 11.6 mag (2.1 kpc) was applied to the 5 Myr isochrone, and the minimum extinction \(A_{V}\) of 1.86 mag was adopted from our previous study (Lim et al., 2014). This reddened isochrone fits the ridge of the main sequence as well as the luminosity of the MSTO well. We therefore adopt 5 Myr as the age of this group. A number of stars are brighter than the 5 Myr isochrone at given colors; these may be highly reddened stars (see the blue curve in the figure).
We compared the MSTOs and the overall CMD features of the individual stellar groups with those of the group C. The CMD morphology of the two dense groups D (blue) and F (purple) is very similar to that of the group C, and therefore the three dense groups should have similar ages (5 Myr). The members of the sparse groups A, E, and H are brighter and redder than those of the group C. We plotted the 2 Myr isochrone reddened by 3.00 mag, which passes through the middle of their CMDs. The color spread around the isochrone implies the presence of differential reddening across each group; indeed, these groups are located at the border of the H ii region where large amounts of gas remain. This result indicates an age difference between the northern sparse groups and the dense groups, and therefore the group division by the k-means clustering is meaningful.
On the other hand, the representative ages of the two southern groups B (red) and G (pink) are unclear because their MSTOs are not well defined. The overall morphology of their CMDs is somewhat different from those of the other groups: there is no star between 2 and 4 mag in \(G_{\rm RP}-DM\), the stars brighter than 2 mag lie close to the 2 Myr isochrone, while the fainter stars seem to be older (\(>5\) Myr). We speculate that these two groups may not be groups of coeval stars with a common origin.
Figure 7: Identification of stellar groups in W5. A total of eight groups were identified by means of the k-means clustering algorithm and are shown in different colors. The size of the dots is proportional to the brightness of the individual stars.
### Kinematics
W5 has systemic PMs of \(-0.273\) mas yr\({}^{-1}\) and \(-0.333\) mas yr\({}^{-1}\) along R.A. and decl., respectively. We investigated the kinematics of the individual groups. The tangential velocities (\(V_{\rm R.A.}\) and \(V_{\rm decl.}\)) were computed from the PMs multiplied by the distance of 2.1 kpc. Figure 9 exhibits the velocity distributions with respect to R.A. and decl. Since most groups are distributed along the east–west direction rather than the north–south direction, trends are easier to probe in the position–velocity plane along R.A.
There is no clear trend between \(V_{\rm R.A.}\) and the positions of stars along R.A., while a gradual variation of \(V_{\rm decl.}\) along R.A. is detected: \(V_{\rm decl.}\) decreases at a rate of 0.08 km s\({}^{-1}\) pc\({}^{-1}\). No significant large-scale variation in RV was found. We present the median velocities (\(V_{\rm R.A.,med}\), \(V_{\rm decl.,med}\), and RV\({}_{\rm LSR,med}\)) in Table 2.
We computed the standard deviations of the velocities of the individual groups after excluding outliers. The median measurement errors were adopted as the typical velocity errors. The velocity dispersion of a given group was then obtained from the quadratic subtraction of the typical velocity error from the standard deviation.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c}
\hline
Group & R.A. (2000) & decl. (2000) & \(\mu_{\alpha}\cos\delta_{\rm med}\) & \(\mu_{\delta_{\rm med}}\) & \(V_{\rm R.A.,med}\) & \(V_{\rm decl.,med}\) & RV\({}_{\rm LSR,med}\) & \(\sigma(V_{\rm R.A.})\) & \(\sigma(V_{\rm decl.})\) & \(\sigma({\rm RV_{\rm LSR}})\) & N & \(\Lambda\) \\
 & [deg] & [deg] & [mas yr\({}^{-1}\)] & [mas yr\({}^{-1}\)] & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & & \\
\hline
A & 41.996133 & 60.703094 & -0.215 & -0.341 & -2.1 & -3.4 & -29.1 & 5.3 & 2.8 & \(\cdots\) & 38(1) & 1.08 \\
B & 42.537271 & 59.593970 & -0.616 & -0.479 & -6.1 & -4.8 & \(\cdots\) & 6.9 & 6.1 & \(\cdots\) & 13(0) & 0.62 \\
C & 42.811804 & 60.393039 & -0.387 & -0.442 & -3.9 & -4.4 & -36.4 & 2.4 & 2.1 & 3.5 & 159(15) & 1.13 \\
D & 43.522667 & 60.606917 & -0.138 & -0.330 & -1.4 & -3.3 & -38.3 & 3.5 & 2.4 & 2.7 & 115(14) & 1.08 \\
E & 44.537468 & 60.884745 & -0.291 & -0.044 & -2.9 & -0.4 & -39.2 & 6.8 & 5.2 & \(\cdots\) & 25(4) & 0.87 \\
F & 44.785335 & 60.561629 & -0.277 & -0.038 & -2.8 & -0.4 & -39.0 & 2.4 & 2.0 & 2.4 & 84(25) & 1.12 \\
G & 45.152338 & 59.595502 & 0.205 & -0.631 & 2.0 & -6.3 & \(\cdots\) & 5.6 & 8.7 & \(\cdots\) & 10(0) & 0.74 \\
H & 45.331372 & 60.488579 & -0.110 & -0.055 & -1.1 & -0.5 & -37.4 & 3.1 & 3.8 & 0.6 & 46(12) & 1.73 \\
\hline
\end{tabular}
Note. – Column (1): Group name. Columns (2) and (3): Position of groups. Columns (4) and (5): Median PMs along R.A. and decl. Columns (6–8): Median tangential velocity along R.A., median tangential velocity along decl., and median RV. Columns (9–11): Dispersion of tangential velocity along R.A., dispersion of tangential velocity along decl., and RV dispersion. Column (12): Number of group members; the numbers in parentheses are the numbers of group members with RV measurements. Column (13): Result of the MST analysis.
\end{table}
Table 2: Properties of individual groups
Figure 8: CMDs of the stellar groups. The colors of the dots represent the members of the given groups, corresponding to the color codes shown in Figure 7. The CMD of the group C is plotted with gray dots for comparison. The red and blue curves show the 5 Myr isochrones reddened by total extinctions (\(A_{V}\)) of 1.86 and 3.00 mag, respectively, while the 2 Myr isochrone reddened by \(A_{V}=3.00\) mag is plotted as gray curves. The CMDs of the individual stellar groups are compared with that of the open cluster group C (green).
For the RVs, we computed the velocity dispersions of the four groups that have more than ten stars with RV measurements. The velocity dispersions are presented in Table 2.
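A sketch of this estimate for a single velocity component is given below; the 3\(\sigma\) outlier rejection threshold is an illustrative choice, since the text above only specifies that outliers were excluded.

```python
import numpy as np

def intrinsic_dispersion(v, v_err, nsig=3.0):
    """Velocity dispersion of one component for a group: clip outliers,
    then quadratically subtract the median measurement error from the
    standard deviation of the clipped sample."""
    keep = np.abs(v - np.median(v)) < nsig * v.std()
    sd = v[keep].std()
    typical_err = np.median(v_err[keep])
    return np.sqrt(max(sd**2 - typical_err**2, 0.0))
```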
The dense, populous groups C, D, and F tend to have smaller velocity dispersions than the other groups. In addition, the motions of stars in these groups appear nearly isotropic, given the similar dispersions of the tangential velocities and the RV. The group A shows a large velocity dispersion in \(V_{\rm R.A.}\) but a small one in \(V_{\rm decl.}\) (see also Figure 9). The groups B and G have particularly large velocity dispersions.
We investigated the motions of the individual groups within W5. Figure 10 shows the median PMs of the individual groups relative to the systemic motion of W5. Since the number of members in W5 West is about twice that in W5 East, the systemic motion is weighted toward the western groups, and the groups A, C, and D therefore have relative PMs close to the systemic motion of W5. The eastern groups E, F, and H are moving toward the north, while the southern groups B and G are radially receding from the center of this association.
Finally, the motions of individual members relative to the centers of their host groups are probed in Figure 11. The panels in the first and third rows of the figure show the relative PM vectors of the individual members of each group. The group members show different patterns of motion: some members of the groups C, D, and F appear to be moving outward from their group centers, while the PM vectors of the members of the other groups have rather random directions.
In order to quantify the direction of their motion, we computed the vectorial angle (\(\Phi\)), defined as the angle between the position vector of a member from the group center and its relative PM vector (Lim et al., 2019, 2020, 2021; Lim et al., 2022). \(\Phi=0^{\circ}\) indicates that a star is radially receding from the group center, while \(\Phi=180^{\circ}\) means that it is sinking inward.
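A minimal sketch of the \(\Phi\) computation for one group is shown below; it returns the unsigned angle between 0\({}^{\circ}\) and 180\({}^{\circ}\), and a signed convention (as implied by the peak near \(-20^{\circ}\) quoted later) would additionally use the sign of the 2D cross product.

```python
import numpy as np

def vectorial_angle(x, y, pmx, pmy, xc, yc, pmxc, pmyc):
    """Angle (deg) between the position vector of each member from the
    group centre (xc, yc) and its PM vector relative to the group motion
    (pmxc, pmyc): 0 deg = radial expansion, 180 deg = infall."""
    rx, ry = x - xc, y - yc
    px, py = pmx - pmxc, pmy - pmyc
    cosphi = (rx * px + ry * py) / (np.hypot(rx, ry) * np.hypot(px, py))
    return np.degrees(np.arccos(np.clip(cosphi, -1.0, 1.0)))
```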
The panels in the second and fourth rows of Figure 11 display the \(\Phi\) distributions with respect to the projected distance from the center of each group. More than 40% of the members of the groups C, D, and F have \(\Phi\) values close to \(0^{\circ}\), indicating that these groups are expanding. The groups B and G also show a pattern of expansion, but their numbers of members are too small to confirm this. The members of the other groups show random \(\Phi\) distributions, indicating random motion. The biggest difference between the dense groups and the two southern groups is that the dense groups have a nearly isotropic inner region, which suggests that these groups are self-gravitating.
Figure 12 displays the integrated intensity map of the \({}^{12}\)CO \(J=1-0\) emission taken from Heyer et al. (1998). The map clearly shows the cavities in W5, where the groups C, D, and F hosting massive stars are located (see also Koenig et al., 2008). Most molecular clouds are distributed to the north of the three groups, and the members of the stellar groups A, E, and H are spread over these northern clouds. Meanwhile, only a small amount of cloud material remains in the southern region.
The molecular clouds have RVs in the range of \(-45\) km s\({}^{-1}\) to \(-35\) km s\({}^{-1}\), as shown in the position–velocity (PV) diagrams of Figure 12.
Figure 10: Mean PMs of stellar groups relative to the systemic motion of W5. The brownish arrows represent the relative PM vectors of individual groups. The other symbols are the same as those of Figure 7.
Figure 9: Position-velocity diagrams of stars. The colors of dots are the same as those in Figure 7. The vertical lines represent the errors of velocity measurements.
Some clouds appear to be influenced by the massive stars in the groups C, D, and F; this is seen in the PV diagram along R.A. Although there is scatter in the stellar RVs, the RVs of the molecular clouds are slightly smaller than those of the adjacent stellar groups hosting massive stars. Similar results have also been found in other star-forming regions (Lim et al., 2018, 2021).
## 5 Star formation in W5
The extent of W5 is over 70 pc, and a high level of substructure is found within the SFR. The dense stellar groups C, D, and F are located in the cavities of the giant H ii regions. Their ages estimated from the MSTO are all about 5 Myr, indicating that they formed at almost the same epoch. These dense groups are older than the other, sparse groups, and therefore star formation was first ignited at their current locations.
These dense groups have velocity dispersions smaller than those of the other groups. If the small velocity dispersions reflect the physical state of their natal cloud, the cloud might have rapidly reached a subvirial state.
Figure 11: Motions of stars in their host groups. The panels in the first and third rows display the spatial distributions of stars in the given groups. The straight lines of different lengths represent the PM vectors of the stars relative to their host groups. The \(\Phi\) distributions are shown in the panels in the second and fourth rows. The colors of the dots correspond to those of the stellar groups in Figure 7.
Such a process may favorably occur in dense filaments through gravitational instability (Andre et al., 2010), which may lead to cluster formation (Bonnell et al., 2011; Kruijssen, 2012). Hence, the three dense groups might have formed in the densest regions of a giant molecular cloud on a very short timescale.
About 40% of the stars in the dense groups are escaping from their host groups, and some of the stars scattered over this SFR may have originated from the expansion of these dense groups. However, their expansion pattern is not as pronounced as that found in IC 1805, which has a core–halo structure (Lim et al., 2020). We therefore argue that the expansion of the dense groups may not be the origin of the overall structure of W5; the age difference between the dense groups and the northern groups supports this argument. Nevertheless, their expansion is expected to produce a distributed stellar population over several Myr.
Star formation propagated to the northern part of W5 about 2 Myr ago, and the sparse groups (A, E, and H) formed along the ridge of the H ii regions. A number of previous studies have proposed that W5 is a site of feedback-driven star formation (Loren and Wootten, 1978; Thronson et al., 1980; Wilking et al., 1984; Karr and Martin, 2003; Koenig et al., 2008). Koenig et al. (2008) suggested that the radiatively driven implosion mechanism (Klein et al., 1985) operates on small spatial scales, e.g., the cometary globules or elephant-trunk structures in the southern ridge of W5 West, while the collect-and-collapse mechanism (Elmegreen and Lada, 1977) works on larger scales, e.g., the northern clouds.
If the sparse groups were formed by the expansion of the H ii regions, they are expected to be receding from the ionizing sources. Moreover, since the group members form in compressed clouds, these stars should have kinematics similar to those of the remaining clouds. However, the observational results do not fully support the scenario of Koenig et al. (2008).
In Figure 12, the PM vectors of the group A members are shown relative to the group C. Their PM vectors have rather random orientations. Only one member has an RV measurement, and its RV is far from that of the adjacent clouds. RV data for more members would be required to better determine their systemic RV.
We also display the PM vectors of the group E and H members relative to the nearest dense group F in W5 East. The group E members show random motions, although they have RVs similar to those of the remaining clouds. The group H members, by contrast, are systematically receding from the group F. We computed the \(\Phi\) values of the group H members relative to the group H, and the resulting \(\Phi\) distribution shows a peak at \(\sim-20^{\circ}\). The fraction of stars showing this systematic motion is over 50% of all group members. In addition, their RVs are consistent with those of the elephant trunks at the eastern edge of the H ii region.
Figure 12: Integrated intensity maps and PV diagrams of the entire region (left) and the W5 East region (right). The radio data were obtained from the \({}^{12}\)CO (\(J=1-0\)) line (Heyer et al., 1998). The gray scale represents the distribution of molecular clouds. The color-coded dots show the distribution of members in the integrated intensity maps and PV diagrams, and the size of the dots is proportional to the brightness of the members. Arrows indicate the PM vectors of the sparse group members relative to their nearest dense groups (C in W5 West and F in W5 East). The nearest dense groups containing ionizing sources in W5 West and East are the groups C and F, respectively.
The formation of the group H, among the three northern groups, is therefore likely associated with feedback from massive stars.
The groups A and E, on the other hand, may have formed spontaneously. They have velocity dispersions larger than those of the dense groups, implying that their natal clouds were probably less favorable sites of star formation, such as low-density thin filaments (Andre et al., 2010). Star formation there may have proceeded on a timescale longer than the formation timescale of the dense groups.
Figure 12 also exhibits the PM vectors of the members of the southern groups B and G relative to the dense groups C and F, respectively. The group B members have randomly oriented PM vectors, while more than 50% of the group G members are moving away from this SFR. As seen in Section 4.2, the group B and G members are not as young as those of the groups A, E, and H, and it is not certain that the members of these two groups are coeval populations, given their CMDs.
Hence, these group members may have different origins. One possible explanation is that some of them are walkaway stars (de Mink et al., 2014). Runaway and walkaway stars can originate from the end stages of binary evolution (Blaauw, 1961): the massive primary undergoes a supernova explosion, and the less massive secondary is ejected, becoming either a runaway or a walkaway star. Another hypothesis is the dynamical ejection of stars from star-forming regions (Poveda et al., 1967; Oh et al., 2015; Oh & Kroupa, 2016). Since there is only weak evidence of supernova explosions in W5 (Vallee et al., 1979), the latter is the more favorable mechanism for the southern groups in W5. The noncoeval population and the unusual high-mass star population seen in the CMDs (Figure 8) can also be naturally explained by this mechanism.
Lim et al. (2020) confirmed, both numerically and observationally, that the subvirial collapse of a cluster can lead to an isotropic core and an expanding halo. Maiz Apellaniz et al. (2022) also identified stars and stellar systems isotropically ejected from the Bermuda cluster; they suggested that these ejected stars and systems carry a large amount of mass, and that such strong mass loss can result in cluster expansion. However, the group B and G members show an anisotropic spatial distribution relative to this SFR, occupying only the southern regions. If the group G had the same origin as the halo of IC 1805 or the stellar systems ejected from the Bermuda cluster, we might be missing some stars escaping in other directions, or the clustering algorithm we used might not be able to identify them. Based on the current observational data, only a few stars are moving outward beyond this SFR.
The group B members have randomly oriented PM vectors, which the dynamical ejection mechanism cannot explain. Low-level local star formation events may be another possible explanation for the origin of the southern groups; in this case, however, one must explain why there are more early-type stars than later-type stars, given the typical initial mass function (Salpeter, 1955; Kroupa, 2001).
In this study, a total of eight groups were identified; however, it will be necessary to search for smaller subgroups in future work. For instance, the stars in the eastern part of the group H (\(\Delta\)R.A. \(\sim 60^{\prime}\)) constitute a small aggregation. They seem to be associated with a small H ii bubble east of the W5 East bubble (see figure 7 of Koenig et al., 2008), and their PM vectors relative to the group F show random directions, unlike those of the stars in the western part of the group H. In addition, there is a small subgroup south of the group C that seems to be associated with the pillar-like structures at the border of the southern ridge of the W5 West bubble (see also figure 7 of Koenig et al., 2008). These finer groupings will help to better understand the formation process of this SFR.
## 6 Summary and Conclusion
We studied the spatial and kinematic properties of young stars in the massive SFR W5 of the Cassiopeia OB6 association using the Gaia EDR3 data and high-resolution spectra to understand the formation process of stellar associations.
A total of 490 out of 2,000 young stars over W5 were selected as members using the Gaia parallaxes and PMs. The spatial distribution of the members reveals a high level of substructure in W5. We identified a total of eight stellar groups by means of the k-means clustering algorithm: three dense groups centered on the cavities of the giant H ii regions, three sparse groups at the border of the H ii bubbles, and two further groups on the outskirts of the southern bubble. Our results were compared with those obtained from other unsupervised machine-learning algorithms, namely DBSCAN, HDBSCAN, and Agglomerative Clustering.
The dense groups are composed of the oldest stellar population (5 Myr), indicating that they are the first generation of stars in W5, and they are now expanding. Three million years after their birth, star formation might have propagated toward the northern regions. Only one group (H) shows a signature of feedback-driven star formation: a number of its members are
moving away from the nearest ionizing sources in the neighboring dense group F, and their RVs are similar to those of the adjacent gas structures. On the other hand, the other two northern groups do not show such signatures and therefore might have formed spontaneously at their current positions.
The southern groups B and G do not seem to be composed of coeval populations. The group B members have randomly oriented PM vectors, while more than half of the group G members are moving away from W5. We discussed their possible origins as walkaway stars from W5 and/or products of multiple low-level star formation events.
In conclusion, the major star formation process in W5 may be associated with structure formation in a giant molecular cloud. Star formation might have taken place spontaneously at different positions and epochs. In addition, feedback from massive stars has triggered the formation of a new generation of stars, but the spatial scales on which this mechanism operates may not be as large as Koenig et al. (2008a) suggested. The subsequent dynamical evolution of the stellar groups will produce a distributed stellar population within several Myr.
The authors thank the anonymous referee for constructive comments and suggestions. The authors would also like to express thanks to Prof. Mark Heyer for providing supplementary data, Dr. Nelson Caldwell, and the other mountain staffs for assisting with Hectochelle observations. Observations reported here were conducted at the MMT Observatory, a joint facility of the University of Arizona and the Smithsonian Institution. This paper has made use of data obtained under the K-GMT Science Program (PIDs: MMT-2020B-001 and MMT-2021B-001) partly supported by the Korea Astronomy and Space Science Institute (KASI) grant funded by the Korean government (MSIT; No. 2023-1-860-02, International Optical Observatory Project) and from the European Space Agency (ESA) mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This research has also made use of the SIMBAD database, operated at CDS, Strasbourg, France. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT; grant Nos. NRF2019R1C1C1005224 and 2022R1C1C2004102) and the research grant of Kongju National University in 2022. BL is grateful for Ms. Seulgi Kim's assistance in data reduction and Prof. Jeong-Eun Lee's comments on observing proposal. MMT:6.5m xcsao(Kurtz & Mink, 1998), NumPy(Harris et al., 2020), Scipy(Virtanen et al., 2020) |
2304.10665 | Magnetic field measurement from the Davis-Chandrasekhar-Fermi method
employed with Atomic Alignment | The Davis-Chandrasekhar-Fermi (DCF) method is widely employed to estimate the
mean magnetic field strength in astrophysical plasmas. In this study, we
present a numerical investigation using the DCF method in conjunction with a
promising new diagnostic tool for studying magnetic fields: the polarization of
spectral lines resulting from the atomic alignment effect. We obtain synthetic
spectro-polarimetry observations from 3D magnetohydrodynamic (MHD) turbulence
simulations and estimate the mean magnetic field projected onto the plane of
the sky using the DCF method with GSA polarization maps and a modification to
account for the driving scale of turbulence. We also compare the method to the
classical DCF approach using dust polarization observations. Our observations
indicate that the modified DCF method correctly estimates the plane-of-sky
projected magnetic field strengths for sub-Alfv\'enic turbulence using a newly
proposed correction factor of $\xi' \in 0.35 - 0.75$. We find that the field
strengths are accurately obtained for all magnetic field inclination and
azimuth angles. We also observe a minimum threshold for the mean magnetic field
inclination angle with respect to the line of sight, $\theta_B \sim 16^\circ$,
for the method. The magnetic field dispersion traced by the polarization from
the spectral lines is comparable in accuracy to dust polarization, while
mitigating some of the uncertainties associated with dust observations. The
measurements of the DCF observables from the same atomic/ionic line targets
ensure the same origin for the magnetic field and velocity fluctuations and
offer a possibility of tracing the 3D direction of the magnetic field. | Parth Pavaskar, Huirong Yan, Jungyeon Cho | 2023-04-20T22:18:11Z | http://arxiv.org/abs/2304.10665v2 | # Magnetic field measurement from the Davis-Chandrasekhar-Fermi method employed with Atomic Alignment
###### Abstract
The Davis-Chandrasekhar-Fermi (DCF) method is widely employed to estimate the mean magnetic field strength in astrophysical plasmas. In this study, we present a numerical investigation using the DCF method in conjunction with a promising new diagnostic tool for studying magnetic fields: the polarization of spectral lines resulting from the atomic alignment effect. We obtain synthetic spectro-polarimetry observations from 3D magnetohydrodynamic (MHD) turbulence simulations and estimate the mean magnetic field projected onto the plane of the sky using the DCF method with Ground-State-Alignment (GSA) polarization maps and a modification to account for the driving scale of turbulence. We also compare the method to the classical DCF approach using dust polarization observations. Our observations indicate that the modified DCF method correctly estimates the plane-of-sky projected magnetic field strengths for sub-Alfvenic turbulence using a newly proposed correction factor of \(\xi^{\prime}\in 0.35-0.75\). We find that the field strengths are accurately obtained for all magnetic field inclination and azimuth angles. We also observe a minimum threshold for the mean magnetic field inclination angle with respect to the line of sight, \(\theta_{B}\sim 16^{\circ}\), for the method. The magnetic field dispersion traced by the polarization from the spectral lines is comparable in accuracy to dust polarization, while mitigating some of the uncertainties associated with dust observations. The measurements of the DCF observables from the same atomic/ionic line targets ensure the same origin for the magnetic field and velocity fluctuations and offer a possibility of tracing the 3D direction of the magnetic field.
keywords: plasmas - polarization - methods: numerical - ISM: magnetic fields - (magnetohydrodynamics) MHD
## 1 Introduction
The interstellar medium (ISM) has been extensively studied in the past due to its importance in a wide range of astrophysical phenomena. One particularly crucial aspect of the ISM is the interstellar magnetic fields, which significantly influence the dynamics of the plasma. In addition, the magnetic fields impact several processes, including but not limited to plasma turbulence (Goldreich & Sridhar, 1995; Cho & Vishniac, 2000; Cho & Lazarian, 2003), star formation (Crutcher, 2012; McKee & Ostriker, 2007; Fissel et al., 2016), stellar feedback, cosmic-ray transport and acceleration (Schlickeiser, 2002; Yan & Lazarian, 2002, 2004), accretion disk dynamics, astrophysical jets, and the chemical evolution of the galaxy (see, e.g. Ge et al., 2016). Therefore, accurately measuring the interstellar magnetic fields and their contributions to these processes is crucial in developing consistent theories. However, this measurement is not trivial.
On length scales shorter than the coherence scale of interstellar magnetic fields, the total field can be decomposed into two components: the (global) mean field with a preferential direction and the (local) turbulent field. While there are methods that utilize polarization information to measure the magnetic fields, e.g., the Davis-Chandrasekhar-Fermi method (Davis, 1951; Chandrasekhar & Fermi, 1953, hereinafter the DCF method) and the Polarization-Intensity gradient method (Koch et al., 2012), their probes typically rely on the polarization of emission/absorption arising from magnetically aligned dust. Although widely accepted as the conventional polarization diagnostic, dust alignment measurements may not be completely accurate owing to a number of uncertainties and inconsistencies (see, e.g., Reissl et al., 2014). For instance, an obvious caveat of the conventional DCF method is that it combines measurements of the line-of-sight (LOS) velocity and of the polarization from separate targets, i.e., the Doppler shift of spectral lines and the polarization of aligned dust emission or absorption. While modifications have been made to the DCF method to improve its accuracy (Heitsch et al., 2001; Falceta-Goncalves et al., 2008; Hildebrand et al., 2009; Houde et al., 2009; Cho & Yoo, 2016; Federrath, 2016; Skalidis & Tassis, 2021), this inconsistency is typically not addressed, making it necessary to develop other methods to be used in complement with the current techniques to trace the magnetic fields.
Several past studies have shown that in the presence of anisotropic optical pumping, the alignment of the angular momenta of atoms and ions in the plasma can lead to the polarization of atomic spectral lines (Yan and Lazarian, 2006, 2007, 2008, 2012; Shangguan & Yan, 2013; Zhang & Yan, 2018). UV or optical pumping by an anisotropic radiation field can cause an uneven population distribution
on the ground/metastable states and align the angular momenta of the atoms. In the presence of an external uniform magnetic field, the atoms are realigned owing to the fast magnetic precession. The resultant spectral lines from the aligned states are thus polarized toward the magnetic field. This effect, named atomic alignment or Ground-State-Alignment (GSA), is a powerful diagnostic in the study of the magnetic fields in the ISM. Both 3D direction and tomography can be retrieved by GSA (Yan and Lazarian, 2012; Yan et al., 2019). More recently, polarized absorption lines from thr ground-state have been identified in a Post-AGB binary system 89Het, giving the observational confirmation of the applicability of the GSA effect (Zhang et al., 2020).
In this paper, we present a study that utilizes polarization observations and line width measurements from the same spectral lines to measure the magnetic field strength. We employ 3D simulations of magnetohydrodynamic (MHD) turbulence to obtain synthetic polarization observations arising from the Ground-State-Alignment (GSA) effect, which we then combine with the Davis-Chandrasekhar-Fermi (DCF) method to estimate the plane-of-sky (POS) projected magnetic field strength. We compare our new technique to the traditional DCF method, including cases of non-perfectly aligned polarized spectral lines.
Our work is organized as follows: we provide a brief explanation of the DCF method and the GSA effect in SS2 and SS3, respectively. In SS4, we describe the simulation setup and the numerical methods used in this study. In SS5, we present our observations and results. Finally, we summarize our work in SS6.
## 2 The modified DCF method
The DCF method (Davis, 1951; Chandrasekhar and Fermi, 1953) is one of the most commonly used techniques for measuring the magnetic fields in a wide range of astrophysical systems, including molecular clouds, HII regions, and the interstellar medium in general. The method is based on the assumption that the magnetic field in a given region is in a state of equipartition with the turbulent motion of the gas inside it (Chandrasekhar and Fermi, 1953). According to the method, the strength of the mean magnetic field projected onto the POS is given by:
\[B_{0,\mathrm{pos}}=\xi\sqrt{4\pi\tilde{\rho}}\,\frac{\delta v_{\mathrm{los}}} {\delta\phi} \tag{1}\]
where \(\tilde{\rho}\) is the mean density, \(\delta v_{\mathrm{los}}\) is the velocity dispersion along the LOS and \(\delta\phi\) is the dispersion in the angle between the turbulent and the mean magnetic fields projected on the POS. This angle dispersion is typically measured as the dispersion in the observed polarization vectors, while the LOS velocity fluctuations can be measured from the widths of optically thin emission lines. The constant \(\xi\) is a correction factor usually taken to be \(\sim 0.5\)(Heitsch et al., 2001; Ostriker et al., 2001) or lower (Liu et al., 2021). The expression is derived from the condition that in Alfvenic turbulence, there exists an equipartition between the kinetic and magnetic energy densities, i.e. the root-mean-square (rms) fluctuations of the velocity and the magnetic field are related. In addition to Alfvenic (incompressible) turbulence, the DCF method also assumes that the velocity and magnetic field fluctuations are isotropic, and that the turbulent magnetic field energy is much smaller than the global B-field energy.
The POS-projected observables are always LOS-integrated, resulting in intrinsic limitations of the observed signals due to LOS averaging effects (Zweibel, 1990; Myers and Goodman, 1991). This effect depends on the number of individual turbulent eddies along the LOS. The error is typically seen as an exaggerated alignment or ordering of polarization vectors, meaning that the polarization angles do not give accurate approximations of \(\phi\). The error usually leads to an underestimation of \(\delta\phi\) or an overestimation of the measured field strength \(B_{0,\mathrm{pos}}\) in the DCF method.
Cho and Yoo (2016) (hereafter CY16) found that this overestimation is roughly equal to a factor of \(\sqrt{\mathrm{N}}\approx\sqrt{\mathrm{L}_{\mathrm{los}}/\mathrm{L}_{\mathrm{f}}}\), where N is the number of independent eddies along the LOS, \(\mathrm{L}_{\mathrm{los}}\) is the length of the system along the LOS, and \(\mathrm{L}_{\mathrm{f}}\) is the driving scale of turbulence. They proposed a modified DCF method to account for the averaging effects, given by:
\[B_{0,\mathrm{pos}}=\xi^{\prime}\sqrt{4\pi\tilde{\rho}}\,\frac{\delta V_{ \mathrm{c}}}{\delta\phi} \tag{2}\]
where \(V_{\mathrm{c}}\) is the normalized velocity centroid of the optically thin line, and the modified correction factor \(\xi^{\prime}\sim 0.7-1.0\). The velocity centroid is defined at the \(i^{\mathrm{th}}\) LOS by
\[V_{\mathrm{c,i}}=\frac{\int v_{\mathrm{los}}\,I_{i}(v_{\mathrm{los}})\,dv_{ \mathrm{los}}}{\int I_{i}(v_{\mathrm{los}})\,dv_{\mathrm{los}}}. \tag{3}\]
\(I_{i}\) is the optically thin emission line profile for the LOS. Since it has been shown by CY16 that the modification makes the DCF method invariant to the turbulence driving scale, we shall use the modified method (equation 2) for all the following numerical tests involving the DCF technique in this paper.
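As a sketch of how equation (2) is evaluated on maps, the function below takes the velocity-centroid map and the polarization-angle map as NumPy arrays and works in the code units of the simulations (where \(B=\sqrt{4\pi\rho}\,v_{A}\)); the treatment of the 180\({}^{\circ}\) ambiguity of the polarization pseudo-vectors is our own implementation choice.

```python
import numpy as np

def dcf_modified(rho_mean, vc_map, pol_angle_map, xi_prime=0.5):
    """Modified DCF estimate of the POS field strength (equation 2).
    vc_map        : map of normalized velocity centroids (equation 3)
    pol_angle_map : map of polarization angles [rad] tracing the POS field
    """
    dVc = vc_map.std()
    # circular mean of the orientation (defined modulo 180 degrees)
    mean_ang = 0.5 * np.angle(np.mean(np.exp(2j * pol_angle_map)))
    # rms deviation of the orientations about that mean
    dphi = 0.5 * np.sqrt(
        np.mean(np.angle(np.exp(2j * (pol_angle_map - mean_ang)))**2))
    return xi_prime * np.sqrt(4.0 * np.pi * rho_mean) * dVc / dphi
```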
## 3 Ground state alignment
In a typical ISM region where radiation sources, such as massive stars, are embedded in the diffuse plasma, atoms and ions in the plasma are continuously excited through optical pumping. When radiation excitation dominates, the occupation of the atoms/ions is determined by the optical pumping rate. In the case of anisotropic radiation, the net angular momentum in the photons is transferred to
Figure 1: The geometry of our numerical setup. B\({}_{0}\) represents the mean magnetic field, and the LOS is fixed in the Z direction, with the X-Y plane representing the POS. The star symbol represents the source of anisotropic radiation, which is considered to be coming from an infinite distance and is parallel. The angle between the magnetic field and the LOS is denoted by \(\theta_{B}\), while the angle between the projection of the magnetic field on the POS and the X axis is denoted by \(\phi_{B}\). The angle \(\theta_{0}\) follows the same logic for the radiation field direction with respect to the LOS. The angle between the magnetic field and radiation field is denoted by \(\theta_{r}\).
the atoms. If the collisional excitation rate is significantly lower than the radiative excitation rate, the angular momentum transfer causes the atoms to align along the direction of the incident radiation at the rate of the radiative pumping. Furthermore, if the Larmor frequency is larger than the radiative pumping rate in the presence of an external magnetic field, the atoms will be realigned due to fast magnetic precession. This condition can realistically be fulfilled in the diffuse ISM. For micro-Gauss scale magnetic fields in the diffuse medium, the atoms can only be aligned in their ground and/or metastable states. The magnetic realignment (parallel or perpendicular to the B field) depends on the angle between the mean magnetic field and the radiation field direction, \(\theta_{r}\), and the resulting degree of polarization also varies with the magnetic field inclination, \(\theta_{B}\) (the angle between the magnetic field and the LOS). As a result, information on the direction of the magnetic field is encoded in the polarization arising from the aligned atoms and ions. In the case of absorption from the atoms aligned in their ground or metastable states, the polarization direction directly traces the magnetic field in the plane of the sky (Yan & Lazarian, 2006, 2012). For atoms with fine structures, sub-millimeter fine-structure transitions are also polarized in the same manner (Yan & Lazarian, 2008). 1.
Footnote 1: See review by Yan & Lazarian (2012) for the list of absorption, emission as well as fine structure lines and their maximum polarization fractions.
For a background unpolarized pumping source, the GSA effect will only produce linearly polarized lines. The degree of polarization for transitions from \(J_{1}\) to \(J_{2}\) for both absorption and fine structure emission lines is given by Yan & Lazarian (2006, 2008)
\[P=\frac{1.5\,\sigma_{0}^{2}(J_{1},\theta_{r})\,\sin^{2}\!\theta_{B}\,\omega_{ J_{1}J_{2}}^{2}}{\sqrt{2}+\sigma_{0}^{2}(J_{1},\theta_{r})\,\left(1-1.5\sin^{2}\! \theta_{B}\right)\,\omega_{J_{1}J_{2}}^{2}} \tag{4}\]
where \(\theta_{r}\) and \(\theta_{B}\) are the polar coordinates of the magnetic field vector (see Fig. 1). The alignment parameter \(\sigma_{0}^{2}\equiv\rho_{0}^{2}/\rho_{0}^{0}\), is the normalized dipole component of the ground state density matrix, where \(\rho_{0}^{2,0}\) are the irreducible density matrices. The parameter \(\omega_{J_{1}J_{n}}^{2}\equiv\{1,1,2;J_{1},J_{1},J_{n}\}/\{1,1,0;J_{1},J_{1},J_{ n}\}\) is determined by the atomic structure (see Yan & Lazarian, 2012). The sign of \(\sigma_{0}^{2}\) determines the orientation of the polarization vector with respect to the magnetic field. A positive polarization degree means a parallel orientation, while a negative polarization degree indicates a perpendicular orientation. This sign change or flipping of the polarization vector orientation happens at a specific \(\theta_{r}=54.7^{\circ}\), \((180-54.7)^{\circ}\), also known as the Van Vleck angle (Van Vleck, 1925; House, 1974). In real observations, this leads to the magnetic field being mapped with a 90\({}^{\circ}\) degeneracy (VV degeneracy from here onward). In principle, this degeneracy can be broken if more than two lines are identified in the observations.
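Equation (4) can be evaluated directly as in the sketch below; the alignment parameter \(\sigma_{0}^{2}\), which carries the \(\theta_{r}\) dependence, and the atomic factor \(\omega^{2}\) must be supplied for the chosen transition and are treated here as given inputs.

```python
import numpy as np

def gsa_polarization_degree(theta_B, sigma2_0, omega2):
    """Degree of polarization of a J1 -> J2 line from equation (4).
    theta_B  : angle between the magnetic field and the LOS [rad]
    sigma2_0 : alignment parameter sigma^2_0(J1, theta_r)
    omega2   : atomic structure factor omega^2_{J1 J2}
    A positive value means polarization parallel to the POS field,
    a negative value perpendicular (the Van Vleck sign flip)."""
    s2 = np.sin(theta_B)**2
    return (1.5 * sigma2_0 * s2 * omega2
            / (np.sqrt(2.0) + sigma2_0 * (1.0 - 1.5 * s2) * omega2))
```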
## 4 Numerical method
The numerical method used in this study is divided into two parts: the generation of synthetic polarization maps, and the analysis of the maps using the modified DCF method described in SS2. In this work, we performed 3D MHD turbulence simulations on spatial grids of \(512^{3}\) pixels. The set of sub-Alfvenic simulations, ranging from \(M_{A}=0.26\) to \(M_{A}=0.8\), was generated using the high-order finite-difference PENCIL code2. Turbulence is driven solenoidally with an isothermal equation of state, i.e., \(P=\rho c_{s}^{2}\), where \(\rho\) is the density and \(c_{s}\) is the sound speed. The solenoidal (divergence-free) forcing ensures that the energy fraction of the incompressible Alfven mode dominates over the compressible magnetosonic (fast and slow) modes in the turbulence. The details of all the simulations used in this work are given in Table 1.
Footnote 2: [http://pencil-code.nordita.org](http://pencil-code.nordita.org)
The geometry of the numerical setup is shown in Fig. 1. We fix the LOS along the Z direction of the simulation box so that the POS is the X-Y plane. The incoming radiation is considered to be parallel and originating from an external source in the X-Z plane.
### Calculation of synthetic Stokes maps
The line polarization to be simulated from the GSA effect depends on the directions of the radiation field and the local magnetic field. Without loss of generality, we choose the [C II]\(\lambda 157\)\(\mu\)m (C\({}^{+}\)) fine structure emission line for our synthetic observations. Zhang & Yan (2018) have shown that C\({}^{+}\) can reach a high maximum polarization (up to almost 30%). Moreover, C\({}^{+}\) is commonly observed in the diffuse ISM. The degree of polarization arising from the GSA effect for the C\({}^{+}\) line for different mean field inclinations (\(\theta_{B}\)) is shown in Fig. 2. As is evident, the sign of the polarization fraction \(P\) changes at the VV angle (shown by the vertical dotted lines), which means that the polarization vector is aligned parallel to the magnetic field direction
\begin{table}
\begin{tabular}{l c c c c}
\hline
Name & Grid size & Alfvén velocity (\(v_{A}\)) & Alfvén Mach number (\(M_{A}\)) & Sonic Mach number (\(M_{s}\)) \\
\hline
d\_024 & \(512^{3}\) & 0.24 & 0.80 & 1.68 \\
d\_030 & \(512^{3}\) & 0.30 & 0.66 & 1.98 \\
d\_040 & \(512^{3}\) & 0.40 & 0.50 & 2.00 \\
d\_050 & \(512^{3}\) & 0.50 & 0.40 & 2.00 \\
d\_070 & \(512^{3}\) & 0.70 & 0.26 & 1.82 \\
d\_080 & \(512^{3}\) & 0.80 & 0.20 & 1.65 \\
\hline
\end{tabular}
\end{table}
Table 1: Descriptions of the MHD simulation cubes. The Alfvén velocity is in code units, i.e., in units of \(1/\sqrt{4\pi\rho}\). The Alfvén and sonic Mach numbers are given by \(v/v_{A}\) and \(v/c_{s}\), respectively, where \(v\) is the rms velocity and \(c_{s}\) is the sound speed.
Figure 2: This figure shows the computed degree of polarization versus \(\theta_{r}\) for the fine structure emission line [C II]\(\lambda 157\)\(\mu\)m. The colors represent different \(\theta_{B}\). The positive and negative polarization fractions indicate parallel and perpendicular alignment to the magnetic field, respectively. The Van Vleck angles (\(54.7^{\circ}\), \((180\)-\(54.7)^{\circ}\)), at which the transition takes place are marked by vertical dotted lines.
in the range \(\theta_{r}=(54.7^{\circ},\,125.3^{\circ})\), and perpendicular for other inclinations. To get the synthetic polarization, we first obtain \(\theta_{B}\) and \(\theta_{r}\) at each grid point relative to the local magnetic field direction and calculate the total polarization degree \(P(\theta_{r},\theta_{B})\) using the transition equation (4). Next, we calculate the local Stokes parameters \(q_{z}\) and \(u_{z}\) at the grid points as follows
\[q_{z}=P(\theta_{r},\theta_{B})\,\rho\,\cos{2\phi_{B}} \tag{5}\] \[u_{z}=P(\theta_{r},\theta_{B})\,\rho\,\sin{2\phi_{B}} \tag{6}\]
where \(\rho\) and \(\phi_{B}\) are the local density and the local magnetic field azimuth angle, respectively.
Typically, in previous studies of the modified DCF method (CY16; Yoon & Cho, 2019), the dust grains responsible for the polarized emission are assumed to be perfectly aligned with the external magnetic field. While this is done for the sake of simplicity in calculations, this assumption itself can cause errors in magnetic field measurement. Owing to the 90\({}^{\circ}\) VV degeneracy in the polarization of spectral lines by the GSA effect, such an assumption is not straightforward for our case. Consequently, we want to make sure that this assumption does not have a significant impact on the estimation of the magnetic field strength. For this reason, we generate two kinds of line polarization Stokes maps: one for the realistic scenario where the local Stokes parameters are weighted by the quantitative polarization fractions (first term on the right hand side of equations (5) and (6)) using equation (4), and one for perfectly aligned atoms where the polarization fraction is assumed to be 1.
To simulate the LOS averaging in observations, we integrate the Stokes parameters along the LOS (which is the Z direction) to get the observed Stokes parameters
\[Q=\int_{z}q_{z}\,dz \tag{7}\] \[U=\int_{z}u_{z}\,dz \tag{8}\]
Since the background source is unpolarized, lines arising from the GSA effect will be linearly polarized i.e. the Stokes V = 0. After the integration, we obtain 2D Stokes maps with the line averaged Stokes parameters Q and U. For the purpose of comparison with the conventional DCF approach utilizing dust polarization measurements, we also generate synthetic dust polarization maps following the method from Fiege & Pudritz (2000) (see also Zweibel, 1996; Heitsch et al., 2001).
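As a concrete illustration of the steps above, the following minimal NumPy sketch assembles the local Stokes parameters of equations (5)-(6) and integrates them along Z as in equations (7)-(8). The function `P_gsa` is a placeholder for the transition equation (4), and the array names and simple Riemann sum are assumptions of the sketch, not the actual pipeline.

```python
import numpy as np

def synthetic_stokes_maps(rho, phi_B, theta_r, theta_B, P_gsa, dz=1.0):
    """LOS-integrated Stokes Q, U maps from 3D cubes, following eqs. (5)-(8).

    rho, phi_B, theta_r, theta_B : (nx, ny, nz) arrays of local density,
        field azimuth, radiation-field angle and field inclination.
    P_gsa : callable P(theta_r, theta_B), a stand-in for equation (4).
    """
    P = P_gsa(theta_r, theta_B)            # local GSA polarization degree
    q_z = P * rho * np.cos(2.0 * phi_B)    # eq. (5)
    u_z = P * rho * np.sin(2.0 * phi_B)    # eq. (6)
    Q = q_z.sum(axis=2) * dz               # eq. (7): integrate along Z (the LOS)
    U = u_z.sum(axis=2) * dz               # eq. (8)
    return Q, U
```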
While previous numerical studies involving the DCF method have tested the applicability of the technique in various systems like the ISM and star forming regions, the effect of the orientation of the mean magnetic field, and especially the LOS inclination, is usually neglected. For studies that use dust polarization as a measure of the local field dispersion, it is common practice to assume a mean field aligned with the POS (Ostriker et al., 2001; Padoan et al., 2001). While it is helpful to consider such a case to simplify calculations and calibrate the methods, it rarely reflects the real astrophysical environments. We examine all the possible geometries of the system in our study. This is achieved by generating a range of synthetic polarization maps while rotating the simulation box each time such that the mean magnetic field vector B\({}_{0}\) scans across the entire solid angle, covering all possible orientations, i.e. \(\theta_{B}\in[0,\,\pi],\phi_{B}\in[0,\,2\pi]\), with respect to the radiation field. The rotation of the simulation box is performed using the Euler 3D rotation algorithm3 similar to the one adopted in Lazarian et al. (2018) and Yuen et al. (2018). Moreover, the polar angle of the radiation field source \(\theta_{0}\) is changed from 0 to \(\pi/2\) in six equal steps to check the effect of the LOS inclination of the radiation field on the observed polarization signals. This range of polarization maps allows us to employ the DCF method using atomic-line polarization while studying the accuracy of the technique as a function of three distinct parameters, namely the Alfven Mach number (\(M_{A}\)), the magnetic field direction (\(\theta_{B}\) and \(\phi_{B}\)) and the radiation direction (\(\theta_{0}\)).
Footnote 3: [https://www.github.com/doraemonho/LazRotationDev](https://www.github.com/doraemonho/LazRotationDev)
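To make the orientation scan concrete, the sketch below builds a rotation that points the mean field along a chosen (\(\theta_{B}\), \(\phi_{B}\)) and applies it to the field vectors of the cube; the full Euler 3D rotation algorithm also regrids the scalar and vector fields onto the rotated cell positions, which is omitted here, so this is only an illustrative fragment with assumed conventions.

```python
import numpy as np

def rotation_matrix(theta, phi):
    # Rotate the z-axis onto the direction (theta, phi): first by theta about y,
    # then by phi about z.
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    Ry = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    Rz = np.array([[cp, -sp, 0.0], [sp, cp, 0.0], [0.0, 0.0, 1.0]])
    return Rz @ Ry

def rotate_field_vectors(Bx, By, Bz, theta_B, phi_B):
    """Rotate the magnetic-field vectors so the mean field points along
    (theta_B, phi_B); regridding of the cell positions is not included."""
    R = rotation_matrix(theta_B, phi_B)
    B = np.stack([Bx.ravel(), By.ravel(), Bz.ravel()])
    Bx_r, By_r, Bz_r = (R @ B).reshape(3, *Bx.shape)
    return Bx_r, By_r, Bz_r
```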
### B-field estimation using DCF analysis
With the 2D polarization maps, we can now apply the modified DCF method analysis to estimate the mean magnetic field strength using equation (2). Since the term \(\sqrt{4\pi\bar{\rho}}\) is normalised to 1 in the MHD simulations, we require the two observables, i.e., the dispersion of the LOS velocity centroids \(\delta V_{c}\) and the local magnetic field dispersion on the POS \(\delta\phi\), to obtain the mean field strength. The centroids \(V_{c}\) are calculated at each LOS using equation (3) to obtain a velocity centroid
Figure 3: An example of the rms normalized mean field strength measured using the modified DCF method at different orientations of the mean magnetic field vector. _Top_: Since the plot is in spherical coordinates, a 2D representation can be confusing and difficult to interpret. _Bottom_: We wrap the 2D heat map around a sphere such that every point on the sphere corresponds to a magnetic field strength measured when the mean magnetic field vector is pointing in that direction. The red and black arrows on the sphere represent the direction of the radiation field and the LOS, respectively.
"map". The dispersion is then calculated following the definition from CY16 as follows
\[\delta V_{\rm c}=\left(\frac{1}{n_{\rm obs}}\sum_{i=1}^{n_{\rm obs}}V_{\rm c,i}^{2 }-\left(\frac{1}{n_{\rm obs}}\sum_{i=1}^{n_{\rm obs}}V_{\rm c,i}\right)^{2} \right)^{1/2} \tag{9}\]
where \(n_{\rm obs}\) is the POS spatial resolution.
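For reference, a short sketch of the first observable: a density-weighted velocity-centroid map (assumed here as the form of equation (3), which is not reproduced in this section) and its dispersion as defined in equation (9).

```python
import numpy as np

def velocity_centroid_dispersion(rho, vz):
    """Centroid map V_c per LOS and its dispersion delta V_c (eq. 9).
    Assumes the density-weighted centroid V_c = sum(rho*v) / sum(rho) along Z."""
    Vc = (rho * vz).sum(axis=2) / rho.sum(axis=2)       # velocity centroid "map"
    dVc = np.sqrt((Vc ** 2).mean() - Vc.mean() ** 2)    # eq. (9)
    return Vc, dVc
```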
The POS local magnetic field dispersion can be estimated in the synthetic observations by measuring the deviation in the polarization vectors. The linear polarization fraction and angle can be recovered on each grid point (LOS) on the Stokes maps using
\[P=\frac{\sqrt{Q^{2}+U^{2}}}{I} \tag{10}\]
\[\phi_{P}=\frac{1}{2}\tan_{2}^{-1}\frac{U}{Q}, \tag{11}\]
where \(\tan_{2}^{-1}\) is the 2-argument arc-tangent function. To convert the Stokes maps into polarization maps, we transform \(P\) and \(\phi_{P}\) into polarization vectors for each LOS. We can then use the minimum variance method to estimate the mean polarization direction \(P_{0}\). This is done to simulate real polarization observations, where the direction of the projected mean magnetic field is not necessarily known 4. The method involves computing the variance in the polarization vectors around arbitrary unit vectors in the range \((0^{\circ},180^{\circ})\). The direction of the unit vector that has the minimum variance in the polarization vectors is chosen as the mean polarization direction, i.e
Footnote 4: while circular statistics are typically used to compute the dispersion from polarization maps (Fisher et al., 1993), we notice that for sub-Alfvenic turbulence (with \(M_{A}<1\)), the difference between the angle dispersion calculated using the circular standard deviation and the minimum variance method is negligible.
\[P_{0}=\mathop{\arg\min}_{u\in(0,\pi)}\left(\sigma^{2}(P_{i},u)\right) \tag{12}\]
where \(P_{i}\) is the polarization vector for the \(i^{th}\) line of sight, \(u\) is an arbitrary vector in the range \((0,\pi)\), and \(\sigma^{2}(P_{i},u)\) is the variance of the polarization vectors around the vector \(u\). The angle dispersion is then simply computed as \(\delta\phi=\sigma(P_{i},P_{0})\). Finally, we can substitute these measures in equation (2) to obtain the POS projected mean magnetic field strength. Since we calculate the Stokes maps for all orientations of the mean magnetic field in the \(\theta_{B}-\phi_{B}\) space, we can use the DCF analysis to estimate the B-field strengths as a function of \(\theta_{B}\) and \(\phi_{B}\) (\(\theta_{B}\) and \(\phi_{B}\) being the polar and azimuth angle, respectively). This is shown with a heat map in the top panel of Fig. 3 with an example simulation. Since the plot is in spherical coordinates of the mean magnetic field vector, a 2D heat map does not represent the dependency accurately. We wrap the values around a sphere (bottom panel of Fig. 3) to make the plot more intuitive, such that the colorbar value at each point on the sphere shows the B-field strength estimated by the DCF method when the mean magnetic field vector in the system points in that direction. This approach provides a clearer representation of the effect of the B-field direction on the DCF method and helps to better convey the results. Thus, while the sphere itself does not mean anything in the geometry of the setup, it allows us to easily visualize the performance of the DCF method as a function of the mean magnetic field inclination and orientation. The line of sight (LOS) and the direction of the incoming parallel radiation are also depicted by black and red arrows, respectively. Consequently, the angle between the radiation arrow and an arrow pointing to any point on the sphere gives us the mean \(\theta_{r}\) for the corresponding geometry.
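The angle-dispersion step and the final field estimate can be sketched as follows; the wrapped-difference treatment of the 180\({}^{\circ}\) ambiguity and the explicit form of equation (2), taken here as \(B_{\rm pos}=\xi^{\prime}\sqrt{4\pi\bar{\rho}}\,\delta V_{c}/\delta\phi\), are assumptions of this sketch rather than quotations of the implementation.

```python
import numpy as np

def min_variance_dispersion(phi_p, n_trial=180):
    """Mean polarization direction P_0 by the minimum-variance method (eq. 12)
    and the angle dispersion delta_phi around it.  phi_p is an array of
    polarization angles in radians, defined modulo pi."""
    trials = np.linspace(0.0, np.pi, n_trial, endpoint=False)
    # wrap differences into (-pi/2, pi/2] to respect the 180-degree ambiguity
    diff = (phi_p.ravel()[None, :] - trials[:, None] + np.pi / 2) % np.pi - np.pi / 2
    var = (diff ** 2).mean(axis=1)
    k = np.argmin(var)
    return trials[k], np.sqrt(var[k])

def dcf_field_strength(delta_Vc, delta_phi, xi=0.5, sqrt_4pirho=1.0):
    """Modified DCF estimate, assuming eq. (2) reads
    B_pos = xi' * sqrt(4*pi*rho_mean) * dV_c / dphi (the sqrt term is 1 in code units)."""
    return xi * sqrt_4pirho * delta_Vc / delta_phi
```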
## 5 Results and Discussion
We divide the results section into three parts. Initially, we study the influence of the Alfven Mach number on the estimated field strength using DCF with GSA polarization. In the second part, we examine how accurately the technique works in different magnetic field orientations. Lastly, we study the performance of our technique when varying the direction of the radiation source relative to the LOS.
### Dependence on the Alfven Mach number
In order to study the influence of the Alfven Mach number, we consider the DCF estimated field strengths for all simulations, covering the \(M_{A}\) range in the sub-Alfvenic regime from 0.26 to 0.8. For this purpose, we average the B-field strengths over all possible mean B-field orientations. However, we choose to exclude all the cases where the magnetic field inclination with the LOS is smaller than a threshold angle \(\theta_{B}<16^{\circ}\) from the averages, since the DCF method has intrinsic limitations at small magnetic field inclination angles (see SS5.2 for further discussion). The radiation source is fixed in the POS, i.e. \(\theta_{0}=90^{\circ}\). For comparison, we utilize both the perfectly aligned and realistic GSA polarization maps (see SS 4.1) to measure the magnetic field dispersion in the DCF method. Lastly, we normalize the measured field strengths with the POS projected rms magnetic field strength from the simulations, which is the ideal value that we aim to measure. Fig. 4 shows the normalized field strengths measured using the modified DCF method utilizing the two types of synthetic GSA polarization observations (ideal and realistic alignment, see SS 4.1) for all numerical simulations used in this study.
In both cases for perfect and realistic atomic alignment, the measured field strengths seem consistent for low \(M_{A}\), with a rise in
Figure 4: The POS mean magnetic field strength for different Alfvén Mach numbers (data cubes) computed with (orange) and without (blue) assuming polarization from perfectly aligned atoms. The field strength is averaged over all values in the \(\theta_{B}-\phi_{B}\) space, excluding the low inclination region at \(\theta_{B}<16^{\circ}\). The values are normalised with the POS projected rms magnetic field in the simulation box, such that a value of 1 represents the ideal measurement in the above plot. Radiation source direction is fixed at \(\theta_{0}=90^{\circ}\) i.e. in the POS.
values as \(M_{A}\) increases. Apart from a noticeable difference in the error bounds at high \(M_{A}\), the values for perfect and non-perfect alignment are relatively similar in the sub-Alfvenic regime. Overall, the B-field predictions from the realistic GSA technique are slightly higher than the ideal counterpart for different \(M_{A}\). This distribution can be explained through the intrinsic assumptions considered in the DCF method, which requires that the turbulent B-field energy is much smaller than the mean B-field energy. As \(M_{A}\) increases in the turbulence, the turbulent field energy increases relative to the mean B-field energy. As a result, it is less likely that this assumption is satisfied. The highly turbulent plasma can also lead to abrupt changes in the mean field orientation along the LOS, which can contribute to high LOS averaging errors. From a general perspective, the method seems to typically overestimate the magnetic field strength by a factor of 1.3 to 2.5 in the sub-Alfvenic regime with similar error-bar spreads.
Following their modification to the DCF method, CY16 proposed a correction factor \(\xi^{\prime}=0.7-1\) based on the variation in their measured B-field strengths. However, the general trend in Fig. 4 indicates that the correction factor \(\xi^{\prime}\) in the modified DCF equation (2) should be a function of \(M_{A}\) when utilised with polarization from atomic alignment, rather than a constant correction as is typically considered for DCF using dust polarization. While determining the exact dependency of \(\xi^{\prime}\) on \(M_{A}\) for our technique requires further investigation into the method, it is apparent that the factor is lower than that of CY16. We propose a new correction factor \(\xi^{\prime}\in 0.35-0.75\) in the case of sub-Alfvenic turbulence. In principle, our proposed correction factor is similar to the correction factor used in the conventional DCF method (\(\xi\) in equation (1), typically taken \(\sim 0.5\)). It should be noted, however, that CY16 only considered turbulence with \(M_{A}\sim 0.6\) in their simulations, which could have influenced the resulting correction factor in their work.
### Influence of the magnetic field orientation on the DCF method
To investigate how the estimation from the DCF technique is affected by the mean magnetic field direction, we examine the B-field strengths for all magnetic field orientations in 3D (in the \(\theta_{B}-\phi_{B}\) space). Fig. 5 shows the B-field strength estimations from the DCF analysis with polarization from realistic GSA using a correction factor \(\xi^{\prime}=0.5\) for the simulations with \(M_{A}=0.66\), 0.4, 0.26, shown from top to bottom. The colorbar in Fig. 5 shows the ratio between the DCF-measured B-field and the actual B-field after the POS projection. As described in section 4.2 (Fig. 3), the values at each point on the sphere show the magnetic field strength when the mean magnetic field vector points in that direction in the geometry given by Fig. 1, i.e. the measured field strength as a function of \(\theta_{B}\) and \(\phi_{B}\). \(\theta_{0}\) is fixed at 90\({}^{\circ}\) in all tests, and the LOS and the direction of the radiation field are shown using the black and red arrows, respectively.
Figure 5: The normalized POS mean magnetic field strength observed using the modified DCF method from polarization observations arising from GSA using a correction factor \(\xi^{\prime}=0.5\). The four columns are used to show the 3D distribution in \(\theta_{B}-\phi_{B}\) space (which is done by rotating the numerical cube) in the X-Y, Y-Z, X-Z and isometric planes. The rows show three different Alfvén Mach numbers (from the top, \(M_{A}=0.66,0.40\) and 0.26). The black and red arrows represent the LOS and the radiation field direction, respectively.
As can be seen from the general distribution on the sphere, the observed mean field strengths are consistent with the rms values at most \(\theta_{B}\) and \(\phi_{B}\), with two noticeable exceptions: underestimations in ring-like regions around the radiation field direction, and near the LOS, i.e. near the poles of the spheres (low \(\theta_{B}\)). The underestimation in the ring-like region is due to the VV effect, which is an intrinsic property of the atomic alignment process (see SS 3, and Yan & Lazarian (2006, 2008) for a detailed description). When the angle between the mean magnetic field and the radiation direction is close to the VV angle, the fluctuations of the local magnetic field cause the sign of the GSA polarization fraction to change from point to point. This results in the local polarization vectors flipping abruptly between parallel and perpendicular alignment relative to the neighboring grid-points. A large number of such flips along the LOS can cause the magnetic field to be traced with significant inaccuracy after the LOS averaging. As a result, the dispersion in the polarization vectors is large, causing the B-field prediction from DCF to be underestimated. Since the majority of the error from the VV effect arises due to the LOS integration of the polarization signal, it can be difficult to recognize the contribution of the VV effect in real observations if the geometry of the system is unknown. However, it is worth mentioning that the condition for the VV degeneracy, that the angle between the radiation field and the magnetic field \(\theta_{\rm r}\approx 54.7^{\circ}\), is a rare and special case of the system orientation that is unlikely to occur in the majority of realistic astrophysical environments. Consequently, the phase volume of geometries where the observed polarization is affected by the VV degeneracy is limited. In principle, it is also possible to account for the VV effect by analysing the polarization signals in position-position-velocity (PPV) space and employing, e.g., a nearest-neighbor filter to remove the lines of sight with large fluctuations in polarization vectors that do not correspond to large fluctuations in velocity. Such an approach, however, is not trivial and will be studied in detail in the future.
At small \(\theta_{B}\), i.e. when the LOS and the mean magnetic field are close to being aligned, the projection of the mean field on the POS is close to zero. Consequently, the polarization signals trace the turbulent field instead of the uniform field in the projection frame. Since the DCF method relies on the assumption that the dispersion in the polarization vectors is equal to the dispersion in the uniform field direction, this results in an intrinsic limitation of the DCF method at small inclination angles. In a detailed study and discussion regarding the inclination angle dependence in the DCF method, Falceta-Goncalves et al. (2008) observed that the DCF method heavily underestimates the field strength as \(\theta_{B}\) approaches \(0^{\circ}\) due to projection effects. More recently, Lazarian et al. (2022) proposed a modification to the DCF method to account for the inclination angle projection effect, given by
\[B_{0,\rm pos}=\sqrt{4\pi\bar{\rho}}\,\frac{\delta v_{\rm los}}{\delta\phi}\,\frac{1}{\sin\gamma} \tag{13}\]
where \(\gamma=\theta_{B}\) in this study. Essentially, the modification accounts for the difference between the strength of the projected magnetic field in the 2D plane and the total magnetic field strength through the correction factor \(\sin\gamma\). We took the correction for the projection effect into account by normalizing the values of the magnetic field to the rms \(B_{0,\rm pos}\) instead of \(B_{0}\). It is worth mentioning that with the development of new techniques which are capable of estimating the inclination angle of the mean magnetic field (see, e.g., Yuen et al., 2023; Malik et al., 2023), the mean field strength can be measured using equation (13). Lazarian et al. (2022) also showed that the method is only accurate at inclination angles larger than a minimum threshold, which they measured to be a function of \(M_{A}\), given by \((4\tan^{-1}\,(M_{A}/\sqrt{3}))\). Ensuring that the B-field inclination of the system is larger than this threshold can be particularly challenging with real observations, as it is notoriously difficult to estimate the \(M_{A}\) of astrophysical plasma, even if the inclination angle is known. While we do see an \(M_{A}\) dependence in our results in the form of increasing field strengths as we go to higher \(M_{A}\), we find that it is not as strong as their measurement. Instead, we propose a minimum \(\theta_{B}\) threshold independent of \(M_{A}\) for all our simulations. Accordingly, we only consider orientations with \(\theta_{B}>16^{\circ}\) to make sure the projection effects do not influence the DCF estimate averages.
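A one-line sketch of how the projection correction of equation (13) and the 16\({}^{\circ}\) cut can be applied to a map of DCF estimates; the NaN masking is an assumption made only for presentation.

```python
import numpy as np

def apply_inclination_correction(dcf_estimate, gamma, min_incl=np.deg2rad(16.0)):
    """Divide the plane-of-sky DCF estimate by sin(gamma) following eq. (13),
    masking orientations below the 16-degree threshold with NaN."""
    return np.where(gamma > min_incl, dcf_estimate / np.sin(gamma), np.nan)
```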
### Significance of the radiation field direction in GSA
The results presented in the previous section were limited to the special case of \(\theta_{0}=90^{\circ}\), i.e. the external radiation source fixed in the POS. For the sake of completeness, we change the location of the radiation source in the X-Z plane and perform the DCF analysis on the generated GSA polarization synthetic maps to check the effect of the radiation field direction on the method.
Fig. 6, which uses the simulation with \(M_{A}=0.26\) and a correction factor \(\xi^{\prime}=0.5\), shows the distribution of field strengths in \(\theta_{B}-\phi_{B}\) space for different \(\theta_{0}\). It is evident that the change in \(\theta_{0}\) changes the location of the VV regions, which is expected. However, the changing radiation source does not influence the DCF estimates at geometries in which the system is not affected by VV degeneracy (i.e. \(\theta_{r}\neq\) VV angle). Although the method cannot resolve the magnetic fields in the VV region in its current state and requires some modifications, the VV orientation in itself is a special case geometry. Therefore, we expect the DCF method to work accurately with polarization from spectral lines regardless of the location of the radiation source, as long as the geometry of the system in real observations does not correspond to this special VV case. In addition, the correction factor of \(\xi^{\prime}\in 0.35-0.75\) discussed in section 5.1 applies to our method irrespective of the radiation source geometry.
### Comparison to dust polarization method
In most studies that employ the DCF method or its variations, both historically and presently, dust polarization observations are used to calculate the polarization angle dispersion. To examine how the DCF method utilizing polarization from atomic alignment compares to the classical dust approach, we simulate the synthetic polarization for both mechanisms and use the DCF analysis to estimate the magnetic fields. The comparison is shown through the measured B-field strengths averaged over all magnetic field orientations versus \(M_{A}\) in Fig. 7. The averaging and normalization are performed similarly to Fig. 4 (see SS 5.1). For the purpose of a balanced comparison, we consider only perfectly aligned atoms and dust grains. The comparison is also shown in the \(\theta_{B}-\phi_{B}\) space for three different values of \(M_{A}\) in Fig. 8, where the GSA method uses the correction factor \(\xi^{\prime}=0.5\) as calculated in this work (see SS 5.1), while the dust alignment method uses \(\xi^{\prime}=0.8\) as is suggested for dust polarization by CY16.
From Fig. 7, it is clear that in the sub-Alfvenic regime, the DCF technique using atomic GSA polarization measures the B-field strength with similar precision compared to the dust polarization method. The spread in errors also seems to be consistent for the two methods. From Fig. 8, it can be seen that the only difference between the two methods is the ring-like VV region for the GSA measurements, which is absent in the dust polarization. Although the VV regions can explain the lower averages for GSA in Fig. 7, the estimations from the two approaches outside the VV regions appear to be highly consistent
with each other. Despite the fact that dust polarization does not suffer from the VV degeneracy, it is important to note that the observations of dust maps are usually accompanied by their own uncertainties. Previous observations have shown that the dust grains are asymmetrical and align with the magnetic field lines along their shortest axis due to radiative torques (Davis & Greenstein, 1949, 1951; Cho & Lazarian, 2005; Lazarian, 2007; Andersson et al., 2015), which means that realistic dust alignment traces the magnetic field direction with a \(90^{\circ}\) flip as well. Since the physical properties such as size and shape of the individual dust grains vary in the diffuse interstellar medium, the efficiency of the radiative alignment varies (Draine & Weingartner, 1996, 1997; Lazarian & Hoang, 2007). As a result, smaller grains, which might not be aligned with the magnetic field, also contribute to the observed polarization signals. Especially in low density plasmas, the continuum dust polarization measurements can suffer from decreased signal-to-noise ratios due to low polarization fractions. In addition to physical properties, variation in the chemical composition of the dust also contributes to the unreliable measurements in the observations.
Another challenge with dust polarization that can lead to inaccuracies, particularly in the DCF analysis, arises from the fact that optical/IR continuum observations are used for the polarization dispersion measurements, while velocity dispersion is obtained using optically thin spectral lines. While it is generally assumed that information from these sources originate in the same region in the magnetized medium, it may not always be true. It is straightforward that when the dispersions in velocity and polarization angles are calculated from two different layers along the LOS in the plasma, the DCF method does not give a correct estimate for the B-field strength at either of those layers. This particular uncertainty, as well as those arising from the diversity of the sizes, shapes and compositions of
Figure 6: An example showing normalised field strengths measured in the \(\theta_{B}-\phi_{B}\) space for different \(\theta_{0}\) using the simulation d_070 (\(M_{A}=0.26\)) and \(\xi^{\prime}=0.5\). For each \(\theta_{0}\), the X-Z (first and third rows) and the isometric projections (second and fourth rows) are shown. The color bar normalization is similar to Fig. 5.1. Red and black arrows depict the radiation field direction and the LOS, respectively.
Figure 7: Comparison between field strengths estimated by the DCF method utilizing polarization from perfectly aligned dust (blue) and GSA (orange). \(\theta_{0}\) is fixed at \(90^{\circ}\) for the GSA measurements.
dust particles can be averted by using GSA observations, in which the same polarized atomic line can be used to gain information about the velocity and magnetic field fluctuations (polarization angle dispersion).
In addition, GSA could facilitate a new avenue in magnetic field diagnostics, namely the 3D tomography of the magnetic field in the ISM. In principle, this can be achieved by performing the DCF analysis using velocity slices, i.e, thin wavelength intervals or segments of the line profiles to get information about the magnetic field strength and orientation in the PPV space. However, this will require further numerical and observational studies, which we will address in the future.
### Testing the CY16 method for atomic line polarization
As a motivation for the modification to the DCF method, CY16 showed that the DCF method is affected by the driving scale of the turbulence, and that the conventional DCF method overestimates the POS field strength by a factor proportional to the ratio of the LOS scale and the driving scale of turbulence (\(\sim\sqrt{L_{\rm los}/L_{f}}\)). We decided to check the efficiency of the modified DCF method while using atomic line polarization instead of dust polarization, which was used in CY16. For this purpose, we utilized two separate simulations with similar \(M_{A}\) and \(M_{s}\), but different \(L_{f}\). The details are given in Table 2. Since the simulation box length is normalized to unity, the driving scale of turbulence (\(1/k_{f}\)) for the simulation d_040 is larger by a factor of 5 than that of the simulation k_024. According to CY16, a discrepancy of \(\sim\sqrt{L_{\rm los}/L_{f}}\) in the conventional DCF method would lead to the POS mean field strength measured from k_024 being overestimated relative to that of d_040 by a factor of \(\sqrt{5}\approx 2.2\). We use the modified DCF method with line polarization from GSA to measure the POS field strengths for the two simulations for different magnetic field orientations, and plot the averages over \((\theta_{B},\phi_{B})\) against \(\theta_{0}\) in Fig. 9. The B-field strength estimations at low \(\theta_{0}\) show no significant difference, while even as \(\theta_{0}\) approaches \(90^{\circ}\), the largest difference seen between the two simulations is a factor of \(\sim 1.4\). This is direct evidence that the modified DCF method from CY16 corrects for the averaging effects from multiple eddies along the LOS, even when used with polarization from atomic alignment, and regardless of the radiation source orientation.
## 6 Summary
In this paper, we have employed the modified Davis-Chandrasekhar-Fermi method proposed by Cho & Yoo (2016) along with synthetic polarization observations arising due to the Ground State Alignment effect (Yan & Lazarian, 2006, 2008) in simulated magnetized plasma. Using 3D MHD turbulence simulations with varying plasma properties and geometries, we demonstrate the compatibility of the polarization observations of the GSA effect with conventional techniques like the DCF method and its variations. The method differs from the traditional DCF method by measuring the dispersion of the mean magnetic field direction through polarized spectral lines instead of continuum dust polarization measurements. The paper adopted the [C II] fine structure emission line without loss of generality. The method can be readily applied to archived and new spectro-polarimetry data covering wide wavelength ranges from UV to sub-millimeter (Yan & Lazarian, 2012).
The results from the numerical investigation of the method can be summarized as follows:
* The modified DCF method using polarization maps from atomic ground state alignment gives consistently accurate estimates of the POS projected mean magnetic field strengths in the ISM. We propose a correction factor of \(\xi^{\prime}\in 0.35-0.75\) for sub-Alfvenic turbulence.
* The strength of the projected magnetic field in the plane of sky is obtained for all magnetic field inclination angles. We identify a minimum threshold angle for the magnetic field inclination with the line-of-sight of \(\theta_{B}=16^{\circ}\) below which the DCF method does not trace the magnetic fields accurately.
* The DCF method utilizing polarization measurements from atomic alignment is as accurate as the conventional method utilizing dust polarization observations, while avoiding the uncertainties accompanying dust alignment, such as variations in grain size, shape and chemical composition.
* The spectro-polarimetry combined with spectrometry from the same atomic/ionic lines not only improves the accuracy of the DCF method by ensuring the same origin for the magnetic field and velocity fluctuations, but can also potentially trace the 3D direction and strength of the local magnetic field.
* The modified DCF method from Cho & Yoo (2016) successfully accounts for the correction to the conventional DCF method due to the driving scale of turbulence irrespective of the polarization source. It is also invariant to the geometry of the local radiation source in case of atomic alignment by GSA.
In this study, we present a novel diagnostic for tracing the magnetic field fluctuations through atomic alignment, which can be used in conjunction with the DCF method to estimate the POS-projected mean magnetic field strength in the ISM. We demonstrate that our method improves the accuracy of the conventional DCF approach while taking into account the differences in atomic and dust polarization approaches.
## Acknowledgements
We would like to acknowledge the referee for the constructive suggestions. We acknowledge DESY (Zeuthen, Germany), a member of the Helmholtz Association HGF, and the University of Potsdam for the support and the resources to make this work possible. We are grateful to Heshou Zhang and Bolu Feng for their contributions. We would also like to thank Ka Ho Yuen, Sunil Malik and Thiem Hoang for the helpful discussions.
## Data availability
The data involved in this work will be shared upon reasonable request to the corresponding author.
\begin{table}
\begin{tabular}{l c c} \hline \hline Name & d\_040 & k\_024 \\ \hline Resolution & 512\({}^{3}\) & 512\({}^{3}\) \\ Alfvén velocity (\(v_{A}\)) & 0.40 & 0.12 \\ Alfvén Mach number (\(M_{A}\)) & 0.50 & 0.50 \\ Sonic Mach number (\(M_{s}\)) & 2.00 & 2.50 \\ Driving wavenumber (\(k_{f}\)) & 2 & 10 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Descriptions of MHD simulation cubes with different driving scale \(L_{f}\). The driving wavenumber \(k_{f}\) is given by \(L/L_{f}\), where \(L_{f}\) is the driving scale of turbulence and \(L\) is the size of the simulation box along one axis. |
2305.07030 | Using Hierarchical Parallelism to Accelerate the Solution of Many Small
Partial Differential Equations | This paper presents efforts to improve the hierarchical parallelism of a two
scale simulation code. Two methods to improve the GPU parallel performance were
developed and compared. The first used the NVIDIA Multi-Process Service and the
second moved the entire sub-problem loop into a single kernel using Kokkos
hierarchical parallelism and a PackedView data structure. Both approaches
improved parallel performance with the second method providing the greatest
improvements. | Jacob Merson, Mark S. Shephard | 2023-05-05T05:42:03Z | http://arxiv.org/abs/2305.07030v1 | Using Hierarchical Parallelism to Accelerate the Solution of Many Small Partial Differential Equations
###### Abstract
This paper presents efforts to improve the hierarchical parallelism of a two scale simulation code. Two methods to improve the GPU parallel performance were developed and compared. The first used the NVIDIA Multi-Process Service and the second moved the entire sub-problem loop into a single kernel using Kokkos hierarchical parallelism and a PackedView data structure. Both approaches improved parallel performance with the second method providing the greatest improvements.
## 1 Introduction
Hierarchical multiscale methods are commonly used to model engineering materials which exhibit complex micromechanical behavior that is not easily captured with standard constitutive modeling [1, 2, 3, 4]. This behavior is often caused by a material having a makeup of discrete constituents such as atoms, molecules, fibers, etc. For a two scale analysis the macroscale, or engineering scale, partial differential equations are often discretized by finite element methods. At each material point in the macroscale, a microscale sub-problem made from discrete components is solved to obtain the material constitutive properties at that point. This information passing scheme is often referred to as the FE2 scheme drawing from the fact that there are is a finite element analysis occurring on two scales [1].
In our multiscale implementation we use parallelism at multiple levels. On the macroscale, we use domain level parallelism to break up our finite element mesh. This is implemented using the SCOREC parallel unstructured meshing infrastructure (PUMI) [5]. Each microscale sub-problem is independent, which leads to an "embarrassingly parallel" algorithm. To couple the scales, we use the Adaptive Multiscale Simulation Infrastructure which breaks processors on the target platform into an independent processor set for each scale [6]. In our previous work, the individual microscale sub-problems were not parallelized. Due to our increased understanding of the sub-scale physics in our problem of interest, we have increased the number of degrees of freedom in the microscale sub-problems by two to three orders of magnitude [7, 8]. This increase in problem size makes parallelization of the individual sub-problems essential to performing analyses with physical relevance.
Due to the increased computational cost associated with changes in the solution method described in section 2 and the increase in microscale problem size, the solution to the microscale problems became a performance bottleneck. Therefore, we ported our code to use GPU parallelism for the microscale problems. For our problem of interest, the microscale problems often have less than 100,000 degrees of freedom. This is not large enough to saturate the GPU with our current analysis methods, which primarily consist of vector operations--similar to BLAS level 1 (see figure 1).
To achieve adequate GPU throughput, multiple microscale problems must be solved on each GPU at a time. This was accomplished using two methods. The first was using NVIDIA Multi-Process Service (MPS) which unintrusively allows multiple processes to launch GPU kernels at a time [9]. Although MPS gave a good speedup, our analysis was still limited by kernel launch overhead and by the limited number of processors on each node to launch GPU kernels from. Additionally, great care must be taken to use MPS in an environment with multiple GPUs per node because improper MPS setup causes a drastic reduction in the weak scalability. The second method for increasing GPU throughput was to pack multiple microscale problems into a single kernel launch using the Kokkos hierarchical parallelism construct [10]. A more thorough discussion of this method is given in section 3. To aid in understanding the selection of our parallelization strategy, a more comprehensive discussion of multiscale modeling techniques is presented below.
## 2 Multiscale Modeling of Biological Tissues
One particular application of these methods is to model biological tissues which are made of constituent collagen fiber networks. Typically, modeling biological tissues requires large strain analysis because they are soft and physiological strains can often exceed 50%.
The FE2 method has been extended to allow for large strains; however, the methods discussed in the literature utilize implicit finite element methods for both analysis scales [2, 11]. Unfortunately, the deformation of fiber networks is highly nonlinear and the network can go through bifurcation points, or may not be isostatic, i.e. the tangent stiffness matrix can be singular during an analysis [12]. As a result, athermal fiber networks, such as collagen networks, are typically modeled with explicit finite element methods [13, 14]. The use of a purely explicit analysis for the sub-scale problems in a multiscale analysis is problematic because kinetic energy is lost in the microscale-to-macroscale coupling. Additionally, inertial effects can change the microscale material properties. To work around these issues, a dynamic relaxation method is used for the microscale problems. Dynamic relaxation works by mapping a static analysis to a damped dynamic explicit one where the system residual is monitored for convergence [15].
The use of the dynamic relaxation method at the microscale greatly improves the global strains which the multiscale method can achieve for fibrous materials; however, it imposes a significant computational cost. Our previous studies have shown the multiscale analysis of biological tissues at scale using homogeneous computing technologies with MPI based parallelism [6, 16]. Due to the increased computational cost of dynamic relaxation, physiologically relevant problems are no longer accessible through MPI based parallelism alone. Therefore, the microscale portions of our analysis code have been ported to use GPU accelerators.
The variability in GPU programming environments across hardware vendors poses a significant challenge to the maintainability of a GPU accelerated code which must run on a variety of systems. Therefore, we chose to use the Kokkos C++ library for GPU support. Kokkos is a C++ programming model which is designed to enable performance portability [10, 17]. The Kokkos team has committed to maintaining support for all of the vendors who are providing accelerators for the Department of Energy leadership class computing resources (AMD, Intel, NVIDIA). This support allows for writing a single version of the analysis code that will run across most of the easily accessible GPU accelerators.
## 3 Parallel Implementation
Figure 1 gives the basic dynamic relaxation algorithm we used for the microscale sub-problems. This algorithm is identical to a two step central difference method found in any finite element text book, with the exception that the convergence criteria is based on a force residual measurement rather than time. Note that each sub-scale problem converges at a different rate which can lead to load imbalance.
In the naive approach, this algorithm was carried out using fused kernels for any subsequent operations with the same loop characteristics. The benefits of kernel fusion have been discussed extensively in the literature, both for the case of explicit ODEs and for general GPU computations [18, 19, 20]. The GetInternalForces subroutine accounts for two kernel launches: the first to zero the internal force vector, and the second to scatter the elemental internal forces to the nodes. The current implementation uses atomic operations to scatter the forces.
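As a language-agnostic reference for the loop in Figure 1, a serial Python sketch of a damped central-difference (dynamic relaxation) iteration is given below; the force callback, mass, damping and time step are placeholders, and none of the kernel fusion or GPU details of the actual implementation appear here.

```python
import numpy as np

def dynamic_relaxation(u, internal_force, f_ext, mass, damping, dt,
                       tol=1e-6, max_iters=1_000_000):
    """Damped explicit central-difference iteration; convergence is judged on
    the force residual rather than on physical time."""
    v = np.zeros_like(u)
    f_ref = np.linalg.norm(f_ext)
    for it in range(max_iters):
        f_int = internal_force(u)                   # analogue of GetInternalForces
        residual = f_ext - f_int
        if np.linalg.norm(residual) <= tol * max(f_ref, 1.0):
            return u, it                            # converged on force residual
        a = (residual - damping * v) / mass         # damped acceleration
        v = v + dt * a                              # velocity update
        u = u + dt * v                              # displacement update
    raise RuntimeError("dynamic relaxation did not converge")
```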
In this naive approach, a number of microscale sub-problems were assigned to each MPI rank, and were executed serially with respect to each other within each rank. Despite the use of GPU acceleration for the vector operations, this approach had poor performance for the sub-scale problems with small numbers of degrees of freedom when compared with a CPU-only implementation with serial vector operations. To unintrusively improve this naive approach, NVIDIA MPS was used to allow kernels from multiple MPI ranks to run concurrently. The use of MPS led to significant performance improvements for small DOF problems compared with the naive case. Problem size and number of simultaneous MPI ranks used with MPS can have a drastic effect on performance. All MPS results presented in section 4 use 32 MPI ranks per GPU which gives the best performance in the range of problem sizes discussed here.
Since the loop in algorithm 1 executes millions of times per macroscale simulation step, we observed that this approach had significant kernel launch overhead. To overcome this, we moved the entire loop into a single kernel. This was done using Kokkos hierarchical parallelism which uses teams of threads to enable a 2D map to the hardware. The CUDA reciprocal to this mechanism is launching a 1D grid of 1D blocks. Since our sub-scale problems each have less than 10,000 free degrees of freedom, we found that good performance could be achieved by assigning one thread team to each sub-scale problem. Here, we juxtapose the free
Figure 1: Dynamic relaxation algorithm
degrees of freedom, which are those without any Dirichlet constraints, with what we call degrees of freedom, which are all potential degrees of freedom. Unlike an implicit FEM method, the constrained degrees of freedom cannot be completely eliminated as they are needed for the internal force computation. Reordering the fixed degrees of freedom to a contiguous block at the end of the displacement array allows most of the update algorithm to only operate on the smaller proportion of free degrees of freedom (figure 1).
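The reordering described above can be sketched as a simple permutation; the mask name and array layout are assumptions made only for illustration.

```python
import numpy as np

def reorder_free_first(u, fixed_mask):
    """Permutation that places free DOFs first and Dirichlet-constrained DOFs
    in a contiguous block at the end of the displacement array."""
    perm = np.concatenate([np.flatnonzero(~fixed_mask), np.flatnonzero(fixed_mask)])
    n_free = int((~fixed_mask).sum())
    return perm, u[perm], n_free

# Update kernels can then operate on u_packed[:n_free] only, while the internal
# force computation still sees every DOF through the permutation.
```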
The choice of number of threads per team had a strong effect on performance. The ideal number of threads per team is a function of the microscale problem size. All presented results use 512 threads per team, which provided a good compromise for the performance of the smallest and largest systems we tested.
A PackedView data structure which has similar semantics to a Kokkos DualView was used to allow effective access to N-D vector data within each thread team [21]. This data structure uses a row vector and value vector, similar to those from compressed row storage (CRS), to store the data associated with all sub-scale problems on the current MPI rank in a contiguous array in memory. Each sub-scale problem gains access to the correct portion of memory through a Kokkos Subview. In some ways, this structure is similar to a Kokkos View of Views. However, with the current implementation the PackedView can not be resized after initialization. A comprehensive performance comparison between the PackedView data structure, and View of Views has not been performed to date. This differs from the StaticCrsGraph in Kokkos which cannot handle non-integral datatypes, and does not have DualView semantics.
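A conceptual Python analogue of the packed layout (not the Kokkos C++ PackedView itself) may help: a CRS-style offsets array over one contiguous value array, with per-problem slices playing the role of subviews.

```python
import numpy as np

class PackedStorage:
    """All sub-problems' data packed contiguously; problem i is the slice
    values[offsets[i]:offsets[i+1]], the analogue of a Kokkos subview."""

    def __init__(self, sizes, dtype=np.float64):
        self.offsets = np.concatenate([[0], np.cumsum(sizes)])
        self.values = np.zeros(self.offsets[-1], dtype=dtype)

    def subview(self, i):
        return self.values[self.offsets[i]:self.offsets[i + 1]]

packed = PackedStorage([300, 1200, 750])   # three sub-problems of different size
packed.subview(1)[:] = 1.0                  # writes land in the shared packed array
```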
Although moving the analysis loop inside of a single kernel launch was effective for our problems of interest, it can easily succumb to low performance from high register pressure. Significant effort had to be made to reduce the register pressure and ensure that multiple warps could be concurrently scheduled. One mechanism we used to reduce register pressure was to move some of the variables which are carried across loop iterations such as the pseudo-time and the loop iteration count into shared memory. We found that performance gain from the reduction in register pressure outweighed the loss in bandwidth from moving these variables to shared memory. The need to reduce register usage in this single kernel implementation led us to favor a stripped-down version of our algorithm which was specific to the physical system at hand. In other words, flexibility of our code had to be sacrificed to obtain improved performance characteristics.
## 4 Results
The performance results presented here are all computed on a single Volta V100 GPU--part of an IBM AC922 node. Each AC922 node contains two 20-core IBM POWER9 processors clocked at 3.15 GHz, 512 GiB of RAM, and 6 Volta V100 GPUs. The code is compiled with version 16.1.0 of IBM's XL compiler for host code, version 10.1 of CUDA, and version 3.1 of Kokkos. The MPS results make use of Spectrum MPI version 10.3.
Figures 2 and 3 show the runtime of a single sub-problem divided by the batch runtime normalized by the number of concurrent sub-problems. This gives a measure of the speedup of a single sub-problem when computed in a concurrent batch. Since we are using the analysis technique's own single sub-problem runtime as a baseline for the speedup, we call this the "self speedup". For the naive loop based case (figure 2), we see the self speedup is very flat, which indicates the expected linear increase in runtime. The smallest problem size sees a slight self speedup. When thread team based parallelization is used, a significant self speedup is observed (figure 3). Here we see an initial regime of linear self speedup and a plateau regime for large numbers of concurrent sub-problems. In this initial linear scaling regime, the runtime remains flat since the numerical workload is not large enough to overcome the
Figure 3: Speedup of the thread team based analysis normalized by the number of concurrent sub-problems compared with the thread team based single sub-problem. Each data point is the mean of three analysis runs.
Figure 2: Speedup of the naive loop based analysis normalized by the number of concurrent sub-problems compared with the loop based single sub-problem. A flat line corresponds to a linear increase in runtime. Each data point is the mean of three analysis runs.
kernel launch latency. Interestingly, the initial self speedup is almost identical for each of the problem sizes we tried. The plateau region shows that as the number of degrees of freedom in the problem increases, the self speedup decreases.
Figure 4 shows the speedup of the thread team based and MPS based analysis methods over the naive loop based approach. In this plot, we see an initial linear scaling and a plateau region. We observe that as the problem size increases, the speedup obtained from the team thread based method decreases. This is likely due to a reduction in percentage of the problem which resides in the cache. We also observe that MPS based parallelism does provide some speedup over the naive approach, but it is not as effective as the thread team based approach. Also, for MPS, the speedup plateau does not depend on the problem size. One way to interpret these results is that for maximum efficiency, at least 80 concurrent sub-problems should be run on each GPU. Since the V100 has 80 SMs (streaming multiprocessors), this is consistent with each Kokkos team (CUDA block) occupying a single SM.
The speedups achieved using thread team parallelism tell a compelling story that moving an analysis loop inside of a single heavy weight kernel can be an effective optimization mechanism for problems that need to solve many problems which cannot saturate the GPU on their own. Although MPS seemed like it might be a reasonable solution, it suffered from still incurring a high kernel call latency due to the many kernels which were being called inside of a hot loop. Additionally, the MPS solution was not able to make as effective use of the cache since many different sub-scale problems were competing to be scheduled simultaneously, and each sub-scale problem that was scheduled in an interleaved fashion would cause cache misses.
## Acknowledgments
This work was supported in part by the National Institutes of Health (NIH) through Grant No. U01 AT010326-06. Also, this material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1744655.
|
2307.12522 | Automated Mapping of Adaptive App GUIs from Phones to TVs | With the increasing interconnection of smart devices, users often desire to
adopt the same app on quite different devices for identical tasks, such as
watching the same movies on both their smartphones and TVs. However, the
significant differences in screen size, aspect ratio, and interaction styles
make it challenging to adapt Graphical User Interfaces (GUIs) across these
devices. Although there are millions of apps available on Google Play, only a
few thousand are designed to support smart TV displays. Existing techniques to
map a mobile app GUI to a TV either adopt a responsive design, which struggles
to bridge the substantial gap between phone and TV or use mirror apps for
improved video display, which requires hardware support and extra engineering
efforts. Instead of developing another app for supporting TVs, we propose a
semi-automated approach to generate corresponding adaptive TV GUIs, given the
phone GUIs as the input. Based on our empirical study of GUI pairs for TVs and
phones in existing apps, we synthesize a list of rules for grouping and
classifying phone GUIs, converting them to TV GUIs, and generating dynamic TV
layouts and source code for the TV display. Our tool is not only beneficial to
developers but also to GUI designers, who can further customize the generated
GUIs for their TV app development. An evaluation and user study demonstrate the
accuracy of our generated GUIs and the usefulness of our tool. | Han Hu, Ruiqi Dong, John Grundy, Thai Minh Nguyen, Huaxiao Liu, Chunyang Chen | 2023-07-24T04:35:51Z | http://arxiv.org/abs/2307.12522v2 | # Automated Mapping of Adaptive App GUIs from Phones to TVs
###### Abstract.
With the increasing interconnection of smart devices, users often desire to adopt the same app on quite different devices for identical tasks, such as watching the same movies on both their smartphones and TV. However, the significant differences in screen size, aspect ratio, and interaction styles make it challenging to adapt Graphical User Interfaces (GUIs) across these devices. Although there are millions of apps available on Google Play, only a few thousand are designed to support smart TV displays. Existing techniques to map a mobile app GUI to a TV either adopt a responsive design, which struggles to bridge the substantial gap between phone and TV or use mirror apps for improved video display, which requires hardware support and extra engineering efforts. Instead of developing another app for supporting TVs, we propose a semi-automated approach to generate corresponding adaptive TV GUIs, given the phone GUIs as the input. Based on our empirical study of GUI pairs for TV and phone in existing apps, we synthesize a list of rules for grouping and classifying phone GUIs, converting them to TV GUIs, and generating dynamic TV layouts and source code for the TV display. Our tool is not only beneficial to developers but also to GUI designers, who can further customize the generated GUIs for their TV app development. An evaluation and user study demonstrate the accuracy of our generated GUIs and the usefulness of our tool.
graphic user interface, cross-screen, UI detection, GUI pattern
When the phone GUI is directly mapped, it occupies only part of the TV screen with a large dark space on the two sides due to the different aspect ratios, as seen in Figure 1 (_Direct mapping_). Additionally, smart TVs and mobile phones interact differently. Mobile phones employ fingers to touch and swipe, while smart TVs require remote controls. TVs need redesigned GUIs to accommodate different interactions as well as differences in size and layout.
Currently, there are three ways commonly used to deal with this issue. First, many GUI development frameworks provide support for responsive and adaptive design [1], which lets developers adapt the GUI to any possible screen size, such as Android's Material Design [38] and iOS [9]. The _Desktop mode_ in Figure 1 shows an example of the responsive layout. Since responsive and adaptive layout techniques simply adapt the GUI to the current device's screen size based on the size constraints and proportions of the original phone screen, the GUI has to clumsily enlarge some UI components, increase the distance between UI components, or alter the layout of UI components. Although the GUI occupies the current screen, its layout is quite clumsy and significantly diminishes the user experience. Responsive design is primarily focused on optimizing for small screens like smartphones and tablets. Therefore, it may not be optimized for larger screens like TVs, leading to issues when viewing mobile apps on TVs. Additionally, TVs often have different interaction methods than mobile devices, such as a remote control or voice commands, which may not be supported by the app's responsive design.
Second, there are some screen mirroring apps (e.g., Chromecast [14]), which are used to cast music, video, play games, and display photos on a big screen. However, these are effective primarily for video projection rather than general app screens. Phone manufacturers like Samsung, Apple, Huawei, and OnePlus have introduced the concept of Desktop Mode [19; 32] for cross-screen GUI conversion via their phones. Desktop Mode maps the current phone GUIs to larger external screens via an HDMI adaptor or WiFi by redesigning GUI layouts individually to improve the user experience with a larger screen.
Third, as revealed in our empirical study in Section 2.1, developers often create a brand-new GUI for the corresponding TV application due to their significant differences. Only a few thousand apps support Android TV [6], a minuscule fraction of the millions of Android apps, which greatly limits users' app usage on the TV. Developing both a mobile app and a standalone TV app from scratch is both time-consuming and labor-intensive. From the user's perspective, a consistent design across devices reduces the
Figure 1: Examples of current large screen GUI conversions
need for them to learn new navigation and interaction patterns [40, 42]. Popular apps always share a similar design between phone apps and TV apps, for example, YouTube and Spotify. From the developer side, a consistent GUI can reduce development costs and time, as it allows developers to reuse code and design elements across different devices [42]. Both the phone app and the corresponding TV app have similar functions and can share current designs and resources. Reusing existing materials can result in significant savings on engineering expenses, and a similar user interface makes it easier for mobile app users to adapt to the TV app [42, 46].
To overcome these limitations, we propose an approach for generating the appropriate GUI layout for Android TV based on the existing GUI design for phones. Unlike responsive design, where a screen 'flows' from a phone design into a larger device, adaptive projection in our approach offers tailor-made solutions. Given the app GUI of a page on a mobile phone, our approach automatically generates the corresponding GUI code to make it adaptive to the TV screen. The development team, including the visual designers and developers, will benefit from our approach. Once they finish the phone app development, our approach can automatically generate the GUI design and TV implementation for any phone page. Developers and designers can easily customize it to enhance their productivity without starting from scratch.
To build the tool, we first investigate how many of 5,580 popular apps from Google Play [27] currently support smart TV. We find only 5.34% of apps support running on smart TV. Second, we perform a formative study on the characteristics of current TV-phone apps and GUIs. We collect 1,405 TV-phone app pairs from Google Play and Dangbei [16] but only collect 589 TV-phone GUI pairs with clear GUI correspondence in these app pairs. We then summarize 12 and 9 categories of GUI component groups common on mobile and TV, respectively, and analyze their corresponding conversion. Based on the empirical study results, we build our tool in the following steps: group isolated GUI components parsed from GUI metadata captured by UI Automator [60], convert phone GUI groups into corresponding TV groups, optimize their layouts with OR-constraint formulas [34], and translate the current TV GUI to our language- and platform-free GUI domain-specific language (DSL) for further rendering.
We choose two of the best-known and most commonly used technologies: desktop mode and direct mapping as the baselines for the evaluation. We select mIoU, overall satisfaction, and structure rationality as the metrics to evaluate the converted TV GUIs by our approach and baselines. We also recruited 20 participants with extensive experience in Android GUI development, design, and use to evaluate the converted TV GUIs. The experimental results show that TV GUIs converted by our approach achieve 21.05%, 10.42%, and 21.31% improvement in mIoU, overall satisfaction, and structure rationality than the best current conversion techniques. Besides, a pilot user study also provides the initial evidence of the usefulness of our tool for bootstrapping adaptive GUI from phone to TV.
Our contributions to this work are summarized as follows:
* As far as we know, this is the first study on automated GUI conversion from smartphone to TV GUI;
* We propose a new approach to generate TV-hosted GUIs from Android mobile phone GUIs;
* We carry out an empirical study to understand the current status of GUI support for TV display in phone apps and the GUI mapping patterns between the two platforms; and
* We demonstrate the effectiveness of our approach with extensive automated evaluation and manual checking. We also provide initial evidence of our tool's usefulness via a pilot user study.
## 2. Empirical study of Guis between Phones and TVs
To better understand the current status and characteristics of GUI adaptation to smart TV, we conduct a large-scale empirical study to answer two questions: 1) How many phone apps support TV display? 2) How do app GUIs change between TVs and smartphones?
### RQ1 How many phone apps support TV displays?
To answer RQ1, we analyze a large number of industrial Android apps from all 33 categories on Google Play (Zhou et al., 2017). We crawl the top 200 most popular apps in each category (as of June 2021). After excluding paid apps, we obtain 5,580 free apps that support Android phones by default.
According to the official guidelines of developing Android TV (Bordes et al., 2017), it is compulsory for apps running on TV to declare a TV activity with an intent filter _CATEGORY_LEANBACK_LAUNCHER_ in the Android manifest file of Android projects. In addition to the declaration, TV-enabled apps often have separate TV layouts called _layout-television_ and _layout-tv_. Therefore, we decompile Android APKs to check whether the specified intent filters or layout XML files exist. We find only 298 of these apps supporting TV display, accounting for only 5.34% of the total. These 298 apps belong to 29 categories including _Weather_ (16.78%), _Education_ (15.10%), _Tool_ (11.07%).
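For readers who want to reproduce this check, the Python sketch below scans a decompiled APK for the Leanback launcher intent filter and for dedicated TV layout resource folders. It is a minimal illustration rather than the exact script used in the study: the directory layout assumes an apktool-style output folder, and the helper name `supports_tv` is ours.

```python
import os
import re


def supports_tv(decompiled_dir: str) -> bool:
    """Heuristically check whether a decompiled APK declares TV support."""
    manifest = os.path.join(decompiled_dir, "AndroidManifest.xml")
    if os.path.isfile(manifest):
        with open(manifest, encoding="utf-8", errors="ignore") as f:
            text = f.read()
        # A TV activity must declare the Leanback launcher intent filter.
        if "android.intent.category.LEANBACK_LAUNCHER" in text:
            return True
    # TV-enabled apps often ship dedicated TV layout resource folders.
    res_dir = os.path.join(decompiled_dir, "res")
    if os.path.isdir(res_dir):
        for name in os.listdir(res_dir):
            if re.match(r"layout-(television|tv)($|-)", name):
                return True
    return False
```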
Even if an app advertises that it supports a TV display, this does not imply that it is appropriately optimized for the TV. We manually check the 298 apps that claim to support TV displays. We physically run the apps on a smart TV and critically evaluate how well various GUI pages adapt to the TV display. GUI pages exhibiting discrepancies such as substantial black margins on both sides of the screen, components with mismatched aspect ratios, disordered layouts, and unoptimized navigation, all of which may diminish the user experience on TV displays, are regarded as inadequate instances of TV adaptation. We discover only 11 of the 298 apps support TV display on all GUI pages. 287 out of 298 apps modify a few representative GUI pages, such as the home or landing page, to accommodate TV displays. These 287 apps have an average of 22.3 Android activities, but we only find support for TV display in an average of 4 Android activities. The other GUIs of these apps also look poor on TV, with large black margins on both screen sides and mismatched aspect ratio components, as in the _Direct mapping_ example of Figure 1. Due to the significant difference between phone and TV displays, some development teams tend to develop separate apps for different platforms.
### RQ2 How do app GUIs change between TVs and smartphones?
We collected custom-designed TV apps and their matching phone apps for a comparative study since there are relatively few apps that support both TV and phone.
#### 2.2.1. TV-phone App Pairs Collection
We collected 249 TV apps from Google Play's TV category (Zhou et al., 2017) and 2,556 from Dangbei (Dangbei, 2018), which is one of the largest TV app stores. We eliminated any apps that have not been updated for over two years.
We then matched the TV apps' corresponding smartphone apps on Google Play. First, we search for phone apps with the same app name, developers, and category. Second, if the corresponding phone-version app cannot be found, we broaden our search to include apps with similar app names developed by the same developers. Finally, for the TV apps for which no phone-version app is found automatically, four volunteers manually collect the matching phone apps.
We match 1,405 TV-phone app pairs, with the three most common categories being _Video_ (42%), _Education_ (23%) and _Tool_ (21%). Video apps, such as _YouTube_ and _iQIYI_, are most suitable for smart TVs due to the characteristics of TV itself, so video apps have become the most popular apps on TV. Because smart TVs are mostly used at home, there are a number of educational apps for children. Tool apps, like TV app stores, remote controls, and projection controls, are also ubiquitous on TVs.
#### 2.2.2. TV-phone GUI Pair Collection
We use DroidBot [40], Fastbot [11] and Uiautomator2 [61] to automatically explore apps and collect rendered screenshots and metadata of GUIs in apps. The metadata is a Document Object Model (DOM) tree of the current GUI, which includes the hierarchy and properties (e.g., class, bounding box, layout) of UI components. We can infer the GUI hierarchy from the DOM tree hierarchy in the metadata. After removing duplicates, we obtain 6,697 Android GUI data and 4,112 TV GUI data. We notice that most TV apps simplify or restructure their GUIs to accommodate various usage scenarios and requirements of smart TVs during this process.
For TV-phone GUI pairing, we exploit the semantic similarity of Android activity names to pair the TV and phone GUIs automatically. Then, we automatically compare UI components on GUI pages in order to match state-level GUI pairings at a finer granularity. Finally, we manually check the automatically discovered pairs and select the final valid TV-phone pairs. We extract activity names from each GUI and encode them into numerical semantic vectors using a pre-trained BERT [18] model. Then, we match the TV-phone GUI pairs by comparing their semantically close activity vectors. For example, the GUIs in the activities _homeActivity_ and _mainActivity_ are matched by close semantic vectors. However, one Android activity may have multiple Android fragments [23] and GUI states [29, 43] with different UI components and layouts in current industrial apps. We therefore further compare GUI components between phones and TVs to pair TV-phone GUIs at a lower granularity. In pairing, UI components are identified by their types and properties. UI components between phones and TVs with the same types and properties are considered paired GUI components. For example, two _TextViews_ with the same texts, two _ImageViews_ with the same images, or two _Buttons_
Figure 2. Examples of Phone GUI Groups
with the same texts are considered the paired components. If more than half of the UI components in two GUIs are paired, they are considered a state-level TV-phone GUI pair. Finally, we manually checked all discovered pairs and identified 589 TV-phone state-level GUI pairs with clear GUI correspondence between phone and TV components.
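The sketch below illustrates the two pairing steps in Python. It is not the study's exact implementation: the sentence-transformers model stands in for the pre-trained BERT encoder used in the paper, the model name `all-MiniLM-L6-v2` is an assumption, and the dictionary keys `class`, `text`, and `image` are hypothetical metadata fields; the more-than-half pairing rule follows the text above.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the BERT encoder


def activity_similarity(phone_activity: str, tv_activity: str) -> float:
    """Cosine similarity between the semantic vectors of two activity names."""
    vecs = model.encode([phone_activity, tv_activity])
    return float(util.cos_sim(vecs[0], vecs[1]))


def component_key(c: dict) -> tuple:
    """Identify a UI component by its type plus a salient property
    (text for TextView/Button, image source for ImageView)."""
    return (c.get("class"), c.get("text") or c.get("image"))


def is_state_level_pair(phone_components: list, tv_components: list) -> bool:
    """Declare a state-level pair when more than half of the phone
    components find a counterpart with the same type and property."""
    tv_keys = {component_key(c) for c in tv_components}
    paired = sum(1 for c in phone_components if component_key(c) in tv_keys)
    return paired > len(phone_components) / 2
```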
#### 2.2.3. GUI component grouping
A series of UI components that are near in position and hierarchy of the GUI and tend to have the same functionality are referred to as GUI groups2[30, 51]. Exploring these group changes based on UI groups rather than each individual component is more beneficial [51]. So, we must first identify and categorize GUI groups that are common to both TV and phone before summarising the guidelines for GUI changes from phone to TV.
Footnote 2: Sometimes called GUI patterns
We perform an iterative open-coding process, which is widely used to generate categories in software engineering [41, 57, 58]. We do this on 120 randomly selected phone GUIs and 80 TV GUIs (approximately 2% of the collected phone and TV GUIs) to categorize their GUI groups. Four volunteers with Android design experience undertook three steps in our open-coding procedure. First, guided by the Google design [26] and development guidelines [4], every volunteer categorizes GUI groups in the selected GUIs individually. After the initial coding, the four volunteers discuss and merge conflicts; in this step they clarify scope boundaries among categories and resolve misunderstandings. In the third step, they iteratively revise the classifications and discuss with each other until a consensus is reached. Finally, we determined 12 phone group types: _Icon + Info_, _Tool Bar_, _Bottom Tab Layout_, _Search_, _Top Tab Layout_, _Pic Side Info_, _Pic + Info_, _Side Nav_, _Short Video Player_, _Video/Music Player_, _Big Pic_ and _List View_, and 9 TV group types: _Icon + Info_, _Tool Bar_, _Search_, _Tab Layout_, _Channel_, _Grid Layout_, _Pic + Info_, _Video/Music Player_ and _List View_. Figures 2 and 3 show examples of the summarized phone and TV groups. These phone and TV GUI group categories can be divided into two subcategories. GUI groups in the first subcategory are widely used on both phone and TV, e.g. _Tool Bar_, _Search_, and _Video/Music Player_. GUI groups in the second subcategory only exist on phones or TVs respectively, e.g. _Bottom Tab Layout_ and _Channel_.
To verify the accuracy of GUI grouping and classification, we first randomly sample 20 TV and 20 smartphone apps from the 1,405 TV-phone app pairs collected in Section 2.2.1. The selected apps cover all 6 TV app categories (Tool, Video, Music & Audio, Education, Entertainment, and Weather) listed on Google Play [25]. Then, we manually calculate the distribution of each GUI group in the selected TV and smartphone apps. We find that the summarized GUI component groups cover 93.69% and 93.47% of GUI groups on phones and TVs, respectively. In our analysis, we observe the presence of uniquely shaped GUI components in both phone and TV interfaces. These components resist typical categorization due to their complex functional requirements in production and everyday use. They constitute 6.31% and 6.53% of phone and TV GUIs, respectively. These specific GUI components primarily appear within individual apps or among apps from the same developer. We categorize these components under an _Others_ group category for the purpose of our study. To address these edge cases in the GUI conversion process, we propose the development of default templates. Given the relatively minor percentage of GUI components in the _Others_ category and their limited appearance in specific applications, we assert that our categorization effectively represents the vast majority of commonly used phone and TV GUIs.
Table 1 shows the details of GUI group distributions. Subcolumns _Group_ and _Distribution_ of columns _Phone_ and _TV_ denote the GUI group categories and distribution in the experiment. TV's subcolumn _Group_ does not contain the _Others_ GUI group, which comprises 6.53% of all TV GUIs in the experiment. On phones, the most popular categories of components groups are _Icon + Info_ (13.31%), _Tool Bar_ (11.41%) and _List View_ (11.14%), but on TV, categories _Pic + Info_ (19.13%), _Grid
Layout_ (13.37%) and _List View_ (13.18%) are the most common categories. The official guideline for TV GUI design [7] suggests two principles: _All TV GUIs should display in landscape mode_ and _The core TV GUIs should use card-like views instead of ListView or ViewPager to make better use of horizontal screen space and accommodate TV interaction_. Standard current TV GUIs obey these two principles and use more grid layouts and card-like widgets. Thus the card-like categories _Pic + Info_ and _Grid Layout_ are more popular on TVs than on phones.
#### 2.2.4. Component group alignment
After conducting our GUI components grouping study, we notice that the design principles of TV and phone GUIs are vastly different, resulting in no obvious one-to-one alignments between most TV-phone GUI groups. Furthermore, the contents of one phone GUI group may be dispersed throughout numerous groups in the TV GUI, and vice versa. Therefore, we summarize heuristic rules from our collected TV-phone GUI pairs for automatic GUI mapping from phone to TV.
Firstly, we randomly divide the 589 TV-phone GUI pairs into experimental and validation sets in an 8:2 ratio. On the experimental set, four volunteers follow the same three open-coding steps to analyze and extract conversion rules. Table 1 shows the extracted GUI group mapping rules from phone to TV. To accommodate how the TV and remote interact, each TV GUI group uses card-like views. The converted TV GUI group recalculates its new size depending on the quantity and types of components in the existing GUI to fit the TV screen size. Following the GUI group study in Section 2.2.3, we use _Grid Layout_ as our default template for mapping. Component groups with the same meaning, _Icon + Info_, _Tool Bar_, _Search_, _Top Tab Layout_, _Video/Music Player_ and _List View_, are transferred directly from phone to TV. According to the characteristics and the official design guideline [7] of TV, _Pic Side Info_, _Pic + Info_ and _Big Pic_ are all converted to _Pic + Info_ on TV. Current TV GUIs tend to replace components in the phone's _Side Nav_ and _Bottom Tab Layout_ with _Channel_ on TV, so we follow this trend. _Short Video Player_ should use a customized template on TV, but there is currently no TV app with this GUI feature. As a result, we do not provide a corresponding TV group for it individually at the moment, instead relying on _Video/Music Player_ for the conversion.
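The mapping rules summarized in Table 1 amount to a small lookup from phone groups to TV groups. The Python sketch below illustrates one possible encoding; the function name `map_group` is ours, and the Grid Layout fallback mirrors the default template described above.

```python
# Phone GUI group -> TV GUI group, following the mapping rules in Table 1.
PHONE_TO_TV_GROUP = {
    "Icon + Info": "Icon + Info",
    "Tool Bar": "Tool Bar",
    "List View": "List View",
    "Top Tab Layout": "Top Tab Layout",
    "Search": "Search",
    "Video/Music Player": "Video/Music Player",
    "Short Video Player": "Video/Music Player",   # no dedicated TV template yet
    "Pic Side Info": "Pic + Info",
    "Pic + Info": "Pic + Info",
    "Big Pic": "Pic + Info",
    "Bottom Tab Layout": "Channel",
    "Side Nav": "Channel",
}


def map_group(phone_group: str) -> str:
    """Unrecognized groups and the 'Others' category fall back to Grid Layout."""
    return PHONE_TO_TV_GROUP.get(phone_group, "Grid Layout")
```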
Figure 3. Examples of Key TV GUI Groups
We use the validation set to verify these mapping rules. Our first step involves manually identifying and extracting GUI components from their respective groups within the phone GUI. Subsequently, we locate the corresponding components in the matching TV GUI. Our final step entails verifying whether these TV GUI components align with the anticipated TV groups and comply with the mapping rules established on our experimental set. Given \(m\) instances of _Side Nav_ groups in our validation set, and \(n\) corresponding TV GUI groups classified as _Channel_ groups, we compute the mapping rule accuracy as the ratio \(n/m\). Note that if the phone GUI group is eliminated in the corresponding TV GUI, the case is considered invalid and will not be counted. The column _Mapping Accuracy_ in Table 1 shows the correctness rate of each mapping rule. We find that the correctness of rules 1, 2, 3, 4, 5 and 7 is 96%, 99%, 91%, 99%, 99% and 100%, respectively, indicating that these direct mapping rules are accurate and universal. Rules 8, 9, 10, 11, and 12 have an accuracy of 83%, 95%, 87%, 90%, and 99%, respectively, suggesting these conversion rules are also accurate and common.
### Summary and Implications
Our empirical study shows that: (1) Only 5.34% of popular phone apps support TV displays. (2) In TV-phone GUI pairs, there is not much explicit one-to-one correspondence between phone and TV component groups. (3) We summarize 12 and 9 categories of GUI components groups on phone and TV, covering 93.69% and 93.47% popular phone and TV GUIs, respectively. (4) We extract and evaluate 12 existing GUI group-mapping rules from phone to TV based on summarised GUI component groups.
The lack of TV-display support for phone apps confirms the necessity of tool development for semi-automated GUI mapping between phone and TV. That motivates our study and the empirical findings of component group mapping are the backbone of our proposed approach.
## 3. Semi-automated TV GUI generation
Motivated by our empirical findings and the official layout criteria for Android TV (Beng et al., 2017), we here propose our semi-automated Android-based TV GUI generation approach. Our approach develops a lightweight migration system that converts run-time GUIs in a series of phases, including component recognition and grouping, template matching, layout optimization, and GUI domain-specific languages (DSL) generation. The overall pipeline of our approach is illustrated in Figure 4.
| **Index** | **Phone Group** | **Phone Distribution** | **TV Group** | **TV Distribution** | **Mapping Accuracy** |
| --- | --- | --- | --- | --- | --- |
| 1 | Icon + Info | 13.31% | Icon + Info | 8.32% | 96% |
| 2 | Tool Bar | 11.41% | Tool Bar | 7.83% | 99% |
| 3 | List View | 11.14% | List View | 13.18% | 91% |
| 4 | Top Tab Layout | 8.88% | Top Tab Layout | 7.68% | 99% |
| 5 | Search | 7.98% | Search | 7.12% | 99% |
| 6 | Others | 6.31% | Grid Layout (Default) | 13.37% | 90% |
| 7 | Video/Music Player | 3.50% | Video/Music Player | 7.56% | 100% |
| 8 | Pic Side Info | 8.90% | Pic + Info | 19.13% | 83% |
| 9 | Pic + Info | 8.67% | Pic + Info | 19.13% | 95% |
| 10 | Big Pic | 3.52% | Pic + Info | 19.13% | 87% |
| 11 | Bottom Tab Layout | 10.31% | Channel | 9.28% | 90% |
| 12 | Side Nav | 6.07% | Channel | 9.28% | 99% |

Table 1. Component group matching between phone and TV. TV’s _Group_ column does not contain the _Others_ GUI group, which comprises 6.53% of all TV GUIs in the experiment.
We first gather the run-time GUI metadata generated by UI Automator (Friedman et al., 2017), including screenshots and GUI metadata, and analyze the hierarchical metadata to identify GUI components. Secondly, to construct a suitable GUI mapping, we design a component grouping algorithm that leverages the GUI information extracted from the metadata to group isolated elements properly. Thirdly, we compare elements' attributes and hierarchical similarities to classify grouped elements into appropriate types and match the corresponding TV templates. Then, we optimize the whole GUI layout to adapt to TV screens with OR-constraints (Shi et al., 2019; Shi et al., 2019). Finally, given the incompatibility of phone and TV systems, a cross-platform TV GUI DSL is designed to describe the generated TV GUI for further compilation and rendering.
### Component Grouping
As mentioned in Section 2.2.3, we first group isolated GUI components into GUI groups as the basic unit of the follow-up work. We parse the metadata captured by UI Automator to accurately infer the pixel-based coordinates of GUI components' bounding boxes and classify these components into proper types like _TextView_, _ImageView_, and _Button_. Once each GUI component's bounding box and type in a rendered screen are confirmed, the next phase is to group atomic components with similar domain-specific functions into one component group. Figure 5 illustrates a running example of our component grouping algorithm. Overall, the algorithm consists of three levels of grouping with different assembly granularities. First, we gather all atomic components that have significant relationships, such as images and their text descriptions. Second, we group components in the same row by their hierarchy, type, and total width. Third, we merge adjacent rows with the same hierarchy and pixel-based areas.
#### 3.1.1. Atomic Components Grouping
When two texts are very close on the Y-axis, they tend to be a pair of descriptions, with the caption at the top and the additional explanation of the caption at the bottom (Bahdan et al., 2017; Sohn et al., 2017; Sohn et al., 2017). Texts below or to the right of images usually have strong semantic relations with them, such as previewed movie names and image descriptions (Bahdan et al., 2017; Sohn et al., 2017). Thus, heuristics are designed to aggregate relevant components based on component type, position, size, and structural relationship.
For text pair grouping, given an upper text \(T_{u}\) and a set of texts below it \(T_{s}\), texts in \(T_{s}\) will be grouped with \(T_{u}\) if they meet the following requirements: (1) On the X-axis, \(T_{u}\) and \(T_{s}\) overlap, or the gap between them is less than \(0.025\times screen\ horizontal\ resolution\). (2) On the Y-axis, \(T_{s}\) is the closest element
Figure 4. Overview of Automated GUI Conversion from Phone to TV
around \(T_{u}\), and the gap between them is lower than \(0.025\times screen\ vertical\ resolution\). We employ the following three phases to confirm that our heuristics cover sufficient relevant-component possibilities and to acquire the best applicable empirical parameters. Following the related work of Xiaoyi et al. (Xiaoyi et al., 2019) and the official Android GUI design guidelines (Beng et al., 2019; Xiaoyi et al., 2019), we first determine all potential positions and distance ranges of images and their accompanying isolated UIs. Second, we randomly selected 10 GUI pages in each category of our collection of phone apps (90 GUI pages in total). We experiment on the selected data to verify whether our heuristic rules can cover all the relevance possibilities. The experimental results indicate that our heuristic principles are capable of covering all possibilities in existing GUIs. Finally, we assessed the impact of all empirical coefficients within the range provided by the official Android guidelines on the selected pages in increments of \(0.001\times\) the current screen's horizontal and vertical resolution. We ultimately determined \(0.025\) to be the optimal empirical coefficient. Potential compatibility issues with phones and apps may lead to distorted or misaligned UI components in the GUI, making our heuristic principles inapplicable. However, these circumstances can be rectified by developers through subsequent development.
For image-relevant text grouping, given a text set \(T\) and an image \(I\), texts in \(T\) will be grouped with \(I\) if they meet the following requirements: (1) On the Y-axis, \(T\) is the closest element below \(I\), and the gap between \(T\) and \(I\) is less than \(0.025\times screen\ vertical\ resolution\). (2) On the X-axis, the midpoints of \(T\) and \(I\) are the same; if not, their overlap on the X-axis must be more than 50% of the width of the smaller of \(T\) and \(I\). (3) If \(T\) is in a grouped text pair, pack both texts as the relevant texts of \(I\).
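The following Python sketch illustrates the two heuristics on bounding boxes given as `(x1, y1, x2, y2)` tuples. It is a simplified rendering of the rules above: the `0.025` coefficient is taken from this section, while picking the single closest candidate among several texts is left to the caller.

```python
REL_GAP = 0.025  # empirical coefficient from Section 3.1.1


def overlap_x(a, b):
    """Horizontal overlap of two bounding boxes (x1, y1, x2, y2);
    negative values mean the boxes are separated by a horizontal gap."""
    return min(a[2], b[2]) - max(a[0], b[0])


def is_text_pair(upper, below, screen_w, screen_h):
    """Caption/explanation rule: the two texts overlap (or nearly touch)
    on the X-axis and are vertically close."""
    x_close = -overlap_x(upper, below) < REL_GAP * screen_w
    y_gap = below[1] - upper[3]
    return x_close and 0 <= y_gap < REL_GAP * screen_h


def is_image_caption(image, text, screen_h):
    """Image/description rule: the text sits just below the image and
    overlaps more than half of the narrower of the two boxes."""
    y_gap = text[1] - image[3]
    if not (0 <= y_gap < REL_GAP * screen_h):
        return False
    narrower = min(image[2] - image[0], text[2] - text[0])
    return overlap_x(image, text) > 0.5 * narrower
```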
#### 3.1.2. Row Components Grouping
According to Google's Android design guide (Groude et al., 2017; Groude et al., 2017), standard UI design typically treats each row as the fundamental unit, so the UI components on the phone are arranged in rows. The metadata of the phone's GUI, as illustrated in Figure 4, describes its GUI hierarchy in the form of an XML DOM tree, and UI components in the same row are in the same subtree of the DOM tree. The purpose of our row components grouping is thus to trim the subtrees from the DOM tree that contain all UI components of one row. When the algorithm finds that the components inside the same DOM subtree occupy an entire row, it puts all of these components into one group.
Algorithm 1 demonstrates our row components grouping process. It utilizes a DOM tree (\(tree\)) of metadata and a pre-defined group width parameter (\(W_{group}\)) as inputs. The latter is a critical determinant dictating when to partition a subtree representing a row on the screen. Specifically, when the subtree's width extends to match the phone screen's width, it is deemed to occupy an entire row. Given that GUI elements are always displayed with margins on both the left and right sides of the screen, we empirically define the requisite width as 0.85 times the total screen width in
Figure 5. A running example of components grouping. The block dotted box 1 represents atomic grouping, the blue 2 represents row grouping, and the red 3 represents multi-row grouping.
this work. The output of Algorithm 1 is the grouped components \(G\). The leaf nodes in the DOM tree are the concrete UI components, so Algorithm 1 first walks from the root node of the DOM tree to the terminal leaves, collecting all leaves into \(Leaves\) (lines 1-6). After the leaf nodes are identified, the algorithm backtracks up the DOM tree from each leaf in \(Leaves\) to establish the subtree representing a screen row. For each leaf \(l\), if the width of the current node (\(W_{cur}\)) exceeds the predefined group width \(W_{group}\), we stop at node \(l\). If not, we backtrack to the current node's parent node \(p\). If the width of \(p\) still does not exceed the predefined group width \(W_{group}\), the backtracking continues until the width of the current node exceeds the predefined width (lines 7-12). We then trim the DOM tree at the node where the backtracking stops to get the cropped subtree \(S\) (line 13). To group all UI components within a single row, the algorithm subsequently collects all leaves in the subtree \(S\) as a group \(g_{l}\) (lines 14-19).
```
Input: DOM tree Tree, required group width W_group
Output: Row components group set G
1:  Leaves <- [], G <- []
2:  for each node c in Tree do
3:      if length(getChildren(c)) == 0 then
4:          Leaves.add(c)                 // collect all leaves in Tree
5:      end if
6:  end for
7:  for each leaf l in Leaves do
8:      node_cur <- l, W_cur <- l.width
9:      while W_cur <= W_group do
10:         p <- getParent(node_cur)      // backtrack to the parent node in Tree
11:         node_cur <- p, W_cur <- p.width
12:     end while
13:     S <- trimTree(node_cur)           // trim Tree at node_cur to get a subtree S
14:     g_l <- []                         // the set collecting all leaves in S
15:     for each l' in S do
16:         if l' in Leaves then
17:             g_l.add(l')
18:         end if
19:     end for
20:     G.add(g_l)                        // add g_l to G
21: end for
22: return G
```
**Algorithm 1** Row Components Grouping
Figure 6 shows a running example of Algorithm 1. The backtracking starts at the leaf TextView _Call Me By Fire_ and reaches the leaf's parent, subtree 1. However, the width of subtree 1 is less than the required group width \(W_{group}\). We then keep going back to subtree 2, but its width still does not satisfy the requirement. Going all the way back to subtree 3, its width exceeds the required group width. The algorithm thus stops at subtree 3 and trims subtree 3 from the GUI DOM tree as a GUI component row.
#### 3.1.3. Multi-row Components Grouping
Components in one component group may be spread across multiple rows, such as _ListView_ and _GridLayout_. In these component groups, the components of different rows tend to have the same structure and component types. Thus, given two adjacent
rows \(r_{i}\) and \(r_{j}\), we compute, based on their bounding boxes, the relative positions of the upper-left and lower-left corners of the components in each row. If the components in \(r_{i}\) and \(r_{j}\) have the same types and the same relative upper-left and lower-left corners, the two rows are grouped. As shown in Figure 6, subtree 3 has the same structure as the row below it, so these two rows are merged into one GUI group in step 4. Note that in the implementation we allow the two rows of one group to differ by one component.
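A minimal Python sketch of this merging step is shown below. It assumes each component is a dictionary with hypothetical `type` and `bbox` fields, compares rows by their structural signature, and, for brevity, omits the one-differing-component tolerance mentioned above.

```python
def row_signature(row):
    """Component types plus upper-left corners relative to the row's own
    left edge; rows with identical signatures share the same structure."""
    left = min(c["bbox"][0] for c in row)
    return tuple(
        (c["type"], c["bbox"][0] - left)
        for c in sorted(row, key=lambda c: c["bbox"][0])
    )


def merge_multirow(rows):
    """Merge adjacent rows with the same structure into one multi-row group."""
    if not rows:
        return []
    groups, current = [], [rows[0]]
    for prev, cur in zip(rows, rows[1:]):
        if row_signature(prev) == row_signature(cur):
            current.append(cur)
        else:
            groups.append(current)
            current = [cur]
    groups.append(current)
    return groups
```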
### TV Template Matching
We use GUI built-in and visual features to categorize groups and match the related templates after component grouping. We first summarize the unique attributes of each type of group. The built-in attributes of the groups _Top Tab Layout_, _Bottom Tab Layout_, _Tool Bar_, _Search_, _Video/Music Player_, _List View_, and _Side Nav_ are their component types, positions, and relationships with their sibling components. The unique built-in attributes of the groups _Top Tab Layout_ and _Bottom Tab Layout_ are their component types and their positions in the GUI pages: both groups use GUI components of the tab layout class and are situated in the upper or lower half of the GUI pages, respectively. Similarly, the unique built-in attributes of the groups _Tool Bar_, _Search_, and _Side Nav_ are also their specific component types and positions. The group _Tool Bar_ is located at the top of the GUI page. The _Search_ group is located in the upper half of the GUI pages with a search box. The _Side Nav_ group is located on the leftmost side of the GUI page and has a unique side navigation property in the DOM tree. The groups _Video/Music Player_ and _List View_ also have player and list view attributes unique to the DOM tree, respectively. The image size and the position of related information are used to classify the groups _Icon + Info_, _Pic Side Info_, _Pic + Info_, and _Big Pic_.
When matching templates, we count the number of built-in attributes that each group has in common with each template. After matching all templates, if the maximum number of matching attributes exceeds the threshold, the most comparable template is assigned to the group. If it is below the threshold, the group is deemed unrecognized. In the case of unrecognized GUI groups, we provide a general Grid Layout-based template for their conversion. The threshold is set at 2 after multiple iterations of error correction based on experimental feedback.
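The template assignment described above can be read as an attribute-overlap maximization with a fallback. The Python sketch below illustrates this; `templates` and the attribute sets are placeholders for the attribute summaries discussed above, and the threshold of 2 follows the text.

```python
GRID_LAYOUT = "Grid Layout"   # default template for unrecognized groups
MATCH_THRESHOLD = 2           # minimum number of shared built-in attributes


def match_template(group_attrs: set, templates: dict) -> str:
    """Assign the template sharing the most built-in attributes with the
    group; groups below the threshold fall back to the default Grid Layout.

    `templates` maps a template name to its set of characteristic
    attributes (component types, positions, DOM properties)."""
    best_name, best_score = GRID_LAYOUT, 0
    for name, attrs in templates.items():
        score = len(group_attrs & attrs)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= MATCH_THRESHOLD else GRID_LAYOUT
```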
Figure 7 demonstrates an example of classifying GUI groups. We notice when parsing the metadata of this GUI page that there are some GUI components at the top with the fields _searchText_, _searchBtn_, and _search_container_ in their attributes. These are similar to the attributes of the _Search_ template. As a result, this GUI group is categorized under the category _Search_. The top of the page has a GUI group with the class type _ActionBar-Tab_. Its features of position and class type meet the template _Top Tab Layout_, and hence it is classified to category _Top Tab Layout_. To proceed, we can
Figure 6. A running example of Algorithm 1 and multi-row component grouping.
now continue to identify GUI groups of categories _Big Pic_, _Pic+Info_, and _Bottom Tab Layout_ from the page.
Following the identification of GUI groups in the phone page, we convert these categorized phone GUI groups into matching TV GUI groups using the mapping rule in Table 1.
### Layout optimization
Due to various screen sizes and design principles, direct layout mapping may lead to issues like large white space on the right or too much content squeezed into a small area. The Android TV design guidelines [(7; 59)] state that TV layouts should be landscape and have more card-like components since this is more suited to TV interaction and enables the display of only the most essential image and text contents. So, if we just map each GUI group into the TV screen one by one, or simply change the GUI orientation from portrait to landscape without optimizing the layout, the final produced TV layout would be quite inflexible and violate the design standards of TV GUIs.
On phone GUIs, huge images frequently take up a whole row, as in Figures 2 and 7. However, the design guidelines for Android TV [(3; 7; 59)] state that images taking up an entire row would seem to be quite odd and significantly worsen the user experience. Additionally, if one image is too big, the remaining components will be too small for viewers watching the TV from far away to see. Currently, as _Desktop mode_ in Figure 1, Android's adaptive technology immediately projects onto the TV based on the proportion of its components. The slider picture occupied the whole TV screen and entirely obscured the location of other components after being adjusted by the adaptive layout, resulting in an odd overall effect.
To overcome these issues, we propose TV-based GUI layout OR-constraints [(34; 35)] to optimize the converted TV layouts. An OR-constraint is a disjunction of several soft constraints which, taken as a whole, acts as a hard constraint: the hard constraints must be met, while the individual soft constraints are not all required. We set soft constraints for each component in one row, and all components in one row must meet the hard constraints of the Android TV layout. Unlike template-based approaches, which necessitate pre-designing templates and manually specifying rules for when each alternative should be invoked, constraint-based layout optimization can be more flexible and adaptive for a variety of screens without fixed templates and rules. Different from the Android adaptive GUI layout, the constraint-based layout optimization approach, in conjunction with the TV group template and
Figure 7. An example of template match for the phone GUI.
guidelines, can arrange the layout of UI components on the whole screen. This prevents the scenario when some UI components, due to the original size of the phone, are too huge to be transferred to the TV.
We summarize the layout requirements from the TV design guidelines [2, 3, 59] and our empirical study, convert these requirements into constraints, and force the converted GUI groups to optimize their layouts to satisfy these constraints. We adopt three predefined heuristics as basic constraints for the TV GUI layout: (1) Arrange GUI widgets from left to right. (2) If there is not enough space in the current row for the following widget, it begins on the leftmost side of the next row. (3) Each component should be kept within a predefined size range. Every TV GUI design must adhere to these three hard constraints. At the same time, TV GUIs also have one soft constraint: (4) There should be no black gaps on either side of the TV screen, which means the UI components should fill the whole width of the screen. If there is not enough space in the converted TV GUI, components are removed in order of their size in the phone GUI, from small to large. The Z3 SMT solver [17] is used to solve the OR-constraints.
The following part shows how we formulate these constraints. Consider a row with \(n\) components \(W\). For each component \(w_{i}\in W\), let \(w_{i}^{left}\) denote its left coordinate, \(w_{i}^{width}\) its width, \(w_{i}^{top}\) its top coordinate, and \(r_{a}\) the currently available width in the row. Let \(w_{j}^{height}\), with \(w_{j}\in W\), denote the maximum component height in the current row. The maximum/minimum widths and heights of component \(w_{i}\) are represented as \(width_{i}^{max}\), \(width_{i}^{min}\), \(height_{i}^{max}\), and \(height_{i}^{min}\). For constraint (1), we convert it to the following formula:
\[\begin{split} C_{1}:=(w_{i}^{left}=w_{i-1}^{left}+w_{i-1}^{width})\wedge(w_{i}^{top}=w_{i-1}^{top})\\ \wedge(r_{a}\geq width_{i}^{min})\end{split} \tag{1}\]
For the constraint (2), it is formulated as
\[\begin{split} C_{2}:=(w_{i}^{left}=0)\wedge(w_{i}^{top}\geq w_{j}^{top}+w_{j}^{height})\\ \wedge(r_{a}<width_{i}^{min})\end{split} \tag{2}\]
For constraint (3), we assign preferred widths, and heights for common GUI component types. So,
\[\begin{split} C_{3}:=(width_{i}^{min}\leq w_{i}^{width}\leq width_{i}^{max})\wedge\\ (height_{i}^{min}\leq w_{i}^{height}\leq height_{i}^{max})\end{split} \tag{3}\]
For constraint (4), let \(r_{to}\) represent the maximum width of the TV screen in one row, and its logical expression is
\[C_{soft}^{1}:=\sum_{i=1}^{n}(w_{i}^{width})=r_{to} \tag{4}\]
Each row represents a single formula unit; if a GUI group covers multiple rows, the whole group is treated as one formula unit. For each formula unit, \(C_{1}\) and \(C_{2}\) form an OR-constraint, and \(C_{3}\) is a hard constraint. In addition to the three basic constraints, some weighted soft constraints \(C_{soft}\) follow.
The final formula is thus:
\[C_{unit}:=((C_{1}\ \vee\ C_{2})\ \wedge\ C_{3})\lor C_{soft}^{k} \tag{5}\]
where \(k\) represents the number of soft constraints. We can dynamically add new soft constraints to Equation 5 depending on the demands of the TV GUI.
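To make the formulation concrete, the sketch below lays out one formula unit with the Z3 Python bindings. It is a simplified illustration rather than our full solver setup: the widget dictionaries with `min_w`/`max_w` keys are hypothetical, and the wrap-to-next-row alternative (\(C_{2}\)) is omitted so that every widget stays in the given row; only \(C_{1}\), \(C_{3}\), and the fill-the-width soft constraint are encoded.

```python
from z3 import Int, Optimize, Sum, sat


def layout_row(widgets, row_width):
    """Lay out one formula unit (a single row) with hard size/order
    constraints and a soft 'fill the screen width' constraint."""
    opt = Optimize()
    lefts = [Int(f"left_{i}") for i in range(len(widgets))]
    widths = [Int(f"width_{i}") for i in range(len(widgets))]

    for i, w in enumerate(widgets):
        # C3 (hard): each component keeps a size within its preferred range.
        opt.add(widths[i] >= w["min_w"], widths[i] <= w["max_w"])
        if i == 0:
            opt.add(lefts[i] == 0)
        else:
            # C1: components are arranged from left to right.
            opt.add(lefts[i] == lefts[i - 1] + widths[i - 1])
    # Nothing may spill past the right edge of the TV screen.
    opt.add(lefts[-1] + widths[-1] <= row_width)
    # Soft constraint: the row should fill the full width (no black gaps).
    opt.add_soft(Sum(widths) == row_width, weight=1)

    if opt.check() == sat:
        m = opt.model()
        return [(m[l].as_long(), m[w].as_long()) for l, w in zip(lefts, widths)]
    return None


# Hypothetical usage: three widgets on a 1920-pixel-wide TV row.
print(layout_row([{"min_w": 400, "max_w": 700}] * 3, 1920))
```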
### DSL for GUI Code Synthesis
Current apps for phones and TVs may be developed in different development environments and programming languages. In order to extend our approach to as many platforms as possible, we design a novel Language- and platform-free GUI Domain Specific Language (DSL) for code synthesis to assist developers with secondary development and cross-platform compilation once we've optimized the layout.
To properly describe the GUI features, we provide a lightweight card-style DSL with a pre-written GUI block library based on the properties of TV GUIs. When we obtain the converted TV groups, we build a set of DSL statements by traversing the type, size, position, and relationship of all converted rows from top to bottom and left to right. Every component group is turned into one DSL statement. To limit the size of the DSL keyword vocabulary, the DSL concentrates only on the types and layout of GUI components in each row.
Current GUI groups are distinguished by their position, kind, size, and hierarchical relationships. To reflect these features, we let \(L_{i}\) represent the layout type of the \(i_{th}\) GUI row, for example, row and multirow. \(C_{j}\) represents the TV GUI group to which the \(j_{th}\) component belongs, for example, _Tool Bar_ and _List View_. \(P_{m}\) represents the additional textual parameters for \(C_{j}\), for example, the image title in GUI group _Pic+Info_. Each DSL statement follows the following syntax rule:
\[Statement_{i}:= L_{i}(C_{1}(P_{1},P_{2},...P_{m}),C_{2}(P_{1},P_{2},...P_{m}),...,C_ {j}(P_{1},P_{2},...,P_{m})) \tag{6}\]
The notation \(L\) signifies the layout phrase, dictating the arrangement of input components. Its possible values, \(Row\) and \(Col\), indicate horizontal and vertical placements of UI contents, respectively. The symbol \(C\) designates the category of the GUI group, encapsulating the nine identified TV GUI groups in Section 2.2.3. Meanwhile, \(P\) denotes the properties of this GUI group, which may include text information for the \(Icon+Info\) and \(Pic+Info\) groups, the selected state of \(TabLayout\), and the image source of \(Pic+Info\), among others.
Figure 8 exhibits an instance of our lightweight DSL. The second line of the DSL encapsulates the layout of the first row in the TV GUI. The term \(Row\) is our layout phrase (\(L\)), stipulating the horizontal positioning of all input components. The \(Tab\) in \(Tab(VARIETY)\) symbolizes the _Top Tab Layout_ category in the TV GUI groups (\(C\)), and \(VARIETY\) in \(Tab(VARIETY)\) signifies the text property (\(P\)) within the group. Both category phrases \(Slider\) and \(Img\) are subcategories of the TV GUI group \(Pic+Info\) (lines 3, 4, 5). The subcategory that \(Pic+Info\) invokes depends on the size of the input image.
The DSL statements are translated into real-world code according to the platform's requirements and then rendered as the TV interface. We pre-write a TV GUI style library and install a client app with this library on the TV. According to the DSL keywords, the style library calls the associated GUI code, converts it to real-world code, and invokes the system to render it.
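As an illustration of how statements following Equation 6 can be produced, the Python sketch below models a row statement as data and renders it to the DSL string form. The class names and the `MOVIE` tab parameter are ours, invented for the example; only the `Row(Tab(VARIETY))` fragment echoes Figure 8.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Group:
    """One converted TV GUI group: its category phrase C and properties P."""
    category: str                        # e.g. "Tab", "Slider", "Img", "Search"
    params: List[str] = field(default_factory=list)

    def render(self) -> str:
        return f"{self.category}({','.join(self.params)})"


@dataclass
class RowStatement:
    """One DSL statement: layout phrase L applied to the groups of one row."""
    layout: str                          # "Row" or "Col"
    groups: List[Group] = field(default_factory=list)

    def render(self) -> str:
        return f"{self.layout}({','.join(g.render() for g in self.groups)})"


# Hypothetical statement mirroring the second line of Figure 8.
stmt = RowStatement("Row", [Group("Tab", ["VARIETY"]), Group("Tab", ["MOVIE"])])
print(stmt.render())    # Row(Tab(VARIETY),Tab(MOVIE))
```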
### Implementation
Uiautomator2 (Tuan et al., 2018), Fastbot (Tuan et al., 2018), and Droidbot (Droidbot, 2019) are used to capture GUI metadata. All algorithms in the pipeline are implemented in Python 3. Our pipeline generates the DSL for the TV layout and then sends the intermediates to the TV. We pre-install a client app on the TV to receive the generated DSL and translate it into real-world code to render the TV screen. The TV client app is built on the Leanback (Leanback, 2018) library, a TV user interface library provided by Google. Normally, developers must program against the Leanback interfaces by hand to produce a GUI; with our DSL, the Leanback input is generated automatically without the developer spending time creating the GUI again. According to the optimized size of each GUI component, each GUI group type is split into three subclasses during implementation: large, medium, and small. Based on Leanback, we developed templates for
all the subclasses of the 9 TV GUI groups we summarized. According to the GUI DSL, the client app calls the corresponding template, generates the corresponding TV GUI code, and renders it.
## 4. Accuracy Evaluation
We carry out experiments to evaluate the accuracy of the GUI grouping and the converted TV GUI effects in our pipeline.
### Accuracy of GUI Grouping
The accuracy of GUI component grouping serves as the foundation for succeeding techniques. Hence, we first evaluate the accuracy of our proposed grouping algorithm.
#### 4.1.1. Procedure
To confirm the generalizability of our grouping algorithm, we randomly select another 10 apps that are not in our dataset. To ensure the quality of selected apps, we only select apps with at least one million Google Play installations. A total of 100 GUI pages are selected as the test dataset, with 10 GUI pages from each app being randomly selected. Next, we perform the grouping algorithm on the selected GUI pages and collect grouping results. The grouping results are then manually checked to ensure their reasonableness.
#### 4.1.2. Metric
A reasonable GUI group must contain only UI elements with strong logical connections. When manually verifying the rationality of grouping, a group fails if it contains UI components that are logically unrelated to the other UI components inside the group. When we refer to 'logically related UI components', we mean an intuitive association of GUI elements based on interactivity and role. Interactivity pertains to the direct communicative relationship between components, such as an input field and a 'submit' button. The 'role' encapsulates the components' collective function within the broader UI context. Hence, 'logically related UI groups' signify clusters of UI elements intuitively assembled based on their interaction synergy and shared role in the user interface.
We use the exact match rate to illustrate the percentage of reasonable GUI groups. The exact match is a binary metric, with a value of 1 if correct and 0 otherwise. If there are \(m\) GUI groups on this GUI page, and \(n\) of them are reasonable, then the grouping accuracy of this GUI page is \(n/m\). Figure 10 demonstrates an example of correct and wrong GUI grouping on one GUI page. Groups 1, 3, and 4 in Figure 10 are considered to be correctly grouped since all UI components
Figure 8. An Example of Our TV GUI DSL
within the group are logically related. But group 2 is considered to be grouped incorrectly. This is because, in addition to the search-related UI components in group 2 (the UI components boxed by the blue dashed line in Figure 10), there are two logically unrelated icons, which should not be classified into this search-related UI group. The grouping accuracy of Figure 10 is therefore 0.75. Considering the subjective nature of determining 'logically related UI components', we employ three individuals with a minimum of one year's experience in GUI development to independently conduct the exact match evaluation. Thereafter, the Fleiss Kappa value [(21)] is used to measure the level of agreement among these three evaluators. Fleiss Kappa values are interpreted as follows: [0.01, 0.20] signifies slight agreement, (0.20, 0.40] indicates fair agreement, (0.40, 0.60] represents moderate agreement, (0.60, 0.80] suggests substantial agreement, and (0.80, 1] signifies near perfect agreement. Instances where the Fleiss Kappa value falls below 0.8 prompt a discussion, analysis, and re-evaluation by the three evaluators until a Fleiss Kappa value exceeding 0.8 is achieved. To evaluate our method's ability to minimize isolated UI components, we follow related works [(13; 67)] and select the proportion of reduced UI components as the second metric. Suppose there are \(J\) UI components on the original GUI page and \(K\) UI components and UI groups are left after grouping; then the ratio of reduced components is \((J-K)/J\).
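The sketch below illustrates how these quantities can be computed in Python. The function names are ours, and the Fleiss Kappa helper relies on statsmodels as a stand-in for whatever tooling was actually used; the example ratings are hypothetical.

```python
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa


def exact_match_rate(num_groups: int, num_reasonable: int) -> float:
    """Grouping accuracy of one GUI page: reasonable groups / all groups."""
    return num_reasonable / num_groups


def reduced_ratio(num_components: int, num_left_after_grouping: int) -> float:
    """Proportion of isolated UI components removed by grouping: (J - K) / J."""
    return (num_components - num_left_after_grouping) / num_components


def evaluator_agreement(ratings) -> float:
    """Fleiss Kappa over a groups-by-raters matrix of 0/1 ratings
    (1 = the evaluator judged the group reasonable)."""
    table, _ = aggregate_raters(ratings)   # per-group counts for each category
    return fleiss_kappa(table)


# Hypothetical ratings of four groups by three evaluators.
print(exact_match_rate(4, 3),
      evaluator_agreement([[1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1]]))
```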
#### 4.1.3. Baseline
Xiaoyi et al. [(67)] propose a similar approach to group UI components for efficient navigation. They develop multiple heuristics that group UI components based on their UI types, sizes, and spatial relationships on the rendered phone screen. Considering similar application scenarios, we choose their method as the baseline for our experiments.
#### 4.1.4. Results
Table 2 summarizes the information of the selected 10 apps and the accuracy results of the GUI grouping. The column _#Installation_ shows the number of app installations. The Fleiss Kappa values of the first-round grouping results of the three evaluators across the 10 apps are all between 0.91 and 1. The three evaluators discuss the differing parts and finally agree on a unified final result. The subcolumns _Ours_ and _BL_ show our and the baseline's experimental results in the exact match rate and the reduced UI components rate, respectively. Our approach achieves 0.81 in average exact match, which is 10.96% higher than the baseline _BL_ [(67)] (0.73). Our approach reduces isolated GUI components in GUI pages by an average of 58%, which is 20.83% higher than the baseline (48%). Both results demonstrate the effectiveness of our GUI grouping algorithm.
Figure 10 depicts the grouping results of our method and the baseline's method on the same page, with the first subplot representing our method and the second representing the baseline's. Our
| ID | App | Category | #Installation (Million) | ExactMatch (Ours) | ExactMatch (BL [(67)]) | ReducedUI (Ours) | ReducedUI (BL [(67)]) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | iQIYI Video | Entertainment | 5 | 0.86 | 0.71 | 0.62 | 0.49 |
| 2 | Coursera | Educational | 10 | 0.72 | 0.61 | 0.55 | 0.37 |
| 3 | Evernote | Productivity | 100 | 0.80 | 0.77 | 0.61 | 0.43 |
| 4 | Kodi | Tool | 50 | 0.85 | 0.81 | 0.62 | 0.52 |
| 5 | Pinterest | Lifestyle | 500 | 0.74 | 0.62 | 0.56 | 0.40 |
| 6 | Wonder | Art & Design | 5 | 0.79 | 0.72 | 0.53 | 0.47 |
| 7 | Fiverr | Business | 10 | 0.83 | 0.69 | 0.55 | 0.51 |
| 8 | ABC listen | Music & Audio | 1 | 0.91 | 0.81 | 0.64 | 0.56 |
| 9 | Fitbit | Health & Fitness | 50 | 0.84 | 0.78 | 0.56 | 0.59 |
| 10 | Kik | Communication | 100 | 0.76 | 0.73 | 0.52 | 0.45 |
| | **Average** | | | **0.81** | 0.73 | **0.58** | 0.48 |

Table 2. Evaluation Results of Accuracy of GUI Grouping
approach splits the GUI page into 5 GUI groups and other GUI components. Correspondingly, the baseline separates the page into 8 distinct groupings and components. Clearly, our grouping results are more precise and overlook fewer individual components. For example, in our group 2 and the corresponding groups 2 and 3 of the baseline, our method considers the possible related information surrounding the text, successfully groups the related images on the right side together, and merges the adjacent groups with the same structure, whereas the grouping result of the baseline method omits the related image data. In more complex scenarios, such as group 3 and group 4 in subfigure (a), corresponding to groups 4, 5, 6, and 7 in subfigure (b), the results of groups 4 and 5 in subfigure (b) omit the majority of the information since baseline's approach cannot handle the case of numerous lines of text. Our approach is built with more general atomic, row, and multi-row grouping methods, so that the group results contain as much important data as feasible, and reduces the number of groups on a GUI page to expedite the subsequent conversion operation.
### Accuracy of GUI Conversion
Once the TV GUI DSLs have been generated, we convert them into source code and then run them to obtain the rendered run-time GUI pages. On the one hand, the effect of the same run-time GUI can be accomplished through a variety of code. On the other hand, there is a gap between the GUI source code and the rendered GUI effect, and the GUI source code does not reflect the rendered GUI effect in its entirety [56]. To demonstrate the efficacy of the approach, we therefore choose to evaluate the rendered effect of apps running through the translated DSLs. We perform both an automatic evaluation and a user study to evaluate the performance of the whole automated GUI conversion approach. All 589 pairs in Section 2.2 are used in the automated evaluation to objectively evaluate the overall effect of the method. The quality of visual transformation is strongly dependent on human perception, so we also include a user study to assess our approach's performance. Due to the great effort required for the user study, we randomly sample 42 (7%) GUI pairs among the 589 pairs from 8 apps in 6 categories.
#### 4.2.1. Evaluation Metrics
According to recent studies [12; 49], when using phones and TVs, users largely rely on the layout of images and texts to understand GUIs. Uniform GUIs facilitate user adaptability from mobile to TV app. Additionally, a consistent GUI promotes a reduction in developmental cost and time by enabling the reuse of code and design elements [42; 46]. Therefore, we substantiate the accuracy of GUI conversion by quantifying the similarity between TV GUIs
that are automatically generated and those manually designed, which serve as the ground truth in this study. The mIoU [33], which can effectively evaluate the layout gap of each type of component in one GUI page, has been widely used in GUI evaluation [12, 37, 38, 50, 64, 68, 69]. Based on its characteristics and suitability for our specific study, we select mIoU as the metric to evaluate the layout similarity of generated TV GUIs and ground truths. The TV versions in phone-TV GUI pairs have been redesigned and optimized with a more logical layout of text and images in the GUI to accommodate the features of the TV, so we select these redesigned TV GUIs as the ground truth of the corresponding phone GUIs in the automated evaluation.
The mIoU (mean Intersection over Union), also known as the Jaccard Index, is a prominent image segmentation assessment measure that computes the IoU for each class before averaging over classes. The IoU of one class is calculated by dividing the overlap area between the predicted class positions and the ground truth by the area of their union. This is computed by:
\[mIoU=\frac{1}{k}\sum_{i=1}^{k}\frac{TP(i)}{TP(i)+FP(i)+FN(i)} \tag{7}\]
where \(k\) means \(k\) classes in both images, \(TP(i)\), \(FP(i)\) and \(FN(i)\) represent the distribution of true positive, false positive, and false negative of \(i_{th}\) class between two compared images.
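A minimal Python rendering of Equation 7 over two wireframes is sketched below. The wireframes are assumed to be per-pixel class-id arrays (e.g., 0 = background, 1 = image block, 2 = text block); skipping classes absent from both images is a common convention that we assume here rather than take from the paper.

```python
import numpy as np


def mean_iou(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> float:
    """mIoU between a generated wireframe and its ground truth."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent from both wireframes: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious)) if ious else 0.0
```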
Our empirical study in Section 2.2 shows that the current GUIs of paired phones and TVs often do not have a strict correspondence. Besides, whether a GUI design makes sense depends significantly on the user's subjective perception. A GUI with a low mIoU in the automatic evaluation may be deemed acceptable by some users; for example, some GUIs may have a more reasonable design than the ground truth but receive low scores because their layouts are not in line with the ground truth. To eliminate this bias, we adopt two metrics [68], structure rationality and overall satisfaction, for participants in the user study to rate the quality of the generated TV GUIs while considering the characteristics of TV apps. These metrics were inspired by web GUI evaluation [37, 38, 69] and image evaluation [31, 54]. First, structure rationality is used to evaluate the rationality of the component layout, which refers to the placement of components in the GUI as well as the reasoning behind their combination and ordering. Second, overall satisfaction evaluates how pleasing the overall design is. For each metric, the participants give a score ranging from 1 to 5, with 1 representing the lowest satisfaction and 5 the highest.
#### 4.2.2. Baselines
Desktop mode [19, 32] is widely used in various smartphones, such as Samsung, OnePlus, Huawei, and Oppo devices. It allows users to connect an external display to an Android smartphone or tablet to make content easier to view, just like on a TV or computer. The desktop mode is optimized for larger displays with resizable windows and a different layout for GUIs; an HDMI adapter or WiFi is required to use it. In this experiment, considering the current usage range and maturity of desktop mode, we choose the Huawei EMUI desktop mode as the first baseline, as Huawei phones were among the earliest to support desktop mode.
Currently, Google provides big-screen responsive layout components for the Android system [1, 3, 6, 7, 28]. When an Android app runs on screens of different sizes, adaptive and responsive Android GUI components adjust their positions and sizes to fit the screen size of each device. In our empirical study, we found that some apps using these technologies automatically adapt to different screens. Even though these technologies often offer a worse user experience than a fully hand-optimized GUI, it is nevertheless worthwhile to compare them to semi-automatic methods. Thus, our second baseline is the result of directly mapping the adaptive Android GUI from phone to TV. Unlike desktop mode, no external equipment is required for direct mapping.
In the user study, we also compare our method to the redesigned TV GUI, which serves as the ground truth for the automated evaluation. The comparison with the ground truth will provide a clear image of the efficacy gap between our method and the redesigned GUI, which requires extra work.
#### 4.2.3. Procedures
In the automatic evaluation, we select the redesigned TV GUIs of the 589 pairs as our ground truths. We use our proposed approach to convert every phone GUI to the DSL of a TV GUI. Then we use the client app on TV to generate code and render GUIs on the Android TV emulators [5]. The emulator is configured with 4 CPU processors, 4 GiB of RAM, and 1 GiB of SD card. The API level version of the system image is 26. When GUIs are rendered, we obtain the metadata of the GUIs to generate their corresponding wireframes. The content of pictures and text in the GUI is not taken into consideration since we are comparing the structure of the rendered GUI, not its pixels; therefore, we convert images to red blocks and text to green blocks in the wireframes. In the same way, we generate wireframes for the baselines and for the TV GUIs generated by our approach. Finally, we evaluate the mIoU between the wireframes of our generated TV GUIs and the ground truths.
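As a rough illustration of the wireframing step, the snippet below renders each image component as a red block and each text component as a green block on a blank canvas. The component-metadata format (a list of dictionaries holding a type and pixel bounds) is a simplifying assumption for the example, not the actual structure of the emulator metadata.

```python
from PIL import Image, ImageDraw

# Illustrative colours for the two component families compared in the wireframes.
COLORS = {"image": (255, 0, 0), "text": (0, 255, 0)}  # red blocks / green blocks

def render_wireframe(components, width, height):
    """Draw a wireframe from simplified GUI metadata.

    `components` is assumed to be a list of dicts such as
    {"type": "image", "bounds": (left, top, right, bottom)}.
    """
    canvas = Image.new("RGB", (width, height), (255, 255, 255))
    draw = ImageDraw.Draw(canvas)
    for comp in components:
        color = COLORS.get(comp["type"])
        if color is not None:
            draw.rectangle(comp["bounds"], fill=color)
    return canvas
```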
In the user study, we recruit 20 participants who are professional designers and developers with more than three years of Android development experience. We first give them a detailed explanation of the tasks and of the two GUI evaluation metrics, structure rationality and overall satisfaction. We then provide participants with all generated GUI designs from the different methods and ask them to score each GUI design on the two metrics of the user study. Note that they do not know which TV GUI is from which method, and all of them evaluate the TV GUI designs individually without any discussion. For each test case, participants are also asked to choose the one GUI they think works best.
#### 4.2.4. Results
Table 3 reports the average results on the testing set for our approach and the three baselines. The ground-truth GUI, which requires extra engineering effort to redesign one-to-one, is undoubtedly more favorable than the other three automatically generated approaches. In 19 cases out of 42, _Ground Truth_ is chosen as the best design, followed by our method (15), _Desktop mode_ (7), and _Direct mapping_ (1). It reaches the highest structure rationality (4.27) and overall satisfaction (4.20). However, this approach requires customization for each GUI page, adding significantly to the engineering expenses and making it non-generalizable. Compared to the ground truth, our approach performs marginally worse (0.17 lower on _Structure Rationality_ and 1.2 lower on _Overall Satisfaction_). Our approach produces the best results across 15 cases, which is quite similar to the ground truth (19), and it does not require the additional engineering costs associated with tailoring each GUI page. Our approach outperforms the other two baselines in mIoU, overall satisfaction, and structure rationality by significant margins. Compared with the baseline _Desktop mode_, our approach achieves 21.05%, 10.42%, and 21.31% improvements in mIoU, overall satisfaction, and structure rationality, respectively. According to the experimental findings, our method outperforms the other two automated methods (_Direct mapping_ and _Desktop mode_). Our approach can also produce comparable outcomes when compared to the ground truth without incurring additional expenses.
To further analyze our method and the other two automatic methods (_Direct mapping_ and _Desktop mode_), we plot the boxplots of the scores of these three methods over _Structure Rationality_ and _Overall Satisfaction_ in the user study in Figure 11. In both box plots, the gap between our first and third quartiles is smaller than that of _Desktop mode_, indicating that our ratings are more concentrated and stable. The maximums of _Desktop mode_ in both box plots are higher than ours: a few cases have been individually optimized for _Desktop mode_ display, so their scores are particularly high, which raises the overall maximum. However, if a page is not individually optimized, the effect of _Desktop mode_ is significantly lower than that of our method. The scores of
_Direct mapping_ are all significantly lower than the other two, indicating that users generally do not accept this conversion.
To understand the significance of the differences between the user study results of the baselines and our approach, we carry out the Mann-Whitney U test [20] on the overall satisfaction and structure rationality with _Direct mapping_ and _Desktop mode_, respectively. Since there are two baselines, we carry out the test between our approach and each baseline separately. The test results in Table 3 show that our tool creates significantly better GUI conversions from phone to TV, in both overall satisfaction and structure rationality, than _Desktop mode_ (both \(p-value<0.01\)) and _Direct mapping_ (both \(p-value<0.01\)).
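For reference, the significance test can be run with SciPy as in the sketch below; the two score lists are placeholders standing in for the per-case ratings collected in the user study, not the actual data.

```python
from scipy.stats import mannwhitneyu

# Placeholder ratings (one score per rated test case); the real study
# uses the participants' 1-5 scores for each method.
ours = [4, 5, 4, 4, 3, 5, 4, 4]
desktop_mode = [3, 2, 3, 4, 2, 3, 3, 2]

stat, p_value = mannwhitneyu(ours, desktop_mode, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```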
To better illustrate our results, Figure 12 lists examples of TV GUIs generated by the different approaches. For the case above in Figure 12, the average structure rationality for _Direct mapping_, our tool, and _Desktop mode_ is 1.8, 4.4, and 3.2, and the average overall satisfaction is 1.7, 4.0, and 3.6. For the case below in Figure 12, the average structure rationality for _Direct mapping_, our tool, and _Desktop mode_ is 1.4, 4.6, and 1.5, and the average overall satisfaction is 2.0, 4.3, and 1.9. Generally, for _Desktop mode_, only a few specific apps and pages are manually optimized, so when testing with various types of pages, the uncustomized pages are simply the original phone pages without optimization, resulting in low scores. On large TV screens, most of these cases look crude, as they do not consider the design criteria of the platform's widgets or users' usage habits. _Direct mapping_ places all components where they are on the phone, resulting in a very poor user experience for TV users. Besides, it can only convert part of the GUI components, which limits its universality. Thus, it is generally not accepted by users.
Our approach may also sometimes generate inappropriate TV mappings, especially for non-standard GUI inputs. For the phone GUI in Figure 13, promotional images of films in the
Figure 11: Score distribution of the structure rationality and overall satisfaction of three approaches
second row do not show completely due to the size of the mobile screen. In this case, UI Automator incorrectly reports the names and details of these films as part of the _Bottom Tab Layout_. Thus, our approach fails to get the correct names of the films due to this limitation of UI Automator, as in the example shown in red boxes. Because our approach receives a confusing hierarchy from UI Automator, it does not correctly identify the _Bottom Tab Layout_ and therefore fails to convert it into a TV _Channel_.
## 5. Usefulness Evaluation
According to the findings of our empirical study in Section 2.1, developers more often prefer to redevelop the GUI pages of new TV apps. Even though experienced developers can produce the matching TV GUI pages quickly, redevelopment still reduces development efficiency and wastes valuable resources already invested in the phone GUI. We carry out a user study to evaluate the usefulness of our generated TV GUIs for bootstrapping the corresponding TV GUI design and implementation by real-world developers.
### Procedures
We recruit 6 participants who all work in software companies and have at least one year of Android GUI development experience. Participants are required to design and implement the corresponding TV GUI conversion by referring to the given phone GUI. We provide 6 phone GUIs
Figure 12. Conversion Examples by Direct Mapping, our tool, and Desktop Mode Phone GUI
Figure 13. An example of metadata data error
which cover the 12 main phone GUI groups. The officially designed TV GUIs corresponding to the 6 phone GUIs are also collected for the subsequent satisfaction evaluation. Participants are required to design and implement the layout skeleton and set component properties, including the view type, size, order, and padding. Note that participants are allowed to replace some component types with placeholders without affecting the rendering and overall design.
The study consists of two groups of three participants each: the experimental group is asked to proceed on the basis of our tool, while the control group starts the TV app design from scratch. The experimental group is allowed to use our tool to automatically generate a draft TV GUI and to update the generated source code to re-design the TV GUI directly. Each member of the experimental group learns in advance how to use our tool to generate source code and render it on TV. We survey participants' development backgrounds beforehand to ensure that the two groups have comparable expertise in total. All participants are only allowed to use Android Studio and Java to avoid bias and have up to 20 minutes for each implementation. Three academic Ph.D. students who are not involved in the study are asked to evaluate the 6 participants' results and rate their satisfaction on a five-point scale, with 5 being the highest and 1 the lowest. The satisfaction metric [37, 38, 69] evaluates the overall pleasing qualities of a GUI page. The evaluators must determine whether the layout, content, and UI type of all UI components on the GUI page are appropriate. When rating, the raters do not know which TV GUI is developed by which group, and the manually designed TV GUIs from the companies are used as references. Similar to the accuracy evaluation, we use the Fleiss Kappa value [21] to measure the agreement among the three raters. Fleiss Kappa values in the range of [0.01, 0.20], (0.20, 0.40], (0.40, 0.60], (0.60, 0.80], and (0.8, 1] correspond to slight, fair, moderate, substantial, and almost perfect agreement, respectively. If the Fleiss Kappa value is less than 0.8, the divergent cases are discussed, analyzed, and re-scored by the three raters until the Fleiss Kappa value is greater than 0.8. We provide all 6 participants with the same development environment and resources and record the time it takes them to complete each TV GUI.
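As an aside, the agreement check can be reproduced with statsmodels as sketched below; the rating matrix is a placeholder with one row per rated GUI page and one column per rater, not the study's actual scores.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Placeholder ratings: rows are rated GUI pages, columns are the three raters,
# entries are 1-5 satisfaction scores (illustrative values only).
ratings = np.array([
    [4, 4, 5],
    [3, 3, 3],
    [2, 3, 2],
    [5, 5, 4],
])

# aggregate_raters turns per-rater scores into per-category counts,
# which is the table format fleiss_kappa expects.
counts, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(counts)
print(f"Fleiss' kappa = {kappa:.2f}")  # > 0.8 would indicate almost perfect agreement
```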
### Results
The box plots in Figure 14 and the average scores in Table 4 show that the experimental group implements the TV GUI conversions faster (an average of 11.56 minutes) and with a higher satisfaction score (an average of 3.35) than the control group (an average of 16.28 minutes and a 1.66 satisfaction score). Members of the experimental group use the DSL to design the GUI, while members of the control group write actual code to design the GUI. Therefore, their working implementations are different, and the outcomes of the control group members are closer to the actual app implementation. Theoretically, different working implementations could lead to bias in the results, but all invited
\begin{table}
\begin{tabular}{c|c|c} \hline Indicator & Control & Experiment \\ Cost Time (min) & 16.28 & 11.56\({}^{**}\) \\ Average Score & 1.66 & 3.35\({}^{**}\) \\ \hline \end{tabular}
\end{table}
Table 4. Average cost time and satisfaction scores of the control and experimental groups. \({}^{**}\) denotes statistical significance (\(p-value<0.01\)).
Figure 14. Distribution of cost time and satisfaction scores of control and experimental groups
professionals have at least one year of Android development experience and master the Android GUI development procedure. Some developers use placeholders to replace some UI components, allowing them to focus on developing the GUI rather than completing the code logic and syntax. While the working implementations of the two groups may not be equal, the effort of redesigning and constructing an adequate TV GUI is the decisive factor in determining the outcomes of the two groups' experiments. Our tool successfully and effectively assists developers in developing more suitable TV GUIs faster, taking into account the apparent time difference between 11.56 minutes and 16.28 minutes on average. In the experiments, two participants in the control group fail to finish at least one GUI conversion within 20 minutes, whereas everyone in the experimental group completes the conversions within 20 minutes and has time to personalize the GUIs. This results in significantly higher satisfaction scores than the control group (an average score increase of 1.69).
Figure 15 shows an example of two TV GUI conversions from the experimental and control groups. Note that we only evaluate the structure and overall soundness of the GUI, so we do not evaluate the content in the participants' GUIs and allow placeholders. When designing the GUI, members of the experimental group can refer to the output layout of our tool and customize the TV GUI on this basis. Our tool is a useful source of inspiration for the experimental group, saving them time in designing and developing the TV GUI and giving them a solid starting point for their work. We also carry out the Mann-Whitney U test on the cost time and satisfaction scores. The test results suggest that our tool can significantly help the experimental group convert phone GUIs faster (\(p-value<0.01\)) and create better TV GUIs (\(p-value<0.01\)).
A participant in the experimental group commented on our tool that _"The automatic conversion results provided by the tool can give me good development design guidance and hints, greatly improving my development efficiency"_. Overall, the user study results provide preliminary evidence of the usefulness of our tool. The majority of participants think that the GUI DSL we built is simple to comprehend and use. They said that our GUI DSL is a useful solution to address the present issue of GUI code reuse caused by the version differences between mobile phone systems and television systems. Our GUI DSL also makes it simple for designers without any programming background to collaborate on projects. Participants also pointed out some flaws in our existing approach. It is not appropriate for converting real-time demanding apps like games since the generated GUI needs to be re-rendered by the client app on the TV. Because the image on the mobile GUI isn't designed with a large-screen TV, sometimes the image will seem stretched on the TV GUI after conversion, which can negatively impact the user experience. To address these issues, we will design more efficient GUI rendering technology solutions and design the display of image resources after screen size conversion in future work.
## 6. Threats to Validity
Potential threats to validity in our user study for performance evaluation primarily stem from subjectivity or bias inherent in participants' abilities and backgrounds. Similarly, the experiment on GUI grouping accuracy could be influenced by individual interpretations of 'logically related UI components.' We have taken measures to mitigate these threats: participants are all GUI development professionals with diverse skills and experiences. We've provided comprehensive examples of 'logically related' components pre-experiment to ensure a shared understanding. We refrain from revealing which results are ours or the baselines' during the user study. Moreover, results are objectively assessed by comparing the mIoU to the redesigned native TV GUI, with tool performance evaluated via a blend of user study and automated evaluation results. In the usefulness evaluation, we primarily relied on 'satisfaction score' and implementation time as the evaluative metrics for our approach and the baselines. This may not fully encapsulate all elements pertinent to GUI usability and aesthetics. However, while the 'satisfaction score' is central to our usefulness evaluation, we
employ other metrics in separate experiments designed to measure different aspects of our tool. Collectively, these various metrics corroborate the effectiveness of our proposed tool, contributing to a comprehensive assessment of its performance. To broaden the scope of the tool's efficacy evaluation, we intend to introduce additional valid metrics in future work.
The existence of a small number of online service apps that leverage web technologies for cross-platform adaption poses an additional threat to the validity of user studies. Some cross-platform frameworks, such as Flutter [22] and React JS [55], claim support for Android TV adaptation. However, we discovered that there are no comparable apps for Android TV. Therefore, we did not collect any valid online service-based phone-TV app pairs. There may be new GUI features in online service apps. Thus we will continue to collect online service apps in our future research and study their features to improve our algorithm.
The major threat to the internal validity of the conversion pipeline is that some GUI components are difficult to classify and hence challenging to convert appropriately according to their GUI category. In the GUI grouping phase, there may be GUI pages that do not follow the Android GUI design principles, resulting in mistakes in the GUI groups, which may cause errors in subsequent group classification and mapping. Therefore, pipeline errors may be introduced throughout these steps of our semi-automated TV GUI generation pipeline due to the possibility of an error during GUI recognition and grouping. Errors in the GUI recognition and GUI grouping phases may ultimately result in unsatisfactory overall performance. Current data collection tools (UI Automator) cannot accurately obtain the metadata of some third-party GUI and \(WebView\) views, which makes them even more difficult to classify. To mitigate this threat, we use Grid Layout as the default template to map these unrecognized GUI groups, as can be seen in Table 1. Grid Layout templates ensure that the information from the mobile phone is not lost after mapping and provide an acceptable user experience.
The major threat to the external validity of the conversion pipeline is the fragmentation of Android devices. The screen, OS version, and UI styles of Android devices are markedly different and difficult to unify. To mitigate this issue, we design language- and platform-free GUI DSL for code synthesis. The DSL translator pre-installed on the relevant TV converts the DSL into the GUI source code based on the TV's properties. Improper code translation may potentially result in a
Figure 15: An example of experimental and control group
poor final GUI conversion. Consequently, we define the TV GUI libraries in advance and employ quality-checked GUI libraries to alleviate this potential threat.
The potential generalization vulnerability of the UI grouping and conversion procedures based on heuristic rules is also a threat to validity. GUI developers may not follow the prevalent UI layout guidelines when designing GUI pages, which may cause our grouping heuristics to become ineffective. To mitigate the threat and ensure that our approaches are applicable to all existing phone-TV GUI pair types, we evaluate our heuristics in external real-world apps in Section 4.1. Google advises that the Android TV app's UI be designed with a card-like layout, thus we implement a default grid layout-based template for dealing with the unexpected rare UI types. Additionally, we provide OR constraints for Android TV. These constraints are more flexible than rules and automatically calculate TV GUI layouts that correspond to the present GUI. The use of OR constraints broadens the generalizability of our method. Our approach also proposes to convert GUI to a cross-platform GUI DSL that allows programmers to further customize the source code for various platforms and versions. In exceptional circumstances, the developer can rewrite a portion of the source code to conform to the current GUI layout specifications based on the generated DSL. To improve the efficiency of UI grouping, we plan to collect the grouped UI components on mobile devices and TVs, train a deep learning model, and allow the deep learning model to automatically match GUI components of the same domain.
## 7. Related Work
### GUI implementation
Implementing a GUI focuses on making the GUI work with proper layouts and widgets of a GUI framework. Nguyen and Csallner (Nguyen and Csallner, 2007) reverse engineer the UI screenshots by rule-based image processing method and generate GUI code. They support only a small set of most commonly used GUI components. More powerful deep learning-based methods (Nguyen and Csallner, 2007; Nguyen and Csallner, 2007; Nguyen and Csallner, 2007) have been recently proposed to leverage the big data of automatically collected UI screenshots and corresponding code. Some recent works explore issues between UI designs and their implementations. Moran et al. (Moran et al., 2017) check if the implemented GUI violates the original UI design by comparing the image's similarity with computer vision techniques. The follow-up work (Nguyen and Csallner, 2007) further detects and summarizes GUI changes in evolving mobile apps. Similarly, the semantic vector for the UI design from our work can help detect the inconsistency among UI designs within the same app.
Different from those works on GUI implementation which are highly related to conventional GUI development, we are targeting at specifically GUI projecting from small-screen mobile phones to the corresponding one on large-screen TVs.
### GUI component grouping
There are some similar components of clustering and page segmentation algorithms in web page analysis. Yandrapally et al. (Yandrapally et al., 2016) propose a near-duplication detection algorithm to study near-duplication components in web pages. They characterize and merge functional near-duplicates by summarizing categories in existing web pages. Crescenzi et al. (Crescenzi et al., 2017) propose a structural abstraction clustering algorithm for web pages that groups web pages based on this abstraction. To assess end-to-end web tests, Yandrapally et al. (Yandrapally et al., 2016) utilize VIPS (Yandrapally et al., 2016) for web page segmentation and XPaths of web elements inside these fragments for establishing their equivalence. VIPS (Yandrapally et al., 2016) is a popular page segmentation, it proposes an automatic top-down, tag-tree independent approach to detect web content structure. Mahajan et al. (Mahajan et al., 2017; Mahajan et al., 2018) design a clustering technique for web elements that are based on a combination of visual aspects and DOM-based XPath similarity.
Although our study focuses on the grouping and segmentation of mobile GUIs, their work on the clustering of web components enlightens us, and we incorporate their insights into mobile GUI grouping.
### GUI migration across platforms
Due to the difficulty of GUI migration across different platforms, very few related works are carried out for solving this problem. Pihlajam (Pihlajam, 2018) observed multiple games across web and mobile platforms for summarizing several UI patterns for user interface adaptation. Wong et al. (Wong et al., 2019) developed a scalable GUI system that dynamically transforms platform-specific GUI widgets migrated within an application between any of a plurality of heterogeneous device platforms. Verhaeghe et al. (Verhaeghe et al., 2019) developed a set of rules for GUI Migration using MDE from GWT to Angular 6.
Although these studies explore the mapping (or partial mapping) between different platforms and programming languages, none of these works are investigating app GUI adaptation between phones and TV. Instead of the white-box migration or layout migration, our study focuses more on black-box GUI adaptation without the source code of the original GUI on the phone. That black-box characteristics and significant differences in screen sizes motivate us to develop a brand new approach to bridge the gap between phone and TV GUI. Besides, time and resource consumption must be considered because of phones' current hardware limitations.
### Practical tools in industry
Finally, it is worth mentioning some related non-academic projects. There are many third-party libraries or frameworks which support cross-platform adaptation such as React.js (Feltier et al., 2019), Flutter (Flutter, 2019), and also default Android development (Bordes et al., 2019). Although these frameworks can cover multiple platforms such as smartphones, desktops, and tablets, TV is rarely covered due to its ultra-large screen. Developers have to commit much additional effort to design new layouts that can be easily understood from 10 feet away and provide navigation that works with just a directional pad and a select button to make their app successful on TV devices. That is one reason why few apps support GUI adaptation, and the small number of smart TV users further discourages app developers.
Samsung supports screen mirroring from a Samsung device to the TV, including photos, videos, presentations, and games (Samsung et al., 2019). There are also other similar connections between smartphones and TVs, such as Chromecast (Chromecast, 2018), the Xiaomi TV stick (Xiaomi, 2018), or a wired connection via HDMI. Most of them work well only for video projection or some customized apps, and they require additional support from the TV side. Directly showing the phone GUI on a TV brings many rendering issues such as small components, large black margins, and unreasonable interaction. To overcome those issues, we propose an automated approach to project the phone GUI to the TV at run time.
## 8. Conclusion
At the moment, adaptive technologies between phone and TV GUIs are unable to fulfill the demands of app developers, and the cost of developing and maintaining new applications is prohibitive; therefore, building an automated GUI conversion tool from phone to TV is challenging but worthwhile work for developers. An automated approach for converting phone GUIs to TV GUIs is presented in this study. Before proposing our approach, we carry out empirical studies to explore how many current apps support TV displays and how current apps convert phone GUIs to TV GUIs. Our tool consists of four integral stages: GUI component grouping, template matching, layout optimization, and DSL for code synthesis. Finally, we compile the generated GUI DSL to source code for rendering the final TV GUIs. Our approach offers clear benefits over existing mainstream technologies, according to an automated evaluation on 589 valid phone-TV GUI pairs and a user study with 20
Android professionals. Besides, a pilot user study also illustrates the usefulness of our tool for app developers.
In the future, we will keep improving our algorithm for generating mapping GUIs from phone to TV. With more and more apps developed for supporting TV, we will construct a large parallel corpus of TV-phone GUI pairs. Based on that data, we will develop an end-to-end machine learning algorithm that will be more generalized than the current approach. On the other hand, we will extend our tool to other platforms such as smartwatches, tablets, and vehicle screens.
|
2310.17095 | Evolution of Fullerenes in Circumstellar Envelopes by Carbon
Condensation: Insights from Reactive Molecular Dynamics Simulations | Fullerenes, including \ce{C60} and \ce{C70}, have been detected in various
astronomical environments. Understanding how their structures evolve over time
is essential for gaining insights into their life cycle and making further
observations. To address this, we conducted reactive molecular dynamics
simulations to investigate the evolution of fullerenes in the circumstellar
envelopes surrounding carbon-rich asymptotic giant branch stars. Our
simulations employed a bottom-up chemistry scheme, wherein fullerenes grow by
absorbing and condensing small carbon-based molecules. The results revealed the
formation of different structures through heterogeneous reactions based on
hydrogen concentration, leading to the emergence of onion-like nanostructures
or single-layer fullerenes. To examine the impact of these structural changes
on the infrared emission characteristics of fullerenes, we performed quantum
chemical calculations. The results indicate that as fullerenes grow larger,
additional emission features are introduced in the infrared spectrum. Moreover,
two-layered fullerenes show noticeable blue-shift or weakening effects on the
bands associated with out-of-plane vibration modes. | Zhisen Meng, Zhao Wang | 2023-10-26T01:33:29Z | http://arxiv.org/abs/2310.17095v1 | Evolution of Fullerenes in Circumstellar Envelopes by Carbon Condensation: Insights from Reactive Molecular Dynamics Simulations
###### Abstract
Fullerenes, including C\({}_{60}\) and C\({}_{70}\), have been detected in various astronomical environments. Understanding how their structures evolve over time is essential for gaining insights into their life cycle and making further observations. To address this, we conducted reactive molecular dynamics simulations to investigate the evolution of fullerenes in the circumstellar envelopes surrounding carbon-rich asymptotic giant branch stars. Our simulations employed a bottom-up chemistry scheme, wherein fullerenes grow by absorbing and condensing small carbon-based molecules. The results revealed the formation of different structures through heterogeneous reactions based on hydrogen concentration, leading to the emergence of onion-like nanostructures or single-layer fullerenes. To examine the impact of these structural changes on the infrared emission characteristics of fullerenes, we performed quantum chemical calculations. The results indicate that as fullerenes grow larger, additional emission features are introduced in the infrared spectrum. Moreover, two-layered fullerenes show noticeable blue-shift or weakening effects on the bands associated with out-of-plane vibration modes.
## I Introduction
Fullerenes, such as C\({}_{60}\) and C\({}_{70}\), were discovered by [1] in a laser ablation experiment on graphite, and were then confirmed by spectroscopic studies [2; 3; 4]. Although they are considered to be stable in harsh interstellar conditions, direct observational evidence of cosmic fullerenes was not obtained until 2010 [5]. Fullerenes have since been reported to exist in various astronomical environments, including asymptotic giant branch (AGB) stars [6], reflection nebulae [7; 8], planetary nebulae [9; 5], young stellar objects [10], photodissociation regions [11], and diffuse interstellar medium (ISM) [12]. Fullerenes are vital to the life cycle of large carbonaceous molecules/particles in interstellar space [13]. Understanding their formation and evolution is hence crucial not only for their further observation but also for comprehending the evolution of carbon-rich dust, which is intricately linked to the formation, life, and death of stars [14].
There are two primary mechanisms for the formation of fullerenes: top-down and bottom-up chemistry. Fullerenes have often been observed to coexist with polycyclic aromatic hydrocarbons (PAHs) [15; 8; 16], with abundance observed to decrease with increasing concentration of PAHs [17]. This has led to the exploration of the top-down formation mechanism, in which large PAH molecules or other carbonaceous dust are transformed into fullerenes [18; 19; 20; 21]. Dehydrogenation of PAHs is considered a critical step in this process, occurring before the folding of a large PAH into a fullerene cage [22; 23; 24].
On the other hand, the bottom-up approach, which is more commonly considered, is assumed to take place in the dense and hot envelopes of evolved stars. It typically involves reactions between carbon chain molecules followed by hydrocarbon addition [25; 26; 27; 28; 29]. Based on ab initio calculations [30], it has been demonstrated that closed-shell molecules with adjacent pentagons exhibit thermodynamic stability. Subsequently, Morokuma, Irle, and coworkers introduced the concept of the "shrinking hot giant road" using density functional tight-binding molecular dynamics simulations [31; 32; 33; 34; 35; 36; 37; 38]. This proposed mechanism suggests that the formation of larger fullerene molecules (comprising more than 100 atoms) initiates from carbon chains through self-catalytic reactions within the thermal carbon vapor. As these larger fullerenes are exposed to ultraviolet radiation, they undergo shrinkage reactions, transforming into C\({}_{60}\) and C\({}_{70}\), while releasing smaller molecular fragments like C\({}_{2}\)[17; 23; 37].
Despite extensive efforts to elucidate the formation mechanisms of interstellar fullerenes, their subsequent evolution in ISM remains poorly understood. The scarcity of C\({}_{60}\) detections in planetary nebulae suggests that these molecules may undergo evolutionary reactions and structural transformations in space, leading to significant changes in their spectral features [14; 39]. Laboratory experiments are valuable for simulating interstellar reactions, but they often face challenges in determining the chemical pathway due to the presence of highly reactive intermediates with short lifetimes. Quantum chemical calculations (QCCs) provide a solution by assisting in the identification of intermediates and the elucidation of reaction pathways [40; 41]. However, the computational cost associated with QCCs often limits their applicability in studying large molecules such as fullerenes. In this context, classical molecular dynamics (MD) simulations based on reactive force fields offer a promising alternative [42]. These simulations strike a balance between computational accuracy and efficiency, enabling the investigation of interstellar formation of carbon-based large molecules and nanoparticles under complex conditions [43; 44; 45; 46; 47; 48; 49; 50; 51].
In this study, we utilize reactive MD simulations to investigate the process of fullerene evolution in the circumstellar envelopes (CSEs) of carbon-rich AGB stars. The CSEs of AGB stars, including the dust condensation zone, are well known for their abundance of small carbonaceous chains such as C\({}_{2}\), making them primary sites for the production of interstellar carbonaceous dust and earning them the designation of "cosmic dust factories" [52; 53; 54; 55]. The main objective of this study is to provide valuable insights into the structural evolution of fullerenes, driven by the adsorption of carbonaceous chains, under various simulated conditions that mimic the CSEs of AGB stars.
## II Methods
Surface adsorption reactions have been identified as a significant mechanism for the growth of interstellar dust [56; 57]. In the bottom-up chemistry approach, the growth of fullerenes is primarily driven by the adsorption and enrichment of small carbonaceous molecules. The dust condensation zone within the CSEs of carbon-rich AGB stars offers a favorable astronomical environment for these adsorption reactions, as it provides the necessary materials and thermal energy for the reactions to take place [56; 58].
Given that the primary objective of this study was to explore the enrichment and evolution of C\({}_{2}\) small molecules on the surface of stable C\({}_{60}\) fullerene, we conducted simulations to investigate the condensation process of C\({}_{2}\) molecules onto a C\({}_{60}\) seed particle. The initial configuration consisted of a periodic simulation cell with dimensions of \(80\times 80\times 80\) A\({}^{3}\), with the C\({}_{60}\) molecule placed at its center. A total of 200 C\({}_{2}\) molecules (\(N_{\rm C}=400\)) were gradually introduced into the cell at random positions over a duration of 20 ns. Additionally, a variable number of neutral hydrogen atoms \(N_{\rm H}\) were included in the system, with different H concentrations corresponding to \(N_{\rm H}/N_{\rm C}\) ratios of 0, 0.05, 0.15, or 0.50.
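For orientation, the sketch below assembles an initial configuration of this kind and writes it to a plain XYZ file. The C\({}_{60}\) coordinate file name, the minimum insertion distance, and the one-shot placement of all C\({}_{2}\) dimers are assumptions made for illustration; in the actual simulations the C\({}_{2}\) molecules are introduced gradually over 20 ns, and the production runs are driven by the simulator's own input scripts.

```python
import numpy as np

rng = np.random.default_rng(0)
BOX = 80.0            # cubic cell edge (angstrom)
N_C2, N_H = 200, 60   # 200 C2 dimers plus hydrogen atoms (N_H/N_C = 0.15 here)
C2_BOND = 1.3         # rough C-C distance used to build each dimer (angstrom)
MIN_DIST = 2.5        # minimum insertion distance to existing atoms (assumption)

# The C60 seed: coordinates are assumed to be available in an XYZ file
# ("c60.xyz" is a hypothetical name) and are shifted to the box centre.
seed = np.loadtxt("c60.xyz", skiprows=2, usecols=(1, 2, 3))
seed += BOX / 2.0 - seed.mean(axis=0)
atoms = [("C", pos) for pos in seed]

def far_enough(pos):
    return all(np.linalg.norm(pos - q) > MIN_DIST for _, q in atoms)

def insert(element, count, pair=False):
    placed = 0
    while placed < count:
        pos = rng.uniform(0.0, BOX, size=3)
        if not far_enough(pos):
            continue
        atoms.append((element, pos))
        if pair:  # second atom of a C2 dimer, offset along x
            atoms.append((element, pos + np.array([C2_BOND, 0.0, 0.0])))
        placed += 1

insert("C", N_C2, pair=True)   # the C2 molecules
insert("H", N_H)               # neutral hydrogen atoms

with open("initial_configuration.xyz", "w") as fh:
    fh.write(f"{len(atoms)}\nC60 + C2 + H starting configuration\n")
    for element, pos in atoms:
        fh.write(f"{element} {pos[0]:.3f} {pos[1]:.3f} {pos[2]:.3f}\n")
```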
During the simulations, the majority of the added C\({}_{2}\) and H particles adsorbed onto the surface of the C\({}_{60}\) seed particle through van der Waals interactions [59; 60]. This adsorption process facilitated chemical reactions between the C\({}_{2}\) and H particles, leading to the formation of an initial structure as depicted in Figure 1. The simulations were conducted at a specified temperature \(T_{\rm ad}\) to mimic the thermal energy available in the dust condensation zone within the CSEs of carbon-rich AGB stars. The thermal energy in this region provides favorable conditions for the adsorption reactions to take place and contributes to the growth of fullerenes through the enrichment of carbonaceous molecules.
To simulate the evolution of the formed initial structure, the temperature was progressively increased from \(T_{\rm ad}\) to \(T_{\rm trans}\) over a duration of 8 ns. This temperature increase led to the transformation of the initial structure into a final structure, whose characteristics depended on the specific temperature values and the hydrogen concentration. These factors are known to be influenced by the condition of the CSE and the distance from the central AGB star [61; 62]. Although the representative temperature near AGB star regions is 2000 to 3500 K [63], the dust formation regions in its CSEs are typically found at distances ranging from \(2.5\times 10^{14}\) to \(5\times 10^{15}\) cm from the star's center, with temperatures estimated to be in the range of 100 to 1000 K [58]. Therefore, our choice of \(T_{\rm ad}\) fell within this temperature range. However, we set \(T_{\rm trans}\) to be in the range of 1500 to 4000 K in order to accelerate the reactions, as the actual timescale of the reactions is much longer than what can be simulated using MD. Hence, the specific value of \(T_{\rm trans}\) does not hold direct thermodynamic significance but serves as a parameter to represent the reaction rate within the constraints of the simulation time.
The interactions between atoms in our simulations were modeled using the ReaxFF reactive force field [64]. ReaxFF is an interatomic potential function that takes into account bond-order dependence and consists of eight terms: \(E_{\rm bond},E_{\rm over},E_{\rm under},E_{\rm lp},E_{\rm val},E_{\rm tors},E_{\rm vdW}\), and \(E_{\rm Coulomb}\), as shown in Equation 1.
\[E_{\rm p}=E_{\rm bond}+E_{\rm over}+E_{\rm under}+E_{\rm lp}+E_{\rm val}+E_{ \rm tors}+E_{\rm vdW}+E_{\rm Coulomb}. \tag{1}\]
These terms describe the potential energy of the system with respect to bond length and order, overcoordination penalty, undercoordination stability, lone pair, bond angle, single bond torsion, van der Waals interactions, and Coulomb interactions, respectively. The detailed expression of each term, parameterization, and benchmarking of the ReaxFF force field can be found in Ashraf and van Duin [64]. We chose the ReaxFF force field for its proven success in modeling the formation of fullerenes and related molecules or nanostructures using MD simulations,
Figure 1: Schematics for the simulated evolution of C\({}_{60}\) via adsorption reactions.
as demonstrated in previous studies such as Ostroumova et al. [47], Orekhov et al. [49], Martin et al. [65], Mainitz et al. [66], Mao et al. [67], Li et al. [68].
In this study, the simulations were performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [69]. The temperature control was achieved using the canonical Nose-Hoover thermostat with a time step of 0.1 fs. It is important to note that, for simplicity, we considered C\({}_{2}\) molecules as the initial reactants, although other carbon sources, such as larger carbonaceous chains or PAHs, could also be present [70; 71; 72; 73; 74; 75; 76]. The ratio between the number of C and H atoms, \(N_{\rm H}/N_{\rm C}\), was determined based on the abundance ratio of \(\leq 0.03\) between C\({}_{2}\) and C\({}_{60}\) in IRC+10216 [52], while the number of hydrogen atoms was set to balance the formation and photodissociation of C - H bonds [24]. In the Supplementary Data[101], we have included an example of the simulation code used in our study, as well as the corresponding simulation outputs. This will facilitate readers who are interested in replicating the simulations and conducting further investigations.
## III Results and Discussions
### Initial Structures
In our simulations, we observed the formation of different initial structures with varying shapes under different conditions, as depicted in Figure 2. The compactness of these structures was characterized using the radius of gyration \(RG\), which is a measure of the distribution of atoms from the center of mass [77]. A lower \(RG\) indicates a more condensed structure for a given number of atoms. In Panel (a) of Figure 2, the size of the circles represents the \(RG\) values of the initial structures. We observed that at low concentrations of H, the initial structures exhibited a high degree of condensation, as indicated by the small circles in the bottom of Panel (a). In these cases, the adsorbed C\({}_{2}\) molecules formed aligned long carbon chains at low temperatures, as shown in Panel (b). At higher temperatures, we observed that the chains interconnected to form cage-like structures, as shown in Panel (c). This is in agreement with the observation of Anders and Urbassek [50], which suggests that warm organic cluster particles have a tendency to stick together through chemical reaction when colliding.
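For completeness, the compactness measure can be evaluated as in the short sketch below; it implements the standard (optionally mass-weighted) radius of gyration for an N x 3 array of atomic positions, which is assumed to match the definition used for Figure 2.

```python
import numpy as np

def radius_of_gyration(positions, masses=None):
    """Radius of gyration of a set of atomic positions (N x 3 array).

    If masses are given, the centre and the average are mass-weighted;
    otherwise all atoms are treated equally.
    """
    positions = np.asarray(positions, dtype=float)
    if masses is None:
        masses = np.ones(len(positions))
    center = np.average(positions, axis=0, weights=masses)
    sq_dist = np.sum((positions - center) ** 2, axis=1)
    return float(np.sqrt(np.average(sq_dist, weights=masses)))
```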
On the contrary, at high H concentrations, it becomes challenging for the C\({}_{2}\) molecules to form the same types of structures. This is due to many C\({}_{2}\) molecules being saturated with H atoms, which occupy the chemically active sites and hinder the formation of interconnected chains. This observation aligns with the findings of Cherchneff [78], who observed from the molecular abundance of IRC+10216 that excess hydrogen led to the formation of hydrocarbon molecules rather than carbon chains. As a result, the C\({}_{2}\) molecules form small hydrocarbon molecules such as C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{4}\), C\({}_{4}\)H\({}_{6}\), and C\({}_{5}\)H\({}_{5}\), which remain as individual entities. These molecules are primarily held together by vdW forces on the surface of the C\({}_{60}\) at low temperatures. However, when the thermal energy exceeds the binding energy between these molecules, they can no longer remain bound, leading to their dissociation, as shown in Panel (e) of Figure 2. The case depicted in Panel (d) represents an intermediate scenario between the condensed structure and the dissociated state. It is worth noting that previous studies have demonstrated that excessive hydrogenation can lead to the dissociative fragmentation of molecules [79; 51].
### Final Structures
We further explore the possible evolution of the condensed initial structures. In the CSEs of AGB stars, the dust particles experience various astronomical events such as photoprocessing, stellar winds, and shocks, which could result in a sharp increase in their temperature and induce heterogeneous reactions [80]. In our simulations, after the initial structures were formed through adsorption, we subjected them to high temperatures to explore their potential structural transformations. Our results revealed two distinct evolution pathways leading to different types of final structures.
Figure 2: (a) \(RG\) of the initial structures formed with different \(N_{\rm H}/N_{\rm C}\) at various adsorption temperatures \(T_{\rm ad}\). The bubbles’ size and colors represent the \(RG\) value and different classes of the initial structures, respectively. (b-e) Snapshots of the formed initial structures.
In the case of low H concentration, onion-like nanostructures consisting of two layers of fullerenes were observed, as depicted in the top panels of Figure 3. Conversely, for the case of high H concentration, single-layer fullerenes were formed, as shown in the bottom panels of Figure 3.
It is worth noting that the specific temperature at which the transformation occurs does not definitively determine the type of final structure, except in the case of 2500 K, which appears insufficient to trigger the formation of fullerene within the simulated time scale. This finding is consistent with the observation of Cami et al. [81], which shows that PAHs become the main product of evolution in the hydrogen-rich environment at temperatures between 1000 and 1700 K. Moreover, it is important to acknowledge that fullerene transformation could still potentially occur at that temperature, taking into account that the actual reaction time is significantly longer than what can be covered in the MD study, despite a much lower concentration.
The C-H bonds played a key role in determining the final structures. The plot in Figure 4 illustrates the evolution of the number of C-C and C-H bonds, denoted as \(N_{\rm C-C}\) and \(N_{\rm C-H}\), revealing three distinct stages of structural transformation. Initially, during the first 0.8 nanoseconds, the bond numbers remain relatively stable. In the second stage, a notable increase in \(N_{\rm C-C}\) occurs as the temperature surpasses a specific threshold (1500 K), indicating a significant structural transition. This transition involves the connection of adsorbed molecules through unoccupied sites, with the highest bonding rate observed in the absence of hydrogen. This process is similar to the ring-condensation process outlined in [31], marking the phase in which carbon rings commence formation. Subsequently, at around 1.5 nanoseconds and at a temperature of approximately 2500 K, a substantial breakage of C-H bonds commences. This leads to the generation of new unoccupied carbon sites available for bonding, which typically promotes the formation of new carbon rings [82].
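As a rough illustration of how such bond counts can be extracted from a snapshot, the sketch below counts C-C and C-H contacts with simple distance cutoffs; the cutoff values are illustrative assumptions, and a production analysis would more naturally use the bond orders reported by the reactive force field.

```python
import numpy as np
from itertools import combinations

# Illustrative distance cutoffs (angstrom); ReaxFF bond orders are the
# more faithful criterion, this is only a geometric approximation.
CUTOFF = {("C", "C"): 1.8, ("C", "H"): 1.2}

def count_bonds(elements, positions):
    """Count C-C and C-H contacts closer than the cutoff in one snapshot."""
    counts = {"C-C": 0, "C-H": 0}
    positions = np.asarray(positions, dtype=float)
    for i, j in combinations(range(len(elements)), 2):
        pair = tuple(sorted((elements[i], elements[j])))
        cutoff = CUTOFF.get(pair)
        if cutoff is None:
            continue
        if np.linalg.norm(positions[i] - positions[j]) < cutoff:
            counts["-".join(pair)] += 1
    return counts
```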
The selection between the two distinct types of final structures predominantly occurred during the transition from the second to the third stage. During this stage of evolution, the predominant process involves cage closure, characterized by the collapse of a sizable ring that undergoes a closing mechanism [31], while the closure of a double-layered fullerene was not observed. In scenarios with low hydrogen concentration, the majority of adsorbed carbon atoms formed bonds with each other in the second stage, resulting in the creation of a relatively stable layer that enveloped the central C\({}_{60}\). The temperature during this phase was insufficient to facilitate the transformation of intra-layer \(sp^{2}\) bonds into inter-layer \(sp^{3}\) bonds. Conversely, in high hydrogen concentration scenarios, the breakage of C-H bonds in the third stage led to the emergence of numerous unoccupied carbon sites. As the temperature increased in the third stage, these sites interacted with the central C\({}_{60}\), eventually causing it to dissolve into a larger, single-layered fullerene cage. This process is depicted in Panel (c) of Figure 4 taking the case of \(T_{\rm trans}=4000\) K for example.
During the process of compact cage contraction, we observed events of small molecule pop-out, involving species like C\({}_{2}\) and H\({}_{2}\), referred to as the "shrinking hot giant" road in [37].
Figure 3: Final structures formed with different H concentrations through heterogeneous reactions at different temperatures. The gray, red, and blue dots represent H and C atoms originating from C\({}_{60}\) and C\({}_{2}\), respectively.
Figure 4: Evolution of C–C (a) and C–H bonds (b) during heating to \(T_{\rm trans}=4000\) K for different H concentrations. The background colors indicate three stages of structural transformation. (c) Configurations of the simulated system at different time for the case of \(N_{\rm H}/N_{\rm C}=0.5\).
It is worth noting that in cases of low hydrogen concentration, when the temperature was high enough, the C\({}_{60}\) could also react with atoms in the outer layer; however, the two-layered structures were ultimately maintained. While direct evidence is lacking, it is plausible that dissociated hydrogen atoms played a catalytic role in the opening of the central C\({}_{60}\) [83]. We note that the formation of multilayer fullerenes studied here differs from the previously reported ultrafast cooling mechanism in high-density pure carbon gas [47]. The adsorption-driven process studied here is more likely to occur in the long-term evolution of low-density interstellar environments.
The formation of H\({}_{2}\) molecules in the ISM remains a challenging question in astronomy [84]. The exothermic nature of the combination of hydrogen atoms to form H\({}_{2}\) poses a dilemma, as H\({}_{2}\) lacks an efficient mechanism to dissipate the released heat in the low-density environment of the ISM. Current understanding suggests that the primary route for interstellar H\({}_{2}\) formation is through the Eley-Rideal reaction on dust or ice particles, which act as heat reservoirs [85; 86]. These particles play a crucial role in H\({}_{2}\) formation by facilitating the dissipation of heat [86; 87; 88; 89; 90; 91; 92; 93]. Conversely, hydrogen can also facilitate the formation of complex organic molecules [83; 94]. In the simulations conducted for this study, the formation of H\({}_{2}\) molecules was observed as by-products during the growth of fullerenes. It was found that the formation of H\({}_{2}\) is closely correlated with the breaking of C\(-\)H bonds, as shown in Figure 5, suggesting a potential catalytic role of evolving fullerenes in the H\({}_{2}\) formation process. However, it is important to note that the specific question regarding heat dissipation during H\({}_{2}\) formation was not directly addressed in these simulations, as a global thermostat was applied. To fully understand the intricacies of heat dissipation in the H\({}_{2}\) formation process, a separate set of simulations specifically designed to explore this aspect and more in-depth analyses would be necessary.
### Infrared spectrum
To explore the impact of the evolution towards the two different types of fullerenes on their infrared spectra, we employed density functional theory (DFT) for geometry optimizations and frequency calculations [95]. The calculations were conducted at the B3LYP/6-31G(d) level of theory for three selected samples: a C\({}_{60}\) molecule, a C\({}_{180}\) molecule, and a two-layered C\({}_{60}\)@C\({}_{180}\) composite, as depicted in the inset of Figure 6. We chose these smaller molecules for simplicity, considering the significant computational challenges associated with performing calculations on larger simulated final structures. Given the large size of the system, we computed the spectra under the assumption of harmonic vibrations. To account for anharmonicity effects, we uniformly scaled the calculated spectra by a factor of 0.9613, as proposed by Borowski [96]. This scaling factor enables us to approximate the influence of anharmonicity on the spectra and improve their accuracy. Additionally, for an accurate description of the interaction between different layers of fullerenes [97], we included the D3 version of Grimme's empirical dispersion interaction [98], along with the Becke-Johnson damping function [99].
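To connect the computed harmonic data to plotted spectra such as those in Figure 6, the sketch below applies the 0.9613 scaling factor and sums Lorentzian line shapes; the line width, grid range, and frequency/intensity arrays are placeholders, since the electronic-structure calculation itself is performed with the quantum chemistry code rather than in this script.

```python
import numpy as np

SCALE = 0.9613  # harmonic frequency scaling factor quoted in the text

def broadened_spectrum(freqs_cm, intensities, fwhm=10.0, grid=None):
    """Scale harmonic frequencies and sum Lorentzian line shapes.

    freqs_cm and intensities stand in for the output of the frequency
    calculation; fwhm is an illustrative line width in cm^-1.
    """
    freqs = SCALE * np.asarray(freqs_cm, dtype=float)
    if grid is None:
        grid = np.linspace(200.0, 2000.0, 4000)  # wavenumber grid in cm^-1
    gamma = fwhm / 2.0
    spectrum = np.zeros_like(grid)
    for f, inten in zip(freqs, intensities):
        spectrum += inten * gamma**2 / ((grid - f) ** 2 + gamma**2)
    return grid, spectrum

# Band positions quoted in micrometres can be compared via lambda_um = 1e4 / wavenumber.
```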
Figure 6 (a) displays the computed IR spectra of the three samples. It is noted that when comparing C\({}_{180}\) with C\({}_{60}\), additional emission features are observed at specific wavelengths (10.84, 14.65, and 29.86 \(\upmu\)m) for C\({}_{180}\). The complex vibrational behavior of C\({}_{180}\), indicated by the additional features, is in line with previous observations [100; 5]. Upon comparing the summed spectrum of C\({}_{60}\) and C\({}_{180}\) (purple, dashed line) with that of C\({}_{60}\)@C\({}_{180}\) (green line), notable observations are made. Specifically, the 14.73, 19.93, and 29.86 \(\upmu\)m bands of the summed spectrum are hindered or blue-shifted when the two molecules are combined. This alteration in the IR features, characterized by a decrease in intensity accompanied by an increase in frequency, can typically be attributed to the increased rigidity of the molecule in the direction of vibration when the C\({}_{60}\) molecule is encapsulated within C\({}_{180}\). This suggests that these bands likely correspond to vibrations perpendicular to the surface. A more detailed analysis of the vibrational modes associated with these bands, as depicted in Figure 6 (b), provides further confirmation of this hypothesis. Therefore, the blue-shifted behavior of these bands holds particular interest for astronomical observations targeting onion-like fullerenes.
Figure 5: (a) Ratio of the number of H\({}_{2}\) molecules formed over the total number of H atoms initially present in the simulation cell as a function of time, for different H concentrations. (b) Time evolution of the C\(-\)H bonds in the system.
## IV Conclusions
In conclusion, this study employed MD simulations to investigate the evolution of fullerenes in environments resembling those found in the CSEs of AGB stars. The simulations revealed two distinct types of transformations that occur when fullerenes adsorb carbon chains, resulting in the formation of single- and double-layered carbon nanostructures. The formation of single-layered structures was observed under high H concentration, while the formation of double-layered structures occurred under low H concentration. We note that the simulations did not investigate the transformation of fullerenes into larger sizes with additional layers due to system size limitations. Additionally, the study found that the rate of H\({}_{2}\) molecule formation is closely linked to the breaking of C-H bonds in the evolving fullerene, implying a potential catalytic role of fullerene in H\({}_{2}\) formation.
Furthermore, DFT calculations were conducted on three fullerene molecules to investigate the impact of structural evolution on IR emission features. The results demonstrated that as fullerenes undergo structural evolution towards larger sizes, additional emission features emerge at specific wavelengths, such as 10.84, 14.65, and 29.86 \(\upmu\)m. Additionally, the presence of a double layer in the fullerene structure led to noticeable blue-shift or weakening effects on the bands at 14.73, 19.93, and 29.86 \(\upmu\)m. These findings provide valuable insights into the structural changes and spectral characteristics of fullerenes, suggesting the possibility of fullerene growth in the ISM through the adsorption of small carbon molecules and subsequent heterogeneous reactions.
###### Acknowledgements.
The authors acknowledge financial support from the National Natural Science Foundation of China (11964002). Prof. Yong Zhang is acknowledged for fruitful discussion.
## Data Availability
The Supplementary Data include an example of the LAMMPS input script that was used in our study, along with the corresponding simulation outputs. They are available at [https://github.com/mengzss/Fullerene_Evolution.git](https://github.com/mengzss/Fullerene_Evolution.git).
|
2305.13045 | The accelerated expansion in $F(G,T_{μν}T^{μν})$ gravity | In the present manuscript the basic Einstein--Hilbert cosmological model is
extended, by adding a new functional $F(G, T_{\mu\nu}T^{\mu\nu})$ in the
fundamental action, encoding specific geometrical effects due to a nontrivial
coupling with the Gauss-Bonnet invariant ($G$), and the energy--momentum
squared term ($T_{\mu\nu}T^{\mu\nu}$). After obtaining the corresponding
gravitational field equations for the specific decomposition where $F(G,
T_{\mu\nu}T^{\mu\nu})=f(G)+g(T_{\mu\nu}T^{\mu\nu})$, we have explored the
physical features of the cosmological model by considering the linear stability
theory, an important analytical tool in the cosmological theory which can
reveal the dynamical characteristics of the phase space. The analytical
exploration of the corresponding phase space structure revealed that the
present model can represent a viable dark energy model, with various stationary
points where the effective equation of state corresponds to a de--Sitter epoch,
possible explaining the early and late time acceleration of the Universe. | Mihai Marciu, Dana Maria Ioan | 2023-05-22T13:54:13Z | http://arxiv.org/abs/2305.13045v1 | # The accelerated expansion in \(F(G,T_{\mu\nu}T^{\mu\nu})\) gravity
###### Abstract
In the present manuscript the basic Einstein-Hilbert cosmological model is extended by adding a new functional \(F(G,T_{\mu\nu}T^{\mu\nu})\) in the fundamental action, encoding specific geometrical effects due to a nontrivial coupling with the Gauss-Bonnet invariant (\(G\)) and the energy-momentum squared term (\(T_{\mu\nu}T^{\mu\nu}\)). After obtaining the corresponding gravitational field equations for the specific decomposition where \(F(G,T_{\mu\nu}T^{\mu\nu})=f(G)+g(T_{\mu\nu}T^{\mu\nu})\), we have explored the physical features of the cosmological model by considering the linear stability theory, an important analytical tool in cosmological theory which can reveal the dynamical characteristics of the phase space. The analytical exploration of the corresponding phase space structure revealed that the present model can represent a viable dark energy model, with various stationary points where the effective equation of state corresponds to a de-Sitter epoch, possibly explaining the early and late time acceleration of the Universe.
## I Introduction
In the present cosmological context the accelerated expansion [1] represents an enigmatic phenomenon associated with the evolution of our Universe at the level of background dynamics. This phenomenon was discovered more than two decades ago [2; 3], triggering various developments in science and technology. The simplest dark energy model explaining the accelerated expansion of our Universe is the \(\Lambda\)CDM model [4; 5; 6; 7], a cosmological theory based on a cosmological constant \(\Lambda\) added to Einstein's field equations. The \(\Lambda\)CDM model suffers from various theoretical limitations [8; 9; 10] and cannot explain the dynamical evolution of the dark energy equation of state, as probed through various astrophysical observations [11; 12; 13; 14; 15]. In principle the \(\Lambda\)CDM model [10] can be regarded as an effective approximate approach associated with a constant equation of state, which does not address the \(H_{0}\) tension [16; 17; 18] in a fundamental manner.
In the cosmological theories the modified gravity approaches [19; 20; 21; 22] represent a novel paradigm which further extends the fundamental action, embedding various invariant components and aiming for a more complete and consistent theory of gravitation. The most natural extension of gravity is represented by the \(f(R)\) theory, a specific approach based on a functional which depends on the scalar curvature [23]. Since then, many alternative theories have been proposed [4; 24; 25; 26], aiming for a more consistent theory [27; 28; 29] which can explain the accelerated expansion of our Universe, embedding various dynamical effects associated to the dark sector. In these theories the interplay between matter and geometry has been investigated through different approaches [30; 31; 32; 33].
In the modified gravity theories a particular extension is related to the energy-momentum squared gravity [34; 35; 36], a novel theory which can explain various physical effects at cosmological scales [37; 38; 39]. The latter theory is constructed by considering the interplay between matter and geometry, taking into account an invariant which is based on a specific self-contraction of the energy-momentum tensor [34]. The energy-momentum squared gravity has attracted some attention in modern cosmological theories [40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52], representing a viable approach also from the astrophysical point of view [53; 54]. Specific wormhole solutions have been considered in the energy-momentum squared gravity [55], analyzing the physical implications. The inclusion of the Gauss-Bonnet topological invariant in the energy-momentum squared gravity has been considered recently [56; 57; 44; 58] for specific relativistic systems.
An important approach in modern cosmological theories is related to the Gauss-Bonnet invariant, a special topological component in the four-dimensional space-time [59; 60]. The inclusion of the Gauss-Bonnet invariant has been considered in various modern theories of gravitation [61; 62; 63; 64; 65; 66; 67], representing a viable approach for specific physical systems, possible explaining the dark energy phenomenon [68].
In this paper we shall consider a modified gravity model built in the fundamental framework of the Einstein-Hilbert action, embedding the geometrical interplay between the Gauss-Bonnet invariant and the energy-momentum-squared component [56; 57; 44; 58]. The fundamental action in our model contains a generic functional which depends
on the Gauss-Bonnet invariant [64] and the energy-momentum-squared term in a decomposed manner. The physical characteristics are evaluated by considering the linear stability theory for an exponential decomposition, analyzing the phase space structure and the possibility of reaching the accelerated expansion. Such an approach further extends the Einstein-Hilbert action by taking into account the effects due to the geometrical characteristics of space-time, including also the interplay with the matter sector, embedding the elementary properties of the latter component.
The plan of our paper is the following. In Sec. II we propose the fundamental action for our toy model, obtaining the corresponding modified Friedmann equations. Then, in Sec. III we discuss the physical properties and the emergence of the accelerated expansion in the current cosmological model by considering the linear stability theory in the case of an exponential behavior. Lastly, in Sec. IV we summarize the principal obtained results and give the main concluding remarks.
## II The action and the field equations
In what follows we shall propose a cosmological model described by the following action [44; 56; 57]:
\[S=\int d^{4}x\sqrt{-\bar{g}}\Bigg{[}\frac{R}{2}+F(G,T^{2})\Bigg{]}+\int d^{4}x \sqrt{-\bar{g}}L_{m}, \tag{1}\]
where the generic function embedded into the Einstein-Hilbert action can be decomposed in two specific terms, \(F(G,T^{2})=f(G)+g(T^{2})\). In this case we have assumed a non-linear dependence of the action by the Gauss-Bonnet invariant (\(G\)), and the energy-momentum squared invariant (\(T^{2}=T_{\mu\nu}T^{\mu\nu}\)) [56; 57]. Before proceeding to the computations of the modified Friedmann relations, we have to specify that the background dynamics can be described by the Robertson-Walker metric:
\[ds^{2}=-dt^{2}+a^{2}(t)(dx^{2}+dy^{2}+dz^{2}), \tag{2}\]
where \(a(t)\) is the cosmic scale factor which characterizes the expansion of the Universe at the large scale structure. In general, the Gauss-Bonnet invariant is defined in the following way,
\[G=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\xi\sigma}R^{\mu\nu\xi\sigma}. \tag{3}\]
For the above metric (2) the Gauss-Bonnet invariant reduces to the following expression [64],
\[G=24H^{2}(H^{2}+\dot{H}), \tag{4}\]
where \(H(t)\) represents the Hubble parameter defined as: \(H=\frac{\dot{a}}{a}\), where the dot represents the derivative with respect to the cosmic time. In this case the scalar curvature acquires the following expression,
\[R=6(2H^{2}+\dot{H}). \tag{5}\]
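As an illustrative cross-check of Eqs. (4) and (5) (a sketch added for convenience, not part of the original derivation), the Ricci scalar and the Gauss-Bonnet invariant of the metric (2) can be recomputed directly from the Christoffel symbols with SymPy:

```python
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3')
a = sp.Function('a', positive=True)(t)
coords = [t, x1, x2, x3]
g = sp.diag(-1, a**2, a**2, a**2)     # ds^2 = -dt^2 + a^2 (dx^2 + dy^2 + dz^2)
ginv = g.inv()
N = 4

# Christoffel symbols Gamma^l_{ij}
Gam = [[[sp.simplify(sum(ginv[l, m]*(sp.diff(g[m, i], coords[j])
                                     + sp.diff(g[m, j], coords[i])
                                     - sp.diff(g[i, j], coords[m]))
                         for m in range(N))/2)
         for j in range(N)] for i in range(N)] for l in range(N)]

# Riemann tensor R^l_{ijk} = d_j Gamma^l_{ik} - d_k Gamma^l_{ij} + Gamma^l_{js} Gamma^s_{ik} - Gamma^l_{ks} Gamma^s_{ij}
def riem(l, i, j, k):
    expr = sp.diff(Gam[l][i][k], coords[j]) - sp.diff(Gam[l][i][j], coords[k])
    expr += sum(Gam[l][j][s]*Gam[s][i][k] - Gam[l][k][s]*Gam[s][i][j] for s in range(N))
    return sp.simplify(expr)

Rup = [[[[riem(l, i, j, k) for k in range(N)] for j in range(N)]
        for i in range(N)] for l in range(N)]
Rdn = [[[[sp.simplify(sum(g[l, m]*Rup[m][i][j][k] for m in range(N)))
          for k in range(N)] for j in range(N)] for i in range(N)] for l in range(N)]

# Ricci tensor, Ricci scalar and the quadratic invariants
# (the metric is diagonal, so indices are raised entrywise with ginv)
Ric = sp.Matrix(N, N, lambda i, k: sum(Rup[l][i][l][k] for l in range(N)))
R = sp.simplify(sum(ginv[i, k]*Ric[i, k] for i in range(N) for k in range(N)))
RicSq = sp.simplify(sum(ginv[i, i]*ginv[k, k]*Ric[i, k]**2
                        for i in range(N) for k in range(N)))
RiemSq = sp.simplify(sum(ginv[l, l]*ginv[i, i]*ginv[j, j]*ginv[k, k]*Rdn[l][i][j][k]**2
                         for l in range(N) for i in range(N)
                         for j in range(N) for k in range(N)))
GB = sp.simplify(R**2 - 4*RicSq + RiemSq)

H = sp.diff(a, t)/a
print(sp.simplify(R - 6*(2*H**2 + sp.diff(H, t))))       # -> 0, i.e. Eq. (5)
print(sp.simplify(GB - 24*H**2*(H**2 + sp.diff(H, t))))  # -> 0, i.e. Eq. (4)
```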
The second invariant in our action (1) is represented by the energy-momentum-squared term [37], defined as:
\[T^{2}=T_{\mu\nu}T^{\mu\nu}, \tag{6}\]
where \(T_{\mu\nu}\) describes the energy-momentum tensor of the matter sector, \(T_{\mu\nu}=\mathrm{diag}[\rho,p,p,p]\), with \(\rho\) the energy density and \(p\) the pressure of the matter component, which behaves approximately as a non-relativistic fluid with negligible pressure. If we further assume a barotropic equation of state for the matter sector,
\[p=w\rho, \tag{7}\]
with \(w\) the barotropic constant parameter, then the energy-momentum-squared invariant takes the following expression:
\[T^{2}=\rho^{2}(1+3w^{2}). \tag{8}\]
The variation of the action (1) with respect to the inverse metric \(g^{\mu\nu}\) leads to the following modified Friedmann relations [37; 44]:
\[3H^{2}=\rho+\Big[Gf^{\prime}(G)-f(G)-24H^{3}\frac{d}{dt}f^{\prime}(G)\Big]+2\frac{\partial g(T^{2})}{\partial T^{2}}\left(\rho^{2}+4\rho p+3p^{2}\right)-g(T^{2}), \tag{9}\]
\[-3H^{2}-2\dot{H}=p+\Big[f(G)-Gf^{\prime}(G)+16(\dot{H}+H^{2})\frac{d}{dt}f^{\prime}(G)+8H^{2}\frac{d^{2}}{dt^{2}}f^{\prime}(G)\Big]+g(T^{2}), \tag{10}\]
where we have assumed the following definitions:
\[(\,\cdot\,)^{\prime}=\frac{d(\,\cdot\,)}{dG}, \tag{11}\]
\[\dot{(\,\cdot\,)}=\frac{d(\,\cdot\,)}{dt}, \tag{12}\]
\[\ddot{(\,\cdot\,)}=\frac{d^{2}(\,\cdot\,)}{dt^{2}}. \tag{13}\]
We can further define the energy density associated to the geometrical dark energy component [37],
\[\rho_{de}=Gf^{\prime}(G)-f(G)-24H^{3}\frac{d}{dt}f^{\prime}(G)+2\frac{\partial g(T^{2})}{\partial T^{2}}\left(\rho^{2}+4\rho p+3p^{2}\right)-g(T^{2}), \tag{14}\]
and the corresponding pressure:
\[p_{de}=f(G)-Gf^{\prime}(G)+16(\dot{H}+H^{2})\frac{d}{dt}f^{\prime}(G)+8H^{2}\frac{d^{2}}{dt^{2}}f^{\prime}(G)+g(T^{2}). \tag{15}\]
Then, we can define the dark energy equation of state due to the geometrical coupling of the invariant constituents,
\[w_{de}=\frac{p_{de}}{\rho_{de}}, \tag{16}\]
and the total/effective equation of state for the background dynamics:
\[w_{tot}=-1-\frac{2}{3}\frac{\dot{H}}{H^{2}}. \tag{17}\]
Lastly, we define the matter density parameter in the usual manner,
\[\Omega_{m}=\frac{\rho}{3H^{2}} \tag{18}\]
and the geometrical dark energy density parameter,
\[\Omega_{de}=\frac{\rho_{de}}{3H^{2}}, \tag{19}\]
satisfying the constraint equation: \(\Omega_{m}+\Omega_{de}=1\).
## III Dynamical properties for an exponential model
In order to study the dynamical properties for the exponential model where \(F(G,T_{\mu\nu}T^{\mu\nu})=f_{0}e^{\alpha G}+g_{0}e^{\beta T^{2}}\), with \(\alpha,\beta,f_{0},g_{0}\) constant parameters, we need to introduce the following auxiliary variables:
\[s=\Omega_{m}=\frac{\rho}{3H^{2}}, \tag{20}\]
\[x=\frac{G}{3H^{2}}\frac{df(G)}{dG}, \tag{21}\]
\[y=8H\frac{d}{dt}f^{\prime}(G), \tag{22}\]
\[z=2\frac{dg(T^{2})}{d(T^{2})}\rho(1+4w+3w^{2}), \tag{23}\]
\[u=\frac{f(G)}{3H^{2}}, \tag{24}\]
\[v=\frac{g(T^{2})}{3H^{2}}. \tag{25}\]
In terms of these auxiliary variables, we can write the Friedmann constraint equation (9) in the following way:
\[s=\frac{1+u+v-x+y}{1+z}, \tag{26}\]
expressing the matter density parameter in terms of the remaining auxiliary variables.
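The constraint (26) can be verified symbolically; the short SymPy check below (our own sketch, with \(f(G)\), \(f^{\prime}(G)\), \(d[f^{\prime}(G)]/dt\), \(g(T^{2})\) and \(\partial g/\partial T^{2}\) treated as independent symbols) substitutes the definitions (20)-(25) into Eq. (9):

```python
import sympy as sp

H, rho, w, G, fval, fG, fGdot, gval, dg = sp.symbols(
    'H rho w G f fG fGdot g dg', real=True)
p = w*rho

# right-hand side of the first modified Friedmann equation (9)
rhs9 = (rho + G*fG - fval - 24*H**3*fGdot
        + 2*dg*(rho**2 + 4*rho*p + 3*p**2) - gval)

# auxiliary variables (20)-(25)
s = rho/(3*H**2)
x = G*fG/(3*H**2)
y = 8*H*fGdot
z = 2*dg*rho*(1 + 4*w + 3*w**2)
u = fval/(3*H**2)
v = gval/(3*H**2)

# Eq. (9) reads 3H^2 = rhs9, and Eq. (26) claims s = (1+u+v-x+y)/(1+z);
# their difference must vanish identically.
residual = 3*H**2*(s*(1 + z) - (1 + u + v - x + y)) - (rhs9 - 3*H**2)
print(sp.simplify(residual))   # -> 0
```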
Next, we introduce the e-fold number \(N=log(a)\) and write the corresponding autonomous system of differential equations:
\[\frac{dx}{dN}=\frac{1}{4\alpha uz(z+1)}\Big{[}9\beta u^{2}vw^{2}y +12\beta u^{2}vwy+3\beta u^{2}vy+9\beta uv^{2}w^{2}y+12\beta uv^{2}vy+3\beta uv ^{2}y-18\beta uvw^{2}x^{2}+9\beta uvw^{2}y^{2}\\ +9\beta uvw^{2}y-24\beta uvwx^{2}+12\beta uvwy^{2}+12\beta uvwy -6\beta uvx^{2}+3\beta uvy^{2}+3\beta uvy+8\alpha uxz^{2}+8\alpha uxz\\ -18\beta v^{2}w^{2}x^{2}+9\beta v^{2}w^{2}xy-24\beta v^{2}wx^{2 }+12\beta v^{2}wxy-6\beta v^{2}x^{2}+3\beta v^{2}xy+18\beta ww^{2}x^{3}-18 \beta vw^{2}x^{2}-27\beta vw^{2}x^{2}y+9\beta wx^{2}\\ +9\beta vw^{2}xy+24\beta vwx^{3}-24\beta wx^{2}-36\beta wx^{2}y+ 12\beta wwxy^{2}+12\beta wwxy+6\beta vx^{3}-6\beta vx^{2}-9\beta vx^{2}y+3 \beta vxy^{2}+3\beta vxy\Big{]}, \tag{27}\]
\[\frac{dy}{dN}=\frac{1}{4\alpha uz(z+1)}\Big{[}-12\alpha u^{2}wz-12 \alpha u^{2}z^{2}-12\alpha u^{2}z-18\beta uvw^{2}x-9\beta uvw^{2}xy-24\beta uvwx -12\beta uvwxy\\ -12\alpha uvwz-6\beta uvx-3\beta uvxy-12\alpha wz^{2}-12\alpha wz +12\alpha wxz-12\alpha wwyz-12\alpha uwz+12\alpha uxz^{2}+12\alpha uxz\\ -4\alpha uyz^{2}-4\alpha uyz-4\alpha uz^{2}-4\alpha uz-18\beta v^ {2}w^{2}x-9\beta v^{2}w^{2}xy-24\beta v^{2}wx-12\beta v^{2}wxy-6\beta v^{2}x-3 \beta v^{2}xy\\ +18\beta vw^{2}x^{2}+9\beta vw^{2}x^{2}y-18\beta vw^{2}x-9\beta vw^{2 }xy^{2}-27\beta wx^{2}y+24\beta vwx^{2}+12\beta wwx^{2}y-24\beta vwx-12\beta w wxy ^{2}-36\beta vwxy\\ +6\beta vx^{2}+3\beta vx^{2}y-6\beta vx-3\beta vxy^{2}-9\beta vxy \Big{]}, \tag{28}\]
Figure 1: The variation of the matter density parameter \(s\) for the \(A\) cosmological solution in a specific region of interest (\(v=1,\beta=-1,x=1\)).
\[\frac{dz}{dN}=\Big{[}-9uw^{3}z^{2}-9uw^{2}z^{3}-9uw^{2}z^{2}-3uwz^{2} -3uz^{3}-3uz^{2}-18vw^{3}z^{2}-9vw^{3}z-18vw^{2}z^{3}-39vw^{2}z^{2}\\ -21vw^{2}z-12vwz^{3}-30vwz^{2}-15vwz-6vz^{3}-9vz^{2}-3vz+9w^{3}xz^{ 2}-9w^{3}yz^{2}-9w^{3}z^{2}\\ +9w^{2}xz^{3}+9w^{2}xz^{2}-9w^{2}yz^{3}-9w^{2}yz^{2}-9w^{2}z^{3}-9 w^{2}z^{2}+3wxz^{2}-3wyz^{2}-3wz^{2}+3xz^{3}\\ +3xz^{2}-3yz^{3}-3yz^{2}-3z^{3}-3z^{2}\Big{]}.\\ \Big{[}(3w^{2}+1)z^{2}(u-x+y+1)+v(w^{2}(6z^{2}+6z+3)+4w(2z^{2}+3 z+1)+2z^{2}+2z+1)\Big{]}^{-1}, \tag{29}\]
\[\frac{du}{dN}=\frac{1}{4\alpha z(z+1)}\Big{[}-18\beta uvw^{2}x+9 \beta uvw^{2}y-24\beta uvwx+12\beta uvwy-6\beta uvx+3\beta uvy+8\alpha uz^{2}+ 8\alpha uz\\ -18\beta v^{2}w^{2}x+9\beta v^{2}w^{2}y-24\beta v^{2}wx+12\beta v ^{2}wy-6\beta v^{2}x+3\beta v^{2}y+18\beta wv^{2}x^{2}-18\beta vw^{2}x-27\beta vw ^{2}xy+9\beta vw^{2}y^{2}\\ +9\beta vw^{2}y+24\beta vwx^{2}-24\beta vwx-36\beta vwxy+12\beta v wy ^{2}+12\beta vwy+6\beta vx^{2}-6\beta vx-9\beta vxy+3\beta vy^{2}+3\beta vy \Big{]}, \tag{30}\]
\[\frac{dv}{dN}=\Big{[}-v(u^{2}(3w^{2}+1)z^{2}(3\beta v(w+1)(3w+1)x +2\alpha(z+1)(3w+z+3))+u(3\beta v^{2}(w+1)(3w+1)x((w(9w+8)\\ +3)z^{2}+6w(w+2)z+w(3w+4)+2z+1)-2vz(3\beta(w+1)(3w+1)(3w^{2}+1)xz( x-y-1)\\ +\alpha(z+1)(-9w^{3}z+3w^{2}(z^{2}+z+2)+w(z(16z+21)+8)+z^{2}+z+2)) -2\alpha(3w^{2}+1)z^{2}(z+1)(3w+z+3)(x-y-1))\\ +3\beta v(w+1)(3w+1)x(v-x+y+1)(v(w^{2}(6z(z+1)+3)+4w(z+1)(2z+1)+2 z(z+1)+1)-(3w^{2}+1)z^{2}(x-y-1))\Big{]}\Big{]}\cdot\\ \Big{[}2\alpha uz(z+1)((3w^{2}+1)z^{2}(u-x+y+1)+v(w^{2}(6z(z+1)+3) +4w(z+1)(2z+1)+2z(z+1)+1))\Big{]}^{-1}. \tag{31}\]
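Before listing the critical points, we recall schematically how the linear stability analysis proceeds. The toy right-hand sides in the sketch below are only placeholders standing in for Eqs. (27)-(31), which are too lengthy to reproduce in code; the procedure itself (finding stationary points and the Jacobian eigenvalues) is the same.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
X = sp.Matrix([x, y])
# placeholder two-dimensional system dX/dN = F(X); the actual F is given by Eqs. (27)-(31)
F = sp.Matrix([x*(1 - x - y),
               y*(x - sp.Rational(1, 2))])

critical_points = sp.solve(list(F), [x, y], dict=True)
J = F.jacobian(X)

for cp in critical_points:
    eigs = J.subs(cp).eigenvals()
    print(cp, dict(eigs))

# All eigenvalues with negative real part -> stable point (attractor);
# mixed signs -> saddle; a vanishing eigenvalue -> non-hyperbolic point,
# for which the linearization alone is inconclusive (as for the A and C solutions below).
```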
In this case, where we have an exponential decomposition, we have obtained four critical points which are associated to a de-Sitter behavior (\(w_{tot}=-1\)). For these solutions the cosmological model acts as a geometrical dark energy component, driving the accelerated expansion of the Universe as a cosmological constant. In what follows, we shall describe each cosmological solution in detail, analyzing the dynamical consequences.
The first cosmological solution found for the exponential case is located at the following coordinates:
\[A:\left(y=0,z=-w-1,u=-\frac{3\beta v(3w+1)x(v-x+1)}{9\beta vwx+3\beta vx-4 \alpha w}\right), \tag{32}\]
describing a de-Sitter epoch where the matter density parameter is equal to:
\[s=-\frac{4\alpha(v-x+1)}{4\alpha w-3\beta v(3w+1)x}. \tag{33}\]
For this solution we note that the \(x,z\) and \(v\) variables are independent. The \(y\) component which is related to the time variation of the Gauss-Bonnet geometrical invariant is set to zero. We can note that the location in the phase space structure is influenced by the barotropic equation of state of the matter sector, and \(\alpha,\beta\) parameters which are describing the strength of the coupling functions. In Fig. 1 we have presented a possible region of interest for the matter density parameter where \(s\in(0,1)\). In the general case where all the parameters are not set, the final expressions for the specific eigenvalues are too complex to be written in the manuscript. However, by setting some of the parameters (\(\alpha=1,v=1,x=1,\beta=-1\)), we have obtained some simple expressions of the resulting eigenvalues:
\[\Big{[}0,0,-\frac{3(w+1)(w(51w+22)+7)}{w(w(51w+86)+19)+4},\\ \frac{1}{2}\left(\frac{w(w+1)(3w+1)(397w+99)(w(w(51w+86)+19)+4)}{ \sqrt{w^{2}(w+1)^{2}(3w+1)^{2}(13w+3)(397w+99)(w(w(51w+86)+19)+4)^{2}}}-3\right), \\ \frac{1}{2}\left(-\frac{w(w+1)(3w+1)(397w+99)(w(w(51w+86)+19)+4)}{ \sqrt{w^{2}(w+1)^{2}(3w+1)^{2}(13w+3)(397w+99)(w(w(51w+86)+19)+4)^{2}}}-3\right) \Big{]}. \tag{34}\]
As can be observed, the solution is non-hyperbolic, due to the existence of two zero eigenvalues. Hence, we can use the linear stability theory only to study the specific cases where the dynamics corresponds to a saddle behavior. A specific region where we have obtained a saddle behavior is presented in Fig. 3. The evolution in the phase space structure towards the A cosmological solution can be seen in Fig. 2. In this case the corresponding eigenvalues have the following values:
\[\big{[}0,0,-1.00194,-1.5+3.36584i,-1.5-3.36584i\big{]}. \tag{35}\]
The second cosmological solution is found at the coordinates:
\[B^{\pm}:\Big{(}x=\frac{3\beta\pm\sqrt{3}\sqrt{\beta(3w+1)(3\beta +16\alpha w+9\beta w)}+9\beta w}{6\beta+18\beta w},y=0,z=-w-1,\\ v=\frac{\pm\sqrt{3}\sqrt{\beta(3w+1)(3\beta+16\alpha w+9\beta w )}-3\beta(3w+1)}{6(\beta+3\beta w)}\Big{)}, \tag{36}\]
with the corresponding matter density parameter equal to: \(s=-\frac{u}{w}\). Due to the specific form of the matter density parameter, we can observe that the case of a pressure-less dark matter fluid cannot be considered, leading to a divergence. Hence, we can only approximate the case of a pressure-less dark matter fluid, \(w\to 0\). The eigenvalues for the \(B^{+}\) solutions are the following:
\[\Big{[}0,0,-\frac{3(w+1)(3w+1)\left(6\beta u+9\beta(2u-1)w^{2}+ \sqrt{3}w\sqrt{\beta(3w+1)(3\beta+16\alpha w+9\beta w)}-3\beta w\right)}{6 \beta u(w+1)(3w+1)\left(3w^{2}+1\right)+(3w+5)w^{2}\left(\sqrt{3}\sqrt{\beta(3 w+1)(3\beta+16\alpha w+9\beta w)}-3\beta(3w+1)\right)},E_{4},E_{5}\Big{]}, \tag{37}\]
where \(E_{4},E_{5}\) have complicated expressions and are not displayed in the manuscript. In Fig. 4 we have displayed a saddle region for the \(B^{+}\) solution, taking into account also the existence conditions which imply \(s\in(0,1)\).
The last de-Sitter cosmological solution found in the present analysis is located in the phase space structure at the coordinates:
\[C:\Big{(}x=\frac{4\left(\alpha+\alpha v+6\alpha vw^{2}+\alpha vw +3\alpha w^{2}\right)}{4\alpha+3\beta v^{2}+27\beta v^{2}w^{2}+18\beta v^{2}w +12\alpha w^{2}},y=0,z=-w-1,u=\\ -\frac{3\beta v^{2}(3w+1)^{2}\left(6vw^{2}+vw+v+3w^{2}+1\right)}{ \left(3w^{2}+1\right)\left(4\alpha+3\beta v^{2}+27\beta v^{2}w^{2}+18\beta v^{ 2}w+12\alpha w^{2}\right)}\Big{)}, \tag{38}\]
with the matter density parameter influenced by the dark matter equation of state and specific coupling of the energy-momentum-squared function,
\[s=\frac{v(3w+1)}{3w^{2}+1}. \tag{39}\]
The variation of the corresponding matter density parameter for the \(C\) cosmological solution is presented in Fig. 5. It can be seen that in the case of a pressure-less dark matter component the matter density parameter satisfies the observational constraints, being influenced also by the \(v\) variable which encodes geometrical effects due to the specific form of the energy-momentum-squared function.
For this critical line we have obtained the following eigenvalues (\(v=1,\beta=1\)):
\[\Bigg{[}0,0,0,\frac{C_{1}\pm\frac{\sqrt{2}\sqrt{w^{2}(3w+1)^{4}(w^{2}-1)^{2}(3 w^{2}+1)(9w^{2}+w+2)(50\alpha^{2}(3w^{2}+1)(9w^{2}+w+2)+12\alpha(3w^{2}+1)(3w+1)^{2} +9(3w+1)^{4}(4\alpha+3w((4\alpha+9)w+6)+3)^{2}}}{\alpha(w-1)w(w+1)(3w+1)^{2}(3 w^{2}+1)(9w^{2}+w+2)}}}{4(4\alpha+3w((4\alpha+9)w+6)+3)}\Bigg{]}, \tag{40}\]
where we have defined:
\[C_{1}=6(-4\alpha-3w((4\alpha+9)w+6)-3). \tag{41}\]
In Fig. 6 we have presented a region where the dynamics corresponds to a saddle behavior, possibly explaining the late time acceleration of the Universe in the background dynamics. Note that the last critical line is also non-hyperbolic, having three zero eigenvalues. The transition from the critical point \(C\) towards the \(A\) cosmological solution in the xOy plane can be observed in Fig. 7 for specific initial conditions near the \(C\) solution, validating the obtained analytical solutions. Lastly, in Fig. 8 we have represented the variation of the total (effective) equation of state for the present cosmological scenario. We note that the evolution can pass from a matter domination epoch towards a super-accelerated era, attaining the cosmological constant boundary from below at late times, crossing the phantom divide line in the early stages.
Figure 3: A specific region of interest for the \(A\) cosmological solution where the dynamics corresponds to a saddle dynamical behavior. (\(v=1,\beta=-1,x=1\)).
Figure 2: The evolution towards the A cosmological solution in the phase space structure (\(w=0.0001,\alpha=-2,\beta=-1\)).
## IV Summary and conclusions
In this paper we have proposed a model in the theoretical framework of modified gravity, where the fundamental Einstein-Hilbert action is extended, by considering a more complete theory. The latter theory denoted as \(F(G,T_{\mu\nu}T^{\mu\nu})\) is based on two specific components. The first component takes into account possible physical effects due to the consideration of the Gauss-Bonnet invariant (\(G\)), encoding geometrical aspects in the generic theory. The second component in our action is based on the energy-momentum-squared invariant (\(T_{\mu\nu}T^{\mu\nu}\)), embedding geometrical effects from the specific form of the energy-momentum tensor. In this cosmological model we have assumed that the generic function which depends on the Gauss-Bonnet invariant and the energy-momentum-squared term can be decomposed in an independent manner, \(F(G,T_{\mu\nu}T^{\mu\nu})=f(G)+g(T_{\mu\nu}T^{\mu\nu})\). After we have proposed the generic action for our present model, we have obtained the modified Friedmann relations by varying the action with respect to the inverse metric, assuming that the background can be described by the Robertson-Walker metric. Here we note that the continuity equation is not satisfied due to the inclusion of the energy-momentum-squared term in the specific action, a particular aspect for these theories. After obtaining the dynamical equations, we have studied the physical aspects of our cosmological model by considering the linear stability theory. In this study we have assumed an exponential representation for the generic function in our action, \(F(G,T_{\mu\nu}T^{\mu\nu})=f_{0}e^{\alpha G}+g_{0}e^{\beta T^{2}}\), where \(\alpha,\beta,f_{0},g_{0}\) are constant parameters. In this particular case we have introduced the auxiliary variables which are required in order to apply the linear stability theory. After introducing the auxiliary variables associated to the phase space structure, we have computed the critical points of the present cosmological model for the exponential case.
In the phase space structure we have identified various critical points which correspond to a de-Sitter epoch where the model behaves closely as a cosmological constant, particular solutions which can explain the late time stage of the Universe. As can be seen from the analysis, for these solutions the effective matter density parameter is influenced by different coupling terms, and the barotropic equation of state for the matter sector. From a dynamical point of view we have identified possible regions of interest for the coupling constants which correspond to a saddle dynamical
Figure 4: A specific region of interest for the \(B^{+}\) cosmological solution where the dynamics corresponds to a saddle dynamical behavior. (\(u=-0.1,\beta=1\)).
Figure 5: The variation of the matter density parameter \(s\) for the \(C\) cosmological solution.
behavior in the late time stage of the Universe. These critical points are particular solutions which correspond in principle to various epochs in the dynamical trajectory of our Universe. In this case the phase space structure has four critical points, associated to a de-Sitter epoch. For each of the corresponding critical points we have established the dynamical behavior, obtaining possible regions of interest for different parameters which can describe the present model.
The analysis of the phase space structure showed that the present cosmological model can describe the late-time accelerated expansion of the Universe and the dynamical behavior of the effective equation of state. However, due to the de-Sitter solutions found in the phase space structure, we have to further assume that the matter and the radiation epochs appear by fine-tuning the initial conditions of the current trajectory. The present paper can be extended in various cosmological applications. For example, it would be interesting to study the cosmological model by considering an observational study with the recent cosmological data, obtaining constraints for various parameters from an astrophysical point of view. Another possible aspect is represented by the inflationary era, a particular stage in the evolution of the Universe which can be further analyzed. These particular extensions can provide support for a more complete theory of gravity and are left as future projects.
###### Acknowledgements.
We would like to thank Prof. Dr. Virgil Baran for various discussions which lead to the development of the present project. The computational part of this work was performed using the computer stations provided by CNFIS through
Figure 6: A specific region of interest for the \(C\) cosmological solution where the dynamics corresponds to a saddle dynamical behavior. (\(v=1,\beta=1\)).
Figure 7: The evolution from the critical point C towards the A cosmological solution in the xOy plane where \(w=0.0001,\alpha=-2,\beta=-1\).
the project CNFIS-FDI-2020-035.
|
2304.04610 | Attention at SemEval-2023 Task 10: Explainable Detection of Online
Sexism (EDOS) | In this paper, we have worked on interpretability, trust, and understanding
of the decisions made by models in the form of classification tasks. The task
is divided into 3 subtasks. The first task consists of determining Binary
Sexism Detection. The second task describes the Category of Sexism. The third
task describes a more Fine-grained Category of Sexism. Our work explores
solving these tasks as a classification problem by fine-tuning
transformer-based architecture. We have performed several experiments with our
architecture, including combining multiple transformers, using domain adaptive
pretraining on the unlabelled dataset provided by Reddit and Gab, Joint
learning, and taking different layers of transformers as input to a
classification head. Our system (with team name Attention) was able to achieve
a macro F1 score of 0.839 for task A, 0.5835 macro F1 score for task B and
0.3356 macro F1 score for task C at the Codalab SemEval Competition. Later we
improved the accuracy of Task B to 0.6228 and Task C to 0.3693 in the test set. | Debashish Roy, Manish Shrivastava | 2023-04-10T14:24:52Z | http://arxiv.org/abs/2304.04610v1 | # Attention at SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS)
###### Abstract
In this paper, we have worked on interpretability, trust, and understanding of the decisions made by models in the form of classification tasks. The task is divided into 3 subtasks. The first task consists of Binary Sexism Detection. The second task describes the Category of Sexism. The third task describes a more Fine-grained Category of Sexism. Our work explores solving these tasks as a classification problem by fine-tuning transformer-based architectures. We have performed several experiments with our architecture, including combining multiple transformers, using domain adaptive pretraining on the unlabelled dataset drawn from Reddit and Gab, joint learning, and taking different layers of transformers as input to a classification head. Our system (with team name 'Attention') was able to achieve a macro F1 score of 0.839 for task A, 0.5835 for task B and 0.3356 for task C at the Codalab SemEval Competition. Later we improved the macro F1 score of Task B to 0.6228 and of Task C to 0.3693 on the test set.
## 1 Introduction
Online sexism is a growing problem, given the large scale of online text. Detecting online sexism is an important NLP task with significant social impact. Sexism is defined as any abuse or negative sentiment directed towards women based on their gender, or based on their gender combined with one or more other identity attributes (e.g. black women, trans women).
Automated tools to detect sexism are widely available, but most do not address the issue of explaining why a text is sexist. Flagging sexist content and explaining why it is sexist are both important aspects of such tools. We focus on interpretability, trust, and understanding of the decisions made by models. An automated system capable of understanding, interpreting, and classifying sexist text would be a major step towards making online platforms safer.
Explainable Detection of Online Sexism (EDOS) at SemEval 2023 (Kirk et al., 2023), tries to solve this problem by exploring different systems that can categorize the dataset created with texts from Reddit and Gab.
The problem is divided into 3 subtasks. The first task is formulated as Binary Sexism Detection, where the main idea is to detect whether a text is sexist or not.
The second task describes the Category of Sexism: if the text is sexist, to which category does it belong? The categories are threats, derogation (treating someone as of little worth), animosity (a strong dislike or unfriendly feeling), and prejudiced discussions (an unfair feeling of dislike for a person or group because of race, sex, religion, etc.). This makes the task a four-class classification problem, given that the text is sexist.
The third task describes the fine-grained vector of sexist text. For posts that are sexist, systems are supposed to predict more fine-grained aspects of the category of sexism, which helps interpretability and a deeper understanding of why a post is sexist. This makes it an 11-class classification problem, given that the text is sexist. The 11 classes are: descriptive attacks; aggressive and emotive attacks; casual use of gendered slurs, profanities, and insults; immutable gender differences and gender stereotypes; supporting systemic discrimination against women as a group; incitement and encouragement of harm; dehumanising attacks & overt sexual objectification; supporting mistreatment of individual women; backhanded gendered compliments; threats of harm; condescending explanations or unwelcome advice.
To solve this problem, we have fine-tuned transformer-based architecture. Our final submission is made up of two models: RoBERTa (Zhuang
et al., 2021) and DeBERTa (He et al., 2020). The last transformer layer is then passed to an MLP whose final layer has one neuron per class. We have also explored Domain Adaptive Pretraining using masked language modeling on both of these transformers and used their weights for the classification task. The models remain the same for all the tasks; only the last layer changes based on the number of classes.
Our system ranked 35th in sub-task 1, 50th in sub-task 2, and 44th in sub-task 3, out of 583 participants.
All of our code is made publicly available on Github1. The domain adaptive pretrained versions of RoBERTa and DeBERTa are also made publicly available at Huggingface 2
Footnote 1: [https://github.com/debashish05/Explainable_Detection_of_Online_Sexism](https://github.com/debashish05/Explainable_Detection_of_Online_Sexism)
Footnote 2: [https://huggingface.co/debashish-roy/](https://huggingface.co/debashish-roy/)
## 2 Background
Transformer-based models such as RoBERTa and DeBERTa has shown outstanding performance on wide domains of NLP, including text classification. These models can capture a sort of structure in the hateful speech as well (Basile et al., 2019).
Domain Adaptive Pre Training is used in many domains of NLP, to make the model focus on the specific domain rather than a large variety of text it is trained on. Don't Stop Pre Training: Adapt Language Models to Domains and Tasks (Gururangan et al., 2020) talk about the increase in the performance of a model if it is pretrained on similar types of data.
SemEval-2021 Task 7: HaHackathon, Detecting and Rating Humor and Offense (Meaney et al., 2021) talks about the importance of pretraining and using ensembling methods to achieve high accuracy. It also described the importance of lexical features.
This SemEval task itself provided the dataset. The dataset contains text in English. The dataset contains 20,000 texts. Half of the text was taken from Gab and the other half from Reddit. The entries are then manually annotated by three trained annotator women. For each text, we have (a) label-sexist, describing whether the label is sexist or not, (b) label-category: given the text is sexist what is the category of sexism, in the case of non-sexist text it is none. (c) label-vector: it is more fine-grained details about the category of sexism.
The data is divided into 70% as training data, 10% as validation data, and 20% as test data. The distribution of labels for all three tasks is described in Table 1, Table 2, and Table 3. The major issue with the dataset for task C is less number of samples. All of the text is less than 64 words, so while generating tokens for transformers we have kept the max length to 64.
## 3 System Overview
We have used Transformers with several variations. Based on this variation we try to come up with the best model.
* **RoBERTa:** It builds on BERT's masked language modeling strategy, but excludes BERT's next-sentence pretraining objective and is trained with much larger mini-batches and learning rates. The RoBERTa BASE model consists of 12 transformer layers, 12 self-attention heads per layer, and a hidden size of 768.
* **DeBERTa:** We have used the BASE model which consists of 12 transformer layers, 12 self-attention heads per layer, and a hidden size of 768. It tries to improve RoBERTa by using two techniques: a disentangled attention mechanism and an enhanced mask decoder.
* **Domain Adaptive Pretraining:** We tailor a pretrained model to the domain of the target task. To do so, we perform masked language modeling over text from that domain.
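A minimal sketch of this pretraining step with the HuggingFace Trainer is given below; the paths and hyperparameters are illustrative placeholders, not necessarily the exact settings behind our released checkpoints.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# `unlabeled_texts` is assumed to hold the raw unlabelled Reddit/Gab sentences
unlabeled_texts = ["..."]
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

ds = Dataset.from_dict({"text": unlabeled_texts})
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=64),
            batched=True, remove_columns=["text"])

# dynamic masking with the standard 15% masking probability
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="roberta-dapt", num_train_epochs=3,
                         per_device_train_batch_size=32, learning_rate=5e-5,
                         save_strategy="epoch")
Trainer(model=model, args=args, train_dataset=ds, data_collator=collator).train()
model.save_pretrained("roberta-dapt")   # later reused as the classifier backbone
```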
With these two transformers, we have experimented with several architectures. The different configurations are as follows:
1. **RoBERTa/DeBERTa (last layer) + MLP:** We have taken the last layer of the RoBERTa
\begin{table}
\begin{tabular}{|c|c|} \hline
**Classes** & **Instances** \\ \hline Sexist Text & 3398 \\ \hline Non-Sexist Text & 10602 \\ \hline \end{tabular}
\end{table}
Table 1: Class distribution for sub-task 1 train dataset.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Classes** & **Instances** \\ \hline Threats, plans to harm and incitement & 310 \\ \hline Derogation & 1590 \\ \hline Animosity & 1165 \\ \hline Prejudiced discussions & 333 \\ \hline \end{tabular}
\end{table}
Table 2: Class distribution for sub-task 2 train dataset.
the base model followed by some MLP layers and fine-tuned over the training data. The same setup has been used with DeBERTa as well.
2. **RoBERTa/DeBERTa (average of all layers) + MLP:** Instead of taking only the last layer, here we take the average of all the layers of the RoBERTa base model, which is then followed by some MLP layers and fine-tuned over the training data. The same setup is used with DeBERTa as well.
3. **RoBERTa and DeBERTa concatenation of the last layer + MLP:** Output from the last layers of RoBERTa and DeBERTa were concatenated. Then this is followed by some fully connected layers of MLP. The last layer consists of neurons with a number of classes.
4. **RoBERTa and DeBERTa combined (the two transformer outputs are passed through MLPs before concatenation):** The outputs of the last layers of RoBERTa and DeBERTa are passed through separate MLP layers and then concatenated. This is followed by some more fully connected layers, with the last layer having one neuron per class. The model is presented in Fig 1, and a schematic implementation is sketched after this list.
5. **Joint Learning for Task B:** For task B, we previously used only the sexist texts. Here we instead use all the data and add an extra class indicating that the text is non-sexist. This turns the problem into a 5-class rather than a 4-class classification problem, with a large increase in the amount of training data. At inference time, since we must output one of the 4 sexism categories, we return the class with the largest probability; if that class is 'non-sexist', we return the class with the second largest probability instead.
6. **Experiment 4 with Domain Adaptive Pretraining:** This model is the same as experiment 4 with the exception that we are using Domain Adaptive Pretrained RoBERTa and DeBERTa. The transformers are trained on the 2M unlabelled texts provided in the task itself.
All these models are used for all three tasks, with only the last layer changing according to the number of classes.
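A minimal PyTorch sketch of experiment 4 (the dual-encoder model of Fig. 1) is shown below; the projection width of 256 and the dropout value are illustrative choices, not necessarily the ones used in our reported runs.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class DualEncoderClassifier(nn.Module):
    """[CLS] vectors from RoBERTa and DeBERTa go through separate MLPs,
    are concatenated, and feed a shared classification head."""
    def __init__(self, num_classes, hidden=768, proj=256):
        super().__init__()
        self.roberta = AutoModel.from_pretrained("roberta-base")
        self.deberta = AutoModel.from_pretrained("microsoft/deberta-base")
        self.proj_r = nn.Sequential(nn.Linear(hidden, proj), nn.ReLU())
        self.proj_d = nn.Sequential(nn.Linear(hidden, proj), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * proj, proj), nn.ReLU(),
                                  nn.Dropout(0.1), nn.Linear(proj, num_classes))

    def forward(self, ids_r, mask_r, ids_d, mask_d):
        h_r = self.roberta(ids_r, attention_mask=mask_r).last_hidden_state[:, 0]
        h_d = self.deberta(ids_d, attention_mask=mask_d).last_hidden_state[:, 0]
        return self.head(torch.cat([self.proj_r(h_r), self.proj_d(h_d)], dim=-1))

# num_classes = 2 for task A, 4 for task B and 11 for task C;
# each encoder keeps its own tokenizer, hence the separate input ids / masks.
```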
Figure 1: Architecture of our system.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Classes** & **Instances** \\ \hline Threats of harm & 56 \\ \hline Incitement and encouragement of harm & 254 \\ \hline Descriptive attacks & 717 \\ \hline Aggressive and emotive attacks & 673 \\ \hline Dehumanising attacks \& overt sexual objectification & 200 \\ \hline Casual use of gendered slurs, profanities, and insults & 637 \\ \hline Immutable gender differences and gender stereotypes & 417 \\ \hline Backhanded gendered compliments & 64 \\ \hline Condescending explanations or unwelcome advice & 47 \\ \hline Supporting mistreatment of individual women & 75 \\ \hline Supporting systemic discrimination against women as a group & 258 \\ \hline \end{tabular}
\end{table}
Table 3: Class distribution for sub-task 3 train dataset.
## 4 Experimental setup
The dataset provided was already divided into train, dev, and test sets. We performed some preprocessing of the text, which included converting emoji to text, converting all characters to lowercase, converting links to a <link> tag, removing punctuation, removing words containing numbers, removing stop words, and using the WordNet lemmatizer. However, after this cleaning step the accuracy was lower than with the uncleaned text. A probable reason is that the transformers' subword tokenization handles the raw text better than such explicit normalization.
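For completeness, the discarded cleaning pipeline roughly corresponds to the following sketch (our reconstruction of the listed steps; the reported results use the raw text):

```python
import re
import string
import emoji
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)
STOP = set(stopwords.words("english"))
LEMMA = WordNetLemmatizer()
PUNCT = string.punctuation.replace("<", "").replace(">", "")  # keep the <link> tag intact

def clean(text: str) -> str:
    text = emoji.demojize(text)                               # emoji -> ":smiling_face:"
    text = text.lower()
    text = re.sub(r"https?://\S+|www\.\S+", "<link>", text)   # links -> <link>
    text = text.translate(str.maketrans("", "", PUNCT))       # drop punctuation
    words = [w for w in text.split() if not any(c.isdigit() for c in w)]  # drop words with numbers
    return " ".join(LEMMA.lemmatize(w) for w in words if w not in STOP)
```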
The hyperparameters used in these experiments are: epochs = 20, learning rate = 1e-5, loss = cross-entropy, optimizer = AdamW.
We have used the Macro F1 score as a measure of evaluation for all three tasks.
## 5 Results
Results for the different experiments are reported in Table 4. Our official submission is experiment number 8; experiments 5 and 6 were carried out after the deadline. We achieved a rank of 35th in task A, 50th in task B and 44th in task C.
With Domain Adaptive Pretraining the evaluation loss of RoBERTa drops from 2.49 to 1.98, and that of DeBERTa from 11.52 to 2.27. We also observe reduced perplexity scores, which measure how confused the model is, so lower is better.
## 6 Error Analysis
These error analyses are performed on experiment no 6.
### Task A
From the confusion matrix on the test data given in Table 6, we can infer that the model predicts the right label 96% of the time when the text is not sexist. Issues arise when the text is sexist, which is a major problem: approximately 35% of the time the model classifies sexist text as non-sexist, which is dangerous.
For example, this text: "Don't run away like a little jewish girl." and "woman here Opinion discarded." are treated as not sexist., but they are sexist.
For the 346 texts that are sexist but classified as not sexist, applying a rule-based system that flags a text as sexist whenever a particular word appears will not work, because we need to understand the
\begin{table}
\begin{tabular}{|l|c|c|} \hline
**Predicted-\(>\)** & \multicolumn{1}{c|}{Not sexist} & \multicolumn{1}{c|}{Sexist} \\ \hline
\end{table}
Table 6: Confusion Matrix for Task A
\begin{table}
\begin{tabular}{|c|l|c|c|c|c|c|c|} \hline & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{**Task A Macro F1**} & \multicolumn{1}{c|}{**Task B Macro F1**} & \multicolumn{1}{c|}{**Task C Macro F1**} \\ \hline
**S.No** & **Experiment** & **Val** & **Test** & **Val** & **Test** & **Val** & **Test** \\ \hline
1 & RoBERTa last layer + MLP & 82.90 & 81.47 & 59.74 & 58.29 & 59.70 & 33.48 \\ \hline
2 & DeBERTa last layer + MLP & 83.24 & 81.99 & 61.05 & 59.07 & 29.85 & 30.09 \\ \hline
3 & RoBERTa avg of all layers + MLP & 79.85 & 78.65 & 34.83 & 45.71 & 26.76 & 21.58 \\ \hline
4 & DeBERTa avg of all layer + MLP & 80.40 & 78.62 & 59.79 & 55.15 & 22.11 & 21.61 \\ \hline
5 & RoBERTa+DeBERTa+ (Embeddings of these two are concatenated before concatenating these are passed through MLP) + MLP & 82.96 & 82.26 & 62.47 & **62.28** & 33.99 & 31.61 \\ \hline
6 & Experiment 5 + Domain Adaptive Pre Training with unlabelled text & **84.27** & 82.66 & 63.48 & 60.85 & **35.30** & **36.93** \\ \hline
7 & Joint Learning for task B using task (A and B’s data), the last layer is of 5 neurons with labels from task B and one non-sexist text & NA & NA & 54.57 & 53.29 & NA & NA \\ \hline
8 & RoBERTa+DeBERTa+ (Embedding of RoBERTa and DeBERTa are concatenated and passed through MLP) + Domain Adaptive Pretraining & 84.13 & **83.9** & **67.00** & 58.35 & 34.33 & 33.56 \\ \hline \end{tabular}
\end{table}
Table 4: Results of experiments.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Predicted-\(>\)** & \multicolumn{1}{c|}{Threats, plans to harm and incitement} & \multicolumn{1}{c|}{Derogation} & \multicolumn{1}{c|}{Animosity} & \multicolumn{1}{c|}{Prejudiced} \\ \hline Threats, plans to harm and incitement (Actual) & 62 & 11 & 12 & 4 \\ \hline Derogation (Actual) & 16 & 303 & **107** & 28 \\ \hline Animosity (Actual) & 11 & **112** & 199 & 11 \\ \hline Prejudiced discussions (Actual) & 9 & 27 & 12 & 46 \\ \hline \end{tabular}
\end{table}
Table 5: Confusion Matrix for Task B
whole context. Most of the time there is no hidden meaning in the sentence.
One of the main reasons for the poor performance in predicting that a text is sexist is the small number of sexist samples. With more sexist samples, our model could learn their patterns and perform better. We should also have tried the distilled versions of these transformers: they have fewer weights and can therefore be trained with fewer samples, and progressively smaller distilled variants could be used as the number of training samples shrinks.
### Task B
The confusion matrix for task B is provided in Table 5. The model often confuses Derogation and Animosity: approximately 25% of derogation texts are predicted as animosity, and nearly 33% of animosity texts are predicted as derogation. The model is therefore not able to clearly distinguish between these two classes. Adding more data could help solve this issue.
## 7 Conclusion
Domain Adaptive Pretraining gives us an additional performance gain in all three sub-tasks. Combining RoBERTa and DeBERTa makes the model larger, but also brings a greater variety of representations to the model.
Although the macro F1 score is quite good, the instances where the model fails are very costly in the case of task A: temporarily taking down a text that is not sexist is less harmful than keeping a sexist text on a social platform, yet the model often fails to flag sexist texts. A manual inspection of such failure cases suggests that there is still large scope for improving the model.
Despite that, the model shows very promising results, where the number of samples to train on is large. To further improve the accuracy we can use more annotated data from people. Although this will be costly, it will be a one-time effort.
The model that we submitted for Task B was not able to generalize well. The macro F1 dropped from 67% to 58.35%. This can be due to over-fitting or dissimilarity between the distribution of the test and train set.
One of the ideas that we look forward to exploring is predicting pseudo labels for the unlabelled data. For example, if our best model assigns a particular class more than 99% probability, we can use that data instance as additional training data. In this way, we can increase the amount of data available to train our model; a sketch of this selection step is shown below.
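The following is a hypothetical helper illustrating this selection step; `model` is assumed to be a fine-tuned HuggingFace-style classifier and `unlabeled_loader` to yield tokenized batches.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def collect_pseudo_labels(model, unlabeled_loader, threshold=0.99, device="cuda"):
    model.eval()
    selected = []
    for batch in unlabeled_loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        probs = F.softmax(model(**batch).logits, dim=-1)
        conf, label = probs.max(dim=-1)
        keep = conf > threshold          # only near-certain predictions become training data
        for i in torch.nonzero(keep).flatten():
            selected.append({"input_ids": batch["input_ids"][i].cpu(),
                             "label": label[i].item()})
    return selected
```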
Another idea is to generate text of a similar type (Bhatt and Shrivastava, 2022) using decoder-based transformers. For example, for classes with few samples, we can train a causal language model on the labeled samples and use it to generate additional data to train on.
## 8 Acknowledgments
I want to thank Sagar Joshi for helping me understand the task. I would also like to thank Tathagata Raha and Vijayasaradhi Indurthi for giving possible directions to explore the problem. We also want to thank the organizers of this task
|
2306.15164 | DSRM: Boost Textual Adversarial Training with Distribution Shift Risk
Minimization | Adversarial training is one of the best-performing methods in improving the
robustness of deep language models. However, robust models come at the cost of
high time consumption, as they require multi-step gradient ascents or word
substitutions to obtain adversarial samples. In addition, these generated
samples are deficient in grammatical quality and semantic consistency, which
impairs the effectiveness of adversarial training. To address these problems,
we introduce a novel, effective procedure for instead adversarial training with
only clean data. Our procedure, distribution shift risk minimization (DSRM),
estimates the adversarial loss by perturbing the input data's probability
distribution rather than their embeddings. This formulation results in a robust
model that minimizes the expected global loss under adversarial attacks. Our
approach requires zero adversarial samples for training and reduces time
consumption by up to 70\% compared to current best-performing adversarial
training methods. Experiments demonstrate that DSRM considerably improves
BERT's resistance to textual adversarial attacks and achieves state-of-the-art
robust accuracy on various benchmarks. | Songyang Gao, Shihan Dou, Yan Liu, Xiao Wang, Qi Zhang, Zhongyu Wei, Jin Ma, Ying Shan | 2023-06-27T02:46:08Z | http://arxiv.org/abs/2306.15164v1 | # DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization
###### Abstract
Adversarial training is one of the best-performing methods in improving the robustness of deep language models. However, robust models come at the cost of high time consumption, as they require multi-step gradient ascents or word substitutions to obtain adversarial samples. In addition, these generated samples are deficient in grammatical quality and semantic consistency, which impairs the effectiveness of adversarial training. To address these problems, we introduce a novel, effective procedure that instead performs adversarial training with only clean data. Our procedure, distribution shift risk minimization (DSRM), estimates the adversarial loss by perturbing the input data's probability distribution rather than their embeddings. This formulation results in a robust model that minimizes the expected global loss under adversarial attacks. Our approach requires zero adversarial samples for training and reduces time consumption by up to 70% compared to current best-performing adversarial training methods. Experiments demonstrate that DSRM considerably improves BERT's resistance to textual adversarial attacks and achieves state-of-the-art robust accuracy on various benchmarks.
## 1 Introduction
Despite their impressive performance on various NLP tasks, deep neural networks (DNNs), like BERT Devlin et al. (2019), are highly vulnerable to adversarial examples, which arise from adding imperceptible perturbations to natural samples under semantic and syntactic constraints Zeng et al. (2021); Lin et al. (2021). Such vulnerability of DNNs has attracted extensive attention in enhancing defence techniques against adversarial examples Li et al. (2021); Xi et al. (2022), where the adversarial training approach (AT) Goodfellow et al. (2015) is empirically one of the best-performing algorithms to train networks robust to adversarial perturbations Uesato et al. (2018); Athalye et al. (2018). Formally, adversarial training attempts to solve the following min-max problem under loss function \(L\):
\[\min_{\mathbf{\theta}\in\Theta}\underbrace{\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}_{0 }}\overbrace{\max_{\|\mathbf{\delta}\|_{p}\leqslant\epsilon}L(\mathbf{\theta},\mathbf{x}+ \mathbf{\delta},y)}^{\text{Adversarial Samples (AT)}}}_{\text{Distribution Shift (Ours)}},\]
where \(\mathbf{\theta}\in\Theta\) are the model parameters, and \((\mathbf{x},y)\) denotes the input data and label, which follow the joint distribution \(\mathcal{P}_{0}\). The curly brackets show the difference in research focus between our approach and vanilla adversarial training.
Due to the non-convexity of neural networks, finding the analytic solution to the above inner maximization (marked in red) is very difficult Wang et al. (2021). The most common approach is to estimate the adversarial loss from the results of several gradient ascents, such as PGD Madry et al. (2018) and FreeLB Zhu et al. (2019). Li and Qiu (2021) and Zhu et al. (2022) generate meaningful sentences by restricting such perturbations to the discrete token embedding space, achieving competitive robustness with better interpretability Shreya and Khapra (2022).
However, the impressive performance in adversarial training comes at the cost of excessive computational consumption, which makes it infeasible for large-scale NLP tasks Andriushchenko and Flammarion (2020). For example, FreeLB++ Li et al. (2021), which increases the perturbation intensity of the FreeLB algorithm to serve as one of the state-of-the-art methods, achieves optimal performance with nearly 15 times the training time. Moreover, the adversarial samples generated by the aforementioned methods exhibit poor grammatical quality, which is unreasonable in the real world when being manually reviewed Hauser et al. (2021); Chiang and Lee (2022). Some works attempt to speed up the training procedure by obtaining cheaper adversarial samples Wong et al. (2019) or
generating diverse adversarial samples at a negligible additional cost (Shafahi et al., 2019). However, they still require a complex process for adversarial samples and suffer performance degradation in robustness.
In this work, from another perspective of the overall distribution rather than the individual adversarial samples, we ask the following question: _Can we directly estimate and optimize the expectation of the adversarial loss without computing specific perturbed samples, thus circumventing the above-mentioned problems in adversarial training?_
DSRM formalizes the distribution distance between clean and adversarial samples to answer this question. Our methodology interprets the generation of adversarial samples as an additional sampling process over the representation space, whose probability density is not uniform as it is for clean samples. Adversarial samples with higher loss are local maxima of more neighbourhoods and therefore have a higher probability of being generated. We subsequently prove that the intensity of adversarial perturbations naturally bounds the Wasserstein distance between these two distributions. Based on this observation, we propose an upper bound for the adversarial loss, which can be effectively estimated using only the clean training data. By optimizing this upper bound, we can obtain the benefits of adversarial training without computing adversarial samples. In particular, we make the following contributions:
* We propose DSRM, a novel procedure that transforms the training data to a specific distribution to obtain an upper bound on the adversarial loss. Our _codes1_ are publicly available. Footnote 1: [https://github.com/SleepThroughDifficulties/DSRM](https://github.com/SleepThroughDifficulties/DSRM)
* We illustrate the validity of our framework with rigorous proofs and provide a practical algorithm based on DSRM, which trains models adversarially without constructing adversarial data.
* Through empirical studies on numerous NLP tasks, we show that DSRM significantly improves the adversarial robustness of the language model compared to classical adversarial training methods. In addition, we demonstrate our method's superiority in training speed, which is approximately twice as fast as the vanilla PGD algorithm.
## 2 Related Work
### Adversarial Training
Goodfellow et al. (2015) first proposed to generate adversarial samples and utilize them for training. Subsequently, the PGD algorithm (Madry et al., 2018) exploits multi-step gradient ascent to search for the optimal perturbations, refining adversarial training into an effective defence technique. Some other works tailored training algorithms for the NLP field to ensure that the adversarial samples correspond to actual sentences. They craft perturbations by replacing words under the guidance of semantic consistency (Li et al., 2020) or token similarity in the embedding space (Li and Qiu, 2021). However, these algorithms are computationally expensive, which has triggered explorations to improve training efficiency (Zhang et al., 2019). FreeAT (Shafahi et al., 2019) and FreeLB (Zhu et al., 2019) attempt to simplify the computation of gradients to obtain acceleration, constructing multiple adversarial samples simultaneously in one gradient ascent step. Our DSRM approach is orthogonal to these acceleration techniques as we conduct gradient ascent over the data distribution rather than the input space.
### Textual Adversarial Samples
Gradient-based algorithms confront a major challenge in NLP: the texts are discrete, so gradients cannot be directly applied to discrete tokens. Zhu et al. (2019) conducts adversarial training by restricting perturbation to the embedding space, which is less interpretable due to the lack of adversarial texts. Some works address this problem by searching for substitution that is similar to gradient-based perturbation (Cheng et al., 2020; Li and Qiu, 2021). Such substitution strategies can combine with additional rules, such as synonym dictionaries or language models to detect the semantic consistency of adversarial samples (Si et al., 2021; Zhou et al., 2021). However, recent works observe that adversarial samples generated by these substitution methods are often filled with syntactic errors and do not preserve the semantics of the original inputs (Hauser et al., 2021; Chiang and Lee, 2022). Wang et al. (2022) constructs discriminative models to select beneficial adversarial samples, such a procedure further increases the time consumption of adversarial training. In this paper, we propose to estimate the global adversarial loss with only clean data, thus circumventing the defects in adversarial sample generation and selection.
## 3 Methodology
In this section, we first introduce our distribution shift risk minimization (DSRM) objective, a novel upper bound estimation for robust optimization, and subsequently, how to optimize the model parameters under DSRM.
Throughout our paper, we denote vectors as \(\mathbf{a}\), sets as \(\mathcal{A}\), probability distributions as \(\mathcal{P}\), and definition as \(\triangleq\). Specifically, we denote an all-1 vector of length \(b\) as \(\widetilde{1}_{1\times b}\). Considering a model parameterized by \(\mathbf{\theta}\in\Theta\), the per-data loss function is denoted as \(L(\mathbf{\theta},\mathbf{x},y):\Theta\times\mathcal{X}\times\mathcal{Y}\to\mathbb{R} _{+}\). Observing only the training set \(\mathcal{S}_{t}\), the goal of model training is to select model parameters \(\mathbf{\theta}\) that are robust to adversarial attacks.
### Adversarial Loss Estimation by Distribution Shift
We initiate our derivation with vanilla PGD objective (Madry et al., 2017). Formally, PGD attempts to solve the following min-max problem:
\[\min_{\mathbf{\theta}\in\Theta}\rho(\mathbf{\theta})\triangleq\mathbb{E}_{(\mathbf{x},y) \sim\mathcal{P}_{0}}\max_{\|\mathbf{\delta}\|_{p}\leqslant\varepsilon}L(\mathbf{ \theta},\mathbf{x}+\mathbf{\delta},y),\]
where \(\mathbf{\theta}\in\Theta\) are the model parameters, and \((\mathbf{x},y)\) denotes the input data and label, which follow the joint distribution \(\mathcal{P}_{0}\).
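For reference, a minimal sketch of the multi-step gradient-ascent approximation of this inner maximization, as typically applied to token embeddings in NLP, is shown below. The hyperparameters are illustrative, and `model` is assumed to be a HuggingFace-style classifier exposing `inputs_embeds` and returning `.loss`; this is not the exact configuration of the baselines we compare against.

```python
import torch

def pgd_perturb(model, embeds, attention_mask, labels, eps=1.0, alpha=0.3, steps=3):
    """K-step projected gradient ascent on an L2-bounded perturbation delta."""
    b = embeds.size(0)
    flat_norm = lambda t: t.view(b, -1).norm(dim=1).clamp_min(1e-12).view(b, 1, 1)
    delta = torch.zeros_like(embeds, requires_grad=True)
    for _ in range(steps):
        loss = model(inputs_embeds=embeds + delta,
                     attention_mask=attention_mask, labels=labels).loss
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad / flat_norm(grad)).detach()
        delta = delta * torch.clamp(eps / flat_norm(delta), max=1.0)  # project onto the eps-ball
        delta.requires_grad_()
    return delta.detach()

# The returned delta is then used for an ordinary parameter update on
# L(theta, x + delta, y); DSRM removes this per-sample search entirely.
```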
Instead of computing the optimal perturbation for each data point, we directly study \(\rho(\mathbf{\theta})\) from the data distribution perspective. During the training process of PGD, each input \(\mathbf{x}\) corresponds to an implicit adversarial sample. We describe this mapping relationship with a transformation function \(f:X\ \times\ Y\to X\) as:
\[f_{\varepsilon,\mathbf{\theta}}(\mathbf{x},y)\triangleq\mathbf{x}+\arg\max_{ \{\mathbf{\delta}:\|\mathbf{\delta}\|_{p}\leq\varepsilon\}}L(\mathbf{\theta},\mathbf{x}+\mathbf{ \delta},y). \tag{1}\]
The existence of \(f_{\varepsilon,\mathbf{\theta}}(\mathbf{x},y)\) can be guaranteed due to the continuity of the loss function \(L(\mathbf{\theta},\mathbf{x}+\mathbf{\delta},y)\). Then the training objective \(\rho(\mathbf{\theta})\) can be denoted as:
\[\rho(\mathbf{\theta}) =\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}_{0}}\ L(\mathbf{\theta},f_{ \varepsilon,\mathbf{\theta}}(\mathbf{x},y),y) \tag{2}\] \[=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}_{f}}\ L(\mathbf{\theta},\mathbf{x },y), \tag{3}\]
where \(\mathcal{P}_{f}\) denotes the distribution of \(f_{\varepsilon,\mathbf{\theta}}(\mathbf{x},y)\). Eq. 3 omits the perturbation \(\mathbf{\delta}\) by introducing \(\mathcal{P}_{f}\), and directly approximates the robust optimization loss. However, the accurate distribution is intractable due to the non-convex nature of neural networks. We, therefore, constrain the above distribution shift (i.e., from \(\mathcal{P}_{0}\) to \(\mathcal{P}_{f}\)) with Wasserstein distance.
**Lemma 3.1**.: _Let \(\mathrm{W}_{p}\left(\mathcal{P},\mathcal{Q}\right)\) denotes the \(p\)-th Wasserstein distance between \(\mathcal{P}\) and \(\mathcal{Q}\)(Peyre et al., 2019). \(\mathcal{P}_{0}\) and \(\mathcal{P}_{f}\) are the respective distributions of clean and perturbed samples. The \(p\)-norm of perturbation \(\delta\) is constrained by \(\|\mathbf{\delta}\|_{p}\leq\varepsilon\), then the distribution shift in Eq. 3 is bounded by:_
\[\mathrm{W}_{p}\left(\mathcal{P}_{0},\mathcal{P}_{f}\right)\leq\varepsilon\]
Proof.: With Eq. 1, we have:
\[\mathrm{W}_{p}\left(\mathcal{P}_{0},\mathcal{P}_{f}\right) \triangleq\left(\inf_{\pi\in\Pi\left(\mathcal{P}_{0},\mathcal{P}_{f}\right)}\mathbb{E}_{(\mathbf{u},\mathbf{v})\sim\pi}\left[\|\mathbf{u}-\mathbf{v}\|_{p}^{p}\right]\right)^{\frac{1}{p}}\] \[\leq\varepsilon.\]
Lemma 3.1 ensures that for bounded perturbation strengths, the distribution shift between the original and virtual adversarial samples is limited, and we consequently define our Distribution Shift Risk Minimization (DSRM) objective as follows:
**Definition 3.1** (DSRM).: _Given \((\mathbf{x},y)\sim\mathcal{P}_{0}\), loss function \(L\) and model parameters \(\mathbf{\theta}\), DSRM aims to minimize the worst-case loss \(\rho_{DS}(\mathbf{\theta})\) under distributional perturbations with intensity limited to \(\varepsilon\), that is:_
\[\min_{\mathbf{\theta}\in\Theta}\rho_{DS}(\mathbf{\theta})\triangleq\max_{ \mathrm{W}_{p}(\mathcal{P}_{0},\ \mathcal{P}_{t})\leqslant\varepsilon}\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}_{t}} L(\mathbf{\theta},\mathbf{x},y). \tag{4}\]
Noticing that it always holds that:
\[\rho(\mathbf{\theta})\leq\rho_{DS}(\mathbf{\theta}),\]
we subsequently optimize the upper bound \(\rho_{DS}(\theta)\) for adversarial training.
### Distribution Shift Adversarial Training
In Definition 3.1, we propose DSRM, a new adversarial training objective from the perspective of distribution shift. We now discuss how to optimize the model parameters with a finite training set \(\mathcal{S}\triangleq\cup_{i=1}^{n}\left\{(\mathbf{x}_{i},y_{i})\right\}\). We first introduce the empirical estimation of Eq. 4 as follows:
\[\rho_{DS}(\theta)\approx\max_{\mathrm{W}_{p}(\mathcal{P}_{0},\ \mathcal{P}_{t})\leqslant \varepsilon}\sum_{i=1}^{n}\mathcal{P}_{t}(\mathbf{x}_{i})L(\mathbf{\theta},\mathbf{x}_{i}, y_{i}),\]
where \(\mathcal{P}_{0}\) is the unperturbed distribution. In the vanilla training procedure, all training data are weighted as \(\frac{1}{n}\), where \(n\) is the training batch size. We therefore model \(\mathcal{P}_{0}\) as a uniform distribution.
For simplicity, we use \(L_{S}(\mathbf{\theta},\mathcal{P}_{t})\) to denote the inner maximization term, that is:
\[\rho_{DS}(\mathbf{\theta})\approx\max_{\mathrm{W}_{p}(\mathcal{P}_{0},\;\mathcal{P} _{t})\leqslant\varepsilon}L_{S}(\mathbf{\theta},\mathcal{P}_{t}). \tag{5}\]
Suppose the worst-case distribution is \(\mathcal{P}_{f}\). To make explicit our distribution shift term, we rewrite the right-hand side of the equation above as:
\[L_{S}(\mathbf{\theta},\mathcal{P}_{0})+\left[\sum_{i=1}^{n}\left(\mathcal{P}_{f}( \mathbf{x}_{i})-\frac{1}{n}\right)L(\mathbf{\theta},\mathbf{x}_{i},y_{i})\right],\]
where \(L_{S}(\mathbf{\theta},\mathcal{P}_{0})\triangleq\frac{1}{n}\sum_{i=1}^{n}L(\mathbf{\theta},\mathbf{x}_{i},y_{i})\) is the empirical risk on the training set. The term in square brackets captures the sensitivity of \(\rho_{DS}(\mathbf{\theta})\) at \(\mathcal{P}_{f}\), measuring how quickly the empirical loss increases when the training samples are reweighted. This term can be denoted as \(L_{S}(\mathbf{\theta},\mathcal{P}_{f}-\mathcal{P}_{0})\). Since the training set is finite, the probability distribution over all samples can be represented as a vector. Let \(\mathbf{P}_{f}=[\mathcal{P}_{f}(\mathbf{x}_{1}),\mathcal{P}_{f}(\mathbf{x}_{2}),...,\mathcal{P}_{f}(\mathbf{x}_{n})]\) and \(\mathbf{L}=[L(\mathbf{\theta},\mathbf{x}_{1},y_{1}),L(\mathbf{\theta},\mathbf{x}_{2},y_{2}),...,L(\mathbf{\theta},\mathbf{x}_{n},y_{n})]\); then we have:
\[L_{S}(\mathbf{\theta},\mathcal{P}_{f}-\mathcal{P}_{0})=\left(\mathbf{P}_{f}-\frac{1}{n }\right)\mathbf{L}^{T}. \tag{6}\]
In order to minimize \(L_{S}(\mathbf{\theta},\mathcal{P}_{f})\), we first derive an approximation to the inner maximization of DSRM. We approximate the inner maximization problem via a first-order Taylor expansion of \(\rho_{DS}(\mathbf{\theta})\) w.r.t. \(\mathcal{P}_{f}\) around \(\mathcal{P}_{0}\), and obtain the estimation as follows:
\[\mathcal{P}_{f} =\arg\max_{\mathrm{W}_{p}(\mathcal{P}_{0},\;\mathcal{P}_{t}) \leqslant\varepsilon}L_{S}(\mathbf{\theta},\mathcal{P}_{t}) \tag{7}\] \[=\arg\max_{\mathrm{W}_{p}(\mathcal{P}_{0},\;\mathcal{P}_{t}) \leqslant\varepsilon}\big{[}L_{S}(\mathbf{\theta},\mathcal{P}_{t})-L_{S}(\mathbf{ \theta},\mathcal{P}_{0})]\] \[\approx\arg\max_{\mathrm{W}_{p}(\mathcal{P}_{0},\;\mathcal{P}_{t}) \leqslant\varepsilon}\big{[}\left(\mathcal{P}_{t}-\mathcal{P}_{0}\right)^{T} \nabla_{\mathcal{P}_{t}}L_{S}(\mathbf{\theta},\mathcal{P}_{0})\big{]}.\]
By Eq. 7, the value \(\mathcal{P}_{f}\) that exactly solves this approximation can be obtained from its dual problem. For experimental convenience, we focus here on the special case in which the metric used in \(\mathrm{W}_{p}\left(\mathcal{P}_{0},\;\mathcal{P}_{t}\right)\) treats all data pairs equally. We empirically demonstrate that such an approximation achieves promising performance in the next section. In turn, the solution for \(\mathcal{P}_{f}\) can be denoted as:
\[\mathcal{P}_{f}^{*}=\varepsilon\;\nabla_{\mathcal{P}_{t}}L_{S}(\mathbf{\theta}, \mathcal{P}_{0})\;/\;\|\nabla_{\mathcal{P}_{t}}L_{S}(\mathbf{\theta},\mathcal{P} _{0})\|+\mathcal{P}_{0}. \tag{8}\]
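Since the empirical objective \(L_{S}(\mathbf{\theta},\mathcal{P}_{t})=\sum_{i}\mathcal{P}_{t}(\mathbf{x}_{i})L(\mathbf{\theta},\mathbf{x}_{i},y_{i})\) is linear in the sample weights, the gradient \(\nabla_{\mathcal{P}_{t}}L_{S}(\mathbf{\theta},\mathcal{P}_{0})\) is simply the vector of per-sample losses, so Eq. 8 shifts probability mass toward high-loss samples. A minimal sketch of this computation (function and variable names are ours, not the released code):

```python
import torch

def worst_case_weights(per_sample_losses, eps):
    """Eq. 8: shift the uniform distribution toward high-loss samples."""
    n = per_sample_losses.numel()
    grad = per_sample_losses.detach()  # gradient of the linear objective w.r.t. the weights
    return eps * grad / (grad.norm() + 1e-12) + 1.0 / n

# Example: losses [0.1, 0.3, 2.0] with eps = 0.1 up-weight the hardest sample the most.
w = worst_case_weights(torch.tensor([0.1, 0.3, 2.0]), eps=0.1)
```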
Substituting Eq. 8 into Eq. 4 and differentiating the DSRM objective, we then have:
\[\nabla_{\mathbf{\theta}}(\rho_{DS}(\mathbf{\theta}))\approx\nabla_{\mathbf{ \theta}}L_{S}\left(\mathbf{\theta},\mathcal{P}_{f}^{*}\right) \tag{9}\] \[= \nabla_{\mathbf{\theta}}\left[L_{S}\left(\mathbf{\theta},\mathcal{P}_{0} \right)+\left(\mathcal{P}_{f}^{*}-\mathcal{P}_{0}\right)\nabla_{\mathcal{P}_{t }}L_{S}\left(\mathbf{\theta},\mathcal{P}_{t}\right)|_{\mathcal{P}_{f}^{*}}\right].\]
Though this approximation to \(\nabla_{\mathbf{\theta}}(\rho_{DS}(\mathbf{\theta}))\) requires a potential second-order differentiation (the influence of weight perturbations on the loss of DSRM), it can be decomposed into a multi-step process, which is tractable with an automatic meta-learning framework. In our experiments, we use the Higher 2 package to differentiate with respect to the sample weights.
Footnote 2: [https://github.com/facebookresearch/higher.git](https://github.com/facebookresearch/higher.git).
To summarize, we first update the parameters for one step under the original data distribution \(\mathcal{P}_{0}\) and compute the empirical loss on a previously divided validation set, which requires an additional set of forward passes with the updated parameters. We then differentiate the validation loss with respect to the weights of the input samples to obtain the worst-case perturbation and re-update the parameters with our distribution shift loss function. Our detailed algorithm implementation is shown in Algorithm 1.
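A minimal sketch of one such training step is given below. It assumes a classifier for which `model(x)` returns logits and uses the `higher` package to make the virtual one-step update differentiable with respect to the sample weights; the batching of the estimation set and other engineering details of Algorithm 1 are simplified here.

```python
import higher
import torch
import torch.nn.functional as F

def dsrm_step(model, optimizer, train_batch, est_batch, eps):
    """One DSRM update (sketch); model(x) is assumed to return logits."""
    x, y = train_batch
    vx, vy = est_batch                      # samples drawn from the held-out set
    n = x.size(0)
    weights = torch.full((n,), 1.0 / n, device=x.device, requires_grad=True)

    # Virtual one-step update of the parameters under the weighted training loss.
    with higher.innerloop_ctx(model, optimizer, copy_initial_weights=True) as (fmodel, diffopt):
        train_losses = F.cross_entropy(fmodel(x), y, reduction="none")
        diffopt.step((weights * train_losses).sum())
        # Empirical loss on the estimation batch with the updated parameters,
        # differentiated back to the sample weights.
        est_loss = F.cross_entropy(fmodel(vx), vy)
        grad_w = torch.autograd.grad(est_loss, weights)[0]

    # Worst-case weight perturbation (Eq. 8), then the real parameter update.
    with torch.no_grad():
        shifted = eps * grad_w / (grad_w.norm() + 1e-12) + 1.0 / n
    optimizer.zero_grad()
    losses = F.cross_entropy(model(x), y, reduction="none")
    (shifted * losses).sum().backward()
    optimizer.step()
```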
## 4 Experiments
In this section, we comprehensively analyse DSRM versus other adversarial training methods in three evaluation settings for three tasks.
### Datasets and Backbone Model
We evaluate our proposed method mainly on the four most commonly used classification tasks for adversarial defence, including SST-2 Socher et al. (2013), IMDB Maas et al. (2011), AG NEWS Zhang et al. (2015) and QNLI Wang et al. (2018). The statistics of these involved benchmark datasets are summarised in Appendix A. We take the BERT-base model (12 transformer layers, 12 attention heads, and 110M parameters in total) as the backbone model, and follow the BERT implementations Devlin et al. (2019).
### Evaluation Settings
We refer to the setup of previous state-of-the-art works Liu et al. (2022); Xi et al. (2022) to verify the robustness of the model. The pre-trained model
is finetuned with different defence methods on various datasets, and the best three checkpoints are saved. We then test the defensive capabilities of the saved checkpoints via TextAttack Morris et al. (2020) and report the mean value as the result of the robustness evaluation experiments.
Three well-received textual attack methods are leveraged in our experiments. TextBugger Li et al. (2018) identifies the critical words of the target model and repeatedly replaces them with synonyms until the model's predictions are changed. TextFooler Jin et al. (2020) similarly filters the keywords in the sentences and selects an optimal perturbation from various generated candidates. BERT-Attack Li et al. (2020) applies BERT to maintain semantic consistency and generate substitutions for vulnerable words detected in the input.
For all attack methods, we introduce four metrics to measure BERT's resistance to adversarial attacks under different defence algorithms. **Clean accuracy (Clean%)** refers to the model's test accuracy on the clean dataset. **Accuracy under attack (Aua%)** refers to the model's prediction accuracy on the adversarial data generated by specific attack methods. **Attack success rate (Suc%)** measures the ratio of the number of texts successfully perturbed by a specific attack method to the number of all texts involved. **Number of Queries (#Query)** refers to the average number of attempts the attacker uses to query the target model. The larger this number, the harder the model is to attack.
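As a rough illustration, the sketch below shows how such an evaluation might be scripted with TextAttack for one attack recipe; the module paths and arguments follow recent TextAttack releases, and the checkpoint path is a placeholder rather than one of our saved models.

```python
import textattack
from textattack.attack_recipes import TextFoolerJin2019
from textattack.models.wrappers import HuggingFaceModelWrapper
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "path/to/finetuned-bert"  # placeholder for a saved checkpoint
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build one attack recipe and run it over the evaluation samples;
# Aua%, Suc% and #Query can be read from the resulting attack summary.
attack = TextFoolerJin2019.build(wrapper)
dataset = textattack.datasets.HuggingFaceDataset("glue", "sst2", split="validation")
args = textattack.AttackArgs(num_examples=1000)
textattack.Attacker(attack, dataset, args).attack_dataset()
```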
### Baseline Methods
Since our method is based on the adversarial training objective, we mainly compare it with previous adversarial training algorithms. In addition, to refine the demonstration of the effectiveness of our method, we also introduce two non-adversarial training methods (InfoBERT and Flooding-X) from current state-of-the-art works.
**PGD.** Projected gradient descent Madry et al. (2018) formulates adversarial training algorithms to minimize the empirical loss on adversarial examples.
**FreeLB.** FreeLB Zhu et al. (2019) generates virtual adversarial samples in the region surrounding the input samples by adding adversarial perturbations to the word embeddings.
**FreeLB++.** Based on FreeLB, Li et al. (2021) discovered that the effectiveness of adversarial training could be improved by scaling up the steps of FreeLB, and proposed FreeLB++, which exhibits the current optimal results in textual adversarial training.
**TAVAT.** Token-Aware Virtual Adversarial Training Li and Qiu (2021) proposed a token-level perturbation vocabulary to constrain adversarial training within a token-level normalization ball.
**InfoBERT.** InfoBERT Wang et al. (2020) leverages two regularizers based on mutual information, enabling models to explore stable features better.
**Flooding-X.** Flooding-X Liu et al. (2022) smooths the parameter landscape with Flooding Ishida et al. (2020) to boost model resistance to adversarial perturbations.
### Implementation Details
We reproduced the baseline works based on their open-source codes, and the results are competitive relative to what they reported in the paper. The **Clean%** is evaluated on the whole test set. **Aua%**, **Suc%** and **#Query** are evaluated on the whole test dataset for SST-2, and on 1000 randomly selected samples for the other three datasets. We train our models on NVIDIA RTX 3090 GPUs. Most parameters, such as learning rate and warm-up steps, are consistent with the FreeLB (Zhu et al., 2019). We train 8 epochs with 3 random seeds for each model on each dataset and report the resulting mean error (or accuracy) on test sets. To reduce the time consumption for calculating the distribution shift risk, for each step we sample 64 sentences (32 for IMDB) from the validation set to estimate our adversarial loss. More implementation details and hyperparameters can be found in Appendix B.
### Experimental Results
Our analysis of the DSRM approach against other comparative methods under various adversarial attacks is summarized in Table 1. Our method demonstrates significant improvements in BERT's resistance to these attacks, outperforming the baseline defence algorithms on most datasets.
In the SST-2, IMDB, and AG NEWS datasets, DSRM achieved optimal robustness against all three attack algorithms. It is worth noting that the effectiveness of DSRM was more pronounced on the more complex IMDB and AG NEWS datasets, as the estimation of adversarial loss for these tasks is more challenging than for the simpler SST-2 dataset. This phenomenon verifies that our method better estimates the inner maximization problem. In
\begin{table}
\begin{tabular}{c|l|c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Datasets**} & \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Clean\%**} & \multicolumn{3}{c|}{**TextFooler**} & \multicolumn{3}{c|}{**BERT-Attack**} & \multicolumn{3}{c}{**TextBugger**} \\ \cline{3-13} & & & \multicolumn{1}{c|}{Au\%} & \multicolumn{1}{c|}{Suc\%} & \multicolumn{1}{c|}{\#Query} & \multicolumn{1}{c|}{Au\%} & \multicolumn{1}{c|}{Suc\%} & \multicolumn{1}{c|}{\#Query} & \multicolumn{1}{c}{Au\%} & \multicolumn{1}{c}{Suc\%} & \multicolumn{1}{c}{\#Query} \\ \hline \multirow{8}{*}{**SST-2**} & Fine-tune & 93.1 & 5.7 & 94.0 & 89.3 & 5.9 & 93.4 & 108.9 & 28.2 & 68.7 & 49.2 \\ & PGD\({}^{\dagger}\) & 92.8 & 8.3 & 90.7 & 94.6 & 8.7 & 90.5 & 117.7 & 31.5 & 65.2 & 53.3 \\ & FreeLB\({}^{\dagger}\) & 93.6 & 8.5 & 91.4 & 95.4 & 9.3 & 90.2 & 118.7 & 31.8 & 64.7 & 50.2 \\ & FreeLB\({}^{\dagger+}\)\({}^{\dagger}\) & 92.9 & 14.3 & 84.8 & 118.2 & 11.7 & 87.4 & 139.9 & 37.4 & 61.2 & 52.3 \\ & TAVAAT\({}^{\dagger}\) & 93.0 & 12.5 & 85.3 & 121.7 & 11.6 & 85.3 & 129.0 & 29.3 & 67.2 & 48.6 \\ & InfoBERT\({}^{\ddagger}\) & 92.9 & 12.5 & 85.1 & 122.8 & 13.4 & 83.6 & 133.3 & 33.4 & 63.8 & 50.9 \\ & Flooding-X\({}^{\ddagger}\) & **93.1** & 28.4 & 67.5 & 149.6 & 25.3 & 70.7 & 192.4 & 41.9 & 58.3 & 62.5 \\ \cline{2-13} & DSRM(ours) & 91.5 & **32.8** & **65.1** & **153.6** & **27.2** & **69.1** & **201.5** & **44.2** & **51.4** & **88.6** \\ \hline \multirow{8}{*}{**QNLI**} & Fine-tune & 90.6 & 5.8 & 94.2 & 161.9 & 3.5 & 96.1 & 216.5 & 10.9 & 88.0 & 98.4 \\ & PGD\({}^{\dagger}\) & 90.6 & 14.3 & 81.2 & 201.6 & 17.3 & 80.6 & 268.9 & 27.9 & 67.8 & 134.6 \\ & FreeLB\({}^{\dagger}\) & 90.7 & 12.8 & 85.3 & 189.4 & **21.4** & 76.8 & **324.2** & 29.8 & 69.3 & 143.9 \\ & FreeLB++\({}^{\dagger}\) & **91.1** & 16.4 & 81.4 & 193.7 & 20.7 & 77.0 & 301.7 & 30.2 & 66.7 & 150.1 \\ & InfoBERT\({}^{\ddagger}\) & 90.4 & 18.0 & 82.5 & 212.9 & 13.1 & 85.8 & 270.2 & 15.4 & 83.9 & 127.9 \\ & Flooding-X\({}^{\ddagger}\) & 90.8 & 25.6 & 71.3 & 232.7 & 18.7 & 79.2 & 294.6 & 29.4 & 67.5 & 137.1 \\ & DSRM(ours) & 90.1 & **27.6** & **65.4** & **247.2** & 20.4 & **76.7** & 312.4 & **37.1** & **59.2** & **176.3** \\ \hline \multirow{8}{*}{**IMDB**} & Fine-tune & 92.1 & 10.3 & 88.8 & 922.4 & 5.3 & 94.3 & 1187.0 & 15.8 & 83.7 & 695.2 \\ & PGD\({}^{\dagger}\) & 93.2 & 26.0 & 72.1 & 1562.8 & 21.0 & 77.6 & 2114.6 & 41.6 & 53.2 & 905.8 \\ & FreeLB\({}^{\dagger}\) & 93.2 & 35.0 & 62.7 & 1736.9 & 29.0 & 68.4 & 2588.8 & 53.0 & 44.2 & 1110.9 \\ & FreeLB++\({}^{\dagger}\) & 93.2 & 45.3 & 51.0 & 1895.3 & 39.9 & 56.9 & 2732.5 & 42.9 & 54.6 & 1094.0 \\ & TAVAAT\({}^{\dagger}\) & 92.7 & 27.6 & 71.9 & 1405.8 & 23.1 & 75.1 & 2244.8 & 54.1 & 44.1 & 1022.6 \\ & InfoBERT\({}^{\ddagger}\) & 93.3 & 49.6 & 49.1 & 1932.3 & 47.2 & 51.3 & 3088.8 & 53.8 & 44.7 & 1070.4 \\ & Flooding-X\({}^{\ddagger}\) & 93.4 & 45.5 & 53.5 & 2015.4 & 37.3 & 60.8 & 2448.7 & 62.3 & 35.8 & 1187.9 \\ & DSRM(ours) & **93.4** & **56.3** & **39.0** & **2215.3** & **54.1** & **41.2** & **3309.8** & **67.2** & **28.9** & **1207.7** \\ \hline \multirow{8}{*}{**AG NEWS**} & Fine-tune & 93.9 & 28.6 & 69.9 & 383.3 & 17.6 & 81.2 & 556.0 & 45.2 & 53.4 & 192.5 \\ & PGD\({}^{\dagger}\) & 94.5 & 36.8 & 68.2 & 414.9 & 21.6 & 77.1 & 616.1 & 56.4 & 41.9 & 201.8 \\ \cline{1-1} & FreeLB\({}^{\dagger}\) & 94.7 & 34.8 & 63.4 & 408.5 & 20.4 & 73.8 & 596.2 & 54.2 & 43.0 & 210.3 \\ \cline{1-1} & FreeLB++\({}^{\dagger}\) & 94.9 & 51.5 & 46.0 & 439.1 & 41.8 & 56.2 & 676.4 & 55.9 & 41.4 & 265.4 \\ \cline{1-1} & TraVAAT\({}^{\dagger}\) & 
**95.2** & 31.8 & 66.5 & 369.9 & 35.0 & 62.5 & 634.9 & 54.2 & 43.9 & 231.2 \\ \cline{1-1} & InfoBERT\({}^{\dagger}\) & 94.5 & 33.8 & 65.1 & 395.6 & 23.4 & 75.3 & 618.9 & 49.6 & 47.7 & 194.1 \\ \cline{1-1} & Flooding-X\({}^{\ddagger}\) & 94.8 & 42.4 & 54.9 & 421.4 & 27.4 & 71.0 & 590.3 & 62.2 & 3
the QNLI dataset, DSRM only fails to win under BERT-Attack, but it still maintains the lowest attack success rate among all methods, with an Aua% only 1% lower than that of FreeLB. This performance difference can be attributed to the differing clean accuracy of the two methods, in which case DSRM misclassifies a small number of samples that happen to be more robust to the attack.
In terms of clean accuracy, our method suffers a minor degradation on SST-2, QNLI and AG NEWS, which is acceptable as a trade-off between robustness and generalization for adversarial training; we further discuss this phenomenon in the next section. On IMDB, our approach achieves the best clean accuracy together with Flooding-X. We attribute this gain to the greater complexity of the IMDB dataset, where the aforementioned trade-off appears later, enabling DSRM to achieve better performance.
Overall, DSRM performs better than the baseline adversarial training methods by 5 to 20 points on average without using any adversarial examples as training sources. Besides, our approach is more effective for complex datasets and remains the best-performing algorithm under TextFooler and TextBugger, which demonstrates the versatility and effectiveness of DSRM. Our experiments demonstrate that adversarial training methods have a richer potential for constructing robust language models.
## 5 Analysis and Discussion
In this section, we construct supplementary experiments to analyze our DSRM framework further.
### DSRM Induces Smooth Loss Distribution
Previous works demonstrate that deep neural networks suffer from overfitting training configurations and memorizing training samples, leading to poor generalization error and vulnerability towards adversarial perturbations (Werbachowski et al., 2019; Rodriguez et al., 2021). We verify that DSRM mitigates such overfitting problems by implicitly regularizing the smoothness of the loss in the input space. Figure 1 shows the training/test loss of BERT at each epoch when trained with DSRM and with fine-tuning. Models trained by fine-tuning overfit quickly and suffer persistent performance degradation as the number of epochs grows. In contrast, the loss curves of our method maintain lower generalization errors with a smaller variance of the predicted losses on the test set. This improvement comes from the fact that under the training objective of DSRM, the model allocates more attention to samples with higher loss.
### Effect of Perturbation Intensity
DSRM has a single hyperparameter \(\varepsilon\) that controls the constraint on perturbation intensity. Extending the perturbation range allows better optimization of the defence objective, while the mismatch between the training and test data distributions may impair model performance. To further analyze the impact of DSRM on model accuracy and robustness, we conduct a sensitivity analysis of the perturbation intensity \(\varepsilon\). Figure 2 illustrates the performance change curves of our method under three attack algorithms.
DSRM improves accuracy and Aua% when perturbations are moderate (\(\leq 0.2\)), similar to other adversarial training methods. When the perturbation becomes stronger, the model's resistance to adversarial attacks improves notably but clean accuracy drops. Such turning points occur earlier in our method, making it a trade-off between model accuracy and robustness. We argue that this phenomenon comes from the fact that the clean data distribution can be treated as a marginal distribution in previous adversarial training, where the model can still fit the original samples.
### Time Consumption
In Section 2, we analyze the positive correlation between training steps and model performance in adversarial training. Such a trade-off between efficiency and effectiveness comes from the complex search process needed to find the optimal perturbation.
Figure 1: The train/test loss of DSRM and fine-tuning on the SST-2 (a) and IMDB (b) datasets. We report the mean loss on the train and test sets, and variance (marked with shadow) only on the test set. Our method maintains uniform loss distribution and better consistency between training and test data while the fine-tuning overfits quickly after one epoch.
DSRM circumvents this issue by providing upper-bound estimates with only clean data. To further reveal the strength of DSRM beyond its robustness performance, we compare its GPU training time consumption with other adversarial training methods. As demonstrated in Table 2, the time consumption of DSRM is superior to all the comparison methods. Only TAVAT (Li and Qiu, 2021) exhibits efficiency similar to ours (with about 30% time growth on SST-2 and IMDB). TAVAT does not contain a gradient ascent process in the embedding space either, but it still requires the construction of additional adversarial data. More experimental details are summarized in Appendix C.
### Trade-offs in Standard Adversarial Training
In this section, we further discuss the trade-off between computational cost and performance in vanilla adversarial training. We empirically show that larger perturbation radii and steps enhance the effectiveness of textual adversarial training. Similar phenomena are previously found in image datasets by Zhang et al. (2019) and Gowal et al. (2020). The experimental results for these two modifications are shown in Figure 3.
In sub-figure (a), relaxing the perturbation threshold remarkably increases model robustness, with only a slight decrease when the threshold exceeds 0.6 for TextBugger. In sub-figure (b), as the number of steps grows, the models' accuracy under attack increases until it reaches its peak, and then begins to decline as the number of steps keeps increasing. Notably, the optimal results are 4-10% higher in (b) relative to (a), demonstrating that a larger number of steps is necessary to achieve optimal robustness.
We give a possible explanation for the above performance. We describe the standard adversarial training as exploring potential adversarial samples in the embedding space. When the step number is small, the adversarial sample space is correspondingly simple, causing the model to underestimate the adversarial risks. A broader search interval can prevent these defects and achieve outstanding robustness as the number of steps grows.
However, these best results occur late in the step growth process. As shown in (b), a defence model needs 30 steps (about ten times the time cost) for TextFooler, 20 for TextBugger, and 40 for BERT-Attack to achieve optimal performance. This drawback considerably reduces the efficiency and practicality of adversarial training.
## 6 Conclusion
In this paper, we delve into the training objective of adversarial training and verify that the robust optimization loss can be estimated by shifting the distribution of training samples. Based on this discovery, we propose DSRM as an effective and more computationally friendly algorithm to overcome the trade-off between efficiency and effectiveness in adversarial training.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Methods** & **SST-2** & **IMDB** & **AG NEWS** \\ \hline Finetune & 227 & 371 & 816 \\ \hline
**DSRM** & **607** & **1013** & **2744** \\ \hline TAVAT & 829 & 1439 & 2811 \\ \hline FreeLB & 911 & 1558 & 3151 \\ \hline PGD & 1142 & 1980 & 4236 \\ \hline FreeLB++ & 2278 & 3802 & 5348 \\ \hline \hline \end{tabular}
\end{table}
Table 2: GPU time consumption (seconds) of training one epoch on the whole dataset.
Figure 3: The impact of different values of the perturbation threshold (a) and ascent steps (b) on IMDB dataset. We show the accuracy score of the FreeLB (Zhu et al., 2019) algorithm under three attack methods. Sub-figure (a) uses a 10-step gradient ascent with different constraints on the \(l_{2}\) norm of perturbations. In sub-figure (b), each step introduces a perturbation of length 0.05.
Figure 2: Accuracy and Aua% of BERT trained by DSRM under different perturbations.
DSRM optimizes the upper bound of the adversarial loss by perturbing the distribution of training samples, thus circumventing the complex gradient ascent process. DSRM achieves state-of-the-art performances on various NLP tasks against different textual adversarial attacks. This implies that adversarial samples, whether generated by gradient ascent or data augmentation, are not necessary for improvements in adversarial robustness. We call for further exploration and understanding of the association between sample distribution shift and adversarial robustness.
## Acknowledgements
The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (No.61976056,62076069) and Natural Science Foundation of Shanghai (23ZR1403500).
## 7 Limitations
This section discusses the potential limitations of our work. This paper's analysis of model effects mainly focuses on common benchmarks for adversarial defence, which may introduce confounding factors that affect the stability of our framework. Therefore, our model's performance on more tasks, \(e.g.\), the MRPC dataset for semantic matching tasks, is worth further exploring. In addition, the present work proposes to conduct adversarial training from the perspective of estimating the overall adversarial loss. We expect a more profound exploration of improving the accuracy and efficiency of such estimation. We are also aware of the necessity to study whether the properties of traditional methods, such as the robust overfitting problem, will also arise in DSRM-based adversarial training. We leave these problems to further work.
|
2307.02731 | Visualization Supporting Talent Portrait Generation:an Empirical Study | In today' s era of scientific and technological advancements, the importance
of talent resources is increasingly highlighted. This article will attempt to
summarize the academic trajectories and successes of numerous scientists from
both past and present, aiming to reproduce the correlation between scientists'
personal development and their academic output. Firstly, this article analyzes
the life trajectories of researchers, visualizing their research
accomplishments, collaborative partners, and research inheritance, and analyzes them
based on the results. | Yuqing Fan, Shenghui Cheng | 2023-07-06T02:31:04Z | http://arxiv.org/abs/2307.02731v1 | # Visualization Supporting Talent Portrait Generation: an Empirical Study
###### Abstract
In today's era of scientific and technological advancements, the importance of talent resources is increasingly highlighted. This article will attempt to summarize the academic trajectories and successes of numerous scientists from both past and present, aiming to reproduce the correlation between scientists' personal development and their academic output. Firstly, this article analyzes the life trajectories of researchers, visualizing their research accomplishments, collaborative partners, and research inheritance, and analyzes them based on the results.
## 1 Introduction
The development of science is not only about the constant updating of scientific theories and technological means, but also the process of inheritance and development of scientific knowledge, scientific traditions, and scientific culture among generations of scientists. Scientists at the forefront of the world often have distinct teacher-student relationships. Liwen 2004 once said in "General History of Chinese Academics": "The history of academia is derived from academia. Therefore, the nature of academia determines the nature of its history."
With the emergence of talent shortages, brain drain, and structural contradictions in employment, the use of visual methods for human resource management has become increasingly popular in describing talent development. The "14th Five-Year Plan" Big Data Industry Development Plan proposes to build a prosperous and orderly industrial ecology and to improve the level of big data public services such as talent training. Many local governments, enterprises, and institutions also use visual and quantitative features to search for talents that meet their needs. In this article, we suggest combining visualization methods with researchers' career trajectories to analyze their careers from three perspectives: (1) research projects led by the researchers themselves, (2) collaborative projects in which the researchers participate, and (3) teacher-student relationships between researchers. In addition, this article will quantitatively study the output of researchers through longitudinal analysis and use it to predict the future development of researchers.
2303.01917 | Pyramid Pixel Context Adaption Network for Medical Image Classification
with Supervised Contrastive Learning | Spatial attention mechanism has been widely incorporated into deep neural
networks (DNNs), significantly lifting the performance in computer vision tasks
via long-range dependency modeling. However, it may perform poorly in medical
image analysis. Unfortunately, existing efforts are often unaware that
long-range dependency modeling has limitations in highlighting subtle lesion
regions. To overcome this limitation, we propose a practical yet lightweight
architectural unit, Pyramid Pixel Context Adaption (PPCA) module, which
exploits multi-scale pixel context information to recalibrate pixel position in
a pixel-independent manner dynamically. PPCA first applies a well-designed
cross-channel pyramid pooling to aggregate multi-scale pixel context
information, then eliminates the inconsistency among them by the well-designed
pixel normalization, and finally estimates per pixel attention weight via a
pixel context integration. By embedding PPCA into a DNN with negligible
overhead, the PPCANet is developed for medical image classification. In
addition, we introduce supervised contrastive learning to enhance feature
representation by exploiting the potential of label information via supervised
contrastive loss. The extensive experiments on six medical image datasets show
that PPCANet outperforms state-of-the-art attention-based networks and recent
deep neural networks. We also provide visual analysis and ablation study to
explain the behavior of PPCANet in the decision-making process. | Xiaoqing Zhang, Zunjie Xiao, Xiao Wu, Yanlin Chen, Jilu Zhao, Yan Hu, Jiang Liu | 2023-03-03T13:36:55Z | http://arxiv.org/abs/2303.01917v3 | # PPCR: Learning Pyramid Pixel Context Recalibration Module for Medical Image Classification
###### Abstract
Spatial attention mechanism has been widely incorporated into deep convolutional neural networks (CNNs) via long-range dependency capturing, significantly lifting the performance in computer vision, but it may perform poorly in medical imaging. Unfortunately, existing efforts are often unaware that long-range dependency capturing has limitations in highlighting subtle lesion regions, neglecting to exploit the potential of multi-scale pixel context information to improve the representational capability of CNNs. In this paper, we propose a practical yet lightweight architectural unit, the Pyramid Pixel Context Recalibration (PPCR) module, which exploits multi-scale pixel context information to recalibrate pixel positions in a pixel-independent manner adaptively. PPCR first designs a cross-channel pyramid pooling to aggregate multi-scale pixel context information, then eliminates the inconsistency among them by the well-designed pixel normalization, and finally estimates per pixel attention weight via a pixel context integration. PPCR can be flexibly plugged into modern CNNs with negligible overhead. Extensive experiments on five medical image datasets and CIFAR benchmarks empirically demonstrate the superiority and generalization of PPCR over state-of-the-art attention methods. The in-depth analyses explain the inherent behavior of PPCR in the decision-making process, improving the interpretability of CNNs.
## 1 Introduction
Attention mechanism has achieved remarkable success in a variety of computer vision tasks [12, 13, 39, 7, 25], e.g., object detection, instance segmentation, and image classification. The core idea of the attention mechanism is to allow deep convolutional neural networks (CNNs) to focus on informative regions and ignore redundant ones. One of the most representative works is the non-local network (NLNet) [39], which belongs to the spatial attention methods and explicitly captures long-range dependencies between pixel positions via a self-attention mechanism. Following the self-attention mechanism used in NLNet, researchers have been dedicated to improving self-attention design for capturing more sophisticated long-range dependencies among pixel positions [47, 22, 31, 8]. Although self-attention based spatial attention methods have achieved surpassing performance in a variety of natural image-based tasks, they may not perform well on medical image analysis tasks [33].
In seeking answers to this phenomenon, we have gained insights as follows: (1) **Long-Range Dependency Capturing.**
Figure 1: Pixel attention weight maps generated by NL [39], GC [3], EA [12], and PPCR at the high stage of ResNet18 for skin disease, blinding disease, and retinal disease based on three medical image modalities: dermatoscopic image, fundus image, and optical coherence tomography (OCT) image. Clearly, our method is more capable of emphasizing subtle lesion regions than state-of-the-art spatial attention methods.
The self-attention mechanism commonly captures long-range dependencies across all pixel positions to learn such pixel position correlations, inevitably introducing redundant position information from other pixel positions. The negative influence of redundant position information on natural image-based tasks can be ignored, because object regions in natural images are discriminative and easily highlighted by long-range dependency capturing. In contrast, lesion regions in medical images are subtle. That is, the pixel context difference between redundant regions and lesion regions is obscure [14, 15], making it difficult to emphasize lesion regions through long-range dependency modeling. This is mainly because redundant position information significantly hinders distinguishing lesion regions from redundant regions. (2) **Pixel Context Aggregation.** Channel attention methods have aggregated multi-scale spatial context information to improve performance by using the spatial pyramid pooling method [11, 21, 27]. However, existing spatial attention methods have only utilized pointwise convolution (Conv\(1\times 1\)) [39] or individual cross-channel pooling (CP) [4] methods to aggregate single-scale pixel context information along the channel axis, often ignoring the effects of multi-scale pixel context information aggregation. According to our extensive literature survey, we have found that no spatial attention method has exploited the potential of multi-scale pixel context information to improve its representational ability.
Based on the above systematic analysis, this paper seeks to find out:
To answer these two questions, we proposed a novel yet lightweight architectural unit, Pyramid Pixel Context Recalibration (PPCR) module, which explicitly integrates multi-scale pixel context information into CNN representations through a form of pixel-independent context recalibration. Our PPCR consists of a triplet of components: _Cross-Channel Pyramid Pooling_, _Pixel Normalization_, and _Pixel Context Integration_. To the best of our knowledge, this paper is the first to design a _Cross-Channel Pyramid Pooling_ to aggregate multi-scale pixel context information at the same pixel positions through different cross-channel scales at the channel dimension. Note that a pixel position involves multi-scale pixel context, and only specific pixel context plays a significant role. Then, _Pixel Normalization_ is developed to eliminate the significant fluctuation of multi-scale pixel context distribution per pixel position, which is different from previous normalization methods performs the pixel context statistics at the feature maps. It is followed by _Pixel Context Integration_, which adaptively fuses normalized multi-scale pixel context information to produce pixel attention weights via pixel-level operation. The pixel attention weights are finally supposed to recalibrate per pixel position to emphasize or ignore their information. Our PPCR only increases negligible computational cost and few parameters, seamlessly plugged into modern CNNs and trained end-to-end.
To demonstrate the effectiveness and efficiency of our method, we conduct extensive experiments on five medical image datasets of two high-resolution datasets and three low-resolution datasets. The results show that our method is superior to state-of-the-art (SOTA) attention methods with less model complexity. Furthermore, we also provide compelling results on CIFAR benchmarks, proving its generalization capability on natural images. Beyond the practical improvements, we empirically analyze the effects of the pixel level recalibration on emphasizing significant pixel positions and redundant ones through visual analysis and ablation study: it controls the relative contributions of multi-scale pixel context information and pixel normalization, which is beneficial to improve the interpretability of CNNs in the decision-making process. Figure 1 provides the generated pixel attention weight maps of PPCR and SOTA attention methods, showing that our method is more capable of locating significant pixel positions accurately than others, agreeing with the clinician's diagnosis process. We hope our efficient and lightweight design sheds light on future research of attention methods.
In summary, the main contributions of this paper are as follows:
* We propose a pyramid pixel context recalibration module which improves the representational capability of CNNs by combining multi-scale pixel context information and pixel normalization. In particular, this is the first to develop a cross-channel pyramid pooling method to aggregate and exploit multi-scale pixel context information to boost the performance of the spatial attention method. Additionally, we design a pixel normalization method to eliminate the inconsistency of multi-scale pixel context information per-pixel position.
* We conduct comprehensive experiments on five medical image datasets and CIFAR datasets to consistently demonstrate the superiority and generalization capability over SOTA attention methods.
* Visual analysis and ablation study are implemented to interpret the inherent decision-making behavior of our PPCR, helping to enhance the interpretability of CNNs.
## 2 Related Work
**Pyramid Pooling.** Pyramid pooling is a widely-acknowledged technique to extract multi-scale context information [27, 29, 42, 47]. Mainly, spatial pyramid pooling has been widely utilized in various tasks, e.g., image classification, semantic segmentation, and object detection. He et al. [16] present spatial pyramid pooling to obtain multi-scale spatial context information for image classification. Gu et al. [10] propose spatial pyramid pooling for semantic segmentation. Guo et al. [11] propose spatial pyramid attention (SPA) module by incorporating spatial pyramid pooling for image classification. Unlike existing works that apply spatial pyramid pooling to extract multi-scale spatial context information for channel attention methods, we propose a cross-channel pyramid pooling to extract multi-scale pixel context information for spatial attention methods, which has not been studied before.
**Normalization.** Batch normalization (BN) [24] is a pioneering technique that normalizes the statistics along the batch axis to stabilize the intermediate feature distribution of hidden layers, allowing deep neural networks to train faster and fluctuate less. However, BN is sensitive to batch size: network performance degrades when the batch size is reduced due to inaccurate batch statistics estimation. Several normalization methods have been proposed to tackle this issue [1, 9, 32, 40, 41]. Layer normalization (LN) [1] computes the statistics along the channel axis, and instance normalization (IN) [38] performs the BN-like normalization operator per intermediate feature map. Weight normalization (WN) [35] normalizes the filter weights. Group normalization (GN) [41] divides feature maps into several groups and then computes the statistics per group. The design of our PPCR is motivated by LN. Instead of stabilizing all pixel context features of the feature maps along the channel axis, we propose a pixel normalization to normalize pixel context features along the channel axis at the same pixel positions. This is mainly because PPCR is a spatial attention method which aims to emphasize significant pixel positions and suppress trivial ones in a pixel-independent manner.
**Attention Mechanism.** The current research directions of attention mechanism can be roughly divided into three categories [7, 13, 23, 28, 34, 36, 44, 46]: channel attention, spatial attention, and combination. Squeeze-and-excitation (SE) [25] is one of the successful channel attentions, which captures long-range dependencies among channels. Since PPCR is a spatial attention module, this paper briefly surveyed spatial attention modules. Gather-excite (GE) [19] and coordinate attention (CA) [18] learn long-range spatial context information to boost the performance of CNNs. Recently, self-attention mechanism and its variants [2, 3, 5, 12, 45] dominates the spatial attention research due to their powerful capability in modeling language dependencies among all pixel positions. For instance, global context network (GCNet) [3] utilizes a self-attention mechanism to construct a global context (GC) block. However, most existing spatial attention methods capture long-range dependencies among all pixel positions, which is skilled at highlighting concentrative object regions in natural images but is poor at learning subtle lesion regions in medical images.
Different from these methods are dedicated to designing self-attention-based spatial attention modules, our method designs a more efficient yet lightweight way to highlight or suppress pixel positions in a pixel-independent manner by incorporating multi-scale pixel context information.
## 3 Pyramid Pixel Context Recalibration Module
Given the intermediate feature maps \(X\in R^{C\times H\times W}\), PPCR generates the pixel attention weight map \(G\in R^{1\times H\times W}\), where C, H and W indicate the number of channels, height and width of feature maps, respectively. As illustrated in Figure 2, our method is abstracted by the following three components: _Cross-Channel Pyramid Pooling_, _Pixel Normalization_, and _Pixel Context Integration_.
### Cross-Channel Pyramid Pooling
Spatial pyramid pooling is often used in channel attention methods to aggregate multi-scale spatial context information from multi-scale spatial regions, significantly improving performance. However, recent spatial attention methods only aggregate single-scale pixel context information and have not yet exploited the potential of multi-scale pixel context information. This paper is the first to propose a cross-channel pyramid pooling (CCPP) method to aggregate multi-scale pixel context information of all pixel positions from cross-channel scales. Sophisticated CCPP design can be used to boost performance further, but this is not the key goal of this paper. Thus, we simply employ an averaged CCPP to aggregate the multi-scale pixel context features of all pixel positions from three different cross-channel scales at the channel dimension. Furthermore, Figure 2(a) provides a visual implementation case of CCPP with three cross-channel scales at a pixel position \(x(i,j)\), which can help audiences understand the proposed CCPP visually. The output of multi-scale pixel context description \(T\in R^{D\times H\times W}\) (\(D\) is the number of pixel context feature maps, and \(D\) is equal to 7 in this paper) of all pixel positions through the CCPP can be computed by:
\[T=[CP(X,1),CP(X,2),CP(X,4)], \tag{1}\]
where \(CP(X,1)\), \(CP(X,2)\), and \(CP(X,4)\) indicate one, two, and four pixel context feature maps extracted from three different cross-channel scales. \(CP\) indicates the
cross-channel pooling, which is performed at each pixel position \(x(i,j)\) across \(K\leq C\) channels and can be computed as follows:
\[\mu(i,j)=\frac{1}{K}\sum_{k=1}^{K}x(k,i,j), \tag{2}\]
where \(\mu(i,j)\) is the averaged pixel context feature of \(x(i,j)\).
### Pixel Normalization
Existing normalization methods such as LN and BN compute the statistics of pixel context features across the feature map, which can not effectively eliminate the inconsistency of multi-scale pixel context features \(T\) at the same pixel positions. To stabilize multi-scale pixel context feature distribution, our PPCR introduces a pixel normalization (PN) to normalize them across all pixel context feature maps at each pixel position, as illustrated in Figure 2(b). For each pixel context \(t(d,i,j)\), the PN can be formulated as follows:
\[\hat{t}_{d,i,j}=\frac{t_{d,i,j}-\mu^{(t)}_{(i,j)}}{\delta^{(t)}_{(i,j)}}, \tag{3}\]
where \(\hat{t}_{(d,i,j)}\) indicates the normalized multi-scale pixel context at pixel position \((i,j)\) of the \(d\)-th pixel context feature map. \(\mu^{(t)}_{(i,j)}\) and \(\delta^{(t)}_{(i,j)}\) are the mean and standard deviation of the multi-scale pixel context features at pixel position \((i,j)\), which can be computed as:
\[\mu^{(t)}_{(i,j)}=\frac{1}{D}\sum_{d=1}^{D}t_{(d,i,j)},\quad\delta^{(t)}_{(i,j)}=\sqrt{\frac{1}{D}\sum_{d=1}^{D}(t_{(d,i,j)}-\mu^{(t)}_{(i,j)})^{2}}+\xi, \tag{4}\]
where \(\xi\) is a very small constant.
### Pixel Context Integration
Following the PN, we define the pixel context integration (PCI) function (as shown in Figure 2(c)) to convert the normalized multi-scale pixel context features \(\hat{T}\in R^{D\times H\times W}\) into pixel attention weights \(G\), which can be represented by:
\[G=\sigma(Z), \tag{5}\]
where \(\sigma\) is the sigmoid function as the gating mechanism; \(Z\in R^{1\times H\times W}\) indicates the encoded multi-scale pixel context features, which can be formulated as:
\[Z=W\cdot\hat{T}, \tag{6}\]
where \(W\in R^{7\times H\times W}\) indicates learnable parameters. Finally, the output \(Y\in R^{C\times H\times W}\) is computed as follows:
\[Y=G\cdot X. \tag{7}\]
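A minimal PyTorch sketch of the full module, combining Eqs. 1-7, is given below. It assumes the channel count is divisible by the largest pooling scale and a fixed spatial size, since the integration weights \(W\) are defined per pixel position; the naming is ours, and \(W\) is initialized to zero, which the initialization ablation reported later suggests works better than initializing to one.

```python
import torch
import torch.nn as nn

class PPCR(nn.Module):
    """Pyramid Pixel Context Recalibration (sketch, not the released code)."""

    def __init__(self, channels, height, width, scales=(1, 2, 4), eps=1e-5):
        super().__init__()
        self.scales = scales
        self.eps = eps
        d = sum(scales)  # D = 7 pixel context maps for scales (1, 2, 4)
        # Per-pixel integration weights W in R^{D x H x W}, initialized to zero.
        self.weight = nn.Parameter(torch.zeros(d, height, width))

    def forward(self, x):
        b, c, h, w = x.shape
        # Cross-channel pyramid pooling: average over channel groups per scale.
        contexts = []
        for s in self.scales:
            groups = x.view(b, s, c // s, h, w)
            contexts.append(groups.mean(dim=2))          # (b, s, h, w)
        t = torch.cat(contexts, dim=1)                   # (b, D, h, w)

        # Pixel normalization: normalize the D contexts at each pixel position.
        mu = t.mean(dim=1, keepdim=True)
        sigma = t.std(dim=1, keepdim=True, unbiased=False) + self.eps
        t_hat = (t - mu) / sigma

        # Pixel context integration: per-pixel weighted sum and sigmoid gate.
        z = (self.weight.unsqueeze(0) * t_hat).sum(dim=1, keepdim=True)  # (b, 1, h, w)
        g = torch.sigmoid(z)
        return g * x
```

Under this sketch, `PPCR(channels=256, height=56, width=56)` would be appended after a block whose output has that shape; because the weights depend on \(H\times W\), the module as written is tied to a fixed input resolution.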
### Complexity Analysis
Our PPCR is supposed to be lightweight in terms of computational cost and parameters. The PCI function determines the additional parameters of PPCR: \(\sum_{s=1}^{S}H_{s}\cdot W_{s}\cdot N_{s}\cdot 7\). \(S\) and \(N_{s}\) represent the number of stages and the number of repeated blocks in the s-th stage, where we follow the same definition of stage as in [17], and \(H_{s}\) and \(W_{s}\) represent the height and width of feature maps in the s-th stage. Therefore, the total number of additional parameters for PPCR is:
\[7\sum_{s=1}^{S}N_{s}\cdot H_{s}\cdot W_{s}, \tag{8}\]
which is far less than the total number of parameters for NL:\(\frac{2}{r}\sum_{s=1}^{S}N_{s}C_{s}+\sum_{s=1}^{S}N_{s}^{2}\) where \(r\) and \(C_{s}\) represent
Figure 2: Architecture of the pyramid pixel context recalibration (PPCR) module. Given the intermediate feature maps \(X\in R^{C\times H\times W}\), PPCR generates the pixel attention weight map \(G\in R^{1\times H\times W}\).
the reduction ratio and the number of output channels in the s-th stage. According to Eq. 8, the extra parameters of PPCR are determined by the height and width of a feature map, which is different from the extra parameters of channel attention methods determined by the number of channels. Theoretically, our method has parameter advantages over channel attention methods on low-resolution images, which will be verified in experiments. As for computational cost, our PPCR introduces negligible extra computations compared to original network architectures. For example, given a \(224\times 224\) pixel image or \(28\times 28\) as input, PPCR-ResNet50 shares almost the same computations as ResNet50.
## 4 Experiments
In this section, we first introduce our experiment setup and then demonstrate the effectiveness and generalization ability of our proposed method on five medical image datasets and CIFAR benchmarks through comparisons to SOTA attention methods. Then, we conduct systematic visual analyses and ablation study to investigate the inherent behavior of PPCR.
### Experiment Setup
In this paper, we use the following SOTA attention methods to demonstrate the effectiveness of PPCR based on five medical datasets, including SE, SPA, CA, SA (CBAM), NL, EA, and GC by adopting two commonly used CNN architectures as backbones: ResNet18 and ResNet50 [17]. Specifically, _SA_, _NL_, _EA_, _and GC_ are spatial attention methods involving local-range and long-range dependency modeling, which are able to verify the superiority of our method comprehensively.
These methods are implemented with PyTorch and use the SGD optimizer with default settings during the training process. The initial learning rate is decreased by a factor of 10 every 40 epochs. We set the batch size and number of epochs to 32 and 150, respectively, and run all methods on two TITAN V NVIDIA GPUs under the same experiment settings. Furthermore, five commonly-accepted evaluation metrics are adopted to evaluate the performance and model complexity of PPCR, SOTA attention methods, and baselines: accuracy (ACC), area under the ROC curve (AUC), F1, parameters (Params.), and GFLOPs.
### Medical Image Classification Results
**ISIC2018.** ISIC2018 [37] is a publicly available skin lesion dataset with 10,015 images of seven different labels. This paper uses the same data augmentation strategy as in literature [30]. We split the dataset into training, validation, and testing datasets [30] and then resize all images into \(224\times 224\) pixels as the input image size for the network. The classification results on the testing dataset are adopted for comparison, and the validation dataset is used to choose the best-trained model. The following experiments adopt the same settings. As listed in Table 1 (left), PPCR generally achieves a better trade-off between performance and model complexity than other SOTA attention methods by taking ResNet18 and ResNet50 as backbones. Remarkably, PPCR outperforms CA by an absolute margin of over **2.60%** in accuracy under ResNet50, although CA is 1.3% larger in computation cost and **16%** in parameters. Compared with SOTA counterparts (e.g., GC, NL and EA), PPCR consistently obtains over **3.1%** and **2.6%** gains in accuracy and F1, respectively, while requiring fewer computational costs and parameters. For example, NL is **96%** larger in parameters and **90%** larger in computational cost than PPCR based on ResNet50. We also observe that PPCR outperforms SPA by **3.65%** in accuracy, **5.06%** in AUC, and **7.6%** in F1, respectively, based on ResNet50, which demonstrates the superiority of multi-scale pixel context aggregation via CCPP for the spatial attention mechanism compared to multi-scale spatial context aggregation via spatial pyramid pooling for the channel attention mechanism.
**Fundus-iSee.** Fundus-iSee is a fundus image dataset with 10,000 images. It contains four different ocular diseases: age-related macular degeneration (720), diabetic retinopathy (270), glaucoma (450), and myopia (790), plus normal images (7,770). We follow the same data augmentation and dataset splitting methods as in literature [6]. The fundus images are resized into \(224\times 224\). The results show that PPCR consistently improves performance over comparable attention methods with fewer budgets. PPCR improves the two backbones by over 2.25% in F1 while using almost the same model complexity. At the same time, NL, GC, and EA perform worse than these two backbones, demonstrating that our PPCR is more able to locate significant subtle lesion regions than these self-attention-based spatial attention methods.
**MedMNIST Datasets.** MedMNIST is an MNIST-like benchmark for medical image classification [43], containing 15 medical image datasets. In this paper, we use three MedMNIST datasets to further demonstrate the efficiency and effectiveness of our PPCR on low-resolution medical image datasets: OCTMNIST, RetinaMNIST, and BreastMNIST. OCTMNIST comprises 109,309 OCT images of four retinal diseases. RetinaMNIST contains 1,600 retina fundus images of five diabetic retinopathy severity levels. BreastMNIST has 780 breast ultrasound images of two labels. The image size of these three datasets is \(28\times 28\). Moreover, data augmentation and dataset splitting methods are adopted from literature [43] for a fair comparison. According to Table 2, PPCR achieves a better trade-off between effectiveness and efficiency than SOTA attention methods on three MedMNIST datasets. It is worth noting that PPCR outperforms NL by absolute over **16%** on the
OCTMNIST dataset by taking ResNet50 as the backbone, although NL is **96%** larger in parameters.
Moreover, The results in Table 1 and Table 2 demonstrate that PPCR effectively leverages the potential of multi-scale pixel context information and pixel normalization to dynamically re-estimate the relative importance of each pixel position in a pixel-independent manner, agreeing with our expectation.
### Visual Analysis and Interpretation
**Attention Weight Visualization.** Figure 3 plots pixel attention weight feature maps and pixel attention weight distributions of PPCR at three stages: low (Stage_1), middle (Stage_2), and high (Stage_3) of PPCR-ResNet18 on
\begin{table}
\begin{tabular}{l c c c|c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{ISIC2018} & \multicolumn{3}{c}{Fundus-Isee} & \multirow{2}{*}{Params} & \multirow{2}{*}{GFLOPs} \\ & ACC & AUC & F1 & ACC & AUC & F1 & \\ \hline ResNet18 & 78.65 & 86.31 & 76.73 & 79.23 & 70.36 & 71.19 & **11.18M** & **1.820** \\ +SE [25] & 77.60 & 89.67 & 74.57 & 79.03 & 69.54 & 71.33 & 11.27M & 1.821 \\ +SPA [11] & 79.17 & 89.66 & **77.37** & 79.13 & 70.70 & 71.40 & 12.14M & 1.822 \\ +CA [18] & 77.60 & 89.62 & 76.27 & 79.54 & 69.07 & 72.38 & 11.32M & 1.822 \\ +NL [39] & 73.96 & 86.76 & 71.32 & 78.02 & 67.75 & 69.30 & 11.97M & 1.935 \\ +GC [3] & 77.08 & 86.64 & 74.49 & 79.03 & 71.62 & 71.35 & 11.36M & 1.819 \\ +EA [12] & 70.31 & 86.66 & 62.73 & 78.93 & 64.80 & 71.55 & 11.43M & 1.915 \\ +PPCR & **80.21** & **91.81** & 77.11 & **80.34** & **71.88** & **73.44** & **11.18M** & **1.820** \\ \hline ResNet50 & 72.92 & 86.93 & 69.22 & 78.33 & 66.08 & 69.50 & **23.52M** & **4.116** \\ +SE [25] & 73.96 & 85.64 & 68.80 & 77.52 & 64.40 & 67.70 & 26.05M & 4.118 \\ +SPA [11] & 74.48 & 84.21 & 69.84 & 78.23 & 67.12 & 69.27 & 51.19M & 4.153 \\ +CA [18] & 75.52 & 89.25 & 73.61 & 78.73 & 66.45 & 70.71 & 27.33M & 4.171 \\ +NL [39] & 65.63 & 67.22 & 54.27 & 77.52 & 57.81 & 67.70 & 46.17M & 7.815 \\ +GC [3] & 68.23 & 81.07 & 58.82 & 77.92 & 65.38 & 68.24 & 28.58M & 4.120 \\ +EA [12] & 72.92 & 87.36 & 70.81 & 78.83 & 66.83 & 72.05 & 25.46M & 4.816 \\ +PPCR & **78.13** & **89.27** & **77.44** & **79.44** & **69.22** & **72.34** & **23.52M** & **4.116** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparison of different attention methods on two medical image datasets (ISIC2018 and Fundus-Isee) in terms of accuracy, AUC, F1, parameters, and GFLOPs.
\begin{table}
\begin{tabular}{l c c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{OCTMNIST} & \multicolumn{3}{c}{RetinaMNIST} & \multicolumn{3}{c}{BreastMNIST} & \multirow{2}{*}{Params} & \multirow{2}{*}{GFLOPs} \\ & ACC & AUC & F1 & ACC & AUC & F1 & ACC & AUC & F1 & \\ \hline ResNet18 & 76.20 & 93.89 & 73.61 & 52.00 & 68.96 & 48.52 & 85.90 & 87.93 & 85.39 & 11.17M & 0.458 \\ +SE [25] & 77.50 & 94.89 & 75.14 & 51.00 & 67.62 & **50.95** & 85.26 & 88.01 & 84.65 & 11.31M & 0.458 \\ +SPA [11] & 78.60 & 94.74 & 76.91 & 52.00 & 73.59 & 49.54 & 82.05 & 86.77 & 81.59 & 12.13M & 0.459 \\ +CA [18] & 78.70 & 94.20 & 76.27 & 50.00 & **74.31** & 47.65 & 85.26 & 87.81 & **86.97** & 11.31M & 0.459 \\ +NL [39] & 75.60 & 94.11 & 72.26 & 51.75 & 74.19 & 50.04 & 82.69 & 87.98 & 83.08 & 11.96M & 0.489 \\ +GC [3] & 73.20 & 93.61 & 68.84 & 51.50 & 65.77 & 48.96 & 82.05 & 81.89 & 86.85 & 11.35M & 0.458 \\ +EA [12] & 71.60 & 93.97 & 65.88 & 49.25 & 72.44 & 43.40 & 73.72 & 54.75 & 63.19 & 11.42M & 0.482 \\ +PPCR & **79.80** & **96.33** & **77.79** & **53.00** & 73.63 & 50.01 & **87.20** & **88.89** & **86.97** & 11.17M & 0.458 \\ \hline ResNet50 & 75.40 & 92.86 & 72.04 & 51.50 & 69.36 & **50.56** & 83.33 & 88.24 & 83.30 & 23.51M & 1.053 \\ +SE [25] & 72.50 & 92.69 & 67.69 & 47.75 & 67.29 & 47.22 & 84.62 & 86.90 & 84.36 & 26.24M & 1.057 \\ +SPA [11] & 77.20 & 95.04 & 74.61 & 50.25 & 66.36 & 47.63 & 82.05 & 88.16 & 82.59 & 51.18M & 1.085 \\ +CA [18] & 77.60 & 93.71 & 74.58 & 51.00 & 69.34 & 50.37 & 84.62 & 89.22 & 84.36 & 27.32M & 1.083 \\ +NL [39] & 65.80 & 90.40 & 58.78 & 48.00 & 65.18 & 39.70 & 76.28 & 70.38 & 71.37 & 46.16M & 2.033 \\ +GC [3] & 67.70 & 90.41 & 60.51 & 52.75 & 66.35 & 49.33 & 80.77 & 75.56 & 77.84 & 28.57M & 1.060 \\ +EA [12] & 73.00 & 91.90 & 68.36 & 49.00 & 70.92 & 43.84 & 73.71 & 63.31 & 68.89 & 25.44M & 1.223 \\ +PPCR & **81.90** & **95.38** & **79.97** & **53.25** & **73.68** & 50.51 & **88.46** & **89.35** & **84.37** & 23.51M & 1.053 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison of different attention methods on three MedMNIST datasets (OCTMNIST, RetinaMNIST, and BreastMNIST) in terms of accuracy, AUC, F1, parameters, and GFLOPs.
the ISIC2018 dataset. We find that as the network goes deeper, the attention weight differences among pixel positions become more evident, showing that PPCR is capable of distinguishing subtle lesion regions efficiently. Furthermore, Figure 1 (first row) provides the pixel attention weight feature maps of NL, GC, EA, and PPCR at the high stage on the ISIC2018 dataset. Compared with NL, GC, and EA, the attention weight differences among pixel positions are more apparent for PPCR. More visual analyses of the pixel attention weight feature maps and pixel attention weight distributions of NL, GC, EA, and PPCR are provided in the supplementary materials. These visual analyses demonstrate that, in contrast to the long-range capturing manner of NL, GC, and EA, PPCR exploits multi-scale pixel context information and pixel normalization in a pixel-independent manner.
**Multi-Scale Pixel Context Value Visualization.** Figure 4(a) presents the multi-scale pixel context feature distributions before and after PN at the high stage of PPCR-ResNet18 on the ISIC2018 dataset. We observe that the multi-scale pixel context feature distributions differ from one another, indicating that they carry varying significance in PPCR. The fluctuations of the multi-scale pixel context feature distributions after PN are smaller than before PN, showing that PN effectively addresses the inconsistency among multi-scale pixel context features. More details are provided in the supplementary materials.
**Multi-Scale Pixel Context Weight Visualization.** Figure 4(b) offers the multi-scale pixel context weight distributions of PPCR-ResNet18 at three stages. We find significant differences among the multi-scale pixel context weight distributions, showing that our PPCR adaptively assigns relative weights to multi-scale pixel contexts, guiding CNNs to emphasize or suppress specific pixel positions. More details of the pixel context weight distributions along the pixel position are provided in the supplementary materials.
We compare two initialization methods for the learnable parameters \(W\); the results are reported in Table 5. Based on the classification results, we find it more appropriate to initialize \(W\) to 0 rather than 1. This initialization also encourages PPCR to assign higher pixel attention weights to informative pixel positions, as shown in Figure 1 and Figure 2.
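A minimal PyTorch sketch of this initialization choice is given below; only the preference for zeros over ones is taken from the text, while the variable name and the one-weight-per-pyramid-scale shape of \(W\) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch: only the choice of zero-initialization (rather than ones)
# for the learnable parameters W is taken from the text; the variable name and
# the one-weight-per-pyramid-scale shape are assumptions.
num_scales = 3  # hypothetical number of pyramid scales
W = nn.Parameter(torch.zeros(num_scales))   # preferred initialization (Table 5)
# W = nn.Parameter(torch.ones(num_scales))  # the compared alternative
```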
**Validation on CIFAR Benchmarks.** The CIFAR benchmarks comprise CIFAR-10 and CIFAR-100 [26], which consist of colored natural images of 32\(\times\)32 pixels. The training and testing sets provide 50,000 and 10,000 images, respectively. We follow the standard practice [20] of augmenting the image data by zero-padding each image by four pixels and randomly cropping it back to the original size. As listed in Table 6, PPCR significantly improves the performance on the CIFAR benchmarks with minimal parameter and computational cost increment, which proves that the generalization ability of PPCR is not constrained to medical image datasets.
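For concreteness, this pad-and-crop augmentation can be written with torchvision as in the sketch below; torchvision itself and the omission of any further augmentation are assumptions, since the text only specifies zero-padding by four pixels and random cropping back to the original size.

```python
from torchvision import transforms

# Standard CIFAR training augmentation as described above: zero-pad each
# 32x32 image by four pixels, then randomly crop back to 32x32. The use of
# torchvision here (and the absence of further augmentation) is an assumption.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4, fill=0, padding_mode="constant"),
    transforms.ToTensor(),
])
test_transform = transforms.ToTensor()
```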
## 5 Conclusion
In this paper, we propose an efficient yet lightweight pyramid pixel context recalibration module (PPCR) to dynamically estimate the relative significance of each pixel position in a pixel-independent manner based on multi-scale pixel context features. By incorporating multi-scale pixel context features into feature maps at the pixel level, it improves the representational ability of a CNN efficiently. The experimental results demonstrate the effectiveness and generalization ability of our PPCR through comparisons to SOTA attention methods on both medical image and natural image datasets. Furthermore, we provide visual analyses and an ablation study to explain the significance of PPCR in adjusting the relative contributions of multi-scale pixel context information and pixel normalization, contributing to improving the interpretability of CNNs. In future work, we plan to present more efficient methods to explore multi-scale pixel context, which may provide new insights into spatial attention design.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & CIFAR-10 & CIFAR-100 & \multirow{2}{*}{Params} & GFLOPs \\ & ACC & ACC & & \\ \hline ResNet18 & 93.02 & 74.56 & **11.22M** & **0.557** \\ +SE [25] & 94.84 & 75.19 & 11.32M & 0.557 \\ +SPA [11] & 95.00 & 75.56 & 12.18M & 0.557 \\ +CA [18] & 95.21 & 77.73 & 11.36M & 0.558 \\ +NL [39] & 93.38 & 71.97 & 12.01M & 0.595 \\ +GC [3] & 95.38 & 77.53 & 11.40M & 0.557 \\ +EA [12] & 93.16 & 72.05 & 11.47M & 0.588 \\ +PPCR & **95.56** & **78.70** & **11.22M** & **0.557** \\ \hline ResNet50 & 93.62 & 78.51 & **23.71M** & **1.305** \\ +SE [25] & 95.35 & 79.28 & 26.24M & 1.309 \\ +SPA [11] & 94.63 & 78.21 & 51.37M & 1.338 \\ +CA [18] & 95.52 & 79.45 & 27.51M & 1.340 \\ +NL [39] & 94.00 & 72.15 & 46.36M & 2.515 \\ +GC [3] & 95.60 & 78.37 & 28.77M & 1.312 \\ +EA [12] & 93.98 & 71.85 & 25.64M & 1.536 \\ +PPCR & **95.92** & **79.93** & **23.71M** & **1.305** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance comparison of different attention methods on CIFAR benchmarks in terms of accuracy, parameters, and GFLOPs.
Figure 4: (a) The multi-scale pixel context feature distributions before and after PN at the high stage. (b) The multi-scale pixel context weight distributions of PPCR at three stages. The dataset is ISIC2018 and the backbone is ResNet18.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multicolumn{3}{c}{Initialization} \\ \hline W & ACC & F1 \\ \hline
0 & **80.21** & **77.11** \\
1 & 79.69 & 77.04 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of two initializations with PPCR-ResNet18 on ISIC2018 testing dataset.
|
2310.07286 | Symmetry-enforced many-body separability transitions | We study quantum many-body mixed states with a symmetry from the perspective
of separability, i.e., whether a mixed state can be expressed as an ensemble of
short-range entangled (SRE) symmetric pure states. We provide evidence for
'symmetry-enforced separability transitions' in a variety of states, where in
one regime the mixed state is expressible as a convex sum of symmetric SRE pure
states, while in the other regime, such a representation is not feasible. We
first discuss Gibbs state of Hamiltonians that exhibit spontaneous breaking of
a discrete symmetry, and argue that the associated thermal phase transition can
be thought of as a symmetry-enforced separability transition. Next, we study
cluster states in various dimensions subjected to local decoherence, and
identify several distinct mixed-state phases and associated separability phase
transitions, which also provides an alternate perspective on recently discussed
'average SPT order'. We also study decohered p+ip superconductors, and find
that if the decoherence breaks the fermion parity explicitly, then the
resulting mixed state can be expressed as a convex sum of non-chiral states,
while a fermion-parity preserving decoherence results in a phase transition at
a non-zero threshold that corresponds to spontaneous breaking of fermion
parity. Finally, we briefly discuss systems that satisfy NLTS (no low-energy
trivial state) property, such as the recently discovered good LDPC codes, and
argue that the Gibbs state of such systems exhibits a temperature-tuned
separability transition. | Yu-Hsueh Chen, Tarun Grover | 2023-10-11T08:18:51Z | http://arxiv.org/abs/2310.07286v2 | # Symmetry-enforced many-body separability transitions
###### Abstract
We study quantum many-body mixed states with a symmetry from the perspective of _separability_, i.e., whether a mixed state can be expressed as an ensemble of short-range entangled (SRE) symmetric pure states. We provide evidence for'symmetry-enforced separability transitions' in a variety of states, where in one regime the mixed state is expressible as a convex sum of symmetric SRE pure states, while in the other regime, such a representation is not feasible. We first discuss Gibbs state of Hamiltonians that exhibit spontaneous breaking of a discrete symmetry, and argue that the associated thermal phase transition can be thought of as a symmetry-enforced separability transition. Next, we study cluster states in various dimensions subjected to local decoherence, and identify several distinct mixed-state phases and associated separability phase transitions, which also provides an alternate perspective on recently discussed 'average SPT order'. We also study decohered \(p+ip\) superconductors, and find that if the decoherence breaks the fermion parity explicitly, then the resulting mixed state can be expressed as a convex sum of non-chiral states, while a fermion-parity preserving decoherence results in a phase transition at a non-zero threshold that corresponds to spontaneous breaking of fermion parity. Finally, we briefly discuss systems that satisfy NLTS (no low-energy trivial state) property, such as the recently discovered good LDPC codes, and argue that the Gibbs state of such systems exhibits a temperature-tuned separability transition.
###### Contents
* I Introduction
* II Separability criteria with and without symmetry
* III An illustrative example: Separability transition in the Gibbs state of the 2d quantum Ising model
* IV Separability transitions in SPT States
* IV.1 A relation between local and thermal decoherence
* IV.1 1d cluster state
* IV.2 2d cluster state
* IV.3 3d cluster state
* IV.4 1d and 2d topological phases protected by a \(Z_{2}^{(0)}\) symmetry
* V Separability transitions for 2d chiral topological states
* V.1 Setup and motivation
* V.2 Separability of \(p+ip\) SC subjected to fermionic Kraus operators
* V.3 Double-state formalism for fermions
* V.4 Phase transition induced by an interacting channel in a \(p+ip\) SC
* VI Separability transition in Gibbs states of NLTS Hamiltonian
* VII Some connections between separability and other measures of mixed-state complexity
* VII.1 Connections among separability, purification, and double state
* VIII Summary and discussion
* Acknowledgments
* A Details of string order parameter for 1d cluster state
* B Details of calculations for chiral fermions subjected to decoherence
* B.1 Covariance matrix under channel linear in fermion operators
* B.2 Double-state formalism for fermions
## I Introduction
Suppose one has the ability to apply unitary gates that act in a geometrically local fashion on a many-body system. Starting from a product state, a specific circuit composed of such gates results in a specific pure state, and an ensemble of such circuits can therefore be associated with the mixed state \(\rho=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|\) where the pure state \(|\psi_{i}\rangle\) is prepared with probability \(p_{i}\). If one is limited to only constant depth unitary circuits, then the corresponding mixed state can be regarded as'short-range entangled' or 'trivial' [1; 2], which generalizes the notion of short-range entangled pure state [3; 4; 5; 6; 7; 8]. In parallel with the notion of symmetry protected topological phases for pure states [9; 10; 11; 12], it is then natural to define a trivial/short-ranged entangled _symmetric_ mixed state (a'sym-SRE' state) as one that can be obtained from an ensemble of pure states, where each element of the ensemble is prepared with only a constant depth circuit
consisting of local, symmetric gates under some given symmetry. Motivated from experimental progress in controllable quantum devices where both unitary quantum dynamics and decoherence play an important role [13; 14; 15; 16], in this paper we will explore mixed state phase diagrams where in one regime a mixed state is sym-SRE, and in the other regime, it is not. We will call such phase transitions'symmetry-enforced separability transitions', since a sym-SRE state is essentially separable [1] (i.e. a convex sum of unentangled states) upto short-distance correlations generated by constant-depth unitaries. In the absence of any symmetry constraint, analogs of such transitions were recently studied in Ref.[17] in the context of decohered topologically ordered mixed states [18; 19; 20; 21; 22]. To make progress, we will try to leverage our understanding of the complexity of preparing pure many-body states using unitaries. Some of the questions that will motivate our discussion are: Do there exist separability phase transitions when pure-state symmetry protected topological (SPT) phases are subjected to decoherence, and if yes, what is the universality class of such transition? When a 2d chiral pure state (e.g. the ground state of an integer quantum Hall phase) is subjected to local decoherence, can the resulting density matrix be expressed as a convex sum of non-chiral states? Can the conventional, finite temperature phase transitions corresponding to the spontaneous breaking of a global symmetry be also thought of as separability transitions?
As an example, consider the transverse field Ising model on a square lattice. We provide an argument (Sec.III) that the Gibbs state for this model can be prepared using an ensemble of finite-depth local unitary circuit at all temperatures, including at \(T\leq T_{c}\), where \(T_{c}\) is the critical temperature for spontaneous symmetry breaking. It is crucial here that one is not imposing any symmetry constraint on the unitaries. This is consistent with previous works [23; 24; 25] where evidence was provided that the mixed-state entanglement corresponding to a Gibbs state that exhibits spontaneous symmetry breaking remains short-ranged at all non-zero temperatures, including at the finite temperature critical point (assuming absence of any co-existing finite-temperature topological order). However, if one only allows access to an ensemble of short-depth unitary circuits composed of _Ising symmetric_ local gates, then using results of Ref.[21], we provide a rigorous argument that the Gibbs state can not be prepared for any \(T\leq T_{c}\). We expect similar results to hold for other symmetry broken Gibbs states as well. Therefore, the conventional, finite-temperature symmetry breaking phase transition in a transverse-field Ising model can be thought of as a symmetry-enforced separability transition. This statement is true even when the transverse field is zero (i.e. for a classical Ising model) - the quantum mechanics still plays a role since the imposition of symmetry implies that one is forced to work with 'cat' (GHZ) states, which are long-range entangled.
In the context of pure states, a well-known example of symmetry enforced complexity is an SPT phase whose ground state can not be prepared using a finite depth circuit composed of symmetric local gates [9; 10; 11; 12]. Recent works have provided a detailed classification of SPT phases protected by zero-form symmetries that are being subjected to decoherence using spectral sequences and obstruction to a short-ranged entangled (SRE) purification [26; 27]. Progress has also been made in understanding non-trivial decohered SPTs using string operators [28] and'strange correlators' [29; 30], concepts that were originally introduced to characterize pure SPT states [31; 32; 10]. Here we will be interested in understanding decohered SPT states from the viewpoint of separability, which, as we discuss in Sec.II, is a different notion of entanglement of mixed states than that based on SRE purification considered in Ref.[26; 27]. As hinted above, we define a sym-LRE (symmetric, long-range entangled) state as one which does not admit a decomposition as a convex sum of pure states which are all preparable via a finite-depth circuit made of symmetric local gates. If so, it is interesting to ask if there exist separability transitions between sym-LRE and sym-SRE states as a function of the decoherence rate, analogous to the phase transitions in mixed states with instrinsic topological order [17]. We will not consider a general SPT state, and focus primarily on cluster states in various dimensions to illustrate the broad idea. A key step in our analysis will be the following result that was also briefly mentioned in Ref.[17] and which we discuss in detail in Sec.IV: for a large class of SPTs, including the cluster states in various dimensions, Kitaev chain in 1d, and several 2d topological phases protected by zero-form \(Z_{2}\) symmetry, one can find local, finite-depth channels that map the pure state to a Gibbs state. We will discuss decoherence induced separability transitions due to such channels in Sec.IV.
When trying to understand complexity of mixed SPT states, we will often find the following line of inquiry helpful. One first asks: Does assuming that a mixed state is trivial (i.e. decomposable as a convex sum of SRE pure states) lead to an obvious contradiction? If the answer to this question is 'yes', then we already know that the mixed state is necessarily non-trivial. In this case, there may still exist interesting transitions between two different kinds of non-trivial mixed states, and we will consider a couple of such examples as well. On the other hand, if the answer to this question is 'no', we will attempt to find an explicit decomposition of the mixed state as a convex sum of SRE states. The aforementioned relation between local and thermal decoherence will again be instrumental in making analytical progress.
As an example, consider the ground state of the 2d cluster state Hamiltonian \(H\) subjected to a local channel that locally anticommutes with the terms in the Hamiltonian. One can show that the resulting decohered state \(\rho_{d}\) takes the Gibbs form: \(\rho\propto e^{-\beta H}\) where \(\tanh(\beta)=1-2p\) and \(p\) is the decoherence rate. In this example, \(H\) has both a zero-form and a one-form Ising symmetry. We will provide arguments that this system undergoes a separability transition as a function of \(p\): for \(0<p<p_{c}\), \(\rho\) cannot be decomposed as a sum of pure states that respect the aforementioned two symmetries, while for \(p>p_{c}\), such a decomposition is feasible. Moreover, for \(p>p_{c}\) we will express \(\rho_{d}\) explicitly as \(\sum_{m}p_{m}|\psi_{m}\rangle\langle\psi_{m}|\), where \(|\psi_{m}\rangle\) are pure, symmetric states that are statistically SRE. More precisely, one can define an ensemble averaged string-correlation, \([\langle S_{C}\rangle^{2}]\equiv\sum_{m}p_{m}\langle S_{C}\rangle_{m}^{2}\), where \(\langle S_{C}\rangle_{m}=\langle\psi_{m}|S_{C}|\psi_{m}\rangle/\langle\psi_{m}|\psi_{m}\rangle\), and \(S_{C}\) is a string-operator whose non-zero expectation value implies long-range entanglement. We will show that \([\langle S_{C}\rangle^{2}]\) precisely corresponds to a disorder-averaged correlation function in the 2d random-bond Ising model along the Nishimori line [33]. Therefore, in this example, the separability transition maps to the ferromagnetic transition in the random bond Ising model. For the 3d cluster state, we will find an analogous relation between separability and the 3d random-plaquette Ising gauge theory. We note that similar order parameters and connections to statistical mechanics models also appear in the setting of measurement protocols to prepare long-range entangled SPT states [34; 35]. We briefly discuss the connection to these works.
As another byproduct of the relation between local decoherence and Gibbs states, we also study recently introduced non-trivial class of mixed states which are protected by a tensor product of 'exact' and 'average' symmetries [26; 27; 28; 29]. One says that a density matrix \(\rho\) has an 'exact symmetry' if \(U_{E}\rho=\rho\) for some unitary \(U_{E}\), while it has an 'average symmetry' if \(U_{A}^{\dagger}\rho U_{A}=\rho\) for some unitary \(U_{A}\). Refs. [26; 27; 28; 29] have provided several non-trivial examples of such mixed-state SPTs by showing that they possess non-trivial correlation functions, and/or cannot be purified to a short-ranged entangled (SRE) pure state. Here we will focus on examples of such states that are based on cluster states in various dimensions, and using locality/Lieb-Robinson bound [36; 37; 38], show that the corresponding mixed states cannot be written as a convex sum of symmetric, pure states. For 1d cluster state, we also provide an alternative proof of non-separability by using the result from Ref.[39] that in one-dimension if a state has an average \(Z_{2}\) symmetry, and its connected correlation functions are short-ranged, then the corresponding 'order parameter' and the 'disorder parameter' can't be both zero or non-zero at the same time.
Next, in Sec.V we consider fermionic chiral states subjected to local decoherence. We primarily focus on the ground state of a 2d \(p_{x}+ip_{y}\) superconductor (\(p+ip\) SC) as our initial state (we expect integer quantum Hall states to have qualitative similar behavior). We first consider subjecting this pure state to a finite-depth channel with Kraus operators that are linear in fermion creation/annihilation operators, so that the decoherence breaks the fermion parity symmetry. In the pure state classification of topological superconductors, fermion parity is precisely the symmetry responsible for the non-trivial topological character of the \(p+ip\) SC [40; 41]. Therefore, it is natural to wonder about the fate of the mixed state obtained by breaking this symmetry from exact down to average. One potential path to make progress on this problem is to map the mixed state to a pure state in the doubled Hilbert-space using the Choi-Jamiolkowski (C-J) map [42; 43] (we will call such a state the 'double state', similar to the nomenclature in Ref.[20]). There are interesting subtleties in applying the C-J map to fermionic Kraus operators that we clarify. Following the ideas in Refs.[18; 20; 22; 29], one may then map the double state to a 1+1-D theory of counter-propagating free CFTs coupled via a fermion bilinear term, which is clearly relevant and gaps out the edge states in these doubled picture. However, a short-depth channel cannot qualitatively change the expectation value of state-independent operators (i.e. \(\mathrm{tr}(\rho O)\) where \(O\) is independent of \(\rho\)) [18; 19], and it is not obvious what does the gapping of edge modes imply for the actual mixed state. We conjecture that the physical implication of the gapping of the edge states in the doubled formulation is that the actual mixed state can now be expressed as a convex sum of SRE states with zero Chern number, which is equivalent to the statement that they can be obtained as a Slater determinant of Wannier states, unlike the pure \(p+ip\) state where such a representation is not possible [44; 45; 46]. Therefore, the transition from the pure state to the mixed state can be thought of as a 'Wannierability transition'. We consider an explicit ansatz of such a decomposition, and provide numerical support of our conjecture by calculating the entanglement spectrum and modular commutator of the pure states whose convex sum corresponds to the decohered density matrix.
A more interesting channel that acts on the 2d \(p+ip\) SC corresponds to Kraus operators that are _bilinear_ in fermion creation/annihilation operators. To make progress on this problem, we use the C-J map to obtain a field theoretic description for this problem in terms of two counter-propagating chiral Majorana CFTs interacting via a four-fermion interaction, where the strength of the interaction is related to the strength of the interacting decohering channel. This theory admits a phase transition at a critical interaction strength in the super-symmeteric tricritical Ising universality class, which can be thought of as corresponding to spontaneous breaking of the fermion parity. Although we don't have an understanding of this transition directly in terms of the mixed state in the non-doubled (i.e. original) Hilbert space, it seems reasonable to conjecture that at weak decoherence, the density matrix can not be expressed as a convex sum of area-law entangled non-chiral states, while at strong decoherence, it is most naturally expressible as a convex sum of states with GHZ like character that originates from the aforementioned spontaneous breaking of the fermion parity.
Incidentally, the kind of argument we consider to rule out sym-SRE mixed states in the context of symmetry broken phases or SPT phases also finds an application in an exotic separability transition where symmetry plays no role. In particular, we consider separability aspects of Gibbs states of Hamiltonians that satisfy the 'no low-energy trivial state' (NLTS) condition introduced by Freedman and Hastings in Ref.[47]. Colloquially, if a Hamiltonian satisfies the NLTS condition, then any pure state with energy density less than a critical non-zero threshold cannot be prepared by a constant depth circuit. Recently, Ref.[48] showed that the 'good LDPC code' constructed in Ref.[49] satisfies the NLTS condition (we note that 'good LDPC codes' [50; 51; 49] have the remarkable property that both the code distance and the number of logical qubits scale linearly with the number of physical qubits). Ref.[48] already showed that the NLTS condition holds also for mixed states, if one defines the circuit depth of a mixed state as the minimum depth of a unitary needed to prepare it by acting on system\(\otimes\)ancillae, both initially in a product state, where the ancillae are traced out afterwards [52]. Under such a definition of a non-trivial mixed state (namely, a mixed state that can not be prepared by a constant depth circuit under the aforementioned protocol), even mixed states with long-range _classical correlations_ (e.g. the Gibbs state of the 3d classical Ising model) would be considered non-trivial. In contrast, under our definition of a non-trivial mixed state, such classical states will be trivial since they can be written as a convex sum of SRE states. Therefore we ask: assuming that one defines a trivial (non-trivial) mixed state as one which can (can't) be expressed as a convex sum of SRE states, is the Gibbs state of a Hamiltonian that satisfies the NLTS property non-trivial at a low but non-zero temperature? Under reasonable assumptions, in Sec.VI we provide a short argument that this is indeed the case. This implies that one should expect a non-zero temperature separability transition in such Gibbs states.
In Sec.VII we briefly discuss connections between separability criteria and other measures of the complexity of a mixed state such as the ability to purify a mixed state to an SRE pure state, entanglement of the doubled state using C-J map, and strange correlators.
Finally, in Sec.VIII we summarize our results and discuss a few open questions.
For convenience, below we list some of the acronyms that appear frequently in this work:
* SRE: Short-range entangled.
* LRE: Long-range entangled.
* CDA: Convex decomposition ansatz (for a density matrix).
* RBIM: Random bond Ising model.
* SPT: Symmetry protected topological.
## II Separability criteria with and without symmetry
Motivated from Werner and Hastings [1; 2], we call a mixed state \(\rho\) short-range entangled (SRE) if and only if it can be decomposed as a convex sum of pure states
\[\rho=\sum_{m}p_{m}|\psi_{m}\rangle\langle\psi_{m}|, \tag{1}\]
where each \(|\psi_{m}\rangle\) is short-range entangled (SRE), i.e., it can be prepared by applying a constant-depth local unitary circuit to some product state. The physical motivation for this definition is rather transparent: if a mixed state can be expressed as Eq.(1), only then it can be prepared using an ensemble of unitary-circuits (acting on the Hilbert space of \(\rho\)) whose depth does not scale with the system size. We note that this definition of an SRE mixed state has been employed to understand phase transitions in systems with intrinsic topological order subjected to thermal or local decoherence [53; 17].
One can generalize the notion of an SRE mixed state in the presence of a symmetry. Specifically, we say that a mixed state \(\rho\) satisfying \(U(g)\rho U^{\dagger}(g)=\rho,\ \forall g\in G\) is a'symmetric SRE' (sym-SRE in short) if and only if one can decompose it as a convex sum of pure states, where each of these pure states can be prepared by applying a finite-depth quantum circuit made of local gates that all commute with \(U\), to a symmetric product state.
Several comments follow.
1. The 'only if' clause in our definition for a (sym-) SRE state is a bit subtle. For example, consider a density matrix where there exists no decomposition that satisfies Eq.(1), but there exists a decomposition \(\rho=\sum\limits_{m,|\phi_{m}\rangle\in\text{SRE}}p_{m}|\psi_{m}\rangle \langle\psi_{m}|+\sum\limits_{m,|\phi_{m}\rangle\notin\text{SRE}}q_{m}|\phi_ {m}\rangle\langle\phi_{m}|\) such that the relative weight of the non-SRE states is zero in thermodynamic limit (i.e. \(\sum_{m}q_{m}/(\sum_{m}\left(p_{m}+q_{m}\right)\to 0\) in the thermodynamic limit). In this case, it might seem reasonable to regard \(\rho\) as SRE. One may also define an average circuit complexity of a density matrix as \(\langle\mathcal{C}\rangle=\inf\{\sum_{m}p_{m}\mathcal{C}(\psi_{m})\}\), where \(\mathcal{C}(|\psi_{m}\rangle)\) is the minimum depth of a circuit composed of local gates to prepare the state \(|\psi_{m}\rangle\) and the infimum is taken over all possible decompositions of the mixed state \(\rho\). One may then consider calling a mixed state \(\rho\) as SRE if and only if \(\langle\mathcal{C}\rangle\) does not scale with the system size. But even then, there may be special cases where the average behavior is not representative of a typical behavior. We will not dwell on this subtlety further at this point, and use physical intuition to quantify the separability of a density matrix, were we to encounter such a situation.
2. Ref.[2] also introduced a seemingly different definition of an SRE mixed state: Consider a 'classical' state \(\rho_{cl}\propto e^{-H_{cl}}\), where \(H_{cl}\) is a Hamiltonian composed of terms which are all diagonal in a product basis, and which acts on an enlarged Hilbert space \(a\otimes s\) where \(s\) denotes the system of interest and \(a\) denotes ancillae. Then a mixed state \(\rho\) may be
regarded as SRE if it can be obtained from \(\rho_{cl}\) by applying a finite-depth unitary on \(s\otimes a\), followed by tracing out \(a\). That is, one may consider \(\rho\) as SRE if \[\rho=\text{tr}_{a}\left(U^{\dagger}e^{-H_{cl}}U/Z\right)\] (2) where \(U\) is a finite-depth circuit and \(Z=\text{tr}\left(e^{-H_{cl}}\right)\). We are unable to show that the definition in Eq.(1) is equivalent to Eq.(2). Although we will primarily use the former definition (Eq.(1)), in Sec.VII we will briefly discuss potential connections between the two definitions, and also relation with other diagnostics of mixed-state entanglement.
3. The symmetry we consider is called weak symmetry (average symmetry) in Ref.[28] (Ref.[26]), which highlights its difference with the stronger symmetry \(U(g)\rho=\rho U(g)=e^{i\theta(g)}\rho,\forall g\in G\), termed strong symmetry (exact symmetry) in Ref.[28] (Ref.[26]). Physically, exact symmetry enforces the constraint that the density matrix _must_ be written as an incoherent sum of pure states, where each of them is an eigenstate of \(U(g)\) with the same eigenvalue \(e^{i\theta(g)}\). On the other hand, while the mixed state \(\rho\) with only average symmetry can be written as a convex sum of symmetric pure states having different charge under \(G\), one may as well express \(\rho\) as a convex sum of non-symmetric pure states. Therefore, our requirement that each of the pure states respects the symmetry puts a further constraint on a mixed state with only average symmetry. On that note, Ref. [54] defined a symmetric-SRE state for a symmetry \(U\) as one which satisfies Eq.(2) where \(e^{-H_{cl}}\) is replaced by \(P_{\theta(g)}e^{-H_{cl}}\) where \(P_{\theta(g)}\) is a projector onto a given symmetry charge \(\theta(g)\). Therefore, in this definition one is always working with a density matrix that has an _exact_ symmetry. As already mentioned, we will instead only impose the average symmetry in our definition of a sym-SRE state (of course, there may be special quantum channels that happen to preserve an exact symmetry).
4. An alternative definition of an SRE mixed state was considered in Refs.[26; 27; 52] whereby a mixed density matrix is considered SRE if it can be obtained from a _pure_ product state in a system\(\otimes\)ancillae Hilbert space via a finite-depth unitary followed by tracing out ancillae. In contrast, as already mentioned above in comment #2, Ref.[2] defines a mixed density matrix as SRE if it can be obtained from the 'classical mixed state' \(\rho_{cl}\propto e^{-H_{cl}}\) of system\(\otimes\)ancillae via a finite-depth local quantum channel. Therefore, a mixed state can be trivial/SRE using the definition of Ref.[2] while remaining non-trivial/LRE using the definition of Refs.[26; 27; 52]. The physical distinction between these two definitions is most apparent when one considers a mixed state for qubits of the form \(\rho=\frac{1}{2}\left(|\uparrow\rangle\langle\uparrow|+|\downarrow\rangle \langle\downarrow|\right)\) where \(|\uparrow\rangle=\prod_{i}|\uparrow\rangle_{i}\) and \(|\downarrow\rangle=\prod_{i}|\downarrow\rangle_{i}\). This state is clearly separable (unentangled). However, any short-depth purification of this state must be long-ranged entangled. This is because \(\text{tr}(\rho Z_{i}Z_{j})-\text{tr}(\rho Z_{i})\,\text{tr}(\rho Z_{j})\) is non-zero and the purified state can't change this correlation function due to the Lieb-Robinson bound [36; 37] (this is also related to the fact the entanglement of purification [55] is sensitive to both quantum and classical correlations, and therefore is not a good mixed-state entanglement measure). Thus the aforementioned \(\rho\) will be SRE using definition of Ref.[2], and LRE using the definition of Ref.[26; 27; 52]. Of course, it will also be SRE via Eq.(1), which is the definition we will use throughout this paper.
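To make the contrast drawn in comment 4 concrete, the minimal NumPy sketch below (with an arbitrary, illustrative number of qubits) verifies that the state \(\rho=\frac{1}{2}\left(|\uparrow\rangle\langle\uparrow|+|\downarrow\rangle\langle\downarrow|\right)\) is an explicit convex sum of product states and nevertheless carries long-range connected \(ZZ\) correlations, which is the feature that obstructs any short-depth purification.

```python
import numpy as np

# A minimal check of the example above: rho = (|up...up><up...up| +
# |down...down><down...down|)/2 is an explicit convex sum of product states
# (hence SRE by Eq. (1)), yet it has long-range connected Z_i Z_j correlations,
# which is why any finite-depth purification of it must be LRE.
# The number of qubits is an arbitrary illustrative choice.
n = 4
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def product_state(single):
    psi = np.array([1.0])
    for _ in range(n):
        psi = np.kron(psi, single)
    return psi

def site_op(op, i):
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

Z = np.diag([1.0, -1.0])
ket_up, ket_dn = product_state(up), product_state(down)
rho = 0.5 * np.outer(ket_up, ket_up) + 0.5 * np.outer(ket_dn, ket_dn)

Zi, Zj = site_op(Z, 0), site_op(Z, n - 1)
connected = np.trace(rho @ Zi @ Zj) - np.trace(rho @ Zi) * np.trace(rho @ Zj)
print(connected)   # = 1: long-range classical correlation despite separability
```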
## III An illustrative example: Separability transition in the Gibbs state of the 2d quantum Ising model
Let us consider an example to illustrate the difference between an SRE mixed state and a sym-SRE mixed state, that will also provide one of the simplest examples of a separability transition. Consider the density matrix \(\rho\) for qubits (i.e. objects transforming in the spin-\(1/2\) representation of \(SU(2)\)) given by \(\rho(\beta)=e^{-\beta H}/Z\) where \(H\) is a local Hamiltonian that satisfies \(U^{\dagger}HU=H\) with \(U=\prod_{i}X_{i}\) being the generator of the Ising symmetry, and \(Z=\text{tr}\,e^{-\beta H}\) is the partition function. Let us further assume that \(\rho(\beta)\) exhibits spontaneous symmetry breaking (SSB) for \(\beta>\beta_{c}\) where \(0<\beta_{c}<\infty\) (for a range of other parameters that specify the Hamiltonian). For concreteness, one may choose \(H\) as the nearest neighbor transverse-field Ising model on the square lattice, i.e., \(H=-\sum_{\langle i,j\rangle}Z_{i}Z_{j}-h\sum_{i}X_{i}\) although the only aspect that will matter in the following discussion is that \(H\) is local with a zero-form Ising symmetry, and the order parameter in the symmetry breaking phase is a real scalar (e.g. one may as well consider a transverse-field Ising model on a cubic lattice). Therefore, for a range of the transverse-field \(h\) and \(\beta>\beta_{c}\) (where \(\beta_{c}\) depends on \(h\)), the two-point correlation function \(\text{tr}\left(\rho Z_{i}Z_{j}\right)\) is non-zero for \(|i-j|\rightarrow\infty\). We will argue that \(\rho\) is SRE for all non-zero temperatures, while it is sym-SRE only for \(\beta<\beta_{c}\). Partial support for \(\rho\) being an SRE at all non-zero temperatures was provided in Refs.[23; 24; 25] and we will argue below for an explicit decomposition of \(\rho\) in terms of SRE states. The statement that \(\rho\) is not sym-SRE for \(\beta\geq\beta_{c}\) was also hinted in [54], and intuitively follows from the fact that for \(\beta>\beta_{c}\), SSB implies that if one decomposes \(\rho\) as a convex sum of symmetric, pure states, those pure states must have GHZ-like entanglement. Let us first consider a rigorous argument for this statement which, upto small
modifications, essentially follow the one in Ref.[21] for a closely related problem of non-triviality of a density matrix with an exact symmetry and long-range order.
To show that for \(\beta>\beta_{c}\), \(\rho\) can't be a sym-SRE state, let us first decompose \(\rho\) as \(\rho=\rho_{+}+\rho_{-}\) where \(\rho_{\pm}=(\frac{1\pm U}{2})\rho\) are the projections of \(\rho\) onto even and odd charge of the Ising symmetry. \(\rho_{+}\) and \(\rho_{-}\) are valid density matrices with an exact Ising symmetry, that is, they satisfy, \(U\rho_{\pm}=\pm\rho_{\pm}\). Now let us make the assumption that for \(\beta>\beta_{c}\), \(\rho\) is a sym-SRE state. We will show that this assumption leads to a contradiction. Therefore, we write \(\rho_{\pm}=\sum_{\alpha}p_{\alpha,\pm}|\psi_{\alpha,\pm}\rangle\langle\psi_{ \alpha,\pm}|\) where \(p_{\alpha,\pm}\) are positive numbers, and \(|\psi_{\alpha,\pm}\rangle\) are SRE states \(\forall\alpha\) that satisfy \(U|\psi_{\alpha,\pm}\rangle=\pm|\psi_{\alpha,\pm}\rangle\). Since \(U\) anti-commutes with \(Z_{i}\), \(\langle\psi_{\alpha,\pm}|Z_{i}|\psi_{\alpha,\pm}\rangle=0\). Further, since \(|\psi_{\alpha,\pm}\rangle\) are all SRE states, correlation functions of all local operators decay exponentially (notably, we assume that the associated correlation length is bounded by a _system-size independent_ constant for all \(|\psi_{\alpha,\pm}\rangle\)), and therefore, \(\langle\psi_{\alpha,\pm}|Z_{j}Z_{k}|\psi_{\alpha,\pm}\rangle-\langle\psi_{ \alpha,\pm}|Z_{j}|\psi_{\alpha,\pm}\rangle\langle\psi_{\alpha,\pm}|Z_{k}|\psi _{\alpha,\pm}\rangle=\langle\psi_{\alpha,\pm}|Z_{j}Z_{k}|\psi_{\alpha,\pm}\rangle\) vanishes as \(|j-k|\rightarrow\infty\). However, this leads to a contradiction, because this implies that \(\operatorname{tr}\left(\rho Z_{j}Z_{k}\right)=\sum_{\pm}\sum_{\alpha}p_{\alpha, \pm}\langle\psi_{\alpha,\pm}|Z_{j}Z_{k}|\psi_{\alpha,\pm}\rangle\) itself vanishes, which we know can't be true since as mentioned above, for \(\beta>\beta_{c}\), the system is in an SSB phase with long-range order. Therefore, our assumption that \(\rho\) is a sym-SRE state for \(\beta>\beta_{c}\) must be incorrect. The same conclusion also holds for \(\beta=\beta_{c}\) since the correlations at the critical point decay as a power-law.
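The algebraic step underlying this argument, namely that every symmetric pure state has a vanishing order parameter, so that a symmetric decomposition must place the long-range order inside the \(Z_{j}Z_{k}\) correlations of its constituent states, can be checked explicitly on a small system. The sketch below does so for a short transverse-field Ising chain at low temperature; the chain geometry, the field \(h\), and \(\beta\) are arbitrary illustrative choices, and since a small 1d system has no genuine finite-temperature transition this is a check of the mechanism rather than of the thermodynamic-limit statement.

```python
import numpy as np
from scipy.linalg import expm

# Small-system illustration: every symmetric pure state has <Z_i> = 0, so any
# decomposition of rho into symmetric states must carry the long-range order
# entirely in the <Z_j Z_k> correlations of its constituent states.
N, h, beta = 6, 0.3, 3.0
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def site_op(op, i):
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, op if j == i else I2)
    return out

H = -sum(site_op(Z, i) @ site_op(Z, (i + 1) % N) for i in range(N))
H = H - h * sum(site_op(X, i) for i in range(N))
rho = expm(-beta * H)
rho /= np.trace(rho)

U = np.eye(2 ** N)
for i in range(N):
    U = U @ site_op(X, i)            # Ising symmetry U = prod_i X_i

for sign in (+1, -1):                # project rho onto the two symmetry sectors
    rho_s = 0.5 * (np.eye(2 ** N) + sign * U) @ rho
    vals, vecs = np.linalg.eigh(0.5 * (rho_s + rho_s.T))
    for k in np.where(vals > 1e-10)[0]:
        v = vecs[:, k]               # a symmetric pure state in this sector
        assert abs(v @ site_op(Z, 0) @ v) < 1e-8   # order parameter vanishes

# ... yet the Gibbs state itself has sizable long-range ZZ correlations:
print(np.trace(rho @ site_op(Z, 0) @ site_op(Z, 3)).real)
```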
As mentioned in the introduction, our general approach would be to first look for general constraints that lead to a mixed state being necessarily non-trivial. If we are unable to find such a constraint, we will attempt to find an explicit decomposition of the density matrix as a convex sum of SRE states. For example, above, we noted that \(\rho\) cannot be a sym-SRE state for \(\beta\geq\beta_{c}\), and we also claimed that \(\rho\) is an SRE state for all non-zero temperatures. Let us therefore try to find an explicit decomposition of \(\rho\) as a convex sum of SRE pure states for any non-zero temperature, and as a convex sum of symmetric, pure SRE states for \(\beta<\beta_{c}\). The key player in our argument will be a particular convex decomposition ansatz (CDA in short) that is motivated from "minimally entangled typical thermal states" (METTS) construction introduced in Ref.[56], and which was employed in Ref.[53] to show that the Gibbs state of 2d and 3d toric code is SRE for all non-zero temperatures. Note that despite the nomenclature, METTS construction as introduced in [56] does not involve minimization of entanglement over all possible decompositions, and is simply an ansatz that is physically motivated (which is why we prefer the nomenclature CDA over METTS for our discussion).
First, let us specialize to zero transverse field. In this case, \(\rho\) is clearly an SRE state at any temperature since \(\rho\propto\sum_{m}e^{-\beta E_{m}}|z_{m}\rangle\langle z_{m}|\) where \(|z_{m}\rangle\) denotes a product state in the \(Z\)-basis and \(E_{m}=\langle z_{m}|H|z_{m}\rangle\). To obtain a symmetric convex decomposition, we write:
\[\rho=\frac{\sum_{m}e^{-\beta H/2}|x_{m}\rangle\langle x_{m}|e^{-\beta H/2}}{Z} =\sum_{m}p_{m}|\psi_{m}\rangle\langle\psi_{m}| \tag{3}\]
where the set \(\{|x_{m}\rangle\}\) corresponds to the complete set of states in the \(X\) basis, \(|\psi_{m}\rangle\propto e^{-\beta H/2}|x_{m}\rangle\) and \(p_{m}\propto\langle\psi_{m}|\psi_{m}\rangle\). The states \(|\psi_{m}\rangle\) are clearly symmetric under the Ising symmetry, and their symmetry charge (\(=\pm 1\)) is determined by the parity of the number of sites in the product state \(|x_{m}\rangle\) where spins point along the negative-\(x\) direction. We will now argue that the states \(|\psi_{m}\rangle\) are SRE for \(\beta<\beta_{c}\) and LRE for \(\beta\geq\beta_{c}\). To see this, we first consider the "partition function with respect to \(|\psi_{m}\rangle\)" defined as \(\mathcal{Z}_{m}=\langle\psi_{m}|\psi_{m}\rangle\) and study its analyticity as a function of \(\beta\). In this specific example, since transverse field is set to zero, one finds that for all \(m\), \(\mathcal{Z}_{m}\) is simply proportional to the partition function of the 2d classical Ising model at inverse temperature \(\beta\), and therefore is non-analytic across the phase transition. Similarly, the two-point correlation function \(\langle\psi_{m}|Z_{i}Z_{j}|\psi_{m}\rangle/\langle\psi_{m}|\psi_{m}\rangle\) is just the two-point spin-spin correlation function in the 2d classical Ising model, which is long-ranged for \(\beta\geq\beta_{c}\) and exponentially decaying for \(\beta<\beta_{c}\). These observations strongly indicate that \(|\psi_{m}\rangle\) is SRE (and correspondingly, \(\rho\) sym-SRE) if and only if \(\beta<\beta_{c}\). Note that the states \(|\psi_{m}\rangle\) are expected to be area-law entangled for all \(\beta\). This is because one may represent the imaginary time evolution \(e^{-\beta H}|m\rangle\) as a tensor network of depth \(\beta\) acting on \(|m\rangle\) (which is a product state), which can only generate an area-law worth of entanglement. Further, even the state at \(\beta=\infty\) is area-law entangled (= the ground state of \(H\)). Therefore, short-range correlations are strongly suggestive of short-range entanglement.
Now, let's consider non-zero transverse field. To argue that \(\rho\) is SRE for any non-zero temperature, we again decompose it as \(\rho=\sum_{m}p_{m}|\phi_{m}\rangle\langle\phi_{m}|\) where \(|\phi_{m}\rangle\propto e^{-\beta H/2}|z_{m}\rangle\). The corresponding \(\mathcal{Z}_{m}=\langle\phi_{m}|\phi_{m}\rangle\) can now be expressed in the continuum limit as an imaginary-time path integral \(\mathcal{Z}_{m}\sim\int_{\phi(\tau=0)=\phi(\tau=\beta)=\phi_{0}}D\phi\ e^{-S}\) where \(S=\sum_{n}\int_{k_{x},k_{y}}|\phi(k_{x},k_{y},n)|^{2}(k_{x}^{2}+k_{y}^{2}+\omega_ {n}^{2})+\int_{\tau=0}^{\beta}\int_{x,y}\left(r|\phi|^{2}+u|\phi|^{4}\right)\), \(\omega_{n}=2\pi n/\beta\) are the Matsubara frequencies, and the Dirichlet boundary conditions \(\phi(x,y,\tau=0)=\phi(x,y,\tau=\beta)=\phi_{0}(x,y)\) are imposed by the 'initial' state \(z_{m}\sim\phi_{0}(x,y)\). Since \(\beta\neq\infty\), the discrete sum over the Matsubara frequencies will be dominated by \(\omega_{n}=0\), which implies that the fluctuations of \(\phi\) will be essentially completely suppressed at all non-zero temperatures (including at the finite temperature critical point which corresponds to renormalized \(r=0\)), since \(n=0\) corresponds to space-time configurations that are translationally invariant along the imaginary-time-direction, and the Dirichlet boundary conditions imply that there is just one such configuration, namely, \(\phi(x,y,\tau)=\phi_{0}(x,y)\). Therefore, we expect that \(\mathcal{Z}_{m}\) will not exhibit singularity across the finite temperature critical point, which indicates that the states \(|\phi_{m}\rangle\) are SRE.
To argue that \(\rho\) is sym-SRE for \(\beta<\beta_{c}\), we now decompose \(\rho\) as \(\rho=\sum_{m}p_{m}|\psi_{m}\rangle\langle\psi_{m}|\) where \(|\psi_{m}\rangle\propto e^{-\beta H/2}|x_{m}\rangle\). The corresponding \(\mathcal{Z}_{m}=\langle\psi_{m}|\psi_{m}\rangle\) can again be expressed in the continuum limit as an imaginary-time path integral \(\mathcal{Z}_{m}\sim\int D\phi\;e^{-S}\) where \(S=\sum_{n}\int_{k_{x},k_{y}}|\phi(k_{x},k_{y},n)|^{2}(k_{x}^{2}+k_{y}^{2}+ \omega_{n}^{2})+\int_{\tau=0}^{\beta}\int_{x,y}\big{(}r|\phi|^{2}+u|\phi|^{4} \big{)}\). Note the now the fields at the two boundaries \(\tau=0,\beta\) are being integrated over all possible configurations, precisely because the initial state is a product state in the \(X\) basis. Again, the path integral will be dominated by \(\omega_{n}=0\) which only implies that the dominant contribution comes from configurations \(\phi(\tau,x,y)=\phi(x,y)\). Therefore, unlike the aforementioned case when the CDA states corresponded to \(e^{-\beta H/2}|z_{m}\rangle\), here dominant contribution to \(\mathcal{Z}_{m}\) precisely corresponds to the partition function of the 2d classical Ising model, which is in the paramagnetic phase for \(\beta>\beta_{c}\). The correspondence with 2d classical Ising model makes physical sense since the universality class of the phase transition at any non-zero temperature is indeed that of the 2d classical Ising model. Therefore, we expect that the states \(|\psi_{m}\rangle\propto e^{-\beta H/2}|x_{m}\rangle\) are SRE for \(\beta<\beta_{c}\) and LRE for \(\beta\geq\beta_{c}\). Correspondingly, we expect that the Gibbs state is sym-SRE (sym-LRE) for \(\beta<\beta_{c}\) (\(\beta>\beta_{c}\)).
To summarize, we provided arguments that the Gibbs state of a transverse field Ising model is an SRE state at any non-zero temperature, and a sym-SRE state only for \(\beta<\beta_{c}\). Therefore, we expect that it undergoes a separability transition as a function of temperature if one is only allowed to expand the density matrix as a convex sum of symmetric states. We expect similar statements for other models that exhibit a finite temperature zero-form symmetry breaking phase transition. In the following sections, we will employ broadly similar logic as in this example, with primary focus on topological phases of matter subjected to local decoherence. Specifically, we write \(\rho=\Gamma\Gamma^{\dagger}\) and employ the following CDA:
\[\rho=\sum_{m}\Gamma|m\rangle\langle m|\Gamma^{\dagger}=\sum_{m}| \psi_{m}\rangle\langle\psi_{m}|=\sum_{m}p_{m}|\tilde{\psi}_{m}\rangle\langle \tilde{\psi}_{m}|, \tag{4}\]
where \(|\psi_{m}\rangle=\Gamma|m\rangle\), \(p_{m}=\langle m|\Gamma^{\dagger}\Gamma|m\rangle=\langle\psi_{m}|\psi_{m}\rangle\), and \(|\tilde{\psi}_{m}\rangle=|\psi_{m}\rangle/\sqrt{\langle\psi_{m}|\psi_{m} \rangle}\) are normalized versions of \(|\psi_{m}\rangle\). We note that here \(\Gamma\) is not unique (note that \(\Gamma\) is not restricted to be a square matrix, see e.g. Ref.[17]), and CDA in Eq.(3) corresponds to choosing \(\Gamma=\rho^{1/2}\) for the Gibbs state \(\rho\). We will sometimes call states \(\{|\psi_{m}\rangle\}\) that enter a particular CDA as 'CDA states'.
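As a minimal numerical check of the CDA in Eqs. (3)-(4), the sketch below constructs the states \(|\psi_{m}\rangle=\rho^{1/2}|x_{m}\rangle\) for a small transverse-field Ising chain (used in place of the 2d model purely for tractability; all parameter values are illustrative), verifies that each of them carries a definite Ising charge, and confirms that their convex sum reproduces the Gibbs state.

```python
import numpy as np
from scipy.linalg import expm

# Minimal numerical check of the CDA in Eqs. (3)-(4): |psi_m> = rho^{1/2}|x_m>
# with |x_m> the product states in the Pauli-X basis. A small 1d transverse-field
# Ising chain is used purely for tractability; N, h and beta are illustrative.
N, h, beta = 4, 0.5, 1.2
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def site_op(op, i):
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, op if j == i else I2)
    return out

H = -sum(site_op(Z, i) @ site_op(Z, (i + 1) % N) for i in range(N))
H = H - h * sum(site_op(X, i) for i in range(N))
rho_un = expm(-beta * H)
Zpart = np.trace(rho_un)
rho = rho_un / Zpart
Gamma = expm(-0.5 * beta * H) / np.sqrt(Zpart)     # Gamma = rho^{1/2}

plus = np.array([1.0, 1.0]) / np.sqrt(2)           # X = +1 eigenstate
minus = np.array([1.0, -1.0]) / np.sqrt(2)         # X = -1 eigenstate
U = np.eye(2 ** N)
for i in range(N):
    U = U @ site_op(X, i)                          # Ising symmetry

recon = np.zeros_like(rho)
for m in range(2 ** N):
    xm = np.array([1.0])
    for bit in format(m, f"0{N}b"):
        xm = np.kron(xm, minus if bit == "1" else plus)
    psi = Gamma @ xm                               # CDA state |psi_m>
    recon += np.outer(psi, psi)
    charge = (psi @ U @ psi) / (psi @ psi)
    assert abs(abs(charge) - 1.0) < 1e-8           # definite Ising charge

print(np.max(np.abs(recon - rho)))                 # zero up to machine precision
```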
## IV Separability transitions in SPT states
The fundamental property of a non-trivial SPT phase is that it cannot be prepared using a short-depth circuit consisting of local, symmetric, unitary gates [9; 10; 11; 12]. Therefore, it is natural to ask: if an SPT phase is subjected to local decoherence, is the resulting mixed state sym-SRE, i.e., can it be expressed as a convex sum of symmetric, SRE pure states? This is clearly a very challenging question for many-body mixed states since to our knowledge, there does not exist an easily calculable measure of mixed-state entanglement that is non-zero if and only if the mixed state is unentangled [57] (if such a measure did exist, then it would be useful to study its universal, long-distance component, similar to topological part of negativity [58; 19; 53]). As already hinted in the introduction, our general scheme will be to first seek sufficient conditions that make a given mixed state sym-LRE (i.e. not sym-SRE). We will do this by decomposing the decohered state into its distinct symmetry sectors as \(\rho=\sum_{Q}\rho_{Q}\), with \(\rho_{Q}\) the projection of the density matrix onto symmetry charge \(Q\), and then examining whether the assumption of each \(\rho_{Q}\) being an SRE leads to a contradiction. If we are unable to find an obvious contradiction, we will then attempt to use the decomposition outlined in Eq.(4) to express \(\rho\) as a convex sum of sym-SRE states. In either of these steps, we will exploit the connection between local and thermal decoherence for cluster states that was briefly mentioned in Ref.[17], and which is described in the next subsection in detail.
### A relation between local and thermal decoherence
Systems with intrinsic topological order typically behave rather differently when they are coupled to a thermal bath, compared to when they are subjected to decoherence induced by a short-depth quantum channel. For example, when 2d and 3d toric codes are embedded in a thermal bath, so that the mixed state is described by a Gibbs state, the topological order is lost at any non-zero temperature [2; 59; 60; 53]. In contrast, when 2d or 3d toric codes are subjected to local decoherence, then the error-threshold theorems [61; 62; 63; 64; 65; 66] imply that the mixed-state topological order is stable upto a non-zero decoherence rate [17; 18; 19; 20; 59; 67]. Given this, it is interesting to ask if there exist situations where a local short-depth channel maps a ground state to a Gibbs state. Here we show that this is indeed the case if the corresponding Hamiltonian satisfies the following properties:
(1) It can be written as a sum of local commuting terms where each of them squares to identity:
\[H=\sum_{j}h_{j},\;[h_{j},h_{k}]=0,\;h_{j}^{2}=I,\;\forall j,k. \tag{5}\]
(2) There exists a local unitary \(O_{j}\) which anticommutes (commutes) with \(h_{k}\) if \(j=k\,(j\neq k)\):
\[O_{j}h_{j}O_{j}^{\dagger} =-h_{j}, \tag{6}\] \[O_{j}h_{k}O_{j}^{\dagger} =h_{k}\;(j\neq k).\]
Specifically, denoting the total system size as \(N\), the channel \(\mathcal{E}=\mathcal{E}_{1}\circ\cdots\mathcal{E}_{N}\) with
\[\mathcal{E}_{j}[\rho]=(1-p)\rho+pO_{j}\rho O_{j}^{\dagger} \tag{7}\]
maps the ground state density matrix \(\rho_{0}\) to a Gibbs state for \(H\).
To verify the claim, we first note that Eq.(5) implies that \(\rho_{0}\) can be written as the product of the projectors on all sites \(\rho_{0}=\frac{1}{2^{N}}\prod_{j}(I-h_{j})\). Now, using Eq.(6), it is straightforward to show that \(\mathcal{E}_{j}[\rho_{0}]=\frac{1}{2^{N}}[I-(1-2p)h_{j}]\prod_{k\neq j}(I-h_{k})\). It then follows that the composition of \(\mathcal{E}_{j}\) on all sites gives
\[\mathcal{E}[\rho_{0}]=\frac{1}{2^{N}}\prod_{j}[I-(1-2p)h_{j}]. \tag{8}\]
Since \(h_{j}^{2}=I\) implies \(e^{-\beta h_{j}}=\cosh(\beta)I-\sinh(\beta)h_{j}\), one may now exponentiate Eq.(8) to obtain \(\mathcal{E}[\rho_{0}]=\frac{1}{\mathcal{Z}}e^{-\beta H}\), where \(\tanh\beta=(1-2p)\) and \(\mathcal{Z}=\operatorname{tr}\!\left(e^{-\beta H}\right)\). In Sec.VIII, we also discuss a \(\mathbb{Z}_{N}\) generalization of this construction. For the rest of the paper, the aforementioned \(\mathbb{Z}_{2}\) version will suffice. In the following we will exploit the connection between local and thermal decoherence to study decoherence-induced separability transitions for the cluster states in various dimensions (Secs.IV.1, IV.2, IV.3). We will also briefly discuss a couple of examples where the pure state is protected by a single zero-form symmetry (Sec.IV.4).
### 1d cluster state
The Hamiltonian for the 1d cluster state is
\[\begin{split} H&=-\sum_{j=1}^{N}(Z_{b,j-1}X_{a,j}Z _{b,j}+Z_{a,j}X_{b,j}Z_{a,j+1})\\ &=\sum_{j=1}^{N}h_{a,j}+h_{b,j}\end{split} \tag{9}\]
where \(a\) and \(b\) denote the two sublattices of the 1d chain, see Fig.1(a). \(H\) has a global \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry generated by
\[U_{a}=\prod_{j}X_{a,j},\ U_{b}=\prod_{j}X_{b,j}. \tag{10}\]
We assume periodic boundary conditions, so that there is a unique, symmetric ground state of \(H\) which is separated from the rest of the spectrum by a finite gap. It is obvious that \(H\) satisfies Eq.(5). To satisfy Eq. (6), we choose Kraus operators \(O_{a/b,j}=Z_{a/b,j}\). Therefore, under the composition of the channel \(\mathcal{E}_{a/b,j}[\rho]=(1-p_{a/b})\rho+p_{a/b}Z_{a/b,j}\rho Z_{a/b,j}\) on all sites, the pure state density matrix becomes
\[\begin{split}\rho(p_{a},p_{b})&=\Big{(}\frac{1}{ \mathcal{Z}_{a}}e^{-\beta_{a}\sum_{j}h_{a,j}}\Big{)}\Big{(}\frac{1}{\mathcal{Z }_{b}}e^{-\beta_{b}\sum_{j}h_{b,j}}\Big{)}\\ &=\rho_{a}(p_{a})\rho_{b}(p_{b}),\end{split} \tag{11}\]
with \(\tanh\beta_{a/b}=(1-2p_{a/b})\) and \(\mathcal{Z}_{a/b}=\operatorname{tr}\!\left(e^{-\beta_{a/b}\sum_{j}h_{a/b,j}}\right)\). In the following, we will suppress the arguments \(p_{a},p_{b}\) in \(\rho_{a}(p_{a}),\rho_{b}(p_{b})\) if there is no ambiguity. Note that \(\rho_{a}\) and \(\rho_{b}\) commute with each other. To decompose \(\rho\) as a convex sum of symmetric states, we write \(\rho=\sum_{Q_{a},Q_{b}}\rho_{Q_{a},Q_{b}}\), where each \(\rho_{Q_{a},Q_{b}}\) is an unnormalized density matrix that carries exact symmetry: \(U_{a}\,\rho_{Q_{a},Q_{b}}=(-1)^{Q_{a}}\rho_{Q_{a},Q_{b}}\), \(U_{b}\,\rho_{Q_{a},Q_{b}}=(-1)^{Q_{b}}\rho_{Q_{a},Q_{b}}\), with \(Q_{a}=0,1\) and \(Q_{b}=0,1\), so that the sum over \(Q_{a},Q_{b}\) contains four terms. The explicit expression for \(\rho_{Q_{a},Q_{b}}\) is given as: \(\rho_{Q_{a},Q_{b}}=\rho_{Q_{a}}\rho_{Q_{b}}\), where \(\rho_{Q_{a}}=\rho_{a}P_{Q_{a}}\), and \(\rho_{Q_{b}}=\rho_{b}P_{Q_{b}}\), and \(P_{Q_{a/b}}=(I+(-1)^{Q_{a/b}}U_{a/b})/2\) are projectors. Note that the probability for a given sector \((Q_{a},Q_{b})\) is given by \(\operatorname{tr}\left(\rho_{Q_{a},Q_{b}}\right)\) which can be used to obtain the normalized density matrix \(\tilde{\rho}_{Q_{a},Q_{b}}\) for a sector \((Q_{a},Q_{b})\) as \(\tilde{\rho}_{Q_{a},Q_{b}}=\rho_{Q_{a},Q_{b}}/\operatorname{tr}\left(\rho_{Q_{a },Q_{b}}\right)\).
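Eq. (11), together with the general local-decoherence-to-Gibbs-state relation derived above that it instantiates, can be verified directly on a small ring. In the NumPy sketch below, the number of unit cells and the error rates are arbitrary illustrative choices; the script builds the 1d cluster state, applies the phase-flip channels on both sublattices, and checks that the result agrees with the product of the two Gibbs factors with \(\tanh\beta_{a/b}=1-2p_{a/b}\).

```python
import numpy as np
from scipy.linalg import expm

# Small-ring check of Eq. (11): dephasing every site of the 1d cluster state
# with rates (p_a, p_b) yields the product of Gibbs factors with
# tanh(beta_{a/b}) = 1 - 2 p_{a/b}. The number of unit cells N and the error
# rates are arbitrary illustrative choices.
N, p_a, p_b = 3, 0.15, 0.30
n = 2 * N
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def site_op(op, i):
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else I2)
    return out

a = lambda j: 2 * (j % N)        # sublattice-a qubit of unit cell j
b = lambda j: 2 * (j % N) + 1    # sublattice-b qubit of unit cell j
h_a = [-site_op(Z, b(j - 1)) @ site_op(X, a(j)) @ site_op(Z, b(j)) for j in range(N)]
h_b = [-site_op(Z, a(j)) @ site_op(X, b(j)) @ site_op(Z, a(j + 1)) for j in range(N)]

rho = np.eye(2 ** n) / 2 ** n                      # ground-state projector
for h in h_a + h_b:                                # prod_j (I - h_j) / 2^n
    rho = rho @ (np.eye(2 ** n) - h)

for j in range(N):                                 # phase-flip channel on every site
    Za, Zb = site_op(Z, a(j)), site_op(Z, b(j))
    rho = (1 - p_a) * rho + p_a * Za @ rho @ Za
    rho = (1 - p_b) * rho + p_b * Zb @ rho @ Zb

beta_a, beta_b = np.arctanh(1 - 2 * p_a), np.arctanh(1 - 2 * p_b)
gibbs = expm(-beta_a * sum(h_a)) @ expm(-beta_b * sum(h_b))
gibbs /= np.trace(gibbs)
print(np.max(np.abs(rho - gibbs)))                 # zero up to machine precision
```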
To discuss whether the decohered mixed state \(\rho\) is trivial based on our definition of a sym-SRE mixed state, we start by considering the special case \(p_{a}>0,\ p_{b}=0\), i.e., the mixed state obtained by applying the aforementioned quantum channel only on sublattice \(a\). This case was studied in detail in Ref.[27] from a different perspective and is an example of an 'average-SPT' phase [26; 27; 29; 30]. In particular, it was shown in Ref.[27] that this mixed state cannot be purified to an SRE pure state using a finite-depth local quantum channel. As discussed in Sec.II, our definition of an SRE mixed state is a bit different (namely, whether a mixed state can be written as a convex sum of SRE pure states), and therefore, it is worth examining whether this state remains an LRE mixed state under our definition.
When \(p_{a}>0,\ p_{b}=0\), only the sector corresponding to \(Q_{b}=0\) survives, and in this sector, \(\rho_{Q_{a},Q_{b}}\propto\prod_{j}(I-h_{b,j})e^{-\beta_{a}\sum_{j}h_{a,j}}P_{Q_{a}}\). We will now provide two separate arguments that show that \(\rho_{Q_{a},Q_{b}}\) is a sym-LRE (i.e. not sym-SRE) mixed state when \(p_{a}>0,\ p_{b}=0\).
_First argument:_ We want to show that \(\rho_{Q_{a},Q_{b}}\propto\prod_{j}(I-h_{b,j})e^{-\beta_{a}\sum_{j}h_{a,j}}P_{Q_{a}}\) can not be written as \(\sum_{m}p_{m}|\psi_{m}\rangle\langle\psi_{m}|\) where \(|\psi_{m}\rangle\) are SRE states that can be prepared via a short-depth circuit consisting of symmetric, local gates. We utilize the result in Ref.[39], which shows that for an area-law entangled state in 1D (which we will take to be \(|\psi_{m}\rangle\)) which is symmetric under an Ising symmetry (which we will take here to be \(U_{a}=\prod_{j}X_{a,j}\)), both order and disorder parameters cannot vanish simultaneously. Note that we are assuming that \(|\psi_{m}\rangle\) has an area-law entanglement, as otherwise, it is certainly not SRE and there is nothing more to prove.
Therefore, following the results in Ref.[39], \(|\psi_{m}\rangle\) must either (a) have a non-zero order parameter corresponding to the symmetry \(U_{a}\), i.e. \(\langle\psi_{m}|\tilde{Z}_{j}\tilde{Z}_{k}|\psi_{m}\rangle\neq 0\) where \(|j-k|\gg 1\) and \(\tilde{Z}\) is an operator that is odd under \(U_{a}\), e.g. \(\tilde{Z}_{i}=Z_{a,i}\), or (b) it must have a non-zero 'disorder parameter' corresponding to the symmetry \(U_{a}\), i.e. \(\langle\psi_{m}|O_{L}\left(\prod_{l=j}^{k}X_{a,l}\right)O_{R}|\psi_{m}\rangle\neq 0\) where \(|j-k|\gg 1\), and \(O_{L},O_{R}\) are operators localized close to sites \(j\) and \(k\) respectively that are _either_ both even or both odd under \(U_{a}\). In case (a), the system has a long-range GHZ type order since the state \(|\psi_{m}\rangle\) is symmetric under \(U_{a}\). In case (b), we now argue that the system has an SPT order.
For \(\langle\psi_{m}|O_{L}\left(\prod_{l=j}^{k}X_{a,l}\right)O_{R}|\psi_{m}\rangle\) to be non-zero, the operator \(O_{L}\otimes O_{R}\) must carry no charge under the symmetry \(U_{b}\) as \(|\psi_{m}\rangle\) is an eigenstate of \(U_{b}\). Therefore, there are two disjoint possibilities for the operators \(O_{L}\) and \(O_{R}\): they are either both charged under the symmetry \(U_{b}\) or neither of them are charged under \(U_{b}\). If neither of them are charged under \(U_{b}\), then \(\langle\psi_{m}|O_{L}\left(\prod_{l=j}^{k}X_{a,l}\right)O_{R}|\psi_{m}\rangle\) must vanish. This is because \(|\psi_{m}\rangle\) is an eigenstate of the string operator \(S_{b}(l,m)\) (this follows from the fact that \(\rho_{Q_{a},Q_{b}}\propto\prod_{j}(I-h_{b,j})\)) which anticommutes with \(O_{L}\left(\prod_{l=j}^{k}X_{a,l}\right)O_{R}\) for an appropriate choice of \((l,m)\), whenever neither of \(O_{L},O_{R}\) are charged under \(U_{b}\). As a consequence, for \(\langle\psi_{m}|O_{L}\left(\prod_{l=j}^{k}X_{a,l}\right)O_{R}|\psi_{m}\rangle\) to be non-zero, \(O_{L}\) and \(O_{R}\) must both be odd under \(U_{b}\). If so, then the disorder order parameter precisely corresponds to one of the two SPT string order parameters, namely, \(S_{a}(j,k)\sim Z_{b,j-1}\left(\prod_{l=j}^{k}X_{a,l}\right)Z_{b,k+1}\) upto finite depth
Figure 1: Cluster states under decoherence in (a) 1d, (b) 2d, and (c) 3d. The first column depicts the Hamiltonian of cluster states. The second column divides the decohered mixed state as a function of error rates into several regimes that have qualitatively different behaviors. The white regions (region (iv)) in the three phase diagrams denote phases where the mixed state is ‘sym-SRE’ (‘trivial’), i.e., it is expressible as a convex sum of symmetric, short-ranged entangled pure states. In contrast, the colored regions or lines (regions (i), (ii), (iii)) denote phases where such a decomposition is not possible (‘sym-LRE’). There can be phase transitions from one kind of sym-LRE phase to a different kind of sym-LRE phase as depicted by different colors. The phase diagram is obtained by calculating objects of the form \([\langle O\rangle^{2}]=\sum_{Q}P(q)\left(\langle O\rangle_{Q}\right)^{2}\) where \(O\) corresponds to an appropriate observable that characterizes symmetry-enforced long-range entanglement, and \(P(q)\) is the probability for obtaining the symmetry charge \(q\). The \(p_{c}\approx 0.109\) in the second row corresponds to the ferromagnetic to paramagnetic phase transition in the 2d random-bond Ising model along the Nishimori line, while \(p_{c}\approx 0.029\) in third row corresponds to the critical point in the 3d random plaquette gauge model along the Nishimori line. The third column shows the phase diagram obtained by expressing \(\rho\) as a convex sum of symmetric states, where each symmetric state \(|\psi_{m}\rangle=\rho^{1/2}|m\rangle\) with \(|m\rangle\) the product state in Pauli-\(X\) basis. See main text for more details.
symmetric unitary transformation. At the same time, the other SPT string order parameter \(\langle\psi_{m}|S_{b}(j,k)|\psi_{m}\rangle\) is also non-zero (due to \(\rho_{Q_{a},Q_{b}}\propto\prod_{j}(I-h_{b,j})\)), and therefore, we arrive at the conclusion that in case (b), \(|\psi_{m}\rangle\) must possess non-trivial SPT order since string order parameters on both sublattices are non-zero. Therefore, in either case (a) or (b), \(|\psi_{m}\rangle\) cannot be prepared by a short-depth circuit composed of local gates that respect both \(U_{a}\) and \(U_{b}\), starting with a symmetric product state.
_Second argument:_ This argument is essentially the same as the one introduced in Ref.[38] to show that the circuit depth of various states with a non-trivial string order parameter cannot be a system-size independent constant due to locality/Lieb-Robinson bound [36; 37]. Again, recall that we want to show that \(\rho_{Q_{a},Q_{b}}\propto\prod_{j}(I-h_{b,j})e^{-\beta_{a}\sum_{j}h_{a,j}}P_{Q _{a}}\) can not be written as \(\sum_{m}p_{m}|\psi_{m}\rangle\langle\psi_{m}|\) where \(|\psi_{m}\rangle\) are SRE. Since \(\rho_{Q_{a},Q_{b}}\) carries an exact symmetry charge of \(U_{a},U_{b}\), so do each of the pure states \(|\psi_{m}\rangle\). As already discussed above, the expectation value of the string order parameter \(S_{b}(j,k)=\prod_{l=j}^{k}(-h_{b,l})=Z_{a,j}\left(\prod_{l=j}^{k}X_{b,l}\right) Z_{a,k+1}\) is unity with respect to \(\rho_{Q_{a},Q_{b}}\), which implies that its expectation value is also unity with respect to each of the states \(|\psi_{m}\rangle\). Let us assume that \(|\psi_{m}\rangle\) can be obtained from a symmetric product state (i.e. an eigenstate of Pauli \(X\) on all sites) which we denote as \(|x_{\mathbf{a},\mathbf{b}}\rangle=\otimes_{j}|x_{a,j},x_{b,j}\rangle\), i.e., \(|\psi_{m}\rangle=V|x_{\mathbf{a},\mathbf{b}}\rangle\) (here \(x_{a/b,j}=\pm 1\) are chosen so as to satisfy the symmetry \(U_{a/b}|\psi_{m}\rangle=(-1)^{Q_{a/b}}|\psi_{m}\rangle\)). Note that \(|x_{\mathbf{a},\mathbf{b}}\rangle\) not only satisfy the global symmetry \(U_{a/b}\) but the 'local' ones as well, i.e., \(\prod_{j\in l}X_{j}|x_{\mathbf{a},\mathbf{b}}\rangle\propto|x_{\mathbf{a}, \mathbf{b}}\rangle\) for any string \(l\). Since each end point of \(S_{b}\) is charged under \(U_{a}\) (i.e., \(U_{a}Z_{a,j/k+1}U_{a}^{\dagger}=-Z_{a,j/k+1}\)), the local symmetry of \(|x_{\mathbf{a},\mathbf{b}}\rangle\) implies \(\langle x_{\mathbf{a},\mathbf{b}}|S_{b}(j,k)|x_{\mathbf{a},\mathbf{b}}\rangle=0\). Moreover, since \(V\) is a finite-depth unitary, the operator \(V^{\dagger}S_{b}(j,k)V\) is still a string operator with each 'end point operator' \(V^{\dagger}Z_{a,j/k+1}V\) a sum of local operators (due to the locality of \(V\)) that are charged under \(U_{a}\) (due to \(V\) being a symmetric unitary, i.e., \([V,U_{a}]=[V,U_{b}]=0\)). Due to these properties, the expectation value \(\langle x_{\mathbf{a},\mathbf{b}}|V^{\dagger}S_{b}(j,k)V|x_{\mathbf{a},\mathbf{ b}}\rangle\) will be identically zero. However, \(\langle x_{\mathbf{a},\mathbf{b}}|V^{\dagger}S_{b}(j,k)V|x_{\mathbf{a},\mathbf{ b}}\rangle\) is nothing but \(\langle\psi_{m}|S_{b}(j,k)|\psi_{m}\rangle\), which is unity, as discussed above. Therefore, we arrive at a contradiction. This implies that our assumption that \(|\psi_{m}\rangle\) is a symmetric SRE state must be incorrect.
We now discuss the general case of both \(p_{a}\) and \(p_{b}\) being non-zero. Based on our discussion above, it is instructive to evaluate the string order parameter with respect to each \(\rho_{Q_{a},Q_{b}}\), i.e., \(\text{tr}\big{(}\rho_{Q_{a},Q_{b}}S_{a/b}\big{)}/\,\text{tr}(\rho_{Q_{a},Q_{b}})\). One finds (see Appendix A) that both string order parameters can be mapped to two-point correlation functions of spins in the 1d classical Ising model at non-zero temperature and hence decay exponentially with the length of the strings. This result merely implies that the corresponding mixed state \(\rho=\sum_{Q_{a},Q_{b}}\rho_{Q_{a},Q_{b}}\) doesn't satisfy the aforementioned sufficient condition for non-trivial sym-SRE, and does not guarantee that \(\rho\) must be trivial. We now use the CDA in Eq.(4) to argue that \(\rho\) is indeed sym-SRE. In particular, we choose \(\Gamma=\rho^{1/2}\) so that \(\rho=\sum_{m}\Gamma|m\rangle\langle m|\Gamma^{\dagger}=\sum_{m}|\psi_{m} \rangle\langle\psi_{m}|\) with \(|\psi_{m}\rangle\propto e^{-(\beta_{a}\sum_{j}h_{a,j}+\beta_{b}\sum_{j}h_{b,j} )/2}|m\rangle\). To ensure each \(|\psi_{m}\rangle\) respects the global \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry, we choose the set \(\{|m\rangle\}=\{|x_{\mathbf{a}},x_{\mathbf{b}}\rangle_{m}\}\). When \(\beta_{a}=\beta_{b}=0\), \(|\psi_{m}\rangle=|x_{\mathbf{a}},x_{\mathbf{b}}\rangle_{m}\) is a product state. To check whether \(|\psi_{m}\rangle\) remains SRE for any non-infinite \(\beta_{a}\) and \(\beta_{b}\), let us consider the 'partition function with respect to \(|\psi_{m}\rangle\)'
\[\mathcal{Z}_{m}(\beta_{a},\beta_{b})=\langle\psi_{m}|\psi_{m}\rangle \tag{12}\]
as a function of \(\beta\). As \(\beta_{a},\beta_{b}\) are increased from zero, if the state \(|\psi_{m}\rangle\) becomes long-range entangled, one expects that it will lead to a non-analytic behavior of \(\mathcal{Z}_{m}(\beta)\) as a function of \(\beta_{a},\beta_{b}\). The calculation for \(\mathcal{Z}_{m}(\beta)=\langle x_{\mathbf{a}},x_{\mathbf{b}}|\rho|x_{\mathbf{a }},x_{\mathbf{b}}\rangle\) is quite similar to the one for \(\text{tr}(\rho_{Q_{a},Q_{b}})\) detailed in Appendix A, and one finds that \(\mathcal{Z}_{m}(\beta)\) is proportional to the product of two partition functions for the 1d classical Ising model at inverse temperatures \(\beta_{a},\beta_{b}\). Therefore, we expect that \(|\psi_{m}(\beta)\rangle\) remains an SRE state as long as both \(\beta_{a},\beta_{b}<\infty\), which confirms our expectation that \(\rho\) is sym-SRE for non-infinite \(\beta_{a},\beta_{b}\) (i.e. \(p_{a},p_{b}>0\)).
One can also compute the string order parameters \(S_{a}(S_{b})\) for \(|\psi_{m}\rangle\) and show their equivalence to \(\langle z_{j}z_{k}\rangle_{\text{1D Ising}}\) at inverse temperature \(\beta_{a}(\beta_{b})\). Therefore, \(|\psi_{m}\rangle\) does not develop string order as long as \(\beta_{a/b}<\infty\). The triviality of \(|\psi_{m}\rangle\) is also manifested by the _non-zero_ expectation value of the disorder operator \(U_{a/b}(k,j)=\prod_{l=j}^{k}X_{a/b,l}\). For example, consider the expectation value of the disorder operator on the \(a\) sublattice: \(\langle U_{a}(k,j)\rangle_{m}=\langle\psi_{m}|U_{a}(k,j)|\psi_{m}\rangle/\langle\psi_{m}|\psi_{m}\rangle\). Using the fact that the only terms in \(e^{-(\beta_{a}\sum_{j}h_{a,j}+\beta_{b}\sum_{j}h_{b,j})/2}\) that anticommute with \(U_{a}(k,j)\) are \(h_{b,j-1}\) and \(h_{b,k}\), we find that \(\langle U_{a}(k,j)\rangle_{m}=\left(\prod_{l=j}^{k}x_{a,l}\right)\text{sech}^{2}(\beta_{b})\), which is non-vanishing except for \(\beta_{b}=\infty\). This is of course expected based on the result of Ref.[39], since \(|\psi_{m}\rangle\) does not have any GHZ type order. The result for \(U_{b}(k,j)\) is similar.
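The same small-system setup can be used to construct the CDA states \(|\psi_{m}\rangle=\rho^{1/2}|m\rangle\) directly and evaluate the string and disorder operators on them; the sketch below is purely illustrative (it again assumes the stabilizer convention stated above, and the tiny system size introduces sizable finite-size corrections to the thermodynamic-limit formulas quoted in the text).

```python
# Minimal sketch of the convex decomposition rho = sum_m |psi_m><psi_m| with
# |psi_m> = rho^{1/2}|m>, |m> a symmetric product state in the Pauli-X basis,
# for a short periodic 1d cluster chain.  Stabilizer convention is an assumption.
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
N, nq = 3, 6                                    # unit cells / qubits a_0,b_0,a_1,b_1,...
kron = lambda ops: reduce(np.kron, ops)
def op(single, site):
    ops = [I2] * nq
    ops[site] = single
    return kron(ops)
a = lambda j: 2 * (j % N)
b = lambda j: 2 * (j % N) + 1

h_a = lambda j: -op(Z, b(j - 1)) @ op(X, a(j)) @ op(Z, b(j))
h_b = lambda j: -op(Z, a(j)) @ op(X, b(j)) @ op(Z, a(j + 1))

beta_a, beta_b = 0.8, 0.5
H = beta_a * sum(h_a(j) for j in range(N)) + beta_b * sum(h_b(j) for j in range(N))
sqrt_rho = expm(-H / 2)                         # rho = e^{-H}/Z, so rho^{1/2} ~ e^{-H/2}

plus = np.array([1., 1.]) / np.sqrt(2)          # |m> = |+...+>, an X-basis product state
m = kron([plus] * nq)
psi = sqrt_rho @ m
psi /= np.linalg.norm(psi)

# string order S_b(0,1) = Z_{a,0} X_{b,0} X_{b,1} Z_{a,2} and disorder operator
# U_a(0,1) = X_{a,0} X_{a,1}; for finite beta the string order is degraded below 1
# while the disorder operator is non-zero (cf. the discussion in the text).
S_b = op(Z, a(0)) @ op(X, b(0)) @ op(X, b(1)) @ op(Z, a(2))
U_a01 = op(X, a(0)) @ op(X, a(1))
print("<S_b>      =", (psi.conj() @ S_b @ psi).real)
print("<U_a(0,1)> =", (psi.conj() @ U_a01 @ psi).real)
```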
It is also instructive to apply the aforementioned convex decomposition to the case \(\beta_{b}=\infty,\beta_{a}\neq\infty\), i.e., the above discussed case of 'average SPT order'. In this case we find that the corresponding state \(|\psi_{m}\rangle\) develops GHZ type long-range entanglement. To see this, one can rewrite \(|\psi_{m}\rangle\) as \(|\psi_{m}\rangle\sim e^{-\beta_{a}\sum_{k}h_{a,j}/2}|\chi_{m}\rangle\), where \(|\chi_{m}\rangle\sim\prod_{j}(I-h_{b,j})|m\rangle=|x_{\mathbf{b}}\rangle\otimes \prod_{j}(I-x_{b,j}Z_{a,j}Z_{a,j+1})|x_{\mathbf{a}}\rangle\) exhibits GHZ-type long-range entanglement characterized by \(|\langle\chi_{m}|Z_{a,j}Z_{a,k}|\chi_{m}\rangle|=1\). Using the fact that the only terms in \(e^{-\beta_
sector). This is just the pure state SPT.
2. \(p_{a}>0\) and \(p_{b}=0\): \(\mathrm{tr}(\rho_{Q_{a},Q_{b}}S_{a}(j,k))\) decays exponentially with \(|j-k|\) and \(\mathrm{tr}(\rho_{Q_{a},Q_{b}}S_{b}(j,k))=1\) (in the \(Q_{b}=0\) sector). This regime is sym-LRE i.e. a non-trivial mixed state, in agreement with the non-trivial 'average SPT' discussed in Ref.[27].
3. \(p_{a}=0\) and \(p_{b}>0\): this is similar to the case (ii) with \(a\leftrightarrow b\) and is again a sym-LRE state.
4. \(p_{a},p_{b}>0\): both \(\mathrm{tr}(\rho_{Q_{a},Q_{b}}S_{a}(j,k))\) and \(\mathrm{tr}(\rho_{Q_{a},Q_{b}}S_{b}(j,k))\) decay exponentially with \(|j-k|\). This is a sym-SRE state.
Based on our discussion above, we also provide one possible 'phase diagram' to express \(\rho\) as a convex sum of symmetric states using CDA states \(|\psi_{m}\rangle=\rho^{1/2}|x_{\mathbf{a}},x_{\mathbf{b}}\rangle\), as summarized in the third column of Fig.1(a). Note that the boundary of the phase diagram using the employed CDA matches the boundary of regimes (i)-(iv), and therefore, the CDA is optimal in this sense. However, it's worth noting that the decomposition we chose is just one possible choice, and the label 'GHZ' on the \(x\) and \(y\) axes in the third column of Fig.1(a) is tied to this choice. One may also choose to expand \(\rho\) as a convex sum of SPT states. Therefore, the result that is independent of any specific choice of CDA is that regime (iv) is sym-SRE, while the regimes (i), (ii) and (iii) are sym-LRE.
### 2d cluster state
The 2d cluster state Hamiltonian \(H_{\text{2d Cluster}}\) is:
\[\begin{split} H_{\text{2d Cluster}}&=-\sum_{v}X_{v} \big{(}\prod_{e\ni v}Z_{e}\big{)}-\sum_{e}X_{e}\big{(}\prod_{v\in e}Z_{v} \big{)}\\ &=\sum_{v}h_{v}+\sum_{e}h_{e}.\end{split} \tag{13}\]
Here the Hilbert space consists of qubits residing on both the vertices \(v\) and the edges \(e\) of a 2D square lattice, see Fig.1(b). The Hamiltonian has both a zero-form symmetry \(Z_{2}^{(0)}\), and a one-form symmetry \(Z_{2}^{(1)}\) with the corresponding generators
\[U^{(0)}=\prod_{v}X_{v},\ U_{p}^{(1)}=\prod_{e\in\partial p}X_{e}, \tag{14}\]
where \(p\) labels the plaquette on the lattice and \(\partial p\) is the boundary of \(p\). We assume periodic boundary conditions, so that \(H\) has a unique, symmetric, gapped ground state. Using Eqs.(5),(6), if one subjects the ground state of \(H_{\text{2d Cluster}}\) to Kraus operators \(O_{v/e}=Z_{v/e}\) with respective probabilities \(p_{v/e}\), the resulting decohered density matrix is \(\rho=\frac{1}{Z}e^{-\left(\beta_{v}\sum_{v}h_{v}^{(0)}+\beta_{e}\sum_{e}h_{e} ^{(1)}\right)}\) with \(\tanh\beta_{e/v}=(1-2p_{e/v})\).
Let us decompose \(\rho\) as a convex sum of symmetric states by writing \(\rho=\sum_{Q^{(0)},Q^{(1)}}\rho_{Q^{(0)},Q^{(1)}}\), where each \(\rho_{Q^{(0)},Q^{(1)}}\) carries the exact symmetry: \(U^{(0)}\rho_{Q^{(0)},Q^{(1)}}=(-1)^{Q^{(0)}}\rho_{Q^{(0)},Q^{(1)}}\), \(U_{p}^{(1)}\rho_{Q^{(0)},Q^{(1)}}=(-1)^{Q^{(1)}_{p}}\rho_{Q^{(0)},Q^{(1)}}\). Here, the one-form symmetry charge is labeled by the set \(Q^{(1)}=\{Q_{p}^{(1)}\}\) with \(Q_{p}^{(1)}=0,1\) defined on each plaquette \(p\). Crucially, the number of one-form symmetry sectors grows exponentially as a function of the system size, and this implies that the probability for a given sector \((Q^{(0)},Q^{(1)})\), i.e., \(\mathrm{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}\big{)}\), is exponentially small in general. It follows that even if there exists some \(\rho_{Q^{(0)},Q^{(1)}}\) that is not sym-SRE, the decohered state \(\rho\) may still be well approximated by a sym-SRE mixed state as long as the total probability corresponding to the non-trivial sectors is exponentially small. Therefore, the notion of \(\rho\) being sym-SRE must take into account the probability for each symmetry sector, and can only be made precise in a statistical sense (a similar situation arises for a certain non-optimal decomposition for the decohered toric code [17]). We will return to this point in detail below. For now, let's focus on the physical observables in each symmetry sector.
The observables that characterize the 2d cluster ground state are the expectation value of the membrane operator \(M_{S}=\prod_{v\in S}(-h_{v}^{(0)})\) with \(S\) a surface (for simplicity, we will assume that the boundary \(\partial S\) of this surface is contractible), and the string operator \(S_{C}=\prod_{e\in C}(-h_{e}^{(1)})\) with \(C\) a curve (the expectation value of either of these operators equals unity in the 2d cluster ground state). To detect whether \(\rho_{Q^{(0)},Q^{(1)}}\) is sym-SRE, i.e., it can be expanded as a convex sum of pure SRE states that each carries a definite symmetry charge \((Q^{(0)},Q^{(1)})\), it is instructive to calculate the expectation value of these operators with respect to \(\rho_{Q^{(0)},Q^{(1)}}\), i.e., \(\mathrm{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}M_{S}\big{)}/\mathrm{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}\big{)}\) and \(\mathrm{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}S_{C}\big{)}/\mathrm{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}\big{)}\). To proceed, we first compute the denominator in these expressions, i.e., \(\mathrm{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}\big{)}\). Similar to the 1d cluster state, this can be easily done by inserting the complete basis \(\{|x_{\mathbf{e},\mathbf{v}}\rangle\}\) and \(\{|z_{\mathbf{e},\mathbf{v}}\rangle\}\), where \(|x_{\mathbf{e},\mathbf{v}}\rangle=\otimes_{e,v}|x_{e},x_{v}\rangle\) and \(|z_{\mathbf{e},\mathbf{v}}\rangle=\otimes_{e,v}|z_{e},z_{v}\rangle\) denote the product states in the Pauli-\(X\) and \(Z\) bases, respectively. Following a calculation quite similar to that in the 1D cluster state, one finds that \(\mathrm{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}\big{)}\propto\sum_{x_{\mathbf{v}}\in Q^{(0)}}\mathcal{Z}_{\text{2D gauge},x_{\mathbf{v}}}\sum_{x_{\mathbf{e}}\in Q^{(1)}}\mathcal{Z}_{\text{2D Ising},x_{\mathbf{e}}}\). Here, \(\mathcal{Z}_{\text{2D gauge},x_{\mathbf{v}}}=\sum\limits_{z_{\mathbf{e}}}e^{\beta_{v}\sum_{v}x_{v}(\prod_{e\ni v}z_{e})}\) is the partition function of the 2D Ising gauge theory with the sign of the interaction on each vertex given by \(x_{\mathbf{v}}\), while \(\mathcal{Z}_{\text{2D Ising},x_{\mathbf{e}}}=\sum\limits_{z_{\mathbf{v}}}e^{\beta_{e}\sum_{e}x_{e}(\prod_{v\in e}z_{v})}\) is the partition function of the 2D Ising model with the sign of the Ising interaction given by \(x_{\mathbf{e}}\). In the summation, the notation \(x_{\mathbf{v}}\in Q^{(0)}\) denotes all possible \(x_{\mathbf{v}}\) which satisfy \(\prod_{v}x_{v}=(-1)^{Q^{(0)}}\) while \(x_{\mathbf{e}}\in Q^{(1)}\) denotes all possible \(x_{\mathbf{e}}\) which satisfy \(\prod_{e\in\partial p}x_{e}=(-1)^{Q^{(1)}_{p}}\), \(\forall p\). For a system with periodic boundary conditions,
all possible \(x_{\bf v}\in Q^{(0)}(x_{\bf e}\in Q^{(1)})\) can be reached by the transformation \(x_{v}\to x_{v}\prod_{e\ni v}\sigma_{e},\sigma_{e}=\pm 1\) (\(x_{e}\to x_{e}\prod_{v\in e}s_{v},s_{v}=\pm 1\)). One may verify that \(\mathcal{Z}_{\text{2D gauge},x_{\bf v}}\) (\(\mathcal{Z}_{\text{2D Ising},x_{\bf e}}\)) is invariant under the aforementioned transformation by changing the dummy variables \(z_{e}\to\sigma_{e}z_{e}\) (\(z_{v}\to s_{v}z_{v}\)). It follows that \(\mathcal{Z}_{\text{2D gauge},x_{\bf v}}\) (\(\mathcal{Z}_{\text{2D Ising},x_{\bf e}}\)) is only a function of the charge \(Q^{(0)}(Q^{(1)})\), and therefore we will label it as \(\mathcal{Z}_{\text{2D gauge},Q^{(0)}}\) (\(\mathcal{Z}_{\text{2D Ising},Q^{(1)}}\)). Therefore, \(\text{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}\big{)}\propto\mathcal{Z}_{\text{2D gauge},Q^{(0)}}\mathcal{Z}_{\text{2D Ising},Q^{(1)}}\) (see footnote 1).
Footnote 1: Here we ignore the non-contractible ‘charges’ corresponding to \(\prod_{e\in\ell}x_{e}\) where \(\ell\) is a non-contractible loop around the torus on which the system lives. This is because we will only be concerned with observables involving operators in the bulk of the system and such observables are insensitive to non-contractible charges.
One may similarly compute \(\text{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}M_{S}\big{)}\) and \(\text{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}S_{C}\big{)}\), the numerators in the expectation value for the membrane and the string operators. Let us first consider the membrane order parameter in the sector \((Q_{0},Q_{1})\) which we denote as \(\langle M_{S}\rangle_{Q_{0},Q_{1}}\). One finds
\[\begin{split}&\langle M_{S}\rangle_{Q_{0},Q_{1}}=\frac{\text{tr} \big{(}\rho_{Q^{(0)},Q^{(1)}}M_{S}\big{)}}{\text{tr}\big{(}\rho_{Q^{(0)},Q^{(1 )}}\big{)}}\\ &=\frac{\sum_{z_{\bf e}}(\prod_{v\in S}x_{v}\prod_{e\in\partial S }z_{e})e^{\beta_{v}\sum_{v}x_{v}(\prod_{e\ni v}z_{e})}}{\mathcal{Z}_{\text{2D gauge},Q^{(0)}}}\bigg{|}_{x_{\bf v}\in Q^{(0)}}\\ &=(\prod_{v\in S}x_{v})\langle W_{\partial S}\rangle_{\text{2D gauge},x_{\bf v}}\Big{|}_{x_{\bf v}\in Q^{(0)}}\\ &\sim e^{-\text{Area}(S)}\,\text{ for }\beta_{v}<\infty\end{split} \tag{15}\]
where \(\langle W_{\partial S}\rangle_{\text{2D gauge},x_{\bf v}}\) is the expectation value of the Wilson loop operator along the curve \(\partial S\) for the 2D Ising gauge theory with interaction \(x_{\bf v}\) while \(\text{Area}(S)\) is the area enclosed by the surface \(S\). The area law follows because the 2d Ising gauge theory is confining at any non-zero temperature. We conclude that \(\rho_{Q^{(0)},Q^{(1)}}\) has no membrane order as long as \(p_{v}>0\).
On the other hand, the string order parameter \(\langle S_{C}\rangle_{Q_{0},Q_{1}}\) is
\[\begin{split}&\langle S_{C}\rangle_{Q_{0},Q_{1}}=\frac{\text{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}S_{C}\big{)}}{\text{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}\big{)}}\\ &=\frac{\sum_{z_{\bf v}}(\prod_{e\in C}x_{e})z_{v_{1}}z_{v_{2}}e^{\beta_{e}\sum_{e}x_{e}(\prod_{v\in e}z_{v})}}{\mathcal{Z}_{\text{2D Ising},Q^{(1)}}}\bigg{|}_{x_{\bf e}\in Q^{(1)}}\\ &=(\prod_{e\in C}x_{e})\langle z_{v_{1}}z_{v_{2}}\rangle_{\text{2D Ising},x_{\bf e}}\Big{|}_{x_{\bf e}\in Q^{(1)}},\end{split} \tag{16}\]
where \(v_{1}\) and \(v_{2}\) label the end points of the curve \(C\) and \(\langle z_{v_{1}}z_{v_{2}}\rangle_{\text{2D Ising},x_{\bf e}}\) is the spin-spin correlation function of the 2D Ising model with the sign of the Ising interaction determined by \(x_{\bf e}\). Clearly, \(\langle S_{C}\rangle_{Q_{0},Q_{1}}\) can show long-range order at low-temperature, and following the same argument as that for the 1d cluster state, long-range order for a given sector implies that the (unnormalized) density matrix \(\rho_{Q^{(0)},Q^{(1)}}\) is sym-LRE. For example, in the sector corresponding to all \(x_{e}=1\), the long range order sets in below 2d Ising critical temperature. However, since the ordering temperature clearly depends on the sector \(Q^{(1)}\), to understand whether the full density matrix \(\rho=\sum_{Q^{(0)},Q^{(1)}}\rho_{Q^{(0)},Q^{(1)}}\) is sym-LRE, one needs to statistically quantify the string order as a function of the error rate. To do so, we introduce the following 'average string order parameter':
\[[\langle S_{C}\rangle^{2}]=\sum_{Q^{(0)},Q^{(1)}}\text{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}\big{)}\left(\langle S_{C}\rangle_{Q_{0},Q_{1}}\right)^{2} \tag{17}\]
Eq.(17) is equivalent to the disorder averaged spin-spin correlation function of the RBIM along the Nishimori line [33]. It follows that \([\langle S_{C}\rangle^{2}]\) decays exponentially as a function of \(|C|\) when \(p_{e}>p_{c}\approx 0.109\) [68].
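For completeness, a rough Monte Carlo sketch of the RBIM quantity that Eq.(17) maps onto is given below; it is purely illustrative (our own sketch, with crudely chosen lattice sizes, sweep counts and measurement protocol), drawing bond signs with probability \(p\) and sitting on the Nishimori line \(\tanh\beta=1-2p\).

```python
# Rough Monte Carlo sketch of the RBIM representation of the average string order:
# draw antiferromagnetic bonds with probability p, use the Nishimori temperature
# tanh(beta) = 1 - 2p, and disorder-average the squared correlator <s_0 s_r>^2.
import numpy as np

rng = np.random.default_rng(1)

def metropolis_sweep(s, Jx, Jy, beta):
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        nn = (Jx[i, j] * s[(i + 1) % L, j] + Jx[(i - 1) % L, j] * s[(i - 1) % L, j]
              + Jy[i, j] * s[i, (j + 1) % L] + Jy[i, (j - 1) % L] * s[i, (j - 1) % L])
        dE = 2.0 * s[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

def corr_sq(p, L=16, r=4, n_disorder=20, n_sweeps=400):
    beta = 0.5 * np.log((1 - p) / p)           # Nishimori line, tanh(beta) = 1 - 2p
    acc = 0.0
    for _ in range(n_disorder):
        Jx = np.where(rng.random((L, L)) < p, -1.0, 1.0)    # horizontal bonds
        Jy = np.where(rng.random((L, L)) < p, -1.0, 1.0)    # vertical bonds
        s = np.ones((L, L))
        for _ in range(n_sweeps):
            metropolis_sweep(s, Jx, Jy, beta)
        samples = []                           # crude thermal estimate of <s_0 s_r>
        for _ in range(20):
            for _ in range(5):
                metropolis_sweep(s, Jx, Jy, beta)
            samples.append(s[0, 0] * s[r, 0])
        acc += np.mean(samples) ** 2
    return acc / n_disorder

for p in (0.05, 0.109, 0.2):
    # the correlator is expected to decay rapidly with distance for p above ~0.109
    print(p, corr_sq(p))
```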
Based on above analysis, the decohered state \(\rho\) as a function of \(p_{e}\) and \(p_{v}\) can be divided into four regimes using the qualitative behavior of membrane and average string orders [see Fig.1(b)]:
1. \(p_{v}=0\) and \(p_{c}>p_{e}\geq 0\): \(\langle M_{S}\rangle_{Q_{0},Q_{1}}=1\) (in the sector \(Q^{(0)}=0\)) and \([\langle S_{C}\rangle^{2}]\) is a non-zero constant as \(|C|\to\infty\). In this regime \(\rho\) must be sym-LRE.
2. \(p_{v}=0\) and \(p_{e}>p_{c}\): \(\langle M_{S}\rangle_{Q_{0},Q_{1}}=1\) (in the sector \(Q^{(0)}=0\)) and \([\langle S_{C}\rangle^{2}]\) decays exponentially as a function of \(|C|\). In this regime \(\rho\) must again be sym-LRE.
3. \(p_{v}>0\) and \(p_{c}>p_{e}\geq 0\): \(\langle M_{S}\rangle_{Q_{0},Q_{1}}\sim e^{-\text{Area}(S)}\) and \([\langle S_{C}\rangle^{2}]\) is a non-zero constant as \(|C|\to\infty\). In this regime \(\rho\) must also be (statistically) sym-LRE.
4. \(p_{v}>0\) and \(p_{e}>p_{c}\): \(\langle M_{S}\rangle_{Q_{0},Q_{1}}\sim e^{-\text{Area}(S)}\) and \([\langle S_{C}\rangle^{2}]\sim e^{-|C|}\). This is suggestive that in this regime \(\rho\) is (statistically) sym-SRE and we provide an argument in favor of this conclusion below using an explicit convex decomposition.
We now use the CDA in Eq.(4) with \(\Gamma=\sqrt{\rho}\) to argue that the regime (iv) above, namely \(p_{v}>0\) and \(p_{e}>p_{c}\), is indeed sym-SRE. To ensure that each CDA state \(|\psi_{m}\rangle\) satisfies the \(Z_{2}^{(0)}\times Z_{2}^{(1)}\) symmetry, we choose \(\{|m\rangle=|x_{\bf v},x_{\bf e}\rangle\}\). Similar to the 1D case, we consider the singularity of the 'partition function' \(\mathcal{Z}_{m}=\langle\psi_{m}|\psi_{m}\rangle\) as a diagnostic for the transition from SRE to LRE as \(\beta\) is increased from zero. Since \(\mathcal{Z}_{m}=\langle x_{\bf e},x_{\bf v}|\rho|x_{\bf e},x_{\bf v}\rangle\), a calculation similar to that for \(\mathrm{tr}\big{(}\rho_{Q^{(0)},Q^{(1)}}\big{)}\) shows that \(\mathcal{Z}_{m}\) is proportional to \(\mathcal{Z}_{\text{2D gauge},x_{\bf v}}\mathcal{Z}_{\text{2D Ising},x_{\bf e}}\). One can also compute the expectation values of the membrane and average string order operators with respect to \(|\psi_{m}\rangle\) and obtain that \(\langle M_{S}\rangle_{m}\) is proportional to the expression in
Eq.(15) while \([\langle S_{C}\rangle_{m}^{2}]\) is proportional to the expression in Eq.(17), and therefore both vanish when \(p_{v}>0\) and \(p_{e}>p_{c}\).
Alternatively, one may define an 'average free energy' \([\log\mathcal{Z}]=\sum_{m}P_{m}\log(\mathcal{Z}_{m})\propto\sum_{m}\mathcal{Z}_{m}\log(\mathcal{Z}_{m})\) with respect to \(|\psi_{m}\rangle\) to detect whether the ensemble \(\{|\psi_{m}\rangle\}\) encounters a phase transition as a function of the error rate. When \(\beta_{v}=\beta_{e}=0\), \(|\psi_{m}\rangle=|x_{\mathbf{v}},x_{\mathbf{e}}\rangle_{m}\) is the trivial product state. On the other hand, \(|\psi_{m}\rangle\) becomes the 2D cluster state when \(\beta_{v},\beta_{e}\rightarrow\infty\). One expects that the phase transition point can be located by the singular behavior of \([\log\mathcal{Z}]\). Since \([\log\mathcal{Z}]\) is proportional to the disorder-averaged free energy of the 2d RBIM along the Nishimori line, it is singular at \(p_{e}\approx 0.109\). This leads to the same conclusion that \(\{|\psi_{m}\rangle\}\) remains SRE in the regime (iv) above.
Interestingly, if one adopts the aforementioned CDA in regimes (ii) and (iii), then \(|\psi_{m}\rangle\) hosts intrinsic topological order and GHZ order, respectively. This can be argued by first considering the extreme case \((p_{v},p_{e})=(0,0.5)\) in regime (ii) and \((p_{v},p_{e})=(0.5,0)\) in regime (iii). When \((p_{v},p_{e})=(0,0.5)\), \(|\psi_{m}\rangle\propto\prod_{v}(I-h_{v}^{(0)})|m\rangle\propto(|x_{\mathbf{v}}\rangle\otimes\prod_{v}(I+x_{v}\prod_{e\ni v}Z_{e})|x_{\mathbf{e}}\rangle)\), which is an eigenstate of the toric code. On the other hand, when \((p_{v},p_{e})=(0.5,0)\), \(|\psi_{m}\rangle\propto\prod_{e}(I-h_{e}^{(1)})|m\rangle\propto(|x_{\mathbf{e}}\rangle\otimes\prod_{e}(I+x_{e}\prod_{v\in e}Z_{v})|x_{\mathbf{v}}\rangle)\) is the 2D GHZ state. The argument based on the analyticity of the average free energy \([\log\mathcal{Z}]\) then indicates that regimes (ii) and (iii) continue to host topological order and GHZ order, respectively. The phase diagram using the current decomposition is summarized in Fig.1(b).
Finally, we note that order parameters similar to \([\langle S_{C}\rangle^{2}]\) (Eq.(17)) and the connections between the decohered cluster states and RBIM have also appeared in the context of preparing long-range entangled states using measurement protocols in Refs.[34; 35]. In particular, our phase diagram (Fig.1(b)) along the line \(p_{v}=0.5\) is similar to the finite-time measurement induced phase transitions in Ref.[34; 35]. However, one crucial difference is that the mixed states in Refs. [34; 35] do not respect the \(Z_{2}^{(1)}\) symmetry and therefore the corresponding transitions can not be interpreted as separability transitions protected by \(Z_{2}^{(0)}\times Z_{2}^{(1)}\) symmetry between a sym-LRE phase and a sym-SRE phase. Instead, the role of different sectors corresponding to the \(Z_{2}^{(1)}\) symmetry is played by the flux \(f_{p}=\prod_{e\in p}s_{e}\) through a plaquette \(p\), where \(s_{e}\) is the measurement outcome. One may then regard the transition in Ref.[34; 35] as a separability transition where in the non-trivial phase it is impossible to decompose the density matrix as a convex sum of SRE states which carry both definite \(\mathcal{Z}_{2}^{(0)}\) charge and flux \(f_{p}\). Similar statements hold true for the case of 3D cluster state, which we discuss next.
### 3d cluster state
The 3d cluster state Hamiltonian \(H_{\text{3d Cluster}}\) is:
\[\begin{split} H_{\text{3d Cluster}}&=-\sum_{e}X_{e}\prod_{f\ni e}Z_{f}-\sum_{f}X_{f}\prod_{e\in f}Z_{e}\\ &=\sum_{e}h_{e}+\sum_{f}h_{f}.\end{split} \tag{18}\]
The Hilbert space consists of qubits residing at both the faces \(f\) and the edges \(e\) of a cubic lattice, see Fig.1(c), or equivalently, at the edges of a cubic lattice, and the edges of its dual lattice (recall that each edge (plaquette) of the original lattice is in one-to-one correspondence with a plaquette (edge) of the dual lattice). We assume periodic boundary conditions. This model has a \(\mathbb{Z}_{2}^{(1)}\times\mathbb{Z}_{2}^{(1^{\prime})}\) symmetry whose generators are given by
\[U_{c}^{(1^{\prime})}=\prod_{f\in\partial c}X_{f},\ U_{\tilde{c}}^{(1)}=\prod_{ e\in\partial\tilde{c}}X_{e}. \tag{19}\]
where \(c(\tilde{c})\) specifies the cube in the lattice (dual lattice) and \(\partial c(\partial\tilde{c})\) denotes the faces on the boundary of \(c(\tilde{c})\). Choosing Kraus operators \(O_{e/f}=Z_{e/f}\) with respective probabilities \(p_{e/f}\), using Eqs.(5),(6), one obtains the decohered state \(\rho=\frac{1}{Z}e^{-\beta_{e}\sum_{e}h_{e}^{(1)}-\beta_{f}\sum_{f}h_{f}^{(1^{\prime})}}\) with \(\tanh\beta_{e/f}=(1-2p_{e/f})\).
We now decompose \(\rho\) as a convex sum of symmetric states by writing \(\rho=\sum_{Q^{(1^{\prime})},Q^{(1)}}\rho_{Q^{(1^{\prime})},Q^{(1)}}\), where each \(\rho_{Q^{(1^{\prime})},Q^{(1)}}\) carries an exact symmetry: \(U_{c}^{(1^{\prime})}\rho_{Q^{(1^{\prime})},Q^{(1)}}=(-1)^{Q^{(1^{\prime})}_{c}}\rho_{Q^{(1^{\prime})},Q^{(1)}}\), \(U_{\tilde{c}}^{(1)}\rho_{Q^{(1^{\prime})},Q^{(1)}}=(-1)^{Q^{(1)}_{\tilde{c}}}\rho_{Q^{(1^{\prime})},Q^{(1)}}\). Here, the two one-form symmetry charges are labeled by \(Q^{(1^{\prime})}=\{Q_{c}^{(1^{\prime})}\}\) with \(Q_{c}^{(1^{\prime})}=0,1\) defined on each cube \(c\) and \(Q^{(1)}=\{Q_{\tilde{c}}^{(1)}\}\) with \(Q_{\tilde{c}}^{(1)}=0,1\) defined on each cube \(\tilde{c}\) in the dual lattice. Now, let's focus on the physical observables that characterize each sector. These are the membrane operators \(M_{S}=\prod_{f\in S}(-h_{f}^{(1^{\prime})})\) with \(S\) a contractible surface on the original lattice (by contractible surface we mean an open membrane whose boundary \(\partial S\) is non-zero and is a closed loop) and \(M_{\tilde{S}}=\prod_{e\in\tilde{S}}(-h_{e}^{(1)})\) with \(\tilde{S}\) an analogous open surface on the dual lattice. Thus, we want to compute \(\operatorname{tr}\Bigl{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}M_{S}\Bigr{)}/\operatorname{tr}\Bigl{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}\Bigr{)}\) and \(\operatorname{tr}\Bigl{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}M_{\tilde{S}}\Bigr{)}/\operatorname{tr}\Bigl{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}\Bigr{)}\).
Similar to the cases in previous sections, we first compute the denominator \(\operatorname{tr}\Bigl{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}\Bigr{)}\) in these expressions by inserting the complete basis \(\{|x_{\mathbf{f},\mathbf{e}}\rangle\}\) and \(\{|z_{\mathbf{f},\mathbf{e}}\rangle\}\), and obtain \(\operatorname{tr}\Bigl{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}\Bigr{)}\sim\sum_{x_{\mathbf{f}}\in Q^{(1^{\prime})}}\mathcal{Z}_{\text{3D gauge},x_{\mathbf{f}}}\sum_{x_{\mathbf{e}}\in Q^{(1)}}\mathcal{Z}_{\text{3D gauge},x_{\mathbf{e}}}\). Here, \(\mathcal{Z}_{\text{3D gauge},x_{\mathbf{f}}}=\sum_{z_{\mathbf{e}}}e^{\beta_{f}\sum_{f}x_{f}(\prod_{e\in f}z_{e})}\) is the partition function of the 3D Ising gauge theory with the sign of the interaction on each face labeled by \(x_{\mathbf{f}}\), and \(x_{\mathbf{f}}\in Q^{(1^{\prime})}\)
denotes all possible \(x_{\bf f}\) satisfying \(\prod_{f\in\partial c}x_{f}=(-1)^{Q^{(1^{\prime})}_{c}}\). For a system with periodic boundary conditions, all possible \(x_{\bf f}\in Q^{(1^{\prime})}\) can be reached by the transformation \(x_{f}\to x_{f}\prod_{e\in f}\sigma_{e},\sigma_{e}=\pm 1\). Further, one may verify that \({\cal Z}_{\rm 3D\ gauge,x_{\bf f}}\) is invariant under the aforementioned transformation by changing the dummy variables \(z_{e}\rightarrow\sigma_{e}z_{e}\). It follows that \({\cal Z}_{\rm 3D\ gauge,x_{\bf f}}={\cal Z}_{\rm 3D\ gauge,Q^{(1^{\prime})}}\) is only a function of the charge \(Q^{(1^{\prime})}\). Analogous statements hold true for \({\cal Z}_{\rm 3D\ gauge,x_{\bf e}}\). Therefore, we write
\[{\rm tr}\Big{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}\Big{)}\propto{\cal Z}_{\rm 3D \ gauge,Q^{(1^{\prime})}}{\cal Z}_{\rm 3D\ gauge,Q^{(1)}}. \tag{20}\]
One may similarly compute \({\rm tr}\Big{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}M_{S}\Big{)}\), and obtain the following expressions:
\[\langle M_{S}\rangle_{Q^{(1^{\prime})},Q^{(1)}}=\frac{{\rm tr}\Big{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}M_{S}\Big{)}}{{\rm tr}\Big{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}\Big{)}}\] \[=\frac{\sum_{z_{\bf e}}(\prod_{f\in S}x_{f}\prod_{e\in\partial S}z_{e})e^{\beta_{f}\sum_{f}x_{f}(\prod_{e\in f}z_{e})}}{{\cal Z}_{\rm 3D\ gauge,Q^{(1^{\prime})}}}\Bigg{|}_{x_{\bf f}\in Q^{(1^{\prime})}}\] \[=(\prod_{f\in S}x_{f})\langle W_{\partial S}\rangle_{\rm 3D\ gauge,x_{\bf f}}\Big{|}_{x_{\bf f}\in Q^{(1^{\prime})}}, \tag{21}\]
where \(\langle W_{\partial S}\rangle_{\rm 3D\ gauge,x_{\bf f}}\) is the expectation value of the Wilson loop operator (\(=\prod z_{e}\) along a closed curve) along the boundary of \(S\) for the 3d classical Ising gauge theory whose Hamiltonian is defined by the term that multiplies \(\beta_{f}\) in the exponential in the second line of Eq.(21). Since the plaquette interaction term in this Ising gauge theory depends on \(x_{\bf f}\in Q^{(1^{\prime})}\), similar to the discussion for the 2d cluster state, we introduce an average membrane order parameter
\[[\langle M_{S}\rangle^{2}]=\sum_{Q^{(1^{\prime})},Q^{(1)}}{\rm tr}\Big{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}\Big{)}\left(\langle M_{S}\rangle_{Q^{(1^{\prime})},Q^{(1)}}\right)^{2} \tag{22}\]
Eq.(22) precisely corresponds to the disorder averaged Wilson loop of the 3D random plaquette gauge model (RPGM) along the Nishimori line [67]. It follows that \([\langle M_{S}\rangle^{2}]\sim e^{-\kappa|\partial S|}\) ('perimeter-law') when \(p_{f}<p_{c}\approx 0.029\) while \([\langle M_{S}\rangle^{2}]\sim e^{-\kappa|S|}\) ('area-law') when \(p_{f}>p_{c}\). One can also define the average membrane order parameter \([\langle M_{\hat{S}}\rangle^{2}]\) for \(M_{\hat{S}}\), and the results are analogous with the same critical error rate \(p_{c}\).
Therefore, using the qualitative behaviors of \([\langle M_{S}\rangle^{2}]\) and \([\langle M_{\hat{S}}\rangle^{2}]\), one can divide the decohered state \(\rho\) as a function of \(p_{f}\) and \(p_{e}\) into four regimes, see Fig.1(c):
1. \(p_{f},p_{e}<p_{c}\): both \([\langle M_{S}\rangle^{2}]\) and \([\langle M_{\hat{S}}\rangle^{2}]\) satisfy perimeter-law.
2. \(p_{f}<p_{c},p_{e}>p_{c}\): \([\langle M_{S}\rangle^{2}]\) satisfies perimeter-law while \([\langle M_{\hat{S}}\rangle^{2}]\) satisfies area-law.
3. \(p_{f}>p_{c},p_{e}<p_{c}\): \([\langle M_{S}\rangle^{2}]\) satisfies area-law while \([\langle M_{\hat{S}}\rangle^{2}]\) satisfies perimeter-law.
4. \(p_{f},p_{e}>p_{c}\): Both \([\langle M_{S}\rangle^{2}]\) and \([\langle M_{\hat{S}}\rangle^{2}]\) satisfy area-law.
Using an argument similar to Ref.[21], and also similar to those already used in previous subsections for 1d and 2d cluster states, one can show that in regimes (i)-(iii), \(\rho\) cannot be a convex sum of symmetric pure states where membrane operators only exhibit an area-law. This suggests that these three regimes are sym-LRE. In regime (iv), \(\rho\) does not develop any average membrane orders, which strongly suggests that it is a sym-SRE state. We now use a CDA to support this expectation.
We again choose a CDA (Eq.(4)) with \(\Gamma=\sqrt{\rho}\). To ensure that each \(|\psi_{m}\rangle\) that enters the CDA satisfies the \(\mathbb{Z}_{2}^{(1)}\times\mathbb{Z}_{2}^{(1^{\prime})}\) symmetry, we choose the basis \(\{|m\rangle=|x_{\bf e},x_{\bf f}\rangle\}\). Similar to the previous cases, we consider the 'partition function' \({\cal Z}_{m}=\langle\psi_{m}|\psi_{m}\rangle\) whose singularities are expected to indicate the presence of a phase transition. The evaluation of \({\cal Z}_{m}=\langle x_{\bf f},x_{\bf e}|\rho|x_{\bf f},x_{\bf e}\rangle\) is quite similar to that for \({\rm tr}\Big{(}\rho_{Q^{(1^{\prime})},Q^{(1)}}\Big{)}\) and one finds that \({\cal Z}_{m}\sim{\cal Z}_{\rm 3D\ gauge,x_{\bf f}}{\cal Z}_{\rm 3D\ gauge,x_{\bf e}}\). One may also compute the expectation values of the two membrane operators and find \(\langle\psi_{m}|M_{S}|\psi_{m}\rangle=(\prod_{f\in S}x_{f})\langle W_{\partial S}\rangle_{\rm 3D\ gauge,x_{\bf f}}\) and \(\langle\psi_{m}|M_{\hat{S}}|\psi_{m}\rangle=(\prod_{e\in\hat{S}}x_{e})\langle W_{\partial\hat{S}}\rangle_{\rm 3D\ gauge,x_{\bf e}}\). Using these, one may then define the average membrane order parameters \([\langle M_{S}\rangle^{2}]=\sum_{m}P_{m}\langle\psi_{m}|M_{S}|\psi_{m}\rangle^{2}\) and \([\langle M_{\hat{S}}\rangle^{2}]=\sum_{m}P_{m}\langle\psi_{m}|M_{\hat{S}}|\psi_{m}\rangle^{2}\). Using the same arguments as those following Eq.(22), one concludes that both of these order parameters vanish in regime (iv).
One may also conclude that the aforementioned decomposition in regimes (ii) and (iii) corresponds to topologically ordered phases. This can be argued by first considering the extreme case \((p_{f},p_{e})=(0,0.5)\) in (ii) and \((p_{f},p_{e})=(0.5,0)\) in (iii). When \((p_{f},p_{e})=(0,0.5)\), \(|\psi_{m}\rangle\sim\prod_{f}(I-h_{f}^{(1^{\prime})})|m\rangle\sim(|x_{\bf f}\rangle\otimes\prod_{f}(I+x_{f}\prod_{e\in f}Z_{e})|x_{\bf e}\rangle)\), which is an eigenstate of the 3D toric code. The argument based on the singularity of the average free energy \([\log{\cal Z}]\) then indicates that in regime (ii) the CDA states are topologically ordered. Similar arguments hold for regime (iii). The phase diagram using such a convex decomposition is summarized in the third column of Fig.1(c).
It is interesting to compare our results with Ref.[69], where the Gibbs state of the 3d cluster Hamiltonian was studied. The main difference between the decohered state we study, which also takes the Gibbs form, and the state studied there is that in that work the Gibbs state is projected to a _single_ charge sector of both 1-form symmetries (and therefore possesses an exact symmetry, see comment #3 in Sec.II), which results in a phase transition as a function of temperature that is in the 3d Ising universality class. In contrast, the decoherence we are considering leads only to an average (instead of exact) symmetry, and therefore, we obtain an _ensemble_ of density matrices \(\rho_{Q^{(1^{\prime})},Q^{(1)}}\) labeled by the symmetry charges \(Q^{(1^{\prime})},Q^{(1)}\). As discussed above, this implies that
the universality class of the transition is related to the 3d random plaquette gauge model (and not 3d Ising transition).
### 1d and 2d topological phases protected by a \(Z_{2}^{(0)}\) symmetry
Aside from the cluster states in several dimensions, Eq.(5) and Eq.(6) also hold for various stabilizer models realizing 1d and 2d SPT phases protected by a \(Z_{2}^{(0)}\) symmetry, which we now discuss briefly. An example in 1d is the non-trivial phase of the Kitaev chain [70]:
\[H=-i\sum_{j}\gamma_{2j-1}\gamma_{2j} \tag{23}\]
where \(\gamma_{j}\) denotes the Majorana operator satisfying \(\{\gamma_{j},\gamma_{k}\}=2\delta_{jk}\). It is straightforward to see that the Hamiltonian satisfies Eq.(5) and one can choose \(O_{j}\) as \(\gamma_{2j-1}\) or \(\gamma_{2j}\) such that Eq.(6) is satisfied. Therefore, under the composition of the channel \(\mathcal{E}_{j}[\rho]=(1-p)\rho+p\gamma_{2j-1}\rho\gamma_{2j-1}\), the pure state density matrix becomes the finite temperature Gibbs state with \(\tanh\beta=1-2p\). A 2d example is the Levin-Gu state [71], where the Hamiltonian is defined on the triangular lattice and can be written as
\[H=-\sum_{p}B_{p},\ B_{p}=-X_{p}\prod_{\langle pqq^{\prime}\rangle}i^{\frac{1 -Z_{q}Z_{q^{\prime}}}{2}}, \tag{24}\]
where the product runs over the six triangles \(\langle pqq^{\prime}\rangle\) containing the site \(p\). The ground state has non-trivial SPT order for the \(Z_{2}^{(0)}\) symmetry generated by \(U=\prod_{p}X_{p}\). One can verify \([B_{p},B_{p^{\prime}}]=0\) and \(B_{p}^{2}=1\) by straightforward algebra, and thus Eq.(5) is satisfied. In addition, one can choose \(O_{j}=Z_{j}\) such that Eq.(6) is satisfied. Therefore, under the composition of the channel \(\mathcal{E}_{j}[\rho]=(1-p)\rho+pZ_{j}\rho Z_{j}\), the pure state density matrix becomes the finite temperature Gibbs state with \(\tanh\beta=1-2p\). Using the CDA in Eq.(4), one may then argue that both the decohered Kitaev chain and the decohered Levin-Gu state are sym-SRE for any non-zero \(p\) (we assume periodic boundary conditions so that there are no boundary modes).
## V Separability transitions for 2d chiral topological states
### Setup and motivation
In this subsection, we consider subjecting chiral fermions in 2d to local decoherence. The starting pure state we consider is the ground state of a \(p_{x}+ip_{y}\) superconductor (\(p+ip\) SC in short), although we expect that the results will qualitatively carry over to other non-interacting chiral states.
Our motivation is as follows: it is generally believed that the 2d \(p+ip\) SC cannot be prepared from a product state using a constant-depth unitary circuit (as suggested by the fact that the thermal Hall conductance of a \(p+ip\) SC is non-zero while that for a trivial, gapped paramagnet is zero). Indeed, one may think of a \(p+ip\) SC as an SPT phase protected by the conservation of fermion parity [40]. Therefore, it is natural to ask what happens if one applies a quantum channel to this system whose Kraus operators anticommute with the fermion parity. This is conceptually similar to our discussion in Sec.IV where we subjected a non-trivial SPT ground state to Kraus operators odd under the symmetry responsible for the existence of the (pure) SPT ground state. An example of such a Kraus operator is the fermion creation/annihilation operator, and we will study this case in detail. Alternatively, one may consider subjecting the \(p+ip\) ground state to decoherence with Kraus operators _bilinear_ in fermion creation/annihilation operators. In this latter case, the fermion parity remains an exact symmetry. Based on our discussion in Sec.IV, one may expect a qualitative difference in these two cases, namely, Kraus operators linear vs. bilinear in fermion creation/annihilation operators. Let us briefly outline such a qualitative difference as suggested by field-theoretic considerations whose details are presented in Sec.V.4.
Let us first consider Kraus operators linear in fermion operators. This is equivalent to bringing in ancillae fermions and entangling them with the fermions of the \(p+ip\) SC by a finite-depth unitary. Since this is a finite depth unitary operation on the enlarged Hilbert space (\(=\) ancillae \(+\) original \(p+ip\) SC), the expectation value of any observable, including non-local ones that detect chiral topological order [72; 73], cannot become zero. At the same time, intuitively, the resulting mixed state for the electrons belonging to the original \(p+ip\) SC must somehow "lose its chirality" at infinitesimal coupling to the ancillae. This is indicated by treating the density matrix as a pure state in the doubled Hilbert space using C-J isomorphism, which we discuss below in detail, where we also clarify subtleties pertinent to the mapping of Kraus operators linear in fermion operators. Under the C-J map, the effect of the channel becomes a coupling bilinear in fermion operators between two chiral Ising CFTs with opposite chirality, and which, therefore, gaps out the counter-propagating chiral CFTs. The gapping out of the edge states in the double state is also manifested in the entanglement spectrum of the double state, which we also study. In particular, we show that infinitesimal decoherence leads to a gap in the entanglement spectrum.
Although working with the double state using C-J map is insightful, it does not directly tell us the nature of the decohered mixed state. One of our central aims is to understand the difference between the original pure (non-decohered) state and the decohered state not in terms of the double state obtained via the C-J map, or in terms of non-linear functions of density matrix, but directly in
terms of the separability properties of the mixed state. Our main result is that the resulting mixed state can be expressed as a convex sum of non-chiral states, and in this sense, is non-chiral (i.e. it can be prepared using an ensemble of finite-depth unitaries that commute with fermion parity).
Let us next consider Kraus operators bilinear in the fermion operators. We study this problem only using the double-state formalism (i.e. the aforementioned C-J map), and obtain an effective action consisting of two counter-propagating free, chiral Majorana CFTs coupled via a four-fermion interaction. Such a Hamiltonian has already been studied in the past (see e.g. Refs.[74; 75]), and we simply borrow the previous results to conclude that unlike the case for Kraus operators linear in Majorana operators, this system is _stable_ against infinitesimal decoherence. Furthermore, the field theory corresponding to the double state indicates that this system undergoes a spontaneous symmetry breaking where the gapless modes corresponding to the CFT are gapped out. The universality class of this transition is that of the (supersymmetric) \(c=7/10\) tricritical Ising model. We discuss this below in detail in Sec.V.4. We note that recently, Ref.[22] studied chiral topological phases subjected to decoherence using a generalization of the strange correlator [32] to mixed states [29; 30]. Although Ref.[22] did not study the problem of our interest (namely, a \(p+ip\) SC subjected to Kraus operators bilinear in Majorana fermions), the overall structure of the field theories obtained in Ref.[22] using the strange correlator bears resemblance to the one we motivate using the entanglement spectrum in Sec.V.4.
### Separability of \(p+ip\) SC subjected to fermionic Kraus operators
Our starting point is the ground state of the \(p+ip\) superconductor [45] described by the following Hamiltonian on a square lattice
\[H=\sum_{x,y}-t(\mathbf{c}^{\dagger}_{x+1,y}\mathbf{c}_{x,y}+ \mathbf{c}^{\dagger}_{x,y+1}\mathbf{c}_{x,y}+h.c.)+\Delta(\mathbf{c}^{\dagger }_{x+1,y}\mathbf{c}^{\dagger}_{x,y}+i\mathbf{c}^{\dagger}_{x,y+1}\mathbf{c}^{ \dagger}_{x,y}+h.c.)-(\mu-4t)\mathbf{c}^{\dagger}_{x,y}\mathbf{c}_{x,y}. \tag{25}\]
When \(t=\Delta=1/2\) and the chemical potential \(\mu=1\), the system is in the topologically non-trivial phase. This can be diagnosed, for example, by studying the entanglement spectrum which will exhibit chiral propagating modes [76; 77], or, by studying the modular commutator [78; 79; 80; 81] which is proportional to the chiral central charge of the edge modes that appear if the system had boundaries. Relatedly, in the topological phase, the ground state cannot be written as a Slater determinant of exponentially localized Wannier single-particle states [44; 45; 46]. In our discussion, we assume periodic boundary conditions, so that there are no physical edge modes.
We are interested in subjecting the ground state of Eq.(25) to the composition of the following single-majorana channel on all sites:
\[\mathcal{E}_{j}[\rho]=(1-p)\rho+p\gamma_{j}\rho\gamma_{j}. \tag{26}\]
Is the chiral nature of the ground state \(\rho_{0}\) stable under the channel? More precisely, can we express the decohered density matrix as a convex sum of pure states, where each of these pure states now does not exhibit chiral states in its entanglement spectrum, and relatedly, has a vanishing modular commutator in the thermodynamic limit?
Under the aforementioned channel (Eq.(26)), the density matrix will continue to remain Gaussian, and is fully determined by the covariance matrix \(M\) defined as \(M_{jk}=-i\operatorname{tr}(\rho(\gamma_{j}\gamma_{k}-\delta_{jk}))\). As shown in Appendix B.1, under the channel in Eq.(26), \(M\) evolves as \(\mathcal{E}(M)=(1-2p)^{2}M\). We write the decohered density matrix \(\rho\) as \(\rho(p)=e^{-H_{\rho}(p)}\), where \(H_{\rho}(p)\) can be determined explicitly in terms of \(\mathcal{E}(M)=(1-2p)^{2}M\) as detailed in Appendix B.1.
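The quoted covariance-matrix update can be checked directly on small systems; the sketch below (an illustration we include here, assuming the composition of the channel runs over every Majorana operator) verifies \(\mathcal{E}(M)=(1-2p)^{2}M\) exactly for an arbitrary few-mode density matrix.

```python
# Exact check that E_j[rho] = (1-p) rho + p g_j rho g_j rescales the j-th row and
# column of M_{kl} = -i tr(rho (g_k g_l - d_{kl})) by (1-2p); composing it over
# every Majorana then gives M -> (1-2p)^2 M.
import numpy as np
from functools import reduce

n_modes = 3
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
kron = lambda ops: reduce(np.kron, ops)

def majorana(k):
    """Jordan-Wigner Majorana gamma_k, k = 0, ..., 2*n_modes - 1."""
    site, which = divmod(k, 2)
    ops = [Z] * site + [X if which == 0 else Y] + [I2] * (n_modes - site - 1)
    return kron(ops)

gammas = [majorana(k) for k in range(2 * n_modes)]

def cov(rho):
    dim = rho.shape[0]
    M = np.zeros((2 * n_modes, 2 * n_modes))
    for k in range(2 * n_modes):
        for l in range(2 * n_modes):
            M[k, l] = np.real(-1j * np.trace(rho @ (gammas[k] @ gammas[l] - (k == l) * np.eye(dim))))
    return M

# an arbitrary (not necessarily Gaussian) density matrix; the identity holds regardless
A = np.random.default_rng(0).normal(size=(8, 8)) + 1j * np.random.default_rng(1).normal(size=(8, 8))
rho = A @ A.conj().T
rho /= np.trace(rho)

p = 0.13
M0 = cov(rho)
for g in gammas:                        # compose the channel over every Majorana
    rho = (1 - p) * rho + p * g @ rho @ g
assert np.allclose(cov(rho), (1 - 2 * p) ** 2 * M0, atol=1e-10)
print("M -> (1-2p)^2 M verified for p =", p)
```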
To write the decohered mixed state \(\rho\) as a convex sum of pure states, we consider the decomposition in Eq.(4), and write
\[\rho(p) = \sum_{m}e^{-H_{\rho}(p)/2}|m\rangle\langle m|e^{-H_{\rho}(p)/2} \tag{27}\] \[= \sum_{m}|\psi_{m}\rangle\langle\psi_{m}|\]
where \(|m\rangle\) are product states in the occupation number basis: \(|m\rangle=|m_{1},...,m_{N}\rangle,\ m_{j}=0,1\) and \(|\psi_{m}\rangle=\sqrt{\rho}|m\rangle=e^{-H_{\rho}(p)/2}|m\rangle\). To build intuition for the states \(|\psi_{m}\rangle\), let's consider the particular state \(|\psi_{0}\rangle=\sqrt{\rho}|0\rangle\) where \(|0\rangle\) is a state with no fermions. One can analytically show that at any non-zero decoherence, the real-space wavefunction for this state is a Slater determinant of localized Wannier orbitals, unlike the (undecohered) ground state of the \(p+ip\) SC [44; 45; 46]. The argument is as follows. One may write \(|\psi_{0}\rangle\propto e^{-\beta\sum_{\mathbf{k}}\alpha^{\dagger}_{\mathbf{k}}\alpha_{\mathbf{k}}}|0\rangle\) where \(\tanh(\beta)=(1-2p)^{2}\) and \(\alpha^{\dagger}_{\mathbf{k}}=u_{\mathbf{k}}c^{\dagger}_{\mathbf{k}}+v^{*}_{\mathbf{k}}c_{-\mathbf{k}}\) are the same (complex) fermionic operators that diagonalize the original \(p+ip\) BCS Hamiltonian (see Appendix B.1), with \(|u_{\mathbf{k}}|^{2}+|v_{\mathbf{k}}|^{2}=1\) due to unitarity. Since \(c_{\mathbf{k}}|0\rangle=0\), this implies that
\[|\psi_{0}\rangle\propto\prod_{\mathbf{k}}\left[1+\left(e^{-\beta}-1 \right)\left(|v_{\mathbf{k}}|^{2}+u_{\mathbf{k}}v_{\mathbf{k}}c^{\dagger}_{ \mathbf{k}}c^{\dagger}_{-\mathbf{k}}\right)\right]|0\rangle \tag{28}\]
This expression may then be exponentiated to obtain the standard BCS-like form for \(|\psi_{0}\rangle\propto e^{\sum_{\mathbf{k}}h(\mathbf{k})c_{\mathbf{k}}^{\dagger} c_{-\mathbf{k}}^{\dagger}}|0\rangle\), where
\[h(\mathbf{k})=\frac{u_{\mathbf{k}}v_{\mathbf{k}}\left(e^{-\beta}-1\right)}{|u_{ \mathbf{k}}|^{2}+|v_{\mathbf{k}}|^{2}e^{-\beta}} \tag{29}\]
As \(p\to 0\), \(\beta\rightarrow\infty\) (recall \(\tanh(\beta)=(1-2p)^{2}\)), and one recovers the \(p+ip\) ground state where \(h(\mathbf{k})\sim v_{\mathbf{k}}/u_{\mathbf{k}}\) diverges as \(1/(k_{x}+ik_{y})\) and results in a power-law decay of the Wannier orbitals [45]. In contrast, at any non-infinite \(\beta\) (i.e. non-zero decoherence rate \(p\)), \(h(\mathbf{k})\) is non-infinite for any \(\mathbf{k}\) (since \(|u_{\mathbf{k}}|^{2}+|v_{\mathbf{k}}|^{2}=1\)), and therefore, the Wannier orbitals corresponding to the state \(|\psi_{0}\rangle\) are exponentially localized. As an aside, this same argument also applies to the decohered 1d Kitaev chain (Sec.IV.4), and more generally, to other decohered non-interacting fermionic topological superconductors.
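A quick numerical illustration of this boundedness is given below; the explicit lattice Bogoliubov parametrization (\(\xi_{\mathbf{k}}\), \(\Delta_{\mathbf{k}}\), \(u_{\mathbf{k}}\), \(v_{\mathbf{k}}\)) used there is a standard \(p+ip\) choice assumed only for the purpose of the sketch, with \(t=\Delta=1/2\) and \(\mu=1\).

```python
# Illustration: the pairing function h(k) of Eq.(29) stays bounded for any non-zero
# error rate and only diverges as p -> 0.  The BdG parametrization below is an
# assumed (standard) p+ip lattice choice, not taken verbatim from the paper.
import numpy as np

t, Delta, mu = 0.5, 0.5, 1.0
ks = np.linspace(-np.pi, np.pi, 201)
kx, ky = np.meshgrid(ks, ks)

xi = -2 * t * (np.cos(kx) + np.cos(ky)) - (mu - 4 * t)
dk = Delta * (np.sin(kx) + 1j * np.sin(ky))          # ~ (k_x + i k_y) near Gamma
E = np.sqrt(xi ** 2 + np.abs(dk) ** 2)
u2 = 0.5 * (1 + xi / E)                              # |u_k|^2
v2 = 0.5 * (1 - xi / E)                              # |v_k|^2
uv = dk / (2 * E)                                    # u_k v_k up to a phase convention

for p in (0.1, 0.01, 0.001):
    beta = np.arctanh((1 - 2 * p) ** 2)              # tanh(beta) = (1-2p)^2
    h = uv * (np.exp(-beta) - 1) / (u2 + v2 * np.exp(-beta))
    print(f"p = {p:6.3f}:  max_k |h(k)| = {np.abs(h).max():.2f}")
# the maximum grows without bound as p -> 0, recovering the 1/(k_x + i k_y)
# singularity of the pure p+ip ground state, but is finite for any p > 0.
```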
The above argument only applies to the translationally invariant state \(|\psi_{0}\rangle\) that enters the convex decomposition in Eq.(27). To make progress for general \(|\psi_{m}\rangle\), we found it more helpful to consider diagnostics which directly access the topological character (or lack thereof) of a wavefunction, and which are also more amenable to finite-size scaling. In particular, we employ the 'modular commutator' introduced in Refs. [78; 79; 80; 81]. The modular commutator is a multipartite entanglement measure that quantifies the chiral central charge for a _pure state_, and can be completely determined by the many-body wavefunction [78; 79; 80; 81]. Specifically, it is defined as \(J_{ABC}:=i\operatorname{tr}(\rho_{ABC}[\ln\rho_{AC},\ln\rho_{BC}])\) with \(\rho_{X}\) the reduced density matrix in region \(X\) obtained from a pure state \(|\psi\rangle\) (i.e. \(\rho_{X}=\operatorname{tr}_{\overline{X}}|\psi\rangle\langle\psi|\)).
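For a small system, this definition can be evaluated by brute force; the sketch below is an illustration of the definition only (the Gaussian states \(|\psi_{m}\rangle\) themselves are instead handled via covariance matrices), computing \(J_{ABC}\) for a generic few-qubit pure state with single-qubit regions.

```python
# Brute-force sketch of J_ABC = i tr( rho_ABC [ln rho_AC, ln rho_BC] ) for a random
# few-qubit pure state, with single-qubit regions laid out as A, C, B plus one
# environment qubit (so that AC and CB are contiguous blocks).
import numpy as np
from scipy.linalg import logm

def ptrace(rho, keep, dims):
    """Trace out every tensor factor not listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(list(dims) + list(dims))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=q + rho.ndim // 2)
    d = int(np.prod([dims[q] for q in sorted(keep)]))
    return rho.reshape(d, d)

dims = [2, 2, 2, 2]                        # qubits: 0 = A, 1 = C, 2 = B, 3 = environment
rng = np.random.default_rng(7)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

rho_ABC = ptrace(rho, [0, 1, 2], dims)     # ordering (A, C, B)
rho_AC = ptrace(rho, [0, 1], dims)         # ordering (A, C); full rank generically
rho_CB = ptrace(rho, [1, 2], dims)         # ordering (C, B); full rank generically

I2 = np.eye(2)
ln_AC = np.kron(logm(rho_AC), I2)          # ln(rho_AC) tensored with 1_B on (A, C, B)
ln_CB = np.kron(I2, logm(rho_CB))          # 1_A tensored with ln(rho_CB)

J = 1j * np.trace(rho_ABC @ (ln_AC @ ln_CB - ln_CB @ ln_AC))
assert abs(J.imag) < 1e-8                  # J is real since rho_ABC is Hermitian
print("J_ABC =", J.real)
```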
In the absence of decoherence, the modular commutator of \(|\psi_{m}\rangle\) for this setup is \(J_{0,ABC}=\pi c/3=\pi/6\), as the chiral central charge \(c=1/2\) for the \(p+ip\) superconductor. Fig.2 shows the modular commutator \(J_{ABC}/J_{0,ABC}\) on a \(L\times L\) torus as a function of \(L\). We choose the error rate \(p=0.04\) and several different initial states, including \(|m\rangle=|0,...,0\rangle\) (uniform), \(|0,1,0,1,..,0,1\rangle\) (staggered), and also a random bit string in the occupational number basis. We find that in all cases, \(J_{ABC}\) vanishes in the thermodynamic limit. We also studied other values of \(p\), and our results are again consistent with the claim that at any non-zero \(p\), the modular commutator for the states \(|\psi_{m}\rangle\) vanishes in the thermodynamic limit. This provides numerical evidence that at any non-zero error rate, the decohered mixed state can be expressed as a convex sum of states that do not have any chiral topological order, and hence must be representable as Slater determinants of single-particle localized Wannier states [44] (note that all states \(|\psi_{m}\rangle\) are area-law entangled).
It is important to note that in contrast to the pure states \(|\psi_{m}\rangle\), the modular commutator for the decohered mixed state \(\rho\)_does not_ show any abrupt behavior change at \(p=0\) (dashed plot in Fig.2). This is consistent with the fact that the arguments relating modular commutator to the chiral central charge rely on the overall state being pure [78; 79; 80; 81], and therefore, we don't expect that modular commutator for the mixed-state \(\rho\) captures the separability transition at \(p=0\). This again highlights the utility of the convex decomposition of \(\rho\) into pure states.
In addition, we also numerically compute the entanglement spectrum of \(|\psi_{m}\rangle\) with \(|m\rangle\) the uniform product state (so that momentum along the entanglement bipartition is a good quantum number). For a chiral topological state, one expects that the edge spectrum of a physical edge will be imprinted on the entanglement spectrum of a subregion [76]. Since \(|\psi_{m}\rangle\) is Gaussian, the entanglement spectrum is encoded in the spectrum of the matrix \(iM_{A}\), where \(M_{A}\) is the restriction of the covariance matrix \(M\) to the region \(A\) in the inset of Fig.3. Fig.3 shows the spectrum of \(iM_{ABC}\) (denoted as \(\nu\)) as a function of the momentum \(k_{y}\) with error rate \(p=0\) and \(p=0.04\). The geometry is again chosen as a torus, with length \(L_{x}=60\), and height \(L_{y}=30\). In the absence of error (\(p=0\)), all states \(|\psi_{m}\rangle\) are projected to the \(p+ip\) ground state, and thus the spectrum shows chirality, resembling the edge states of the \(p+ip\) SC (note that we have two entanglement boundaries resulting in counter-propagating chiral states in the entanglement spectrum). After the decoherence is introduced, one finds that the chiral mode in the entanglement spectrum is gapped out, see Fig.3. We also confirmed that the gap between the two 'bands' of the entanglement spectrum increases with the system size (not shown). Overall, both the modular commutator and the
Figure 2: Modular commutator \(J_{ABC}/J_{0,ABC}\) on a \(L\times L\) torus as a function of \(L\) corresponding to several different pure states \(|\psi_{m}\rangle\) that enter the convex decomposition of the \(p+ip\) SC subjected to decoherence with Kraus operators linear in Majorana fermions (Eq.(27)), as well as the modular commutator of the decohered mixed state itself. We choose error rate \(p=0.04\), and the following initial states \(|m\rangle\) in Eq.(27): \(|m\rangle=|0,...,0\rangle\) (uniform), \(|0,1,0,1,..,0,1\rangle\) (staggered), and \(|m\rangle=\) a random bit string in the occupational number basis. The inset shows the geometry of regions \(A,B,C\) used to define the modular commutator. We use anti-periodic boundary conditions along both directions so that the ground state is unique.
entanglement spectrum provide numerical evidence that the decohered density matrix can be written as a convex sum of free-fermion, pure states that have no chiral topological order.
### Double-state formalism for fermions
The previous subsection focused on the single-Majorana channel that breaks the fermion parity symmetry of the initial density matrix from exact (\(U\rho=\rho U=\rho\)) down to average (\(U^{\dagger}\rho U=\rho\)). As briefly mentioned above, if one instead uses a channel whose Kraus operators are bilinear in Majorana operators (so that the fermion parity remains an exact symmetry), one might expect a more interesting behavior, in particular the possibility of a phase transition between different non-trivial mixed states. One way to make progress on this case is to study appropriate non-linear functions of the density matrix [18; 19; 20; 22; 82; 29]. Relatedly, one may employ the double state obtained using the C-J map, which has been used in [18; 20] to study decoherence in bosonic problems. Specifically, given a density matrix \(\rho_{\mathcal{H}}\) acting on the Hilbert space \(\mathcal{H}\), one can define a state vector \(|\rho\rangle_{\mathcal{H}\otimes\bar{\mathcal{H}}}\) in the doubled Hilbert space \(\mathcal{H}\otimes\bar{\mathcal{H}}\) (with \(\bar{\mathcal{H}}\) having the same dimension as \(\mathcal{H}\)) using the C-J map [42; 43] (see footnote 2):
Footnote 2: We note that the C-J isomorphism discussed here is a bit different from the original C-J isomorphism between channels \(\mathcal{E}[\cdot]\) and operators \(\mathcal{E}\otimes I[|\Phi\rangle\langle\Phi|]\) introduced in Ref.[42; 43], and is along the lines of super-operator formalism in Refs.[84; 83]. We use C-J isomorphism as a mnemonic to transform bra(ket) to ket(bra) spaces using maximally-entangled states. See App.B.2 for more discussion.
\[|\rho\rangle_{\mathcal{H}\otimes\bar{\mathcal{H}}}=\rho_{\mathcal{H}}\otimes I _{\bar{\mathcal{H}}}|\Phi\rangle_{\mathcal{H}\otimes\bar{\mathcal{H}}}. \tag{30}\]
Here \(I_{\bar{\mathcal{H}}}\) denotes the identity in \(\bar{\mathcal{H}}\) and \(|\Phi\rangle_{\mathcal{H}\otimes\bar{\mathcal{H}}}\) is the product of (unnormalized) maximally entangled pairs connecting \(\mathcal{H}\) and \(\bar{\mathcal{H}}\), i.e., \(|\Phi\rangle_{\mathcal{H}\otimes\bar{\mathcal{H}}}=\otimes_{j}|\phi\rangle_{j, \mathcal{H}\otimes\bar{\mathcal{H}}}\) with \(|\phi\rangle_{j,\mathcal{H}\otimes\bar{\mathcal{H}}}=\sum_{p=1}^{d} |p_{\mathcal{H}},p_{\bar{\mathcal{H}}}\rangle_{j}\) and \(d\) the Hilbert space dimension on a single site. Henceforth, for notational simplicity we omit the subscript labeling the Hilbert space if there is no confusion. For bosons, it is straightforward to see that under Eq.(30), the density matrix \(\rho=\sum_{p,q}\rho_{q}^{p}|p\rangle\langle q|\) is mapped to \(|\rho\rangle=\sum_{p,q}\rho_{q}^{p}|p,q\rangle\). On the other hand, the channel \(\mathcal{E}[\cdot]=\sum_{\alpha}K_{\alpha}(\cdot)K_{\alpha}^{\dagger}\) is mapped to the operator
\[\mathcal{N}_{\mathcal{E}}=\sum_{\alpha}K_{\alpha}\otimes\bar{K}_{\alpha}. \tag{31}\]
This can be derived by expressing \(|\mathcal{E}[\rho]\rangle\) as an operator acting on \(|\rho\rangle\), i.e., \(|\mathcal{E}[\rho]\rangle=\mathcal{N}_{\mathcal{E}}|\rho\rangle\). See App.B.2 for details. However, a similar correspondence for fermions is a bit subtle. For example, naively applying Eq.(31) to the single-majorana channel in Eq.(26) gives
\[\begin{split}\mathcal{E}_{j}|\rho\rangle&\stackrel{{?}}{{=}}[(1-p)I_{j}\otimes I_{j}+p\gamma_{j}\otimes\bar{\gamma}_{j})]| \rho\rangle\\ &=[(1-p)I+p\gamma_{j}\eta_{j})]|\rho\rangle\\ &\sim e^{-i\mu(i\gamma_{j}\eta_{j})}|\rho\rangle,\ \mu=\tan(p/(1-p)),\end{split} \tag{32}\]
where we denote \(\eta=\bar{\gamma}\) as the Majorana operators in the Hilbert space \(\bar{\mathcal{H}}\). Eq.(32) suggests that the channel generates a _real_ time evolution for the double state, which contradicts our intuition that the channel instead gives rise to an imaginary time evolution. Another hint that Eq.(32) is incorrect comes from setting \(p=1/2\), where the relation \(\mathcal{E}_{j}[\mathcal{E}_{j}[\rho]]=\mathcal{E}_{j}[\rho]\) holds. However, Eq.(32) gives \(\mathcal{E}_{j}\mathcal{E}_{j}|\rho\rangle=\gamma_{j}\eta_{j}|\rho\rangle/2\), which is not equal to \(\mathcal{E}_{j}|\rho\rangle\). Therefore, to find the correct correspondence between \(\mathcal{E}[\cdot]\) and \(\mathcal{N}_{\mathcal{E}}\) for fermions, one should begin with the more fundamental definition of the double state, i.e, \(|\rho\rangle=\rho\otimes I|\Phi\rangle\), which we discuss in detail in Appendix B. Our main result is the following mapping: Given the Kraus operators as a function of fermionic creation and annihilation operators \(\{K_{\alpha}=K_{\alpha}(\mathbf{c}^{\dagger},\mathbf{c})\}\), the channel \(\mathcal{E}[\cdot]=\sum_{\alpha}K_{\alpha}(\mathbf{c}^{\dagger},\mathbf{c})( \cdot)K_{\alpha}^{\dagger}(\mathbf{c}^{\dagger},\mathbf{c})\) in the Hilbert space \(\mathcal{H}\) under Eq.(30) is mapped to the following operator in
Figure 3: The spectrum of \(iM_{A}\) (= restriction of the covariance matrix to region \(A\) in the inset) for a state \(|\psi_{m}\rangle\) obtained from \(|m\rangle=|0,...,0\rangle\) (see Eq.(27)) as a function of the momentum \(k_{y}\) for error rates \(p=0\) (i.e. non-decohered) and \(p=0.04\) (i.e. decohered). Here, we put the system on a \(L_{x}\times L_{y}\) torus with \(L_{x}=60\) and \(L_{y}=30\).
the Hilbert space \(\mathcal{H}\otimes\mathcal{\tilde{H}}\):
\[\mathcal{N}_{\mathcal{E}}=\sum_{\alpha}K_{\alpha}(\mathbf{c}^{\dagger},\mathbf{c })K_{\alpha}^{\dagger}(\mathbf{c}^{\dagger}\rightarrow-\mathbf{d},\mathbf{c} \rightarrow\mathbf{d}^{\dagger}), \tag{33}\]
where we denote the \(\mathbf{d}^{\dagger}(\mathbf{d})\) as the creation (annihilation) operator in \(\mathcal{\tilde{H}}\). 3 For example, for the Kraus operator given by \(K=\gamma_{1}\equiv(\mathbf{c}+\mathbf{c}^{\dagger})\), the C-J transformed operator is
Footnote 3: We note that the same result has also been derived by Daniel Arovas (unpublished) using a slightly different approach.
\[\begin{split}(\mathbf{c}+\mathbf{c}^{\dagger})(-\mathbf{d}+ \mathbf{d}^{\dagger})&=-i(\mathbf{c}+\mathbf{c}^{\dagger})\frac{ (\mathbf{d}-\mathbf{d}^{\dagger})}{i}\\ &=-i\gamma_{1}\eta_{1},\end{split} \tag{34}\]
where we denote \(\eta_{1}=(\mathbf{d}-\mathbf{d}^{\dagger})/i\). Since the fermionic coherent states are Grassmann even and commute with each other, Eq.(33) can be directly generalized to a system with multiple fermionic modes.
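As a minimal sanity check of the bosonic correspondence in Eq.(31) (and of the vectorization implied by Eq.(30)), the sketch below verifies \(|\mathcal{E}[\rho]\rangle=\mathcal{N}_{\mathcal{E}}|\rho\rangle\) for a single qubit with a toy bit-flip-type channel. The specific channel and state are illustrative choices; for fermions, the operator \(\mathcal{N}_{\mathcal{E}}\) should instead be expressed via the substitution in Eq.(33).

```python
import numpy as np

# Pauli matrices and a generic single-qubit density matrix.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])

# A toy bosonic (qubit) channel E[rho] = (1-p) rho + p X rho X via its Kraus operators.
p = 0.3
kraus = [np.sqrt(1 - p) * I2, np.sqrt(p) * X]

# Double state |rho> = (rho x I)|Phi> with |Phi> = sum_k |k,k>; with the basis ordering
# |j> x |k>  ->  j*d + k, this is simply the row-major flattening of rho.
def double_state(r):
    return r.flatten()

# Channel applied directly as an operator map ...
E_rho = sum(K @ rho @ K.conj().T for K in kraus)

# ... versus N_E = sum_a K_a (x) conj(K_a) acting on the double state, Eq. (31).
N_E = sum(np.kron(K, K.conj()) for K in kraus)

assert np.allclose(N_E @ double_state(rho), double_state(E_rho))
print("Eq.(31) check passed for the qubit channel")
```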
### Phase transition induced by an interacting channel in a \(p+ip\) SC
Being equipped with the correspondence between \(\mathcal{E}[\cdot]\) and \(\mathcal{N}_{\mathcal{E}}\), we now return to our discussion of decoherence induced transitions in chiral topological states of fermions. We first revisit the problem discussed in V.2, and then consider a more interesting problem where the Kraus operators are bilinear in fermions so that the decohered density matrix is not Gaussian.
There are different ways to employ the double state to probe the effect of decoherence. For example, one may consider non-linear functions such as the normalization of the double state [18; 29; 20; 82]. Here we will motivate the entanglement spectrum of a state obtained from the double state \(|\rho\rangle\) (after space-time rotation) as a probe of the decoherence-induced phase transitions.
To begin with, consider the normalization of the double state
\[\langle\rho|\rho\rangle=\langle\rho_{0}|\mathcal{E}^{\dagger}\mathcal{E}|\rho _{0}\rangle. \tag{35}\]
If the bulk action describing \(|\rho_{0}\rangle=|\Psi_{0},\Psi_{0}^{*}\rangle\) is rotationally invariant, one can map \(\langle\rho|\rho\rangle\) to the path integral of the \((1+1)\)-D boundary fields following Ref.[20]:
\[\begin{split}\langle\rho|\rho\rangle=\int&\mathcal{ D}(\psi_{L},\psi_{L}^{*},\psi_{R},\psi_{R}^{*})\\ &\quad\quad e^{-S_{0,L}(\psi_{L},\psi_{L}^{*})-S_{0,R}(\psi_{R}, \psi_{R}^{*})-S_{\rm int}(\psi_{L},\psi_{L}^{*},\psi_{R},\psi_{R}^{*})}.\end{split} \tag{36}\]
Here, \(\psi_{L}\) and \(\psi_{L}^{*}\) denote the low-energy field variables in the ket and bra Hilbert spaces, respectively. \(S_{0,L}\) is the action on the left side of the spatial interface \(x=0^{-}\) (the meaning of \((\psi_{R},\psi_{R}^{*})\) and \(S_{0,R}\) is similar, with left \(\leftrightarrow\) right). \(S_{\rm int}\) describes the effect of the channel \(\mathcal{E}^{\dagger}\mathcal{E}\) and has two contributions:
\[S_{\rm int}=S_{1}+S_{\mathcal{E}}. \tag{37}\]
Here, \(S_{1}\) denotes the action that exists even in the absence of decoherence. In particular, \(S_{1}\) strongly couples the fields \(\psi_{L}(\psi_{L}^{*})\) and \(\psi_{R}(\psi_{R}^{*})\) such that \(\psi_{L}=\psi_{R}(\psi_{L}^{*}=\psi_{R}^{*})\) in the absence of decoherence. On the other hand, \(S_{\mathcal{E}}\) describes the action that comes solely from the decoherence and vanishes when the error rate \(p=0\). In general, the exact form of \(S_{\mathcal{E}}\) involves four fields \((\psi_{L},\psi_{L}^{*},\psi_{R},\psi_{R}^{*})\) and may be schematically captured by the following Hamiltonian:
\[H=(H_{0,L}+H_{\mathcal{E},L})+(H_{0,R}+H_{\mathcal{E},R})+H_{1}. \tag{38}\]
where \(H_{1}\) strongly couples the \(L\) and \(R\) fields. One may then consider the reduced density matrix for the \(L\) fields that is obtained after tracing out the \(R\) fields. One expects [85; 86] that the corresponding entanglement Hamiltonian (= logarithm of the reduced density matrix) will essentially correspond to \(H_{0,L}+H_{\mathcal{E},L}\). Working with the entanglement Hamiltonian has the advantage that the number of fields one needs to keep track of is now halved. A similar simplification occurs if one considers the fidelity \(\mathrm{tr}(\rho_{d}\,\rho_{0})\) between the decohered density matrix \(\rho_{d}\) and the non-decohered density matrix \(\rho_{0}\), see Ref.[22]. Since we are now working only with the \(L\) fields, in the following we will omit the subscript \(L\) for notational simplicity.
As an example, let us first revisit the case of the \(p+ip\) superconductor perturbed by a channel that is linear in Majorana fermions (Sec.V.2). Recall that here the Kraus map corresponds to the composition of the channel
Figure 4: The spectrum of \(iM_{L}\) for the double state \(|\rho\rangle\), where \(M_{L}\) is the restriction of the covariance matrix \(M\) to the region \(L\), as a function of the momentum \(k_{y}\) for different error rates \(p\). Here, we put the system on a cylinder with circumference \(L_{x}=60\), and height \(L_{y}=16\).
on all sites: \(\mathcal{E}_{\mathbf{x}}[\rho]=(1-p)\rho+p\gamma_{\mathbf{x}}\rho\gamma_{\mathbf{x}}\). Based on our discussion above on the C-J map for fermions, this translates to a term of the form \(H_{\mathcal{E}}=ig\int dy\,\gamma\,\eta\) where \(p\sim g\), and \(\gamma,\eta\) respectively denote the fields corresponding to ket and bra Hilbert spaces of the \(L\) fields. In the absence of any decoherence, the spatial boundary of the \(p+ip\) superconductor has a simple description in terms of a chiral Majorana fermion. The entanglement Hamiltonian in the doubled Hilbert space then corresponds to stacking the boundary of \(p+ip\) and \(p-ip\) superconductors, and is given by \(H_{0}=i\int dy(\gamma\partial_{y}\gamma-\eta\partial_{y}\eta)\). Therefore, one expects that the entanglement Hamiltonian for the \(L\) fields in the presence of decoherence takes the form:
\[H_{E}=i\int dy(\gamma\partial_{y}\gamma-\eta\partial_{y}\eta)+ig\int dy\gamma\eta \tag{39}\]
The counter-propagating edge modes are gapped out for any non-zero \(g\,(\propto p)\), in line with our earlier discussion where we provided evidence that at any non-zero \(p\), the density matrix can be written as a convex sum of pure states that are SRE. The gapping out of the edge modes can also be seen by numerically evaluating the entanglement spectrum of the double state obtained via the C-J map. Fig.4 shows the spectrum of \(iM_{L}\) (denoted as \(\nu\)) as a function of the momentum \(k_{y}\) for different error rates \(p\). Here, we put the system on a cylinder with circumference \(L_{x}=60\) and height \(L_{y}=16\). In the absence of error (\(p=0\)), there are two counter-propagating modes, resembling the edge states of the initial double state \(|\rho_{0}\rangle\). After the decoherence is introduced, one can clearly see from Fig.4 that these counter-propagating modes are gapped out for arbitrarily small error rates. Note that we _did not_ perform any space-time rotation to obtain Fig.4. This suggests that the entanglement Hamiltonian of the double state \(|\rho\rangle\) may already have the same qualitative behavior as the one obtained after space-time rotation. We leave further investigation of this point to the future.
Let us return to the problem of our main interest in this subsection, namely that of Kraus operators that _commute_ with the fermion parity operator. The simplest possibility is the composition of the following Kraus map on all nearest-neighbor bonds \(\langle\mathbf{x},\mathbf{y}\rangle\) of the square lattice:
\[\mathcal{E}_{\langle\mathbf{x},\mathbf{y}\rangle}[\rho]=(1-p)\rho+p\gamma_{ \mathbf{x}}\gamma_{\mathbf{y}}\rho\gamma_{\mathbf{x}}\gamma_{\mathbf{y}} \tag{40}\]
The interaction term \(H_{\mathcal{E}}\) induced by such a Kraus map in the double state should respect the following \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetries: \(\gamma\rightarrow-\gamma\) and/or \(\eta\rightarrow-\eta\). Since Majorana fermions square to identity, the simplest term that is bilinear in both \(\gamma\) and \(\eta\) and respects all the symmetries involves derivatives:
\[H_{\mathcal{E}}=g\int dy(\gamma\partial_{y}\gamma)(\eta\partial_{y}\eta). \tag{41}\]
where \(g\propto p\). Therefore, the full entanglement Hamiltonian for the \(L\) fields in the presence of decoherence is given by:
\[H_{E}=i\int dy(\gamma\partial_{y}\gamma-\eta\partial_{y}\eta)+g\int dy(\gamma \partial_{y}\gamma)(\eta\partial_{y}\eta). \tag{42}\]
This field theory has been studied earlier in Refs.[74; 75]. At a particular \(g=g_{c}\), the system undergoes a phase transition in the tricritical Ising universality class with central charge \(c=7/10\). For \(g<g_{c}\), the interaction term is irrelevant, while above \(g_{c}\), the system spontaneously breaks the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry down to the diagonal \(\mathbb{Z}_{2}\) symmetry. Physically, this means that the exact fermion-parity symmetry (i.e. \(U\rho=\rho\) where \(U\) is the generator of the fermion parity) has been spontaneously broken down to an average symmetry (i.e. \(U\rho U^{\dagger}=\rho\)). We note that a class of 2d chiral topological phases subjected to decoherence with fermion-bilinear Kraus operators was also studied in Ref.[22]. One notable difference between Ref.[22] and our problem is that in the examples considered in Ref.[22], the decoherence always reduces the effective central charge of the action corresponding to the double state. In contrast, in our problem, the effective central charge increases from \(c=1/2\) to \(c=7/10\).
It is interesting to contemplate the implications of the above phase transition in terms of the separability properties of the original mixed state \(\rho\) (instead of the double state \(|\rho\rangle\)). We conjecture that for \(p\ll p_{c}\), there exists no decomposition of the density matrix as a convex sum of area-law entangled pure states without any chirality, while for \(p>p_{c}\) the density matrix is expressible as a convex sum of area-law entangled pure states with GHZ-like entanglement (due to spontaneous breaking of fermion parity). Similar to the case of intrinsic topological orders subjected to local decoherence [17; 18; 20; 81], we anticipate that the universality class as well as the location of the critical point obtained from the double-state formalism will differ from that of the 'intrinsic' mixed-state transition for the density matrix, e.g., when viewed from the perspective of separability. We do not know the universality class of the latter transition and we leave it as an open question.
## VI Separability transition in Gibbs states of Nits Hamiltonian
In this section we will consider an exotic separability transition in a Gibbs state relevant to certain quantum codes. Although this transition does not require any symmetry, which has been a main ingredient in the rest of this work, the argument below to deduce the existence of a separability transition is broadly similar in spirit to that in Secs.III and IV.
Recently, there has been a discovery of 'good LDPC codes', where the code distance as well as the number of logical qubits scale with the total number of qubits
[49; 50; 51]. Moreover, Ref.[48] showed that the construction of a good LDPC code in Ref.[49] satisfies Freedman-Hastings' No Low-Energy Trivial States (NLTS) conjecture [47] which, when satisfied by a Hamiltonian, means that any state \(|\psi\rangle\) with energy density less than a non-zero value \(e_{c}\) can not be prepared by a constant depth unitary circuit (the energy density \(e\) of a state \(|\psi\rangle\) is defined as \(e=\lim_{N\to\infty}\left(\langle\psi|H|\psi\rangle-E_{0}\right)/N\) where \(E_{0}\) is the groundspace energy of \(H\)). Here we ask: does the Gibbs state of a NLTS Hamiltonian show a separability transition at a non-zero temperature? That is, does there exist a \(T_{c}>0\) so that for \(T<T_{c}\), the Gibbs state can not be written as a convex sum of SRE pure states?
Firstly, we note that Ref.[48] already proved that any mixed state whose energy density is less than an \(e_{c}>0\) cannot be purified to a pure SRE state by a short-depth channel, i.e., it cannot be prepared by first enlarging the Hilbert space to include ancillae, which are initially all in a product state, followed by a finite-depth unitary that entangles the 'system' qubits (which are also initially in a product state) with the ancillae qubits, and eventually integrating out the ancillae. However, as already discussed in Sec.II, inability to purify to an SRE state via a short-depth channel does not imply that a mixed state is LRE using our definition, i.e., it may still be expressible as a convex sum of SRE pure states. Just to briefly re-iterate the example discussed in Sec.II that illustrates these two different notions of mixed-state entanglement (see comment #4 in Sec.II), any Gibbs state that exhibits spontaneous symmetry breaking (which therefore has long-range correlations for the operator corresponding to the order parameter) cannot be purified to an SRE pure state via a short-depth channel. Thus such a mixed state will be SRE using our definition but LRE using the definition in Refs.[48; 52]. Here we provide a simple argument that the Gibbs state of an NLTS-satisfying Hamiltonian shows a separability transition at a non-zero temperature.
Let us assume that the Gibbs state of an NLTS-satisfying Hamiltonian \(H\) can be expressed as a convex sum of SRE pure states for _any_ temperature \(T>0\), i.e., \(\rho(T)=e^{-H/T}/Z=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|\), where each \(|\psi_{i}\rangle\) can be prepared via a unitary whose depth is independent of the number of qubits \(N\). For simplicity of notation, we set the groundspace energy \(E_{0}\) to zero (this can always be achieved by adding a constant \(Nc\) to the Hamiltonian). We will show that this assumption leads to a contradiction. Since all pure states \(|\psi_{i}\rangle\) are SRE, by the NLTS condition they must all satisfy \(\langle\psi_{i}|H|\psi_{i}\rangle/N>e_{c}\) as \(N\to\infty\). Therefore \(\text{tr}(\rho(T)H)/N=\sum_{i}p_{i}\langle\psi_{i}|H|\psi_{i}\rangle/N>\sum_{i}p_{i}e_{c}=e_{c}\). This implies that if the Gibbs state can be expressed as a convex sum of SRE pure states at a given temperature, then its energy density at that temperature exceeds \(e_{c}\). However, the energy density of the Gibbs state vanishes as \(T\to 0\): as \(T\to 0\), \(\text{tr}(\rho(T)H)/N\approx E_{1}e^{-E_{1}/T}/N\), which indeed goes to zero (\(E_{1}\) denotes the energy of the first excited state, which is a constant independent of \(N\) since the LDPC code Hamiltonian under discussion is a sum of commuting projectors). Therefore, if we assume that the Gibbs state is separable into SRE pure states at all non-zero temperatures, we arrive at a contradiction. Hence the Gibbs state must be long-range entangled up to a non-zero temperature \(T\). It seems reasonable to assume that at sufficiently high temperature the Gibbs state is SRE. Therefore, one expects a separability transition at some temperature \(T_{c}\) that satisfies \(0<T_{c}<\infty\).
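In compact form, the contradiction underlying this argument is simply

\[e_{c}<\sum_{i}p_{i}\,\frac{\langle\psi_{i}|H|\psi_{i}\rangle}{N}=\frac{\text{tr}(\rho(T)H)}{N}\quad\text{for all }T>0,\qquad\text{while}\qquad\lim_{T\to 0}\frac{\text{tr}(\rho(T)H)}{N}=0,\]

so the assumed SRE decomposition cannot persist down to arbitrarily low temperatures.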
It is important to note that the above-argued separability transition does not necessarily imply that the Gibbs state has a _thermodynamic_ phase transition, i.e., it need not be accompanied by a singularity of the partition function.
## VII Some connections between separability and other measures of mixed-state complexity
In this section, we comment on some connections among the separability criteria, purifications, double states, and strange correlators.
### Connections among separability, purification, and double state
In Sec.V.3, we used the double-state formalism to probe decoherence-induced transitions. However, the connection between \(\rho\) being sym-SRE and the double state \(|\rho\rangle\) being trivial remains unclear. In this subsection, we will attempt to bridge the gap between them using purification of the mixed state.
We first recall the idea of purification: given a mixed state \(\rho_{\mathcal{H}}\) in the Hilbert space \(\mathcal{H}\), there exists a purification in the double Hilbert space \(\mathcal{H}\otimes\widetilde{\mathcal{H}}\) with \(\widetilde{\mathcal{H}}\) having the same dimension as \(\mathcal{H}\):
\[|\rho^{1/2}\rangle=\rho_{\mathcal{H}}^{1/2}\otimes I_{\widetilde{ \mathcal{H}}}|\Phi\rangle_{\mathcal{H}\otimes\widetilde{\mathcal{H}}}. \tag{43}\]
where \(|\Phi\rangle_{\mathcal{H}\otimes\widetilde{\mathcal{H}}}\) is a maximally entangled state between \(\mathcal{H}\) and \(\widetilde{\mathcal{H}}\). It is straightforward to see that \(\text{tr}_{\widetilde{\mathcal{H}}}(|\rho^{1/2}\rangle\langle\rho^{1/2}|)=\rho _{\mathcal{H}}\). In addition, we note that Eq.(43) is sometimes called the 'standard purification' [55], and all possible purifications are equivalent up to an isometry applied merely in \(\widetilde{\mathcal{H}}\). If one uses Eq.(2) as a definition of an SRE mixed state \(\rho\), then \(|\rho^{1/2}\rangle\) being SRE implies that \(\rho\) is SRE. However, it is not obvious to us how to show that this implies that \(\rho\) can be written as a convex sum of SRE states (Eq.(1)). Instead, we are only able to show that if \(|\rho^{1/2}\rangle\) is SRE, one can write the mixed state \(\rho\otimes I/\text{dim}(\widetilde{\mathcal{H}})\) (which lives in the Hilbert space system\(\otimes\)ancillae) as a convex sum of SRE states. To see this, we first note that a complete basis for the Hilbert space \(\mathcal{H}\otimes\widetilde{\mathcal{H}}\) can be obtained from a single maximally entangled state by applying local unitaries _merely_
in \(\mathcal{\bar{H}}\). Specifically, denoting the complete basis of Bell pairs for a spin-1/2 system as \(\{|\phi_{m,n}\rangle,\ m,n=0,1\}\), all of them are related to \(|\phi\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\) through \(|\phi_{m,n}\rangle=(Z_{\mathcal{\bar{H}}})^{m}(X_{\mathcal{\bar{H}}})^{n}|\phi\rangle\). It then follows that a complete basis for \(\mathcal{H}\otimes\mathcal{\bar{H}}\) can be written as
\[|\Phi_{m,n}\rangle=\prod_{j}(Z_{j,\mathcal{\bar{H}}})^{m_{j}}(X_{j,\mathcal{ \bar{H}}})^{n_{j}}|\Phi\rangle \tag{44}\]
with \(m=(m_{1},m_{2},...)\) and \(n=(n_{1},n_{2},...)\). Since \(|\Phi_{m,n}\rangle\) are obtained by applying local unitaries in \(\mathcal{\bar{H}}\) to a maximally entangled state, they are all also maximally entangled. Now, we use the same idea that we used to define CDA states (Eq.(4)) by writing \(\rho\otimes I/\text{dim}(\mathcal{\bar{H}})\) as \(\frac{1}{\text{dim}(\mathcal{\bar{H}})}\sum_{m,n}\left(\rho^{1/2}\otimes I\right)| \Phi_{m,n}\rangle\langle\Phi_{m,n}|\left(\rho^{1/2}\otimes I\right)\):
\[\begin{split}\rho\otimes\frac{I}{\text{dim}(\mathcal{\bar{H}})}& =\frac{1}{\text{dim}(\mathcal{\bar{H}})}\sum_{m,n}\rho^{1/2} \otimes I\Big{[}\prod_{j}(Z_{j,\mathcal{\bar{H}}})^{m_{j}}(X_{j,\mathcal{\bar {H}}})^{n_{j}}\Big{]}|\Phi\rangle\langle\Phi|\Big{[}\prod_{k}(Z_{k,\mathcal{ \bar{H}}})^{m_{k}}(X_{k,\mathcal{\bar{H}}})^{n_{k}}\Big{]}(\rho^{1/2}\otimes I) \\ &=\frac{1}{\text{dim}(\mathcal{\bar{H}})}\sum_{m,n}\Big{[}\prod_{ j}(Z_{j,\mathcal{\bar{H}}})^{m_{j}}(X_{j,\mathcal{\bar{H}}})^{n_{j}}\Big{]}( \rho^{1/2}\otimes I)|\Phi\rangle\langle\Phi|(\rho^{1/2}\otimes I)\Big{[}\prod _{k}(Z_{k,\mathcal{\bar{H}}})^{m_{k}}(X_{k,\mathcal{\bar{H}}})^{n_{k}}\Big{]} \\ &=\frac{1}{\text{dim}(\mathcal{\bar{H}})}\sum_{m,n}\Big{[}\prod_{ j}(Z_{j,\mathcal{\bar{H}}})^{m_{j}}(X_{j,\mathcal{\bar{H}}})^{n_{j}}\Big{]}|\rho^{1/2} \rangle\langle\rho^{1/2}|\Big{[}\prod_{k}(Z_{k,\mathcal{\bar{H}}})^{m_{k}}(X _{k,\mathcal{\bar{H}}})^{n_{k}}\Big{]}\\ &=\frac{1}{\text{dim}(\mathcal{\bar{H}})}\sum_{m,n}|\rho^{1/2}_{m,n}\rangle\langle\rho^{1/2}_{m,n}|,\ \ |\rho^{1/2}_{m,n}\rangle=(\prod_{j}(Z_{j,\mathcal{\bar{H}}})^{m_{j}}(X_{j,\mathcal{ \bar{H}}})^{n_{j}}|\rho^{1/2}\rangle.\end{split} \tag{45}\]
In the second line, we use the property that \(\prod_{j}(Z_{j,\mathcal{\bar{H}}})^{m_{j}}(X_{j,\mathcal{\bar{H}}})^{n_{j}}\) and \(\rho^{1/2}\) commute, as they act on different Hilbert spaces. Since \(|\rho^{1/2}_{m,n}\rangle\) is related to \(|\rho^{1/2}\rangle\) by a unitary acting solely in \(\mathcal{\bar{H}}\), if \(|\rho^{1/2}\rangle\) is SRE, then so is \(|\rho^{1/2}_{m,n}\rangle\). Therefore, if there exists an SRE purification for \(\rho\) (Eq.(43)), then \(\rho\otimes I/\text{dim}(\mathcal{\bar{H}})\) can be written as a convex sum of SRE pure states (Eq.(45)).
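The identity in Eq.(45) is straightforward to verify numerically. The following minimal sketch does so for a random two-qubit \(\rho\); the normalized maximally entangled state used here is a convenient convention for the check (it only shifts overall normalization factors relative to the unnormalized \(|\Phi\rangle\) in the text).

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Random two-qubit density matrix rho on the system Hilbert space H and its square root.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)
w, V = np.linalg.eigh(rho)
rho_half = (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

d = 4
# Normalized maximally entangled state |Phi> = (1/sqrt(d)) sum_i |i>_H |i>_Hbar.
Phi = np.eye(d).flatten() / np.sqrt(d)
# Standard purification |rho^{1/2}> = (rho^{1/2} x I)|Phi>, Eq. (43).
psi = np.kron(rho_half, np.eye(d)) @ Phi

# Accumulate (1/d) sum_{m,n} |rho^{1/2}_{m,n}><rho^{1/2}_{m,n}|, Eq. (45): the Pauli
# strings Z^m X^n act only on the two qubits of the ancillary copy H_bar.
acc = np.zeros((d * d, d * d), dtype=complex)
paulis = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}
for m1, n1, m2, n2 in product((0, 1), repeat=4):
    P = np.kron(paulis[(m1, n1)], paulis[(m2, n2)])
    U = np.kron(np.eye(d), P)         # unitary acting solely in H_bar
    v = U @ psi
    acc += np.outer(v, v.conj()) / d

assert np.allclose(acc, np.kron(rho, np.eye(d) / d))
print("Eq.(45) verified: rho (x) I/dim reconstructed from the rotated purifications")
```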
However, we emphasize that the converse is not true: if \(|\rho^{1/2}\rangle\) is not SRE, this does not rule out the possibility that the mixed state \(\rho\) is still SRE. This can be seen most easily by considering the following counterexample, which also appeared in Sec.II. Let \(\rho\) be the convex sum of the two product states \(|0\rangle^{\otimes N}\) and \(|1\rangle^{\otimes N}\), i.e.,
\[\rho=\frac{1}{2}[(|0\rangle\langle 0|)^{\otimes N}+(|1\rangle\langle 1|)^{ \otimes N}], \tag{46}\]
It follows that the purified state is the GHZ state:
\[|\rho^{1/2}\rangle=\frac{1}{\sqrt{2}}[|00\rangle^{\otimes N}+|11\rangle^{ \otimes N}], \tag{47}\]
which is clearly long-range entangled. This shows that \(|\rho^{1/2}\rangle\) being trivial is a sufficient but not a necessary condition for \(\rho\otimes I/\text{dim}(\mathcal{\bar{H}})\) being trivial.
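A quick numerical check of this counterexample (for a small \(N\), with the obvious conventions assumed): tracing the ancillary copy out of the GHZ-like purification in Eq.(47) indeed reproduces the classical mixture in Eq.(46), even though the purification itself is long-range entangled.

```python
import numpy as np

N = 3
d = 2 ** N

# Classical mixture rho = (|0...0><0...0| + |1...1><1...1|)/2, Eq. (46).
e0, e1 = np.zeros(d), np.zeros(d)
e0[0], e1[-1] = 1.0, 1.0
rho = 0.5 * (np.outer(e0, e0) + np.outer(e1, e1))

# Its standard purification is the GHZ-like state on system (x) ancilla, Eq. (47).
psi = (np.kron(e0, e0) + np.kron(e1, e1)) / np.sqrt(2)

# Tracing out the ancillary copy recovers rho, even though the purification itself
# carries GHZ-type long-range entanglement.
full = np.outer(psi, psi)
reduced = np.trace(full.reshape(d, d, d, d), axis1=1, axis2=3)
assert np.allclose(reduced, rho)
print("partial trace of the GHZ purification reproduces the classical mixture")
```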
The advantage of studying \(\rho\) using its purification is obvious: instead of finding the decomposition in Eq.(4), one only needs to deal with a single pure state \(|\rho^{1/2}\rangle\). However, it is in general difficult to compute \(|\rho^{1/2}\rangle\), as taking a square root of \(\rho\) is non-trivial if \(H_{\rho}=-\log(\rho)\) does not admit a simple compact form. An alternative is to consider the double state \(|\rho\rangle=\rho\otimes I|\Phi\rangle\) in Eq.(30) (note that if the original density matrix is pure, i.e., \(\rho^{2}=\rho\), then the double state \(|\rho\rangle\) is equivalent to the purified state \(|\rho^{1/2}\rangle\)). Heuristically, since the coefficient in front of \(H_{\rho}\) for \(|\rho\rangle\) is higher than the coefficient for \(|\rho^{1/2}\rangle\), we expect that if \(|\rho\rangle\) is SRE, then \(|\rho^{1/2}\rangle\) is SRE as well, but we do not know how to prove this. This is consistent with the result in Ref.[20], where the critical error rate for \(|\rho\rangle\) being trivial is higher than the error rate at which the topological entanglement negativity drops to zero, and also consistent with the results in Ref.[17] on the error threshold for separability for topologically ordered mixed states.
### Connections between convex decomposition and strange correlator
In Sec.IV, we studied separability transitions for cluster state SPTs in various dimensions using CDA (Eq.(4)) with the initial basis \(\{|m\rangle\}\) as product states satisfying the corresponding symmetry of the cluster state SPT (which was Pauli-\(X\) basis in all the cases we considered). Fortuitously, as we discussed, the threshold for the CDA states being sym-SRE exactly corresponded to the error rate beyond which \(\rho\) must be sym-LRE using general arguments, indicating that our choice of CDA is optimal.
Intriguingly, the symmetric product state basis to generate CDA has an apparently close connection with the strange correlator [32], which was originally devised as a diagnosis of the SPT pure states and has recently been
used to probe the non-trivial SPT mixed states [29; 30]. To see the connection between them, we briefly review the original strange correlator for SPT pure states and two types of strange correlator introduced in Ref.[29]. Choosing \(|m\rangle\) as the disordered product state respecting the symmetry group \(G\), the strange correlator for a pure state \(|\psi\rangle\) is defined as [32]
\[C_{m}(j-k)=\frac{\langle m|O_{j}O_{k}|\psi\rangle}{\langle m|\psi\rangle}, \tag{48}\]
where \(O\) is some operator that transforms non-trivially under \(G\). The basic idea of the strange correlator is that the temporal edge of an SPT pure state (when the many-body wavefunction is expressed as an imaginary-time path integral) mimics its spatial edge. Since SPTs in two and higher dimensions possess nontrivial spatial edge states (in 3d, there also exists the possibility of boundary topological order), one may also use the temporal correlation defined in Eq.(48) to probe non-trivial SPTs. To generalize the strange correlator from pure states to mixed states, two types of strange correlator were introduced in Ref.[29]. The type-I strange correlator is defined as
\[C_{m}^{I}(j-k)=\frac{\langle m|\rho O_{j}O_{k}|m\rangle}{\langle m|\rho|m \rangle}. \tag{49}\]
In the pure state limit \(\rho=|\psi\rangle\langle\psi|\), the type-I strange correlator reduces to Eq.(48). Therefore, when an SPT pure state is subjected to local decoherence, \(C_{m}^{I}\) can be intuitively regarded as asking whether the local decoherence destroys the temporal edge states. However, it has been shown in Ref.[29] that the type-I strange correlator is unable to detect the average SPT order mentioned in Ref.[26]. Instead, it was argued that the non-triviality of such an SPT order should be detected by the type-II strange correlator, defined as
\[C_{m}^{II}(j-k)=\frac{\langle m|O_{k}^{\dagger}O_{j}^{\dagger}\rho O_{j}O_{k}| m\rangle}{\langle m|\rho|m\rangle}. \tag{50}\]
In the pure state limit, it reduces to \(|\langle m|O_{j}O_{k}|\psi\rangle|^{2}/|\langle m|\psi\rangle|^{2}\). Roughly speaking, the type-II strange correlator is devised to capture the case where \(\rho\) can be written as an incoherent sum of pure states \(|\psi_{p}\rangle\), where \(\langle m|O_{j}O_{k}|\psi_{p}\rangle\) is non-trivial but can be either positive or negative depending on \(|\psi_{p}\rangle\).
On the other hand, the necessary condition for the mixed state to be non-trivial using the separability criterion is the non-triviality of the CDA states \(|\psi_{m}\rangle\), which may be probed by several physical observables \(S\), as discussed in Sec.IV:
\[\frac{\langle\psi_{m}|S|\psi_{m}\rangle}{\langle\psi_{m}|\psi_{m}\rangle}= \frac{\langle m|\rho^{1/2}S\rho^{1/2}|m\rangle}{\langle m|\rho|m\rangle}. \tag{51}\]
Comparing Eq.(49), Eq.(50) and Eq.(51), one finds that the denominator is always the fidelity between a symmetric product state and the mixed state of interest:
\[\begin{split}\mathcal{Z}_{m}&=\mathrm{tr}(\rho|m \rangle\langle m|)\\ &=\langle m|\rho|m\rangle=\langle\psi_{m}|\psi_{m}\rangle.\end{split} \tag{52}\]
For the numerator, Eq.(51) involves inserting an operator between \(\langle m|\rho^{1/2}\) and \(\rho^{1/2}|m\rangle\), while the strange correlator involves inserting an operator between \(\langle m|\rho\) and \(|m\rangle\).
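To make the comparison concrete, the sketch below assembles Eqs.(49)-(51) for a small toy example. The three-qubit Gibbs state used for \(\rho\), the choice \(|m\rangle=|{+}{+}{+}\rangle\), and the charged operators \(O=Z\) are placeholder assumptions, meant only to show where the operator insertion differs among the three quantities.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm, sqrtm

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_on(site, op, n):
    """Embed a single-qubit operator on the given site of an n-qubit chain."""
    return reduce(np.kron, [op if s == site else I2 for s in range(n)])

n = 3
plus = np.array([1.0, 1.0]) / np.sqrt(2)
m = reduce(np.kron, [plus] * n)                 # symmetric product state |m>

# Placeholder mixed state: Gibbs state of a small Ising-type Hamiltonian, symmetric
# under the Pauli-X Ising symmetry (stands in for the decohered density matrix).
H = -sum(op_on(j, Z, n) @ op_on((j + 1) % n, Z, n) for j in range(n))
rho = expm(-0.7 * H)
rho /= np.trace(rho)
rho_half = sqrtm(rho)

# Operators charged under the symmetry, inserted at two separated sites.
O_j, O_k = op_on(0, Z, n), op_on(2, Z, n)

Z_m = m @ rho @ m                               # common denominator <m|rho|m>, Eq. (52)
C_I   = (m @ rho @ O_j @ O_k @ m) / Z_m                                  # Eq. (49)
C_II  = (m @ O_k.T @ O_j.T @ rho @ O_j @ O_k @ m) / Z_m                  # Eq. (50)
C_CDA = (m @ rho_half @ O_j @ O_k @ rho_half @ m) / Z_m                  # Eq. (51), S = O_j O_k

print("type-I:", np.real(C_I), " type-II:", np.real(C_II), " CDA probe:", np.real(C_CDA))
```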
## VIII Summary and discussion
In this work we explored the interplay of complexity and symmetry for many-body mixed states. Specifically, we asked whether a given mixed state can be expressed as a convex sum of symmetric short-range entangled pure states, which we took as a definition of an SRE mixed state subject to a given symmetry (a 'sym-SRE' mixed state, Sec.II). Our primary aim was to identify 'many-body separability transitions' as a function of an appropriate tuning parameter (e.g. decoherence rate or temperature) across which the nature of the mixed state changes qualitatively - on one side of the transition the mixed state is sym-SRE, and on the other side it is sym-LRE (= not sym-SRE). Analogous phase diagrams for intrinsic topological orders subject to local decoherence [18; 19; 20; 21; 22] were recently studied in Ref.[17]. Our general approach was to first seek constraints that imply that a mixed state is necessarily long-range entangled, and absent such constraints, we developed tools to find the regime where a mixed state can be shown to be sym-SRE. One of the tools that allowed us to make progress was the observation that local decoherence converts ground states of several SPTs, e.g. cluster states in various dimensions, to Gibbs states.
In the context of SPTs subjected to local decoherence, we focussed on cluster states in various dimensions and obtained their'separability phase diagram' as shown in Fig.1. As evident from the figure, the phase diagram gets progressively richer as one moves up in spatial dimensionality. The paths solely along the \(x\) and \(y\) axes in these phase diagrams correspond to the special case of 'average SPT' mixed states where one of the symmetries is exact while the other is average [26; 27; 28; 29; 30]. It is crucial to note that although the decohered mixed state takes a Gibbs form, the corresponding partition function is _not_ singular at any non-zero temperature for any of these cluster states. The different phases in Fig.1 arise _only_ because we are requiring that the density matrix be expressible as a convex sum of pure, symmetric states. Therefore, these transitions are conceptually distinct from thermal phase transitions, and are more akin to 'complexity phase transitions' for the mixed state, when a symmetry is enforced. We briefly discussed relation with other approaches to classifying mixed-state SPTs [26; 27; 28; 29; 30] in Sec.VII.
It is also interesting to contrast the symmetry-enforced separability transitions in decohered 2d and 3d cluster states with decoherence induced separability transitions in 2d and 3d toric codes, studied in Ref.[17]. In both
cases, one finds the appearance of the same statistical mechanics models (e.g. RBIM in 2d). This similarity can be traced to the fact that the ground states of toric codes can be obtained from the ground states of the cluster states by performing appropriate projective measurements [87, 88, 89, 90], along with the equivalence between local and thermal decoherence for cluster states (this statement holds true also for the fractonic X-cube model [91] and its parent cluster state [89]).
We also studied non-stabilizer topological states subjected to local decoherence. In particular, for free fermion chiral states corresponding to a \(p+ip\) superconductor, we argued that if the quantum channel responsible for decoherence breaks the fermion parity, the resulting Gibbs state can be expressed as a convex sum of non-chiral states, and is therefore SRE at any non-zero decoherence rate (Sec.V). We also studied a case where the channel respects the fermion parity and identified a mixed state phase transition as a function of the decoherence rate using the double-state formalism. This transition can be thought of as corresponding to spontaneous breaking of the fermion parity, and as far as we know, does not have a pure-state counterpart. Intuitively, in a pure-state context, breaking fermion parity spontaneously essentially requires assigning a non-zero expectation value to fermionic operators, which is unphysical. In contrast, in the context of a mixed state, breaking fermion parity spontaneously means that the environment can exchange fermions with the system'spontaneously', which is not unphysical (in the double-state formalism, this corresponds to assigning non-zero expectation value to the bosonic order-parameter \(\eta\,\gamma\) where \(\eta\) and \(\gamma\) respectively denote the fields corresponding to bra and ket Hilbert spaces).
We also analyzed separability transitions in the Gibbs state of the quantum Ising model and argued that the Gibbs state is SRE at any non-zero temperature, and sym-SRE only for \(T>T_{c}\), where \(T_{c}\) is the critical temperature corresponding to the spontaneous symmetry breaking (Sec.III). We expect similar results to hold for other models whose Gibbs state shows a spontaneous breaking of zero-form discrete symmetry.
Finally, in Sec.VI, we provided a short argument that the Gibbs states of Hamiltonians that satisfy the NLTS (No Low-Energy Trivial States) condition [47] must exhibit a separability transition at a non-zero temperature.
In the rest of this section, we discuss various aspects of our results and motivate questions for further exploration.
**1. SPT and chiral states:** The technique we used to study phase diagrams of various cluster states relied on the fact that the quantum channel resulted in Gibbs states (Eqs.(5),(6)). It is not obvious how to generalize it to other SPT states. On that note, the following \(\mathbb{Z}_{N}\) generalization may be helpful to study \(\mathbb{Z}_{N}\) cluster states and topological orders produced by partial measurement of such states. Let us consider a commuting projector Hamiltonian of the form \(H=\sum P_{i}\), where the \(P_{i}\) are projectors (\(P_{i}^{2}=P_{i}\)) written as \(P_{i}=\frac{1}{N}\sum_{n=0}^{N-1}h_{i}^{n}\), with \(h_{i}^{N}=1\). Let us now introduce the following set of Kraus operators on each site \(i\): \(K_{1}(i)=\sqrt{1-p}\mathds{1},K_{2}(i)=\sqrt{\frac{p}{2}}K(i),K_{3}(i)=\sqrt{\frac{p}{2}}K^{\dagger}(i)\), where \(K^{\dagger}(i)K(i)=K(i)K^{\dagger}(i)=\mathds{1}\), and \(K(i)\) are clock operators that satisfy \(K(i)h_{i}K^{\dagger}(i)=e^{2\pi i/N}h_{i},K^{\dagger}(i)h_{i}K(i)=e^{-2\pi i/N}h_{i}\). One may then verify that the application of this channel on all sites again results in a Gibbs state for \(H\).
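The algebraic ingredients of this \(\mathbb{Z}_{N}\) construction are easy to check numerically. In the sketch below, \(h\) is realized as a clock matrix and \(K\) as the conjugate shift (one possible realization, not fixed by the text); the final loop verifies that the channel merely damps each sector \(h^{n}\) by a real factor, which is the mechanism that produces Gibbs-like weights when the channel acts on all sites.

```python
import numpy as np

N = 4                                      # Z_N example; any N >= 2 works
w = np.exp(2j * np.pi / N)

# Clock matrix h (h^N = 1) and a unitary K realized as the conjugate shift, chosen so
# that K h K^dag = w h and K^dag h K = w^{-1} h (one convenient realization).
h = np.diag(w ** np.arange(N))
S = np.roll(np.eye(N), 1, axis=0)          # shift: S|k> = |k+1 mod N>
K = S.T

Id = np.eye(N)
assert np.allclose(np.linalg.matrix_power(h, N), Id)
assert np.allclose(K @ K.conj().T, Id)
assert np.allclose(K @ h @ K.conj().T, w * h)
assert np.allclose(K.conj().T @ h @ K, h / w)

# Projector built from h as in the text: P = (1/N) sum_n h^n satisfies P^2 = P.
P = sum(np.linalg.matrix_power(h, n) for n in range(N)) / N
assert np.allclose(P @ P, P)

# Kraus operators of the proposed channel; check trace preservation.
p = 0.1
kraus = [np.sqrt(1 - p) * Id, np.sqrt(p / 2) * K, np.sqrt(p / 2) * K.conj().T]
assert np.allclose(sum(A.conj().T @ A for A in kraus), Id)

# The channel damps each "charge sector" h^n by a real factor.
def channel(Op):
    return sum(A @ Op @ A.conj().T for A in kraus)

for n in range(N):
    hn = np.linalg.matrix_power(h, n)
    factor = (1 - p) + p * np.cos(2 * np.pi * n / N)
    assert np.allclose(channel(hn), factor * hn)
print("Z_N clock-channel identities verified")
```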
It might also be interesting to study 'intrinsically mixed' SPT states introduced in Refs.[26, 27] from the point of view of separability. These are SPT states that can exist only in the presence of decoherence. Conversely, it will be interesting to understand our results on non-trivial mixed SPTs protected by higher-form symmetries, such as 2d and 3d cluster state, using the techniques in Refs.[26, 27] which primarily focussed on zero-form symmetry SPTs.
In the context of chiral states, we studied a phase transition driven by a channel where the Kraus operators were Majorana bilinears (Sec.V.3). We analyzed this problem only using the double-state formalism. As suggested by the problem of decoherence in toric code, the double state is likely to overestimate the threshold for the actual transition, and it will be interesting to find a description of the aforementioned transition in \(p+ip\) SC directly in terms of the separability properties of the mixed state.
One important subtlety we would like to point out is that we assumed periodic boundary conditions in our discussion of the SPT and chiral states. If instead one considers open boundaries such that the boundaries do not break the symmetry responsible for non-trivial SPT/chiral topological order, then the pure (non-decohered) state is always LRE, e.g., due to propagating edge modes or topological order at the boundary. In the presence of decoherence, our naive expectation is that the resulting mixed state is not sym-SRE, even if the decoherence breaks the symmetry from exact down to average. It will be interesting to study this aspect in the future.
**2. Symmetry broken states:** The first example we discussed, primarily to illustrate the distinction between SRE and sym-SRE states, was the Gibbs state of the transverse-field Ising model in any dimension (Sec.III). We discussed an explicit decomposition of this state at a non-zero temperature as a convex sum of pure states which we argued are SRE at any non-zero temperature. This conclusion is consistent with numerical results on Renyi negativity [24], and mean-field arguments [23, 25]. On the other hand, if one imposes the Ising symmetry on the pure states into which the Gibbs state is being decomposed, we adopted an argument from Ref.[21] to show that these pure states must be long-range entangled for \(T<T_{c}\). This implies that the Gibbs state is sym-LRE for \(T\leq T_{c}\). In contrast, for \(T>T_{c}\) we provided an explicit sym-SRE decomposition of the Gibbs state. The basic idea of the argument is to write \(e^{-\beta H}\) as \(\sum_{\phi}e^{-\beta H/2}|\phi\rangle\langle\phi|e^{-\beta H/2}\), where \(\{\phi\}\) are chosen as a complete set of states in the \(z\) (\(x\)) basis if one wants to expand the Gibbs state as a sum of SRE (sym-SRE) pure states.
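For a small transverse-field Ising chain this decomposition can be constructed explicitly, as in the sketch below; the system size, couplings, and temperature are arbitrary illustrative choices, and the sketch only verifies the algebraic identity, not the short-range-entangled nature of the resulting pure states.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_on(site, op, n):
    return reduce(np.kron, [op if s == site else I2 for s in range(n)])

# Small transverse-field Ising chain (periodic), used only as an illustration.
n, g, beta = 4, 0.8, 1.5
H = -sum(op_on(j, Z, n) @ op_on((j + 1) % n, Z, n) for j in range(n)) \
    - g * sum(op_on(j, X, n) for j in range(n))

rho = expm(-beta * H)
rho /= np.trace(rho)
sqrt_gibbs = expm(-beta * H / 2)

# Decompose the Gibbs state over the computational (z) basis:
#   rho = sum_phi p_phi |psi_phi><psi_phi|,   |psi_phi> ~ e^{-beta H/2}|phi>.
d = 2 ** n
states, weights = [], []
for phi in range(d):
    v = sqrt_gibbs[:, phi]                 # e^{-beta H/2} |phi>
    w = np.vdot(v, v).real                 # unnormalized weight <phi| e^{-beta H} |phi>
    states.append(v / np.sqrt(w))
    weights.append(w)
weights = np.array(weights) / sum(weights)

recon = sum(p * np.outer(v, v.conj()) for p, v in zip(weights, states))
assert np.allclose(recon, rho)
print("Gibbs state reconstructed as a convex sum over e^{-beta H/2}|phi>")
```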
There are several open questions along this direction. Firstly, the argument we provided for the aforementioned pure states being (sym-)SRE is not mathematically rigorous. To explicitly show that a state is SRE, one needs to construct a finite-depth circuit that prepares it starting from a product state. We only provided arguments in the continuum limit that the pure states under consideration have short-range correlations. It will be worthwhile to study the entanglement structure of the pure states we claimed to be SRE using numerical methods (e.g. quantum Monte Carlo), or using a detailed field-theoretic analysis. Secondly, as we discussed, the transverse field Ising model in \(d\geq 2\) must exhibit a separability transition from a sym-LRE to sym-SRE as a function of temperature. It will be interesting to study the symmetry-resolved negativity [92] to quantify the nature of long-range entanglement across this transition. Finally, our arguments only apply to Gibbs states that break a discrete symmetry spontaneously, and it will be interesting to consider generalization to systems with spontaneously broken continuous symmetries that host Goldstone modes at a non-zero temperature.
**3. Experimental and numerical implications:** It is interesting to contemplate experimental implications of a symmetry-enforced separability transition. One perspective is that symmetry resolved versions of mixed-state entanglement measures such as entanglement negativity or entanglement of formation, that are specifically designed to quantify the lack of separability, would likely experience a singularity across such a phase transition. For example, for the Gibbs state \(\rho\) of the transverse-field Ising model (Sec.III), one can in principle prepare the states \(\rho_{\pm}=P_{\pm}\rho\) where \(P_{\pm}\) are the projectors onto the even and odd sectors of the Ising symmetry. This can be done, e.g., by entangling an ancilla qubit with the system qubits sequentially using CNOT gates, and by measuring the ancilla qubit at the end. As discussed in Sec.III, the resulting mixed state (i.e. \(\rho_{+}\) or \(\rho_{-}\), depending on the outcome of the measurement on the ancilla) will show long-range mixed-state entanglement for \(T<T_{c}\), in contrast to the original density matrix \(\rho\), which will be short-ranged entangled for any \(T>0\). The long-range entanglement of \(\rho_{\pm}\) can in principle be quantified experimentally using the Renyi negativity [93].
One may also imagine a very patient, gedanken experimentalist who has access to local unitary gates with a finite fidelity, so that they have the ability to prepare only an ensemble of SRE pure states (i.e. pure states preparable with a constant depth unitary). If so, then a separability transition from an SRE to an LRE mixed state is equivalent to the transition from success to failure in preparing the ensemble corresponding to the mixed state. One may similarly characterize a transition from a sym-SRE to a sym-LRE by putting symmetry constraints on the local gates that form the circuit.
Perhaps a more practical implication of our results is that it may allow efficient classical simulation of a class of mixed states. For example, in the context of Gibbs state of the quantum Ising model, we argued that it admits a convex decomposition in terms of SRE pure states at any non-zero temperature if one does not impose any symmetry constraint on the pure states. Since SRE states are typically easier to study using classical simulations, such a representation allows one to separate the "classical hardness" Vs "quantum hardness" in simulating a mixed state, analogous to the METTS algorithm [56]. In contrast, if one tries to prepare the Gibbs state of the quantum Ising model starting with a _product state_ (assisted with ancillae), then long-range correlations below \(T_{c}\) imply that one necessarily requires a deep quantum circuit [94].
On a different note, one way to prepare an ensemble of pure states that may show a mixed-state separability phase transition is via a judicious combination of unitaries and measurements [21; 34; 35; 87; 88; 89; 90; 95]. For example, Refs.[34; 35] provide a construction of mixed states that are closely related to the mixed states discussed in Sec.II, and which have also been implemented in a recent experiment (Ref.[96]). It will be interesting to design experiments that probe the phase diagram in Fig.1 using similar ideas, although we suspect it may be comparatively more challenging as it requires measuring non-local observables supplemented with appropriate decoding scheme [34].
###### Acknowledgements.
The authors thank Dan Arovas, Tim Hsieh, John McGreevy, Bowen Shi for helpful discussions, and Tsung-Cheng Lu, Shengqi Sang, William Witczak-Krempa for helpful comments on the draft. TG is supported by the National Science Foundation under Grant No. DMR-1752417. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
|
2303.09393 | A Deep Dive into NFT Whales: A Longitudinal Study of the NFT Trading
Ecosystem | NFT (Non-fungible Token) has drastically increased in its size, accounting
for over \$16.9B of total market capitalization. Despite the rapid growth of
NFTs, this market has not been examined thoroughly from a financial
perspective. In this paper, we conduct methodical analyses to identify NFT
market movers who play a significant role in potentially manipulating and
oscillating NFT values. We collect over 3.8M NFT transaction data from the
Ethereum Blockchain from January 2021 to February 2022 to extract trading
information in line with the NFT lifecycle: (i) mint, (ii) transfer/sale, and
(iii) burn. Based on the size of held NFT values, we classify NFT traders into
three groups (whales, dolphins, and minnows). In total, we analyze 430K traders
from 91 different NFT collection sources. We find that the top 0.1\% of NFT
traders (i.e., whales) drive the NFT market with consistent, high returns. We
then identify and characterize the NFT whales' unique investment strategies
(e.g., mint/sale patterns, wash trading) to empirically understand the whales
in the NFT market for the first time. | Na Hyeon Park, Hanna Kim, Chanhee Lee, Changhoon Yoon, Seunghyeon Lee, Youngjin jin, Seungwon Shin | 2023-02-02T10:26:12Z | http://arxiv.org/abs/2303.09393v1 | # A Deep Dive into NFT Whales: A Longitudinal Study of the NFT Trading Ecosystem
###### Abstract.
NFT (Non-fungible Token) has drastically increased in its size, accounting for over $16.9B of total market capitalization. Despite the rapid growth of NFTs, this market has not been examined thoroughly from a financial perspective. In this paper, we conduct methodical analyses to identify NFT market movers who play a significant role in potentially manipulating and oscillating NFT values. We collect over 3.8M NFT transaction data from the Ethereum Blockchain from January 2021 to February 2022 to extract trading information in line with the NFT lifecycle: (i) mint, (ii) transfer/sale, and (iii) burn. Based on the size of held NFT values, we classify NFT traders into three groups (whales, dolphins, and minnows). In total, we analyze 430K traders from 91 different NFT collection sources. We find that the top 0.1% of NFT traders (i.e., whales) drive the NFT market with consistent, high returns. We then identify and characterize the NFT whales' unique investment strategies (e.g., mint/sale patterns, wash trading) to empirically understand the whales in the NFT market for the first time.
+
Footnote †: *Both authors contributed equally to this research.
The findings of this longitudinal measurement study have given us a lesson regarding the NFT market. Our study has shown that NFT whales have been powerful enough to manipulate the entire market. In other words, the NFT market itself is still immature and unstable. Furthermore, we also observe that the whales have been abusing weaknesses of the NFT market to inflate the value of the NFTs they own. In order to establish a mature and stable NFT market, the existing regulatory and legal environments must adapt to include NFT. It is also recommended to implement security countermeasures to protect the market from any abusive behaviors.
## 2. Background
**Non-fungible token.** An _NFT_ is unique and irreplaceable, and its value is decided individually depending on demand. An NFT is a digital asset that represents objects like art, video, in-game entities, etc. NFTs are stored on a blockchain, with Ethereum being the representative choice. NFTs are categorized into a _collection_, which refers to a set of NFTs. Commonly, NFTs in the same collection share common features. For example, one of the most popular collections, _Bored Ape Yacht Club (BAYC)_, consists of 10,000 ape tokens. The price of NFTs varies widely even within the same collection, although the range varies depending on the collection. **NFT life-cycle.** Due to the anonymous nature of cryptocurrency wallets, we define a unique wallet address as a _trader_. Here, we detail how an NFT transaction operates, which consists of three steps: (i) mint, (ii) transfer/sale, and (iii) burn. _Mint_ refers to the process of offering newly created tokens to the public. Generally, NFT creators post the detailed schedule and price on various channels, such as Discord, and minting is then conducted by a smart contract on a blockchain. _Transfer_ is an act of simple value transfer which only passes ownership of NFT tokens to another wallet. _Sale_ is the process of transferring ownership of NFTs for a price, and is commonly held in NFT markets (Bauer et al., 2017; Bauer et al., 2017). _Burn_ refers to the method of removing ownership of NFT tokens on purpose. Once an NFT is burned, no one is able to regain control of it.
**Classifying NFT traders.** There has been no official term for traders (individuals or entities) that hold differing amounts of NFTs. Thus, we consider each wallet as an individual _trader_ and classify the traders into three groups according to their holding values of NFTs, for every time range in our dataset:
* _Whale_: top 0.1% of traders by holding value
* _Dolphin_: top 10% of traders by holding value, excluding whales
* _Minnow_: all other traders, excluding whales and dolphins (89.9%)
To clearly identify the trader groups, we define _holding value_ to be used as a concrete criterion for our classification. Estimating the current market price of NFTs may be imprecise because their market values are far more volatile than those of traditional assets. Instead, we believe that the latest trading price of each NFT represents its value well. We define the _holding value of a trader_ as the sum of the last traded prices of the tokens held by that trader.
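As an illustration, holding value can be computed directly from transaction records. The sketch below (pandas assumed, with hypothetical column names such as `price_usd`, `token_id`, and `owner`) takes the last traded price of every token and sums these prices per wallet; it is a minimal reading of the definition above, not the exact pipeline used in this study.

```python
import pandas as pd

def holding_values(sales: pd.DataFrame, holdings: pd.DataFrame) -> pd.Series:
    """Holding value of each trader: sum of last traded prices of the tokens they hold."""
    # Last traded price per token (latest sale wins).
    last_price = (
        sales.sort_values("block_timestamp")
             .drop_duplicates("token_id", keep="last")[["token_id", "price_usd"]]
    )
    valued = holdings.merge(last_price, on="token_id", how="left").fillna({"price_usd": 0.0})
    return valued.groupby("owner")["price_usd"].sum().sort_values(ascending=False)

# Whales are then the top 0.1% of traders by holding value:
# values = holding_values(sales_df, holdings_df)
# n_whales = max(1, int(len(values) * 0.001))
# whales = values.head(n_whales)
```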
To focus on the top holding value traders, we refer to whales and dolphins collectively as _holding value leaders_. The number of traders in each group is shown in Table 1 and is further described in Table 2 of the Appendix.
## 3. Tracing NFT Transactions on the Blockchain
In Ethereum, there are two types of transactions: _external_ and _internal_ transactions. Ether (ETH) transfers between users are recorded as external transactions. External transactions contain information such as the receiver's and sender's addresses and the transferred amount in ETH, all of which is recorded on the blockchain and readily available to anyone for reference.
On the other hand, transferring tokens, such as NFTs or fungible tokens, is a type of an internal transaction, which is not stored on the blockchain. Instead, we can use the _token transfer log_, which is recorded by token contracts when token transferring occurs. By collecting transfer logs from NFT contracts, we can trace how NFT transactions work in the NFT ecosystem.
We collect external transactions and token transfer logs from the Ethereum blockchain to track NFT ownership changes and subsequent payments. More details can be found in Section A.2.
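The sketch below illustrates one way such transfer logs can be turned into ownership information. The column names (`from_address`, `to_address`, `token_id`, `contract_address`, `block_timestamp`) are hypothetical, and recognizing mints and burns via the zero address is a common ERC-721 convention rather than a detail stated in this paper.

```python
import pandas as pd

ZERO_ADDRESS = "0x0000000000000000000000000000000000000000"

def label_transfers(logs: pd.DataFrame) -> pd.DataFrame:
    """Classify token transfer logs into mint, burn, and ordinary transfers."""
    out = logs.copy()
    out["kind"] = "transfer"
    out.loc[out["from_address"] == ZERO_ADDRESS, "kind"] = "mint"
    out.loc[out["to_address"] == ZERO_ADDRESS, "kind"] = "burn"
    return out

def current_owner(logs: pd.DataFrame) -> pd.Series:
    """Current owner of every token: recipient of its latest transfer log."""
    latest = (
        logs.sort_values("block_timestamp")
            .drop_duplicates(["contract_address", "token_id"], keep="last")
    )
    return latest.set_index(["contract_address", "token_id"])["to_address"]

# A transfer counts as a *sale* when it can be matched to an external (ETH payment)
# transaction; otherwise it is treated as a plain transfer.
```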
For the NFT ecosystem, 2021 marks the first year when the market began to grow rapidly with public attention (Krishnan et al., 2020), with NFT trading volumes showing an increase of 21,000% from 2020 (Bauer et al., 2017). As a result, we focus on data with block_timestamp from the first day of 2021 to February 28, 2022 (14 months). During this collection period, we obtain 3,838,587 transactions for over a million NFT items in total. Also, the number of unique accounts participating in at least one transaction is 430.2K. Our data collection is summarized in Table 1.
## 4. Characteristics of NFT whales
In this section, we characterize NFT whales through a deep analysis of their behavior patterns such as unique trading methods for NFT items and portfolio management.
### Changes in Whales' Composition
The NFT market has changed dramatically over the past 14 months with the influx of new traders and the advent of various types of new collections. In this rapidly changing market, _how do traders become whales?_ To answer this question, we study how traders move between groups over the 14 months, especially focusing on whales. Figure 1 shows the changes in the composition of the whale group until February 2022.
In the beginning, since the size of the whale group is small, the composition of the group is highly volatile as the market size increases. However, from June 2021, more than 80% of whales remain in the whale group of the following month, except in August 2021. August is when
\begin{table}
\begin{tabular}{l r} \hline \hline
**Type** & **Count** \\ \hline NFT & 1,129,6967 \\ Transaction & 3,838,587 \\ Account & 3,086,046 \\ Whale & 430 \\ Dolphin & 42,593 \\ Minnow & 387,204 \\ \hline Total accounts & 430,277 \\ \hline \hline
**Period** & January 1,2021 - February 28, 2022 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Summary of data collection. The accounts are divided by whale, dolphin, minnow (February, 2022)
the most significant change in the whale group occurs. The surge of public interest at that time (Kumar et al., 2017) caused the largest number of new traders compared to the previous months. To understand how traders become whales and how they maintain their status, we investigate the characteristics according to the whales' origin as follows. The percentage of each group where the whales belonged to is summarized in Table 4 of the Appendix.
**Whales from _whales_.** Whales maintain their status in various ways: buying, receiving, and minting. In the first half, more than half of whales buy NFTs in order to increase their holding value. However, in early 2022, whales who mainly receive tokens grow in number. Meanwhile, some whales actively mint NFTs. For example, one whale minted a tremendous number of tokens in several collections and sold them to dolphins and minnows. There also exist whales who were inactive for more than a few months. Their extremely expensive tokens let them stay in the whale group.
**Whales from _dolphins_.** As the size of the whale group increases, it is natural for dolphins to become whales. In August, when the market size grows most rapidly, dolphins take up 39% of the new whale group. Dolphins raise their holding value in various ways like whales from _whales_. However, in early 2022, the percentage of new whales from dolphins decreases to 15%.
**Whales from _minnows_.** We rarely observe whales who formerly belonged to _minnows_. They increase their NFT holding value through buying or minting. They barely receive tokens from other traders, which indicates that they do not form particular relationships with other traders. This is completely different from first-time trading whales, which are described in detail below.
**First-time trading whales.** There are a lot of traders who become whales as soon as they begin participating in the NFT market. When the size of the market increases rapidly, whales from this group sometimes outnumber the whales from the dolphin group. They usually become whales through buying or receiving, but many of them become whales only through the act of receiving tokens. Usually, the tokens they receive are NFTs of popular collections with high trading volume (e.g., _CryptoPunks_). Interestingly, most of the NFTs they received come from the former whales, implying a close relationship between them.
**Findings and Insight.** Although the NFT ecosystem is rapidly growing, whales maintain their place firmly. In addition, newly emerging whales are often associated with former whales. This clearly shows that it is hard for minnows to become whales.
### Whales' Portfolio Holdings
Common participants in the NFT market pursue financial gains. In this regard, examining their portfolio from various angles is highly valuable for measuring significant financial factors on the
Figure 1. Changes in the group of traders in _whale_ from January 2021 (t = 1) to February 2022 (t = 14). The top and bottom only depict traders that have belonged to the _whale_ group at least once, until August and February 2022, respectively. Transitions from one color to another depict a quantity of traders with group changes.
Figure 2. Price distribution of tokens held by each group. Dark-blue, sky-blue, olive each represents holdings of whales, dolphins, and minnows.
NFT ecosystem. In this section, we examine the whales' NFT portfolio, preferences, and dominance in NFT collections.
**Top collection holdings.** We begin by identifying the top 10 collections that whales mostly hold. Interestingly, we discover that the majority of the collections in the top 10 are popular collections with large trading volume. The whales' most held collection is _Art Blocks_ (9.6K tokens held). Notably, whales hold 1.6K _CryptoPunks_ tokens, which accounts for 16% of the total tokens in this collection. The number of tokens held by whales in other collections can be found in Table 3 of the Appendix.
**Price distribution.** To look into the price of each group's holdings, we divide the token price into six price ranges, from $0 to the maximum price in our data, which is $23.3M. Figure 2 describes the price distribution of tokens that each group held on February 28, 2022. Figure 2(a) shows the distribution of tokens over all 91 collections. In addition, to look at the distribution within a collection, Figure 2(b) shows the holding range of the most popular NFT project, _CryptoPunks_. Note that the price distribution of popular collections (e.g., _BAYC_, _Art Blocks_) also resembles the distribution of _CryptoPunks_. Interestingly, almost all of the tokens with prices over $10M are in the hands of whales. Whales also hold most of the tokens in the price range of $1M to $10M across all collections. On the whole, Figure 2 indicates that whales are the only holders of the high-price tokens, both for overall collections and the top collections.
**Findings and Insight.** Generally speaking, price of goods in a market is an indicator of its 'value'. The result obtained in this section suggests that whales gain dominance over almost all highly valuable tokens.
## 5. Whales' Impact on the NFT Market
To examine how influential whales are in the NFT ecosystem, we analyze the whales' trading behavior in terms of their impact on market sentiment and traders.
In Figure 3, we observe that the average number of transactions by whales is overwhelmingly large. We observe larger fluctuations in the trading activities of whales than dolphins and minnows. To understand the whales' behavior, we study how they respond to real-world events related to NFT.
**Impact on market volatility during liquidation.** Although the number of whales makes up a small portion of all traders, whales impact market volatility when liquidating NFT collections. Liquidation by whales has drastically increased market volatility. We discuss important events (E1-E3) in which the whales had a huge impact on the market.
First, we observe a rapid but short-lived peak in February 2021 in all groups. With the growing popularity of one collection, _Hashmasks_[(10)], traders buy a huge number of tokens from the collection, which accounts for more than half of their purchases.
**(E1)** However, in April, NFT market conditions cooled abruptly. Traders in all groups barely participated in transactions due to a sharp decline (nearly 70%) in the average price of NFTs [(22)]. In fact, before the market crash, we observe whale activities drop to their lowest point in March. This implies that the decrease in whale activities has had a big impact on the NFT market sentiment.
**(E2)** In May, we observe an uplift in the number of sales only in the transaction graph for whales. In particular, the whales sell tokens that they minted in this month. Interestingly, 95% of such tokens are from two new collections, _Meebits_ and _Bored Ape Yacht Club (BAYC)_. _Meebits_ received the spotlight even before launch, since it is made by the creators behind _CryptoPunks_. On the other hand, _BAYC_ suddenly gained popularity due to a large quantity of whales' mintings [(23)]. This promoted the sales of _BAYC_ tokens, especially in the dolphins and minnows group. In fact, _BAYC_ accounts for 40% and 54% of dolphins' and minnows' sales, respectively.
**(E3)** We observe a steady increase in overall sales volume in July 2021, which increases dramatically by August. This is related to a surge in price of _CryptoPunks_ caused by whales. One day in August, a whale bought over 100 _CryptoPunks_ NFTs worth more than $6M in total. Since then, the whales' purchase of the collection continued. Such whale activity sparked public interest in the NFT market, resulting in the influx of new traders [(14)].
Except for specific periods in which whales actively liquidate their assets, whales tend to transfer (specifically, the act of receiving) instead of selling assets on the market. They prefer to hold assets relatively longer compared to dolphins and minnows. This is discussed in more detail in Section 6.1.
**Leading investment trends.** Whales have actively participated in _mint_ as early stage investors. Also, they prefer to invest in specific collections that they are interested in, while other groups participate in a variety of collections. Here, we uncover several remarkable events as follows.
Figure 3. Monthly statistics of each transaction type of each group on average. (stacked plot)
In January 2021, whales mint about four times more than they perform any other transaction type, and most of the minted tokens are _Hashmasks_. Their mintings surged again in May, due to _Meebits_ and _BAYC_ as mentioned in E2. In the later months including August, some collections (e.g., _Punks Comic_) embed governance rights on tokens (a.k.a. DAO tokens), which increased the demand for minting. This suggests that whales are interested in exercising governance rights on a collection.
NFT market capitalization is increasing due to the rise in the number of collections and traders. However, we see a drop in practical trader activities compared to the early months. In fact, only 20% of minnows participate in trading in the last month (see Table 5 in the Appendix). This implies that the NFT market is mainly operated by whales and dolphins. In addition, we cannot observe any noticeable increase in whale transactions after August, which suggests the lack of emergence of new influential collections (e.g., comparable to _CryptoPunks_). Despite the large number of collection launches, only a few major collections account for the majority of the NFT market capitalization (Han et al., 2017). Therefore, holding major collection NFTs allows the whales to maintain their position as whales.
**Findings and Insight.** The NFT market is predominantly being driven by whales. Whales have distinct trading patterns compared to that of other groups; whales have a huge impact on the market and often alter market sentiment.
## 6. Deep-Dive Into Whales' Investing Strategies
In this section, we discuss several investment strategies that whales utilize to achieve high financial gains. We discover three strategies: (i) minting or buying expensive NFTs and holding them for a long time until liquidation, (ii) intensively investing during the minting period, and (iii) taking advantage of self-trading.
### Long-term Investment
In a market, the value of a token is decided by its price; valuable tokens are limited but take up a major portion of the market cap. Therefore, we define and track _most-valuable NFTs_ as the top 1% most expensive NFTs from each collection. By accumulating such tokens, we obtain 14,192 most-valuable NFTs.
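A minimal sketch of this selection, assuming a pandas table of sales with hypothetical `collection`, `token_id`, `price_usd`, and `block_timestamp` columns, is shown below; it is only an illustration of the per-collection 1% cutoff described above.

```python
import pandas as pd

def most_valuable_tokens(sales: pd.DataFrame) -> pd.DataFrame:
    """Top 1% most expensive tokens of each collection, by last traded price."""
    last_price = (
        sales.sort_values("block_timestamp")
             .drop_duplicates(["collection", "token_id"], keep="last")
    )
    # Per-collection 99th percentile of last traded prices.
    cutoff = last_price.groupby("collection")["price_usd"].transform(lambda s: s.quantile(0.99))
    return last_price[last_price["price_usd"] >= cutoff]
```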
To investigate the strategies whales use on the most-valuable NFTs, we begin by calculating how long each group holds the tokens before selling them. Figure 4 shows the holding time of each group on most-valuable NFTs. Here, holding time is the duration between the last buy/receive time of each token and the last day of our data (February 28, 2022). As clearly shown from the graph, whales are likely to hold tokens longer than any other group. It turns out that some whales even hold tokens for the entire period of our data. On the other hand, for minnows, the peak in the left range of the graph is due to their relatively short holding period on collections that have low average token price. While the holding time distributions of minnows and whales are very distinct from each other, the distribution of dolphins resemble that of whales. Hence, the graph indicates that holding value leaders tend to hold the valuable tokens longer.
These observations bring about a question related to the liquidity of high-value tokens; do the other traders have any chance at acquiring such tokens? To answer this, we track the sale history of these tokens. Figure 5 illustrates the number of sales by whales on most-valuable NFTs. The graph clearly shows that the number of buys across all months heavily outweighs the number of sells. This indicates that high-value NFTs have low liquidity.
These two strategies are likely to be long-term tactics by whales to wait for valuable tokens to reach their desired price. Indeed, the results suggest that whales consistently collect most-valuable NFTs but barely sell them for a long duration.
**Findings and Insight.** Whales are patient; they invest by holding valuable NFTs for a long period and do not sell them to other traders. NFT traders must be aware of low-liquidity of high-value tokens caused by whales.
### Deliberate Investment During the Minting Phase
We find out that whales take advantage of minting via unique methods. They use a number of strategies to raise the price of tokens they minted. We describe each strategy throughout this section.
**Strategy 2-A.** The first strategy is related to the number of transfers between the first minting period and the first sale. Around 10% of tokens minted by each group are transferred at least once before the first sale. To compare each group's transfer tendency, we look into those tokens in detail. Figure 6 shows the number of transfers between the mint and the first sale, where the number is shown
Figure 4. The holding time of most-valuable tokens per group
Figure 5. Buy/sell plot by whales on most-valuable tokens (stacked plot)
in percentage. As shown in the figure, a large portion of NFTs minted by whales are transferred many times (up to 25 times). 72.0% of tokens minted by whales are transferred at least two times whereas the percentages are much smaller in dolphins and minnows. Furthermore, 13.8% of tokens minted by whales go through transfer at least 5 times.
**Strategy 2-B**. Another noticeable strategy relates to the time duration before the first sale. This can be seen from Figure 7, where the time duration is divided into four ranges: _within a day_, _a day to a week_, _a week to a month_ and _more than a month_. Surprisingly, whales tend to wait for long periods of time before the first sale; 82.6% of tokens take more than a week and 58.5% take more than a month. In contrast, dolphins and minnows wait less and the majority of tokens are sold within a week for both groups.
**Strategy 2-C**. This strategy is used by some of the minters. Usual sales after minting do not involve the address (i.e., minter) that minted the token. However, we find that some minters later receive back the token that they minted. To give an instance that occurred in September 2021, the minter first minted the token and sold it to another wallet address for 6.25 ETH. Then, the token was transferred back to the minter. Finally, the minter sold the token to another trader for 6 ETH. Indeed, we find that 13.7K tokens are involved in this type of transaction pattern. Whales and dolphins are involved in 10.4K tokens which accounts for 77.4% of such tokens. This is a significant number considering the fact that usually the total number of tokens for a collection is around 10K.
All three strategies are effectively utilized to maximize the profit of whales. We find that the profit of the first sale is proportional to all of these strategies. Details of these profits can be found in Tables 6 and 7. Further details on mint profits from whales are discussed in Section 7.2.
**Findings and Insight**. Whales use their own special strategies to maximize the profit of tokens they minted.
### Wash Trading
Wash trading is a collusion by the buyer and the seller to artificially inflate the trading volume of an asset. They repeatedly trade their assets between them, which results in cycles in the sale graph, where nodes are traders and edges are sales between them. Therefore, wash trades are captured by finding strongly connected components (SCCs) in each token sale graph (Han et al., 2017). Indeed, multiple unconditional token transfers can be used in trading malpractices to avoid monitoring (Brandt et al., 2017; Han et al., 2017). Thus, we construct each NFT transaction graph, where nodes are traders and edges are transactions between them, and find SCCs in each graph that include at least one sale.
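A simplified sketch of this detection idea using networkx is given below; the field names are hypothetical, self-trades are ignored, and the per-token graph construction is a simplification of the procedure described above rather than the exact detection pipeline.

```python
import networkx as nx

def wash_trading_components(token_transactions):
    """Return suspicious strongly connected components for one token's transactions.

    `token_transactions` is assumed to be an iterable of dicts with
    'from_address', 'to_address', and 'is_sale' keys (hypothetical names).
    """
    g = nx.DiGraph()
    sale_edges = set()
    for t in token_transactions:
        g.add_edge(t["from_address"], t["to_address"])
        if t["is_sale"]:
            sale_edges.add((t["from_address"], t["to_address"]))

    suspicious = []
    for component in nx.strongly_connected_components(g):
        if len(component) < 2:
            continue  # a lone trader cannot form a trading cycle with others
        # Keep components that contain at least one sale between their members.
        if any(u in component and v in component for (u, v) in sale_edges):
            suspicious.append(component)
    return suspicious
```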
We detect 3,558 instances of wash trading in 1,676 NFTs across 82 out of 91 collections, involving 3,676 traders in our data. Note that since the collected NFT transaction data differs for each period and each collection, the number of wash trading instances can differ from the work done by Victor et al. (Han et al., 2017). Surprisingly, the holding value leaders perform 69% of the wash trading. Furthermore, 42 whales, which make up 10% of all whales, are involved. We find that popular collections with high trading volume (e.g., _CryptoPunks_, _BAYC_) are also subject to manipulations done by the holding value leaders.
To understand when and why wash trading occurs over time, we locate SCCs in each token transaction graph from the monthly transaction records. Figure 8 shows the number of wash trading instances over time. First, we observe a noticeable peak in October. Wash trading for _The n project_ started in September (when it was launched) and recorded 506 cases in October. We confirm that the average price of the traded collection tokens sharply rises whenever wash trading is involved, as shown in Figure 12 of the Appendix.
This is a common phenomenon in newly launched collections. In order for a newly-launched collection to be verified by _Opensea_ (the largest NFT marketplace), the trading volume must be over 100 ETH (Brandt et al., 2017). For this reason, many traders are tempted to perform wash trading. Similarly, we can see an increase in the number of wash trading instances in August 2021, when many new collections emerged.
Figure 8. Number of wash trading instances(stacked plot)
Figure 6. Number of transfers between _mint_ and first _sell_
Figure 7. Time duration between _mint_ and first _sell_
The number of wash trading instances shows another small peak in January 2022, which is related to the policy of LooksRare (2021), a newly-launched NFT marketplace. It rewards traders with valuable tokens according to each trader's trading volume, which lures traders into wash trading.
**Findings and Insight.** Whales and dolphins actively participate in wash trading to receive verification of newly-launched collections in NFT marketplaces or earn rewards by abusing NFT market policy. NFT prices can be driven up and down along with their wash trading. Even popular collections (e.g., _CryptoPunks_, _BAYC_) are not immune.
## 7. Evaluating Investment Performance
In this section, we evaluate the investment performance of each group and discuss how whales achieve high returns compared to other market participants.
### Investment Performance
Figure 9 illustrates the investment performance of each trading group. We divide the profit range on a log scale due to the wide range of profit among traders. Note that we do not include traders who have never participated in profit activity (i.e., zero profit). The trader who gained the maximum profit is a whale ($18.9M) and the trader with the largest loss is a dolphin ($2.7M). The most noticeable observation is that 35.9% of traders who belong to whales make profits larger than $1M, while hardly any such traders exist among dolphins and minnows. Moreover, while 79.8% of whales produced profits greater than $100K, the percentages are much lower in other groups. This suggests that a large fraction of whales produces a considerably higher profit compared to any other group.
**Findings and Insight.** We identify that whales have achieved significant financial gain. A large fraction of whales achieved profit over $1M which shows great contrast with other groups.
### Sources of Financial Gains from Whales
Finally, we scrutinize the source of profit from whales. We divide the profit source into two parts: buy profit and mint profit.
**Buy profit.** This type of profit occurs from ordinary sales, except the very first sale of a token. For profit generated from buy, we focus on the collections that whales take advantage of. Whales gained profit from 74 collections out of 91 in total. Among them, we sort the top collections in which the whales were most profitable. If a collection does not generate any profit for several months (e.g., launching in the middle of the entire period), we only mark zero profit once for graph's visibility.
Figure 10 shows the average profit of whales from the top 7 collections (top 10% of 74 collections) for each month. Overall, the majority of these collections are well-known for their large trading volume. Among them, profits from _CryptoPunks_, _Art Blocks_ and _BAYC_ are consistently large, which indicates that whales continuously gain profit from very popular collections. For the rest of the collections, the peaks in profit are due to the selling of several expensive tokens by a small number of whales. For example, a peak of _ASM AIFA Genesis_ in December 2021 was generated by a single whale that sold 56 tokens.
**Mint profit.** Whales consistently gained mint profit, which is the profit from the first sale just after mint. Surprisingly, although mint profit occurs only once per token, mint profits are comparable to buy profits and are even larger in some months. This is noteworthy as 1) the number of mint logs accounts for 28.2% of total transfer logs in our data (1.1M out of 3.9M) and 2) most collections have a limited number of tokens available for mint (e.g., only 10K tokens for _BAYC_).
Until July 2021, whales sell the tokens that they have minted from a limited number of collections. For example, the mint profit of whales consistently comes from _Hashmasks_ in the first half of the year. However, starting from August, when the number of traders in the NFT market drastically increased, whales start to gain mint profit from a wide range of collections. This is likely due to the launch of many collections after August. Nevertheless, most of the mint profit is obtained from the popular collections mentioned above, e.g., _Art Blocks_, _Meebits_, and _BAYC_, until the last month (February 2022).
Meanwhile, some collections utilize _airdrop_ as a way to advertise themselves during the launch period; they let traders mint the tokens without paying any mint fees (Han et al., 2021). To amplify the effects of advertising, some collections even choose to send their tokens to popular wallet addresses in a unilateral way. Since whale accounts are easily available in NFT platforms (e.g.,NFTGo (Krishnam et al., 2021)), it is likely for whales to receive lots of tokens through airdrop. Therefore, we investigate the profits produced by whales through airdrop.
Figure 11 shows the top 5 collections in which the total profit of whales is the largest. The number of airdrop tokens sold by whales takes up 9 to 20% of the airdrop tokens in these collections. This is a large portion considering the small number of whales and implies that whales actively participate in selling airdrop tokens. Interestingly, the five collections in Figure 11 are very popular collections with large trading volumes. Overall, among the 41 collections from which the whales obtained profit through airdrop, we find that the tokens sold by whales are generally more expensive than the tokens sold by other groups in 36 collections. This indicates two possibilities: 1) whales already have the public's trust in the NFT market and the traders are willing to pay high prices for the tokens minted by whales or 2) whales receive relatively high-value tokens of a collection via airdrop. Still, in either case, it is an undeniable fact that whales have a power in the minting process that can lead to larger profit, which no other group has.
**Findings and Insight.** Whales mainly profit from selling NFTs of popular collections. More importantly, they actively participate in the minting process and sell NFTs at higher prices than any other group. This allows whales to become successful investors.
## 8. Discussion
Throughout this paper, we find out that whales are highly influential traders in the NFT ecosystem. Whales hold a mere 5 percent of all NFT tokens on the Ethereum blockchain, but their worth accounts for nearly 20 percent of the entire NFT market value. In addition, although the NFT market features relatively wide variations in prices, nearly all of the high value items worth more than one million dollars belong to the whales. This means that whales
exclusively own almost all of the dominant NFT items. Considering that a small number of NFT projects with a high market cap (e.g., _CryptoPunks_, _BAYC_, _Doodles_) is bringing success to the NFT ecosystem, to say that the NFT market is being driven by whales is not an overstatement.
In summary, the average profit from whales is far greater than that of the other groups. In addition, the most highly profitable traders (i.e., those who make profits of more than $1M) make up over 35% of all whale traders, while such traders make up under one percent of dolphins and minnows. A large portion of traders from minnows did not make profits, with some even facing considerable losses. This indicates that a small number of whales (note that we only consider the top 0.1 percent of traders as whales) takes most of the rewards for success in the NFT market. In other words, the NFT ecosystem is a harsh environment in which to find success for the majority of the traders involved (non-whales), and stakes are high for the minority traders. Therefore, it is imperative that various studies about the NFT ecosystem are conducted and measures to protect investors are established.
## 9. Related Work
To the best of our knowledge, this is the first longitudinal study of the NFT ecosystem that focuses on identifying and characterizing market movers from a financial point of view. Wang _et al._ (Wang _et al._, 2022) provide an overview of the NFT ecosystem with regard to its technical components, etc. Our work is different in that we analyze how the NFT market operates from a financial standpoint. Dowling _et al._ (2020) and Ante _et al._ (2022) study the correlation between the prices of NFTs and cryptocurrencies, and show that they have a low correlation. Distinct from their works, we focus on the NFT market as driven by traders. Brunet _et al._ (2022) and Nandi _et al._ (Nandi _et al._, 2022) study the basic topological structure of NFT trading networks and show that each is similar to other social networks. Das _et al._ (2022) presents a systematic overview of how the NFT ecosystem works and uncovers potential security issues. Wachter _et al._ (Wachter _et al._, 2022) also examine potentially illicit trading patterns in the NFT market with two detection algorithms and their effect on price. Still, there is no understanding of how malicious behaviors have changed over time, nor an analysis of who performs wash trading.
## 10. Conclusion
Following the rapid growth of cryptocurrencies, NFTs have recently received wide attention from the public. Many participants have successfully traded NFT items with financial gain, leading to the emergence of NFT whales. However, many participants suffer from the lack of a clear viewpoint on the NFT ecosystem. In this research, we perform the first longitudinal study to construct in-depth analysis on the NFT market, focusing on unique activities of market participants including whales and market movements. Specifically, we reveal whales on the NFT ecosystem and discover that they possess unique strategies to maximize their earnings from NFT trades and greatly impact the entire market. Consequently, we believe that our findings and insight in this work shed light on the NFT ecosystem, which has been minimally investigated to date.
|
2310.06198 | Motion Memory: Leveraging Past Experiences to Accelerate Future Motion
Planning | When facing a new motion-planning problem, most motion planners solve it from
scratch, e.g., via sampling and exploration or starting optimization from a
straight-line path. However, most motion planners have to experience a variety
of planning problems throughout their lifetimes, which are yet to be leveraged
for future planning. In this paper, we present a simple but efficient method
called Motion Memory, which allows different motion planners to accelerate
future planning using past experiences. Treating existing motion planners as
either a closed or open box, we present a variety of ways that Motion Memory
can contribute to reduce the planning time when facing a new planning problem.
We provide extensive experiment results with three different motion planners on
three classes of planning problems with over 30,000 problem instances and show
that planning time can be significantly reduced by up to 89% with the proposed
Motion Memory technique and with increasing past planning experiences. | Dibyendu Das, Yuanjie Lu, Erion Plaku, Xuesu Xiao | 2023-10-09T23:01:32Z | http://arxiv.org/abs/2310.06198v2 | # Motion Memory: Leveraging Past Experiences to
###### Abstract
When facing a new motion-planning problem, most motion planners solve it from scratch, e.g., via sampling and exploration or starting optimization from a straight-line path. However, most motion planners have to experience a variety of planning problems throughout their lifetimes, which are yet to be leveraged for future planning. In this paper, we present a simple but efficient method called _Motion Memory_, which allows different motion planners to accelerate future planning using past experiences. Treating existing motion planners as either a closed or open box, we present a variety of ways that Motion Memory can contribute to reduce the planning time when facing a new planning problem. We provide extensive experiment results with three different motion planners on three classes of planning problems with over 30,000 problem instances and show that planning time can be significantly reduced by up to 89% with the proposed Motion Memory technique and with increasing past planning experiences.
## I Introduction
Motion planning refers to the computational process of determining a sequence of control inputs and actions to move a robot from a given start state to a desired goal location while avoiding obstacles and observing system and environment constraints. Motion planners are essential components for almost all robotic applications [1], such as autonomous navigation [2, 3] and manipulation [4, 5]. Therefore, quick, efficient, and optimal collision-free motion planning is of paramount value to the entire robotics community.
Decades of research into motion planning have made significant progress and are able to find motion-planning solutions for different robot platforms, e.g., using Probabilistic Roadmaps (PRM) [6], Expansive Spaces algorithm [7], and Rapidly-Exploring Random Trees (RRT) [8, 9] to move mobile robots or manipulator arms. Nevertheless, these planners still face challenges in complex real-world settings when real-time planning is required to assure fast and reliable motion execution. Conventional motion planners need to plan from scratch every time they encounter a new environment. This situation remains true even when robots repeatedly face similar environments, where prior experiences could be beneficial. Such repetitive planning introduces unnecessary planning time and therefore limits the robot performance in real-world environments where fast planning time can benefit the downstream tasks, such as quickly moving through highly constrained obstacle spaces.
On the other hand, advances in machine learning have demonstrated that robots are capable of learning emergent behaviors in a data-driven manner without depending on heavily engineered attributes and heuristics. One particular benefit of learning methods is the potential to continually improve with increasing real deployment experiences [10], a capability that the classical motion planners lack.
Considering the limitations of classical motion planners and the potential of learning from experiences, we present Motion Memory, a new paradigm based on past planning experiences to guide traditional motion-planning methods when facing new planning problems in order to reduce computational overhead and therefore improve planning efficiency, as robots gather more and more deployment experiences in the real world. Leveraging machine learning, Motion Memory includes an experience augmentation technique and a representation learning method that enable robots to reflect on prior planning experiences for efficient future planning as shown in Fig. 1. To be specific, the experience augmentation strategy automatically generates new planning problems, for which past motion plans are (or are not) the solutions, and thus provides Motion Memory with an extensive corpus of training data to generalize to future planning problems. Motion Memory also utilizes representation learning to enable autonomous robots to learn from augmented previous planning experiences so that motion planners can identify, store, memorize, and retrieve past planning experiences to facilitate motion planning in unseen future environments. We present different ways to integrate Motion Memory with three existing motion planners in three different categories of environments both in a closed and open box manner to showcase the wide applicability and generalizability of the technique. Our experiments demonstrate that Motion Memory significantly reduces the motion-planning time in future unseen environments by up to 89% with increasing deployment experiences.
Fig. 1: Traditional motion planners require a significant amount of effort to plan from scratch (left), such as a large number of samples or iterations (illustrated in green); Motion Memory utilizes past planning experiences to accelerate future planning when facing new planning problems (right). |
2305.02148 | Semi-Supervised Segmentation of Functional Tissue Units at the Cellular
Level | We present a new method for functional tissue unit segmentation at the
cellular level, which utilizes the latest deep learning semantic segmentation
approaches together with domain adaptation and semi-supervised learning
techniques. This approach allows for minimizing the domain gap, class
imbalance, and captures settings influence between HPA and HubMAP datasets. The
presented approach achieves comparable with state-of-the-art-result in
functional tissue unit segmentation at the cellular level. The source code is
available at https://github.com/VSydorskyy/hubmap_2022_htt_solution | Volodymyr Sydorskyi, Igor Krashenyi, Denis Sakva, Oleksandr Zarichkovyi | 2023-05-03T14:29:09Z | http://arxiv.org/abs/2305.02148v2 | # Semi-Supervised Segmentation of Functional Tissue Units at the Cellular Level
###### Abstract
We present a new method for functional tissue unit segmentation at the cellular level, which utilizes the latest deep learning semantic segmentation approaches together with domain adaptation and semi-supervised learning techniques. This approach allows for minimizing the domain gap, class imbalance, and captures settings influence between HPA and HubMAP datasets. The presented approach achieves comparable with state-of-the-art-result in functional tissue unit segmentation at the cellular level. The source code is available at [https://github.com/VSydorskyv/hubmap_2022_htt](https://github.com/VSydorskyv/hubmap_2022_htt) solution
semantic segmentation, functional tissue unit, semi-supervised learning
## 1 Introduction
It is estimated that the human body contains approximately 37 trillion cells, and comprehending the complex relationships and functions among them poses a significant challenge for researchers, requiring a colossal effort [1]. One of the research directions aims to map the human body at the cellular level to detect functional tissue units (FTUs). An FTU is defined as a unit consisting of a three-dimensional block of cells centered around a capillary, such that each cell in this block is within diffusion distance from any other cell in the same block [2]. These cellular compositions - cell population neighborhoods - are responsible for performing an organ's main physiologic functions. Functional tissue units, such as colonic crypts, renal glomeruli, alveoli, etc. (examples can be observed in Figure 1) have pathological relevance and are essential for modeling and comprehending the development of a disease. However, manually annotating FTUs is time-consuming and costly. At the same time, current algorithms suffer from poor generalizability and low accuracy [3]. The task of the competition was therefore to segment FTUs on stained microscope slides in a way that is invariant to different staining protocols. In this paper, a new method is proposed which utilizes the latest deep learning semantic segmentation [4] approaches together with domain adaptation and semi-supervised learning techniques.
## 2 Related work
The most common approaches to functional tissue unit segmentation, specifically kidney glomerulus and colon crypt [3] segmentation, are based on supervised learning techniques and were introduced in the previous Kaggle competition [5]. In these methods, the training data consists of annotated images, where each pixel is labeled as belonging to a particular cell or the background. These techniques typically require a large amount of labeled data to achieve high accuracy, which can be time-consuming and expensive to obtain.
Most of these models are heavily inspired by the U-Net [6], UnetPlusPlus [7], FPN architectures [8], and DeepLabV3+ [9] in a combination with ImageNet pre-trained backbones such as resnet50_32x4d, resnet101_32x4d and RegNet [10]. Models used a combination of general data augmentation techniques such as flipping, rotation, scale shifting, artificial blurring, CutMix [11] and MixUp [12] to improve model performance. Models were trained using binary cross-entropy and Lovasz Hinge loss [13] functions, RAdam [14], Lookahead [15], AdamW [16], SGD [17], and Adam [18] optimizers. These models used a dynamic sampling approach to sample tiles of size 512x512, 768x768 and 1024x1024 pixels from regions with visible glomeruli based on the annotations.
## 3 Dataset
The dataset includes biopsy slides from several organs, namely kidney, prostate, large intestine, spleen, and lung. The key feature of the proposed dataset is that it consists of images from two data sources: HPA [19-26] and HuBMAP [27]. Furthermore, the training data includes only HPA samples, while the test data comprises a mixture of HPA and HuBMAP samples [1]. Additionally, only HubMAP data was used for the final score (private dataset). The images from the HPA and HuBMAP data sources differ in staining protocol, pixel sizes, and sample thicknesses [1]. Figure 2 provides an example that illustrates the visual differences between HPA and HubMAP images. The whole slide images in the HPA and HuBMAP data sources were stained using three distinct protocols. HPA samples were stained with antibodies visualized with 3,3'-diaminobenzidine (DAB [28]) and counterstained with hematoxylin, whereas HuBMAP samples were stained using either Periodic acid-Schiff (PAS [29]) or hematoxylin and eosin (H&E [30]) stains. Each of the staining protocols highlights different cellular structures using colored dyes, and the final stained slide images vary greatly in color, contrast, and overall image
Figure 1: Examples of microscopic images of different organs with FTU regions highlighted with blue color
structure, making direct matching of cellular structures between images less straightforward (see Figure 4). Another crucial feature of the proposed dataset is that HubMAP images have different pixel sizes for different organs, while for HPA, it is constant (see Table 1) [1].
Finally, the images also differ in tissue section thickness. While all HPA images were sliced with a fixed thickness of 4 \(\upmu\)m, the HuBMAP samples have tissue slice thicknesses ranging from 4 \(\upmu\)m for the spleen and up to 10 \(\upmu\)m for the kidney [1], adding another layer of complexity. The training dataset contained 352 samples along with additional metadata, including the dataset label (HPA or HuBMAP), organ, image height, image width, pixel size, tissue thickness, age (patient age), and sex (patient sex) [1]. During the testing stage, we had access to all meta information listed in the train dataset except for age and sex [1]. The test data comprised 550 images, of which 45% were for the public dataset and 65% for the private dataset [1]. The count plot in Figure 3 also illustrates the class imbalance across organs, as presented in the training data.
## 4 Metric and Evaluation
For model evaluation, the Dice coefficient [31, 32] was used, simply averaged across all segmentation masks (a minimal sketch of this metric is given after the list below). Evaluation metrics were computed on three different datasets:
1. **Out Of Fold predictions**, using 5 Cross-Validation folds [33]. In order to preserve the class distribution and make the metric more robust, stratification by organ was used.
2. Results from the **public Kaggle test only on the HubMAP part**. While the public Kaggle test set score was computed using both HPA and HuBMAP images, the final private dataset score was calculated using only HuBMAP data. We thus decided to focus solely on the HuBMAP score by not
\begin{table}
\begin{tabular}{c c c} \hline \hline Organ & Dataset & Pixel size \\ \hline Kidney & HPA & 0.4 \\ Large intestine & HPA & 0.4 \\ Lung & HPA & 0.4 \\ Prostate & HPA & 0.4 \\ Spleen & HPA & 0.4 \\ Kidney & HubMAP & 0.229 \\ Large intestine & HubMAP & 0.7562 \\ Lung & HubMAP & 0.4945 \\ Prostate & HubMAP & 6.263 \\ Spleen & HubMAP & 0.4945 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Pixel size table
Figure 2: Left image refers to the spleen sample from the HPA dataset and the right to the spleen image from the HubMap dataset
predicting masks for HPA images and adjusting the Kaggle public dataset score by the proportion of HuBMAP images (roughly 72%) [1].
3. Results from the **private Kaggle test set.**
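As referenced above, the per-mask Dice score can be sketched in a few lines of NumPy; this is a minimal illustration of the metric, not the organizers' exact implementation.

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Competition score: the Dice coefficient averaged over all test masks.
# score = np.mean([dice(p, t) for p, t in zip(predicted_masks, true_masks)])
```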
## 5 Methods
### Model Architecture
We have used Unet [34] and Unet\(++\)[35] architectures with pre-trained EfficientNet B7 [36] and Mix Vision Transformer [37] encoders. In our experiments, Unet\(++\) showed comparable or better results compared to the pure Unet decoder, and Mix Vision Transformer outperformed EfficientNet B7 encoders on both cross-validation and the private Kaggle dataset. For our final solution, we used a simple average of predictions from 15 models using EfficientNet B7 and Mix Vision Transformer encoders along with Unet and Unet\(++\) style decoders, which outperformed each of the single models (Table 5).
### Data Preparation
In this challenge, competitors were asked to build a solution that can segment FTUs in a way that is invariant to the staining protocol (HPA or HubMAP). To achieve this goal, organizers provided competitors with image data for microscope slides stained using HPA protocol and evaluated solutions on the mixed HPA+HubMAP dataset for Public Leaderboard and on the HubMAP dataset only for the Private Leaderboard. Therefore, the biggest challenge for this competition was domain adaptation from the HPA dataset to HubMAP. In order to solve it we had to adapt our training data in 3 ways:
* Pixel size
* Color space difference
* Tissue thickness difference
**Adopting pixel size.**
One of the key points was adapting to wildly varying pixel sizes. The image scales ranged from 6.3um/pixel for the prostate to 0.2um/pixel for the large intestine. We tackled this issue by rescaling our train dataset to the target HuBMAP resolution. However, to increase the model's receptive field we applied additional downscalers for larger images and upscalers for smaller images (prostate). It is important to note that additional downscalers were also used at the inference stage to avoid changing the train/test pixel size. We used two datasets: one rescaled to HuBMAP scales and another with the original HPA scales. The latter one was not only important for HPA predictions (absent in the private LB) but also to provide some additional scaling information to the model. Therefore, we scaled down images of each organ by N times in order to match HubMAP pixel size and then by M times to upscale too small images of organs. Values of N and M can be found in Table 2.
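A rough sketch of this rescaling step is shown below. The assumption that the combined linear scale factor is \(1/(N\cdot M)\), and the dictionary of values mirroring Table 2, are our own reading of the procedure rather than code from the released solution.

```python
import cv2

# Per-organ (N, M) factors following Table 2 (assumed mapping).
ORGAN_SCALES = {
    "kidney": (1.25, 2), "largeintestine": (0.5725, 2), "lung": (1.8905, 1),
    "prostate": (15.65, 0.3), "spleen": (1.23625, 2),
}

def rescale_hpa_tile(image, organ):
    """Scale an HPA tile by N (to HuBMAP pixel size) and then by M (extra down/upscale)."""
    n, m = ORGAN_SCALES[organ]
    factor = 1.0 / (n * m)
    h, w = image.shape[:2]
    new_size = (max(1, int(round(w * factor))), max(1, int(round(h * factor))))
    interp = cv2.INTER_AREA if factor < 1.0 else cv2.INTER_CUBIC
    return cv2.resize(image, new_size, interpolation=interp)
```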
Figure 3: Class distribution of different organs in train dataset (HPA)
**Adopting color space.**
The color spaces between HPA and HubMAP datasets were also different due to different stain methods - DAB [28] for HPA, PAS [29], and H&E [30] for HubMAP (see Figure 4). As the competition required segmentation of FTUs on slides stained using different staining protocols, we decided to make the neural network invariant to color variations by applying heavy color augmentations such as histogram matching [38] to match the color distribution of the training images to that of HuBMAP dataset (Figure 5).
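A minimal sketch of this augmentation with scikit-image is given below; applying it with probability `p` against a randomly chosen HuBMAP/GTEx reference tile is an assumption about the setup, not the exact released implementation.

```python
import numpy as np
from skimage.exposure import match_histograms

def random_histogram_match(image: np.ndarray, references: list, p: float = 0.5) -> np.ndarray:
    """With probability p, match an HPA tile's colors to a random reference tile."""
    if not references or np.random.rand() > p:
        return image
    reference = references[np.random.randint(len(references))]
    # channel_axis=-1 performs the matching per RGB channel (scikit-image >= 0.19).
    return match_histograms(image, reference, channel_axis=-1)
```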
We also applied hue-value-saturation, contrast, and gamma augmentations. To provide additional robustness to scale and geometrical differences in FTU shapes, we also applied a range of geometric augmentations, which included random flips, rotations, scales, shifts, elastic transforms, and more. Some competition participants chose to apply stain normalization [39] to cycle color between different staining protocols. However, in our experiments, we didn't see any improvement from stain normalization, probably because regular stain normalization techniques are specialized for one particular type of stain and don't work well when applied to images stained with different protocols.
We have gathered additional data from the GTEX portal [40] and a few images from HubMAP to which we applied histogram matching [38] of all train data to GTEX and HubMAP images. The results
\begin{table}
\begin{tabular}{c c c} \hline \hline Organ & N & M \\ \hline Kidney & 1.25 & 2 \\ Large intestine & 0.5725 & 2 \\ Lung & 1.8905 & 1 \\ Prostate & 15.65 & 0.3 \\ Spleen & 1.23625 & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Scale sizes for each organ
Figure 4: Examples of different stain methods. Top left - DAB, top right - PAS, bottom - H&E
of histogram matching may be observed in Figure 5. Besides, we have used heavy augmentations - geometric, color, distortion, and scale. The main idea behind the color augmentations was to suggest to the model that color is not important and that it had to look for other features.
**External data.**
We have not tried to solve the problem of tissue thickness explicitly, but we have decided to download additional data from different data sources and apply pseudo-labeling. We used data from GTEX [40] and HPA [19-26] portals to complement the initial training data. The GTEX data was especially important here because it was stained similarly to HuBMAP [27] slides with H&E [30]. From GTEX we downloaded prostate, large intestine, kidney, and spleen data for patients with no apparent pathologies. We ignored lungs from GTEX as we couldn't figure out how to segment them, neither manually nor using pseudo-labeling. We were progressively adding GTEX images to our pipeline, ending up with around 140 at the end of the competition, though it is worth mentioning that each image was quite large, measuring tens of thousands of pixels in width and height. From the HPA site, we used a plethora of DAB [28] stained slides very similar to those provided by the organizers. Overall, we have added between 57K and 61K additional HPA images for each organ.
We pseudo-labeled both HPA and HuBMAP images with the best ensemble (according to the Cross Validation Score) available at the time of labeling. We did not select the most confident pseudo labels but rather sampled the HPA and GTEX datasets at random at training time. The selection process was inspired by the pseudo-labeling technique proposed in a semi-supervised paper [41]. We have repeated the pseudo-labeling procedure twice. Examples of pseudo-labeled images can be observed in Figure 6.
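The random (rather than confidence-based) mixing of pseudo-labeled data can be sketched as a simple dataset wrapper; the 50/50 sampling ratio below is a placeholder, not a value reported in this paper.

```python
import random
from torch.utils.data import Dataset

class RandomPseudoMixDataset(Dataset):
    """Randomly swaps hand-labeled samples for pseudo-labeled ones at training time."""

    def __init__(self, labeled, pseudo_labeled, swap_probability=0.5):
        self.labeled = labeled                # e.g. list of (image, mask) pairs
        self.pseudo_labeled = pseudo_labeled  # list of (image, pseudo_mask) pairs
        self.swap_probability = swap_probability

    def __len__(self):
        return len(self.labeled)

    def __getitem__(self, idx):
        # Sample pseudo-labeled data at random instead of filtering by confidence.
        if self.pseudo_labeled and random.random() < self.swap_probability:
            return random.choice(self.pseudo_labeled)
        return self.labeled[idx]
```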
**Cutmix.**
CutMix [42] augmentation was among the top contributors to our score. We applied it with a probability of 0.5 and used a uniform distribution to sample which part of the original image to replace
Figure 5: Top left - original spleen HPA image. Top right - original spleen HuBMAP image. Bottom - HPA image histogram-matched to HuBMAP
with a patch from a different image. The key trick, though, was to apply the CutMix augmentation within a single class only. Examples of CutMixed images are shown in Figure 7.
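A minimal sketch of this class-restricted CutMix is given below; the donor pool, the box-size sampling (borrowed from the original CutMix recipe), and the helper names are assumptions - the paper only fixes the application probability of 0.5 and the uniform sampling of the patch location.

```python
import random
import numpy as np

def cutmix_same_class(image, mask, donor_pool, p=0.5):
    """Replace a uniformly placed rectangle of `image` (and of its segmentation
    `mask`) with the same region from a donor of the SAME organ class."""
    if random.random() > p or not donor_pool:
        return image, mask
    donor_img, donor_mask = random.choice(donor_pool)
    h, w = image.shape[:2]
    lam = np.random.uniform(0.0, 1.0)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    image, mask = image.copy(), mask.copy()
    image[y1:y2, x1:x2] = donor_img[y1:y2, x1:x2]
    mask[y1:y2, x1:x2] = donor_mask[y1:y2, x1:x2]
    return image, mask
```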
\begin{table}
\begin{tabular}{c c} \hline \hline Organ & Dice \\ \hline Kidney & 0.85301 \\ Large intestine & 0.87770 \\ Lung & **0.04659** \\ Prostate & 0.81167 \\ Spleen & 0.69313 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Cross Validation Dice for different organs
Figure 8: Example of lung FTU, highlighted with blue color
trained an EfficientNet model with a PointRend head [47] and scaled its loss by a factor of 2. While we did not notice a meaningful performance boost from PointRend alone, we think that its main contribution was in adding diversity to our model ensemble as well as some regularization. We used PyTorch [48] built-in mixed precision training in order to reduce GPU memory consumption, which allowed us to use a batch size of 32 samples on A100 GPUs.
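For reference, a minimal sketch of one training step with PyTorch's built-in automatic mixed precision as mentioned above; the model, criterion, and batch structure are placeholders rather than our actual pipeline.

```python
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, batch, criterion, optimizer):
    """One optimisation step using PyTorch automatic mixed precision (AMP)."""
    images, targets = batch                # placeholder batch structure
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():        # forward pass in reduced precision
        logits = model(images)
        loss = criterion(logits, targets)
    scaler.scale(loss).backward()          # scaled backward pass to avoid underflow
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```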
### Inference Process
For each fold of each of our final models, we averaged the model parameters of the 3 best checkpoints by validation Dice [49]. For ensembling, we simply averaged the probability masks from each model. We also used Test Time Augmentations [50] with the original images and three flips. We removed small regions after thresholding to reduce noisy masks. To do so, we used the following heuristic:
\[RegionArea/ImageArea<OrganThresh \tag{1}\]
OrganThresh for different organs was found empirically by testing its effect on the cross-validation Dice; the values can be found in Table 4.
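A minimal sketch of this post-processing step is shown below, using `scipy.ndimage.label` to find connected components and dropping those whose relative area falls below the per-organ threshold from Table 4; the dictionary keys and function names are ours.

```python
import numpy as np
from scipy import ndimage

ORGAN_THRESH = {  # relative minimum region area, values from Table 4
    "kidney": 1e-3, "prostate": 5e-4, "large_intestine": 1e-4,
    "spleen": 1e-3, "lung": 1e-6,
}

def remove_small_regions(binary_mask, organ):
    """Drop connected components with RegionArea / ImageArea < OrganThresh."""
    labeled, num = ndimage.label(binary_mask)
    cleaned = np.zeros_like(binary_mask)
    image_area = binary_mask.size
    for region_id in range(1, num + 1):
        region = labeled == region_id
        if region.sum() / image_area >= ORGAN_THRESH[organ]:
            cleaned[region] = 1
    return cleaned
```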
We used a 1024x1024 sliding window approach with 0.75 overlap for Mix Vision Transformers [22] due to GPU memory constraints, and predicted on full-scale images for CNN models.
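The sliding-window inference can be sketched as below; tile probabilities are averaged in the overlapping regions. The sigmoid output head and the assumption that images are at least 1024 px on each side are ours.

```python
import torch

def sliding_window_predict(model, image, window=1024, overlap=0.75):
    """image: (C, H, W) tensor with H, W >= window.
    Averages per-pixel probabilities over overlapping tiles."""
    stride = int(window * (1 - overlap))                 # 256 px for 0.75 overlap
    _, h, w = image.shape
    probs = torch.zeros(1, h, w)
    counts = torch.zeros(1, h, w)
    ys = sorted(set(list(range(0, h - window + 1, stride)) + [h - window]))
    xs = sorted(set(list(range(0, w - window + 1, stride)) + [w - window]))
    with torch.no_grad():
        for y in ys:
            for x in xs:
                tile = image[:, y:y + window, x:x + window].unsqueeze(0)
                pred = torch.sigmoid(model(tile))[0]     # (1, window, window)
                probs[:, y:y + window, x:x + window] += pred
                counts[:, y:y + window, x:x + window] += 1
    return probs / counts
```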
## 6 Results
### Final results
The results of training 5 models using 5 folds, evaluated on out-of-fold data and on the public and private test datasets, are outlined in Table 5. Experiment ensembles include 5 models from each experiment, and their metrics are outlined in Table 6. Results of our approach compared to the other top-5 solutions can be found in Table 7.
Analyzing the results from Tables 5 and 6, we can draw the following conclusions:
* Purely CNN approaches worked better on the public HuBMAP dataset, which may be caused by a slight overfit to this test data.
\begin{table}
\begin{tabular}{c c} \hline \hline Organ & Threshold \\ \hline Kidney & 0.001 \\ Prostate & 0.0005 \\ Large intestine & 0.0001 \\ Spleen & 0.001 \\ Lung & 0.000001 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Relative Min Region area
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & Out Of Fold & Public Leaderboard & Private Leaderboard \\ \hline Unet \(++\) & 0.84338 & **0.61189** & 0.82878 \\ w/EfficientNet B7 & & & \\ \hline Unet w/EfficientNet & 0.83915 & 0.61160 & 0.82698 \\ B7 & & & \\ \hline Unet w/Mix Vision & **0.85405** & 0.60826 & **0.83332** \\ Transformer B5 & & & \\ \hline Unet w/Mix Vision & 0.85356 & 0.60273 & 0.82657 \\ Transformer B3 & & & \\ \hline \hline \end{tabular}
\end{table}
Table 5: 5 Folds Dice results
* Mixed Vision models [37] outperformed CNN models both on cross-validation and on private test data, which suggests that these models perform better in terms of segmentation quality and domain adaptation.
* A mean ensemble of CNNs and Mixed Vision models [37] slightly improved results compared to the solo CNN or Mix Vision Transformer [37] approaches.
* The introduced changes improved both the out-of-fold Dice and the private Dice, which means that overall model performance increased on both the HPA and HuBMAP datasets.
* The introduced changes decreased and mostly eliminated the gap between the out-of-fold Dice and the private Dice, which means that they accomplished the domain adaptation task between the HPA and HuBMAP datasets.
Also, the Dice score for each organ improved; in particular, the lung Dice improved more than 10-fold (Table 9).
## 7 Conclusion
This paper introduced an FTU segmentation training pipeline that showed near state-of-the-art performance on both the HPA [19-26] and HuBMAP [27] datasets, minimizing the domain gap between them. The proposed methods allowed the adaptation of models from the HPA domain to HuBMAP, reducing the difference in the Dice score between the HPA and HuBMAP test sets. Also, we considerably increased our score on the HPA test set. We believe that the proposed methods can be used both for increasing the performance of semantic segmentation models on one domain and for adapting these models from one domain to another.
## 8 Acknowledgements
First, we would like to thank the Armed Forces of Ukraine, the Security Service of Ukraine, the Defence Intelligence of Ukraine, and the State Emergency Service of Ukraine for providing the safety and security to participate in this great competition, complete this work, and help science and technology keep moving forward. Also, we want to thank the Kaggle team, the Google team, Genentech, and Indiana University for hosting the HuBMAP + HPA - Hacking the Human Body competition, which gave us all the needed data and materials to build models, test hypotheses, and write this paper.
|
2306.04069 | Superluminal propagation along the brane in space with extra dimensions | We demonstrate that a model with extra dimensions formulated in Csaki et al.
(Phys Rev D 62, 045015), which faithfully reproduces Friedmann-Robertson-Walker
(FRW) equations on the brane, allows for an apparent superluminal propagation
of massless signals. Namely, a massive brane curves the spacetime and affects
the trajectory of a signal in a way that allows a signal sent from the brane
through the bulk to arrive (upon returning) to a distant point on the brane
faster than the light can propagate along the brane. In particular, the signal
sent along the brane suffers a greater gravitational time delay than the bulk
signal due to the presence of matter on the brane. While the bulk signal never
moves with the speed greater than the speed of light in its own locality, this
effect still enables one to send signals faster than light from the brane
observer's perspective. For example, this effect might be used to resolve the
cosmological horizon problem. In addition, one of the striking observational
signatures would be arrival of the same gravitational wave signal at two
different times, where the first signals arrives before its electromagnetic
counterpart. We used GW170104 gravitational wave event to impose a strong limit
on the model with extra dimensions in question. | De-Chang Dai, Dejan Stojkovic | 2023-06-07T00:02:50Z | http://arxiv.org/abs/2306.04069v2 | # Superluminal propagation along the brane in space with extra dimensions
###### Abstract
We demonstrate that a model with extra dimensions formulated in [1], which faithfully reproduces Friedmann-Robertson-Walker (FRW) equations on the brane, allows for an apparent superluminal propagation of massless signals. Namely, a massive brane curves the spacetime and affects the trajectory of a signal in a way that allows a signal sent from the brane through the bulk to arrive (upon returning) to a distant point on the brane faster than the light can propagate along the brane. In particular, the signal sent along the brane suffers a greater gravitational time delay than the bulk signal due to the presence of matter on the brane. While the bulk signal never moves with a speed greater than the speed of light in its own locality, this effect still enables one to send signals faster than light from the brane observer's perspective. For example, this effect might be used to resolve the cosmological horizon problem. In addition, one of the striking observational signatures would be arrival of the same gravitational wave signal at two different times, where the first signal arrives before its electromagnetic counterpart. We used the GW170104 gravitational wave event to impose a strong limit on the model with extra dimensions in question.
## I Introduction
Superluminal propagation of a signal in some theoretical model is usually associated with problems, most notably causality. Since not so many physicists are willing to sacrifice causality (at least not at the macroscopic level), there is no vast literature on this topic.
If we avoid propagation of a signal with intrinsically superluminal velocities, we are not left with many options. A light signal cannot overtake itself in its own locality by definition. However, in a curved space, one can easily imagine a situation where a light signal sent along a certain trajectory can overtake another light signal sent along a different trajectory. Black holes are templates for interesting phenomena in curved space. Any signal propagating in the vicinity of a black hole will suffer a significant redshift (or equivalently gravitational time delay). In an extreme case, light emitted exactly from the horizon will be practically stopped. Imagine a situation like in Fig. 1 where one signal (labeled by 1) travels very close to the black hole horizon from the point A to the point B, while another signal (labeled by 2) travels also from A to B, first away from the black hole and then back. If these two signals are sent from A simultaneously, under the right conditions, signal 2 can arrive at B before signal 1.
A new playground was introduced in the context of the brane world models [2; 3; 4; 5; 6; 7] where all the standard model particles are located on a subspace (brane) in a higher dimensional universe. Gravity is allowed to propagate everywhere including the bulk. In such a setup, it is easy to construct a shortcut if the brane is curved. For example, Fig. 2 shows that a signal traveling through the bulk can overtake the light signal traveling at the speed of light along the curved brane. Thus, an observer confined on the brane might register an apparently superluminal propagation of a signal. In [8], it has been proposed that such shortcuts can be used to solve the cosmological horizon problem. Similar cases were studied in [9; 10]. Obviously, these shortcuts are not a generic feature of all the brane world models, and they require elaborate setups.
In an interesting work in [11; 12; 13], it was shown that an observer on a moving brane with compact extra dimensions can also register an apparently superluminal
Figure 1: The spacetime is highly curved near a compact object like a black hole. If two light signals, 1 and 2, are sent from A simultaneously, under the right conditions, signal 2 can arrive at B before signal 1. While the speed of light in its own locality always remains the same, it appears to the observers located at A and B that signal 2 traveled faster than the light signal 1.
propagation of a signal.
## II Model
In this paper we extend the previous work to fix some of the shortcomings of the existing models. We consider the metric found by Csaki et al. in [1]. In contrast with [11], the brane is fixed in the bulk and is not moving. The extra dimension is compactified on a \(S^{1}/Z_{2}\) manifold (as in Fig. 3). Thus, unlike [8], a signal sent into the bulk is guaranteed to return to the brane. The metric is written as
\[d\tau^{2}=n(y,t)^{2}dt^{2}-a(y,t)^{2}(dx_{1}^{2}+dx_{2}^{2}+dx_{3}^{2})-b(y,t) ^{2}dy^{2}. \tag{1}\]
and the components of the Einstein tensor are
\[G_{00} = 3\Bigg{(}\Big{(}\frac{\dot{a}}{a}\Big{)}^{2}+\frac{\dot{a}\dot{b}}{ab}-\frac{n^{2}}{b^{2}}\Big{(}\frac{a^{\prime\prime}}{a}+\Big{(}\frac{a^{\prime}}{a}\Big{)}^{2}-\frac{a^{\prime}b^{\prime}}{ab}\Big{)}\Bigg{)} \tag{2}\] \[G_{ii} = \frac{a^{2}}{b^{2}}\Bigg{(}\Big{(}\frac{a^{\prime}}{a}\Big{)}^{2}+2\frac{a^{\prime}n^{\prime}}{an}-\frac{b^{\prime}n^{\prime}}{bn}-2\frac{b^{\prime}a^{\prime}}{ba}+2\frac{a^{\prime\prime}}{a}+\frac{n^{\prime\prime}}{n}\Bigg{)}+\] (3) \[\frac{a^{2}}{n^{2}}\Bigg{(}-\Big{(}\frac{\dot{a}}{a}\Big{)}^{2}+2\frac{\dot{a}\dot{n}}{an}-2\frac{\ddot{a}}{a}-\frac{\dot{b}}{b}\Big{(}2\frac{\dot{a}}{a}-\frac{\dot{n}}{n}\Big{)}-\frac{\ddot{b}}{b}\Bigg{)}\] \[G_{05} = 3\Bigg{(}\frac{n^{\prime}\dot{a}}{na}+\frac{a^{\prime}\dot{b}}{ab}-\frac{\dot{a}^{\prime}}{a}\Bigg{)}\] (4) \[G_{55} = 3\Bigg{(}\frac{a^{\prime}}{a}\Big{(}\frac{a^{\prime}}{a}+\frac{n^{\prime}}{n}\Big{)}-\frac{b^{2}}{n^{2}}\Big{(}\frac{\dot{a}}{a}\Big{(}\frac{\dot{a}}{a}-\frac{\dot{n}}{n}\Big{)}+\frac{\ddot{a}}{a}\Big{)}\Bigg{)} \tag{5}\]
In this setup there are two branes, one located at \(y=0\) and another at \(y=1/2\). The brane at \(y=0\) is called the "Planck brane", while the brane at \(y=1/2\), where all the standard model particles are localized, is called the "TeV brane". We thus assume that our universe is on the TeV brane.
The energy momentum tensor for this configuration is approximately
\[T_{a}^{b} = \frac{\delta(y)}{b}diag(\rho_{*},-p_{*},-p_{*},-p_{*},0)+ \tag{6}\] \[\frac{\delta(y-\frac{1}{2})}{b}diag(\rho,-p,-p,-p,0),\]
where \(p\) and \(p_{*}\) denote pressure on the TeV and Planck branes respectively, while \(\rho\) and \(\rho_{*}\) are their corresponding energy densities. The spacetime is \(S^{1}/Z_{2}\) symmetric, i.e. it spans from \(y=0\) to \(y=1\) and is mirror symmetric at \(y=0\) and \(y=1/2\). Because of the mirror symmetry, each brane has its own mirror images. Thus, taking images into account, the locations of the TeV brane are \(y=...-1/2,1/2,3/2,5/2,...\), while the locations of the Planck brane are \(y=...-1,0,1,2,3,...\).
The Einstein equations are
\[G_{\alpha\beta}=\kappa^{2}T_{\alpha\beta}, \tag{7}\]
where \(\kappa^{2}=1/2M^{3}\), while \(M\) is the five dimensional Planck scale. Apart from matter on the branes, a radion field is introduced to stabilize the extra dimension. Here we do not go into unnecessary details and simply quote an approximate solution from Appendix B of [1]:
\[a = a_{0}(t)(1+\alpha\rho_{*}(t)(y-\frac{1}{2})^{2}+\beta\rho(t)y^{2}) \tag{8}\] \[n = (1+\gamma\rho_{*}(t)(y-\frac{1}{2})^{2}+\lambda\rho(t)y^{2})\] (9) \[b = b_{0}(1+\delta b). \tag{10}\]
Figure 3: An extra dimension is compactified on a \(S^{1}/Z_{2}\) manifold. The bulk is mirror symmetric around the brane where the standard model particles are located (the TeV brane). A massless signal emitted from the TeV brane can travel through the bulk, reach the upper Planck brane which is identified with the lower Planck brane, and then return to the original TeV brane.
Figure 2: A signal traveling through the bulk can overtake the light signal traveling at the speed of light along the curved brane. Thus, an observer confined on the brane might register an apparently superluminal propagation of a signal.
From the jump conditions we get
\[\alpha = \beta=\frac{\kappa^{2}b_{0}}{6} \tag{11}\] \[\gamma = -\frac{(2+3\omega_{*})\kappa^{2}b_{0}}{6}\] (12) \[\lambda = -\frac{(2+3\omega)\kappa^{2}b_{0}}{6}\, \tag{13}\]
where \(p=\omega\rho\) and \(p_{*}=\omega_{*}\rho_{*}\), while \(\delta b=O(\rho^{2},\rho\rho_{*},\rho_{*}^{2})\). This solution is valid between \(y=0\) and \(y=1/2\). For the other regions, the solutions are obtained by reflecting around \(y=0\) or \(y=1/2\). The Einstein equations are satisfied to the first order in \(\kappa^{2}b_{0}\rho\). The scale factor \(a_{0}\) is obtained from the Friedmann equations
\[\left(\frac{\dot{a}_{0}}{a_{0}}\right)^{2}=\frac{\kappa^{2}}{3b_ {0}}(\rho+\rho_{*}), \tag{14}\] \[\left(\frac{\dot{a}_{0}}{a_{0}}\right)^{2}+2\frac{\ddot{a}_{0}}{a _{0}}=-\frac{\kappa^{2}}{b_{0}}(\omega\rho+\omega_{*}\rho_{*}), \tag{15}\]
where \(M_{p}^{2}=b_{0}/\kappa^{2}\).
## III Propagation of the bulk and brane signals
We consider signals delivered by massless particles. Since proper time has no meaning for a massless particle, we choose the coordinate time (\(t\)) which defines a coordinate velocity as
\[u^{\alpha}=\frac{dx^{\alpha}}{dt}. \tag{16}\]
In this case, \(u^{t}=1\), and the geodesic equation is
\[\frac{d^{2}x^{\lambda}}{dt^{2}}=-\Gamma^{\lambda}_{\nu\alpha}\frac{dx^{\nu}}{ dt}\frac{dx^{\alpha}}{dt}+\Gamma^{t}_{\nu\alpha}\frac{dx^{\nu}}{dt}\frac{dx^{ \alpha}}{dt}\frac{dx^{\lambda}}{dt}. \tag{17}\]
We assume that matter is concentrated on the TeV brane, i.e. \(\rho_{*}=0\) on the Planck brane, and that \(\omega=\omega_{*}=0\). The energy density on the TeV brane is
\[\rho=\frac{\rho_{0}}{a^{3}}\approx\frac{\rho_{0}}{a_{0}^{3}}. \tag{18}\]
Here we kept only the leading order in \(a\), while \(\rho_{0}\) is the initial density at \(t=0\). The scale factor takes the form
\[a_{0}=\Big{(}1+\sqrt{\frac{3\kappa^{2}\rho_{0}}{4b_{0}}}t\Big{)}^{2/3}. \tag{19}\]
We now analyze the propagation of signals in this setup. A massless particle is emitted from the TeV brane with a small initial velocity component in the y-direction, i.e. \(u_{0}^{y}\neq 0\). The corresponding velocity component \(u_{0}^{x}\) is obtained from \(a^{2}(u_{0}^{x})^{2}+b^{2}(u_{0}^{y})^{2}=n^{2}\), since the particle is massless. We assume that \(u_{0}^{y}=O(\rho)\). According to Eq. 17, the acceleration in the \(y\)-direction is
\[\frac{du^{y}}{dt}=-\Gamma^{y}_{\alpha\beta}u^{\alpha}u^{\beta}+\Gamma^{t}_{ \alpha\beta}u^{\alpha}u^{\beta}u^{y}=O(\rho) \tag{20}\]
Therefore, when the particle crosses the bulk, its y-direction velocity is still \(O(\rho)\). For the component along the brane, \(u^{x}\), the acceleration is
\[\frac{du^{x}}{dt}=-2\Gamma^{x}_{tx}u^{t}u^{x}-2\Gamma^{x}_{yx}u^{y}u^{x}+ \Gamma^{t}_{\alpha\beta}u^{\alpha}u^{\beta}u^{x}. \tag{21}\]
The first term on the right hand side is induced by the space expansion, the second term is of order \(O(\rho^{2})\), while the third term is of order \(O(\rho)\). If \(u^{y}\) remains \(O(\rho)\), the deviation of \(u^{x}\) from its initial value should be of order \(O(\rho^{2})\). It is then possible to construct a signal that leaves and then returns to the brane due to the \(Z_{2}\) symmetry.
Consider a signal that starts from \((x,y)=(0,1/2)\), and returns to the TeV brane at the moment T. The displacements in x- and y-directions are
\[x(t) = \int_{0}^{T}u^{x}dt \tag{22}\] \[y(t) = \int_{0}^{T}u^{y}dt.\]
We numerically integrate Eq. (22) and plot the results. Fig. 4 shows the displacement of a signal in the \(y\)-direction as a function of time. If the initial magnitude of the velocity \(u_{0}^{y}\) is small, the signal cannot go far away from the brane and just oscillates near the brane. If \(u_{0}^{y}\) is large enough, it can leave the TeV brane, propagate to the Planck brane, and return to the TeV brane due to mirror symmetry (dotted line). Even in the first (oscillatory) case, the oscillation amplitude increases in time since the energy density is reduced due to expansion. When the density is diluted enough and the gravitational attraction weakened, the signal can leave the brane in this case too.
We plot the velocity \(u^{y}\) as a function of time in Fig. 5. It is clear that \(u^{y}\) is also oscillating. Many sharp changes in velocity are noticeable since the attractive gravitational force changes direction when the signal crosses the TeV brane. The \(u^{x}\) component is reduced much faster than \(u^{y}\) because our universe expands in x-direction. Therefore, the \(u^{y}\) component increases and becomes the dominant component in velocity. In that regime, the signal moves perpendicular to the brane.
In Fig. 6, we compare the propagation of the signals on the brane and in the bulk. We can see that, at first, the signal on the brane moves faster than the one in the bulk because \(u^{x}\) is larger than \(u^{y}\) in magnitude. When the bulk signal propagates far enough, the situation changes and the bulk signal overtakes the brane signal. From the same figure, one can see that when the signal sent to the bulk returns to the brane due to gravity, it is ahead of the signal on the brane. This clearly shows that an observer confined to the brane may register an apparent superluminal motion.
For larger values of the initial bulk component of velocity \(u_{0}^{y}\), the signal is not trapped in the vicinity of the brane. It is able to make a trip to the other brane and return to the original TeV brane. Fig. 7 shows again that a brane-confined observer can observe an apparent superluminal effect. However, this effect lasts only for the first few round-trips. Once the magnitude of \(u^{y}\) is dramatically increased by the universe expansion, the \(u^{x}\) component of the bulk signal is so small compared to the signal on the brane that the redshift effect cannot compensate for the velocity difference between these two signals. In this regime, the bulk signal cannot overtake the signal along the brane.
In Fig. 8 we plot the lightcones that nicely illustrate the whole situation. The lightcone drawn by an observer located on the brane (who can observe only signals along the brane) is smaller than the lightcone that includes the whole setup (the bulk and the brane). This explains the apparent superluminality observed by the brane observer. However, at late times the lightcones match again and the superluminal effect disappears.
## IV Limits from gravitational waves
All the discussion above was dedicated to the propagation of massless signals. Therefore, practically all the conclusions will remain the same for the case of gravitational wave propagation. One of the striking observational signatures would be the arrival of the same gravitational wave signal at two different times, where the first signal arrives before its electromagnetic counterpart. In addition, echo-like signals [14; 15] should also be present, because there will be many signals coming from different brane images. Detailed modeling of the gravitational wave signature will be reserved for future studies. However, we can already use some of the observed gravitational wave events to impose limits on the model in question.
For example, the source of the gravitational wave event GW170104 is located at a luminosity distance of \(l=880\) Mpc [16]. The time interval to see a repeated signal in the model with extra dimensions we considered here can be estimated from
\[\Delta t=\frac{\sqrt{l^{2}+h^{2}}-l}{c}\approx\frac{h^{2}}{2lc} \tag{23}\]
where \(h\) is the size of the compact space in between the branes. Since the repeated signal has not been observed in 5 years, we set \(\Delta t>5\) years. Then it follows that \(h>0.05\) Mpc. This is a rather strong constraint on the size of the extra dimensions in this model, which could not be established by any other means.
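For a quick numerical check of this bound (our own back-of-the-envelope script, with rounded physical constants), Eq. (23) with \(\Delta t>5\) years and \(l=880\) Mpc gives \(h>\sqrt{2lc\Delta t}\approx 0.05\) Mpc:

```python
import math

Mpc = 3.0857e22      # metres
year = 3.156e7       # seconds
c = 2.998e8          # m/s

l = 880 * Mpc        # luminosity distance of GW170104
dt = 5 * year        # no repeated signal observed within 5 years

# From Delta t ~ h^2 / (2 l c), the bound Delta t > 5 yr gives h > sqrt(2 l c dt)
h = math.sqrt(2 * l * c * dt)
print(h / Mpc)       # ~ 0.05 Mpc
```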
On the other hand, if the branes are very close, the signals will interfere. There seems to be no self-interference in the observed gravitational signals. Therefore, we expect the size of the compact space to be either very small, so as to prevent any effect associated with extra dimensions, or very big, as estimated above.
Figure 4: The displacement of a massless signal in the \(y\)-direction (bulk) sent from the TeV brane (located at \(y=1/2\)). We set \(\kappa^{2}=1\), \(b_{0}=1\) and \(\rho_{0}=10^{-4}\). The solid, dotted, and dashed lines represent the displacement along the \(y\)-coordinate for three initial values of velocity: \(u_{0}^{y}=1\times 10^{-3}\), \(u_{0}^{y}=4\times 10^{-3}\), and \(u_{0}^{y}=5\times 10^{-3}\) respectively. When \(u_{0}^{y}\) is small, the signal is confined near the brane due to the gravitational attraction of matter on the brane. However, when \(u_{0}^{y}\) is large enough, gravitational attraction is not able to confine the signal, and the signal can leave the TeV brane at \(y=1/2\), reach the Planck brane at \(y=1\) which is identified with the Planck brane at \(y=0\), and come back to the original TeV brane (the return is not shown here).
Figure 5: Velocity \(u^{y}\) as a function of time. We set \(\kappa^{2}=1\), \(b_{0}=1\) and \(\rho_{0}=10^{-4}\). The solid, dotted, dashed and dash-dotted lines represent \(u^{y}\) for four values of initial velocity: \(u_{0}^{y}=1\times 10^{-3}\), \(u_{0}^{y}=4\times 10^{-3}\), \(u_{0}^{y}=5\times 10^{-3}\), and \(u_{0}^{y}=6\times 10^{-3}\) respectively. When \(u_{0}^{y}\) is small, the velocity oscillates around 0. However, when \(u_{0}^{y}\) is large, \(u^{y}\) increases with time. Since \(u^{x}\) decreases due to the expansion of our universe, \(u^{y}\) must increase to maintain the overall speed of light.
## V Conclusions
We demonstrated here that it is possible to send signals that appear superluminal from the point of view of an observer confined to a brane located in a higher dimensional universe. We used a variant of the Randall-Sundrum model with one extra dimension and two branes (the TeV brane that represents our universe and the Planck brane). Due to the imposed \(S^{1}/Z_{2}\) compactification, these branes have a series of images. Therefore, a signal sent to the bulk from the TeV brane has to come back to the original brane. The TeV brane is massive and curves the spacetime in such a way as to allow the bulk signal to reach a distant point on the TeV brane upon returning faster than the signal which propagates along the brane. Basically, a signal propagating along the brane is redshifted more (i.e. suffers a greater gravitational time delay) than the bulk signal, because of the presence of matter on the brane. While the signal never overtakes light in its own locality, it still allows for superluminal communication between two distant points on the brane (if the signal is sent through the bulk).
Unlike previous examples, where a specific shape of the brane was tailored in order to produce shortcuts, or a moving brane was invoked to break the Lorentz symmetry, our example is practically a generic (and yet realistic) variant of Randall-Sundrum models which faithfully reproduces the FRW equations on the brane.
Note that in this setup causality is not violated. The bulk signal is always light-like. The effect we describe here is very similar to gravitational lensing. A lens can
Figure 8: The dashed line is the lightcone drawn (expected) by a brane confined observer. The dotted line is the actual brane lightcone drawn by the bulk observer who sees the whole situation. Obviously, the expected lightcone is smaller than the actual lightcone, which gives rise to an apparent superluminality.
Figure 6: Comparison of the signals propagating on the brane and in the bulk. We set \(\kappa^{2}=1\), \(b_{0}=1\) and \(\rho_{0}=10^{-4}\). We denote the displacements in the \(x\) direction of the signals that travel with \(u_{0}^{y}=6\times 10^{-3}\) (bulk signal) and \(u_{0}^{y}=0\) (brane signal) respectively by \(x_{b}\) and \(x_{0}\). The solid line tracks the difference in these displacements, i.e. \(\Delta x=x_{b}-x_{0}\). The dashed line is \((y-0.5)\times 10^{-3}\). We plot it here to track the signal, and in particular to show when the signal returns to the brane. At first, the signal on the brane moves faster than the one in the bulk (\(\Delta x\) is negative), however at later times the bulk signal overtakes the brane signal (\(\Delta x\) is positive). Upon returning to the brane, the bulk signal is ahead of the signal on the brane. Thus, an observer confined to the brane may register an apparent superluminal motion.
Figure 7: Comparison of the signals propagating on the brane and in the bulk, similar to Fig. 6 but for a much larger value of \(u_{0}^{y}\). We set \(\kappa^{2}=1\), \(b_{0}=1\) and \(\rho_{0}=10^{-4}\). This time, the bulk signal initial velocity is \(u_{0}^{y}=8.3\times 10^{-3}\). The solid line again tracks the difference in displacements in the \(x\)-direction, i.e. \(\Delta x=x_{b}-x_{0}\). The dashed line is again \((y-0.5)\times 10^{-3}\). For convenience, we locate the TeV brane at \(y=0\), while the dotted lines represent two other images of the same brane. The signal can travel to the other images before the brane confined signal arrives. However, after a few round trips, this feature disappears (\(\Delta x\) becomes negative) because the signal is tilted into the bulk direction.
produce two (or more) images of an object that arrive at an observer at different times due to the curvature of space. In this process causality is not violated.
Since the signals sent through the bulk always come back to the original brane and generically travel faster than signals along the brane, this effect might be used to resolve the cosmological horizon problem.
All the discussion here was based on the propagation of massless signals, so practically all of the conclusions drawn here will remain the same for gravitational wave propagation. One of the striking observational signatures would be the arrival of the same gravitational wave signal at two different times, where the first signal arrives before its electromagnetic counterpart. In addition, echo-like signals [14; 15] should also be present. We used the GW170104 gravitational wave event to impose a strong limit on the model with extra dimensions in question. Therefore we expect the size of the compact space to be either so small that the effects of the extra dimension are washed out, or very big (\(>0.05\) Mpc).
###### Acknowledgements.
D.C. Dai is supported by the National Science and Technology Council (under grant no. 111-2112-M-259-016-MY3). D.S. is partially supported by the US National Science Foundation, under Grant No. PHY-2014021.
|
2310.10922 | Spatial HuBERT: Self-supervised Spatial Speech Representation Learning
for a Single Talker from Multi-channel Audio | Self-supervised learning has been used to leverage unlabelled data, improving
accuracy and generalisation of speech systems through the training of
representation models. While many recent works have sought to produce effective
representations across a variety of acoustic domains, languages, modalities and
even simultaneous speakers, these studies have all been limited to
single-channel audio recordings. This paper presents Spatial HuBERT, a
self-supervised speech representation model that learns both acoustic and
spatial information pertaining to a single speaker in a potentially noisy
environment by using multi-channel audio inputs. Spatial HuBERT learns
representations that outperform state-of-the-art single-channel speech
representations on a variety of spatial downstream tasks, particularly in
reverberant and noisy environments. We also demonstrate the utility of the
representations learned by Spatial HuBERT on a speech localisation downstream
task. Along with this paper, we publicly release a new dataset of 100 000
simulated first-order ambisonics room impulse responses. | Antoni Dimitriadis, Siqi Pan, Vidhyasaharan Sethu, Beena Ahmed | 2023-10-17T01:31:59Z | http://arxiv.org/abs/2310.10922v1 | Spatial HuBERT: Self-supervised Spatial Speech Representation Learning for a Single Talker from Multi-channel Audio
###### Abstract
Self-supervised learning has been used to leverage unlabelled data, improving accuracy and generalisation of speech systems through the training of representation models. While many recent works have sought to produce effective representations across a variety of acoustic domains, languages, modalities and even simultaneous speakers, these studies have all been limited to single-channel audio recordings. This paper presents Spatial HuBERT, a self-supervised speech representation model that learns both acoustic and spatial information pertaining to a single speaker in a potentially noisy environment by using multi-channel audio inputs. Spatial HuBERT learns representations that outperform state-of-the-art single-channel speech representations on a variety of spatial downstream tasks, particularly in reverberant and noisy environments. We also demonstrate the utility of the representations learned by Spatial HuBERT on a speech localisation downstream task. Along with this paper, we publicly release a new dataset of 100 000 simulated first-order ambisonics room impulse responses.
Speech representation learning, self-supervised pre-training, spatial speech processing, speech localisation
## I Introduction
Speech, as one of the most fundamental forms of human communication, carries a wealth of information, ranging from linguistic content to emotional cues and speaker characteristics. Inspired by the human brain, a speech representation learning (SRL) model aims to extract this information in a way where it can be readily accessed by the simplest of downstream models, even in the presence of complex, structured noise sources that overlap with the target speech [1]. Unlike the human auditory system however, current speech representation models view speech as a single-channel audio signal, and are unable to utilise the rich spatial information that is present in multi-channel audio. This spatial information enables humans to both track the location of speech sources in space, and also to better isolate them from many forms of interfering noise. As the majority of modern commercial devices such as mobile phones and smart speakers contain multiple microphones, the ability to exploit this spatial information through the representation learning process has the potential to lead to significant improvements in performance when building speech processing systems for these devices.
Despite lacking multi-channel capabilities, representation learning techniques have shown significant promise when applied to speech signals, and offer many benefits over training end-to-end systems. Early approaches used supervised pre-training [2], sometimes referred to as transfer learning [3]. Supervised pre-training optimises a model to solve a specific downstream task on a large labelled dataset, and then re-uses the learned weights either for new tasks, or on new datasets [4]. In recent years however, significant progress has been made in the field of speech representation learning through the use of self-supervised learning (SSL), with the development of models such as wav2vec2.0 [5], HuBERT [6] and WavLM [7]. Unlike supervised pre-training methods, self-supervised pre-training does not require the use of external labels. Instead, a proxy task is designed that extracts training labels from the input data itself. These proxy tasks typically involve predicting unseen information extracted from future frames in the sequence, or frames that are masked to the model input, and can use regression, classification, or contrastive losses [8].
The major advantage of self-supervised pre-training is the ability to leverage large amounts of unlabelled data, allowing the models to train on multiple domains and covering a wide variety of conditions. This results in representations that generalise well to out of domain data, with far less performance degradation when evaluating on domains unseen during training [9, 10]. Supervised pre-training objectives encourage models to discard information not needed for the pre-training task, while due to the lack of labels, representations learned from self-supervised objectives are more universal than those trained in supervised settings [11, 12, 13], and can achieve reasonable performance on a wide range of downstream tasks [14, 15]. Building general purpose pre-trained models for speech enables significant improvements in tasks with limited access to supervised training data.
Self-supervised speech representation models have also enabled several completely novel applications such as unsupervised speech recognition [16] and synthesis [17]. Previous studies have also extended these representations to multi-lingual data [18, 19, 20], multi-modal data [21, 22], and recently mixtures of multiple speakers [23], all showcasing the benefits of training speech representations in a wide variety of downstream scenarios.
Despite the significant progress made in these works, these models are all restricted to single-channel recordings in which the target speaker is typically in close proximity to the microphone. In order to retain the benefits of these
representation models and still exploit the multi-channel capabilities of many recording devices, modern speech processing systems must use either classical signal processing techniques or separately trained non-linear models to first perform multi-channel speech enhancement in order to extract a de-noised single channel speech signal to pass to a representation model [24, 25]. However, these systems are designed to remove the spatial information from the input signal making it completely inaccessible to downstream models. Instead, we seek to build a new self-supervised speech representation model directly from multi-channel inputs, allowing for both cleaner representations in the presence of spatial noise sources, and also enabling downstream models to directly access spatial information for tasks such as speaker localisation.
In this paper, we introduce Spatial HuBERT (Sp-HuBERT), a self-supervised training framework that pre-trains on simulated multi-channel recordings of reverberant speech. Sp-HuBERT follows the masked speech prediction and denoising framework used in WavLM [7], with the addition of a masked spatial prediction loss. Training effective speech representations requires a large training corpus, far more than any publicly available multi-channel speech datasets. To combat this issue, Sp-HuBERT utilises simulated room impulse responses in the first-order ambisonics domain to convert large single-channel datasets into a suitable format for self-supervised pre-training.
We compare our model to the state-of-the-art single channel speech representation of a similar size, WavLM Base+, on a selection of tasks from the SUPERB Benchmark [14] converted to a spatial audio format. In noisy and reverberant conditions, Sp-HuBERT achieves a relative reduction of over 40% in word error rate on Librispeech over WavLM Base+, despite using nearly 100 times less data for pre-training.
We implement our upstream model and training process using the Fairseq toolkit [26], and implement our downstream evaluation tasks using the s3prl toolkit [14, 15]. Along with our code, we release a new dataset of 100 000 simulated FOA impulse responses 1.
Footnote 1: FOA IR Dataset hosted on Huggingface: [https://huggingface.co/datasets/adimiri/sp-hubert_impulse_responses](https://huggingface.co/datasets/adimiri/sp-hubert_impulse_responses)
The rest of this paper is organised as follows. Section II highlights some key related publications on which our work is based. Section III gives a brief technical overview of the Ambisonics spatial format, and the Masked Prediction Loss utilised in our work. Section IV details the Sp-HuBERT architecture, losses and data augmentation techniques. Section V provides experimental details including all hyper-parameter values used for training both our upstream model, and all of the downstream models. Section VI presents our results, including experiments detailing how performance varies in noisy and reverberant conditions.
## II Related Work
This work builds upon two existing single-channel speech representation learning models, Hidden-Unit BERT (HuBERT) [6] and WavLM [7]. The HuBERT architecture is made up of two main blocks: the first block consists of several CNN layers that down-sample the input into frames with a stride of 20ms, and the second block is a stack of transformer encoders that are able to use utterance-wide context to learn deep representations of the speech. HuBERT introduces a novel self-supervised learning objective, masked prediction loss, heavily inspired by the Masked Language Modelling loss used by the BERT language model. HuBERT uses unlabelled clean speech recordings to pre-train the speech representation model for use on an automatic speech recognition (ASR) downstream task. We describe this loss in more detail in section III-B. While the BERT language model uses the input token itself as the label, HuBERT obtains discrete pseudo-labels for each frame via a K-means clustering of audio features. The HuBERT model initially trains on labels generated by clustering mel-frequency cepstral coefficients (MFCCs), and later generates new labels using features from the 6th layer of its transformer encoder.
WavLM expands on the HuBERT framework with some small modifications to the transformer architecture by replacing the absolute position bias with a gated relative position bias [27], and additionally introducing a denoising component to the training process. Rather than training on clean speech, WavLM mixes utterances with randomly sampled within-batch secondary speech, or with recorded noise samples taken from the Deep Noise Suppression Challenge dataset [28]. These changes lead to improved overall performance on a variety of downstream speech tasks, with particular improvement on speaker identification.
Additionally, our downstream evaluation methodology is based heavily upon the Speech Universal PERformance Benchmark (SUPERB) [14]. The SUPERB Challenge consists of a broad set of speech processing tasks, each with a prescribed downstream model architecture, and compares speech representation models by evaluating their performance on each task without fine-tuning. Tasks are selected to cover the diverse range of information present in speech signals, and are categorised as either speaker, content, semantic, para-linguistic, or generative.
## III Background
### _Higher Order Ambisonics Format_
Higher Order Ambisonics (HOA) is a _system-independent_ spatial audio format used for capture and reproduction of sound in a full three-dimensional sphere [29]. HOA represents the sound-field as a series of spatially-orthogonal spherical harmonics. Multi-channel microphone signals from any fixed array configuration with enough channels can be converted into HOA components by computing the weighted scalar products between the signals and the corresponding spherical harmonic functions for each channel [30]. A continuous sound-field can be reproduced as an infinite linear combination of these so-called HOA components with high accuracy [31].
In practice, the representation is truncated to a desired order, and only a fixed number of HOA components are used. First-order Ambisonics (FOA) is the first-order truncation of HOA, consisting of 4 channels, typically referred to as W (omnidirectional), X (front-to-back), Y (left-to-right), and Z (up-and-down).
### _Masked Prediction Loss_
Similarly to the language model BERT [32], the Masked Prediction training objective masks a portion of the input sequence and trains the model to predict a label associated with each of the masked frames from the context of surrounding unmasked frames. More formally, let \(\mathbf{x}\) be a speech waveform, \(\mathbf{y}=[y_{1},\dots,y_{T}]=f_{t}(\mathbf{x})\) be the output of the CNN-block, \(h_{t}=g_{t}(\mathbf{y})\) be the output of the \(L\)-layer transformer encoder block at time \(t\), and \(z_{t}\) be the class-label for the frame at time \(t\). The model parameterises the distribution over the classes as
\[p(c\,|\,\mathbf{y},t)=\frac{\exp(\text{sim}(Ag_{t}(\mathbf{y}),e_{c})/\tau)}{\sum_{c^{ \prime}=1}^{C}\exp(\text{sim}(Ag_{t}(\mathbf{y}),e_{c^{\prime}})/\tau)}, \tag{1}\]
where \(c\in[1,C]\) ranges over the class labels, \(A\) is a trainable projection matrix, \(e_{c}\) is the trainable embedding for class \(c\), \(\text{sim}(a,b)\) computes cosine similarity, and \(\tau\) is a logit scaling factor that we set to 0.1 as in prior works. The masked prediction loss is given by
\[\mathcal{L}=\sum_{t\in\mathcal{M}}-\log p(z_{t}\,|\,\operatorname{MASK}( \mathbf{y}),t),\]
where \(\operatorname{MASK}(\cdot)\) randomly replaces frames with a trainable masked embedding, and \(\mathcal{M}\) is the set of all frames that are masked.
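For concreteness, a minimal PyTorch sketch of Eq. (1) and the masked prediction loss is given below; the tensor names and the single-utterance (unbatched) shapes are simplifications of ours.

```python
import torch
import torch.nn.functional as F

def masked_prediction_loss(h, labels, mask, A, class_emb, tau=0.1):
    """h: (T, D) transformer outputs, labels: (T,) integer class labels,
    mask: (T,) bool tensor, True where the input frame was masked,
    A: (D, E) projection matrix, class_emb: (C, E) class embeddings."""
    proj = h[mask] @ A                                      # (M, E)
    # cosine similarity between projected frames and every class embedding
    logits = F.normalize(proj, dim=-1) @ F.normalize(class_emb, dim=-1).T
    logits = logits / tau                                   # logit scaling, tau = 0.1
    return F.cross_entropy(logits, labels[mask])            # -log p over masked frames
```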
## IV Spatial HuBERT
We present Spatial HuBERT (Sp-HuBERT), a multi-channel self-supervised speech representation model trained to produce noise-robust speech representations using room impulse responses for a fixed spatial configuration. We extend the single-channel training objectives used by WavLM with spatial audio simulation. By using simulated spatial audio, our training data is not restricted by the limited availability of multi-channel recordings.
### _Simulating Spatial Data_
It is necessary to assume a fixed microphone array configuration at the input to the model. In order to maximise the adaptability, we selected the First-order Ambisonics (FOA) format. FOA is a full-sphere system-independent format, and only requires 4 channels at the input. Recordings from different microphone array configurations can be converted into FOA if necessary, but larger arrays may lose some spatial resolution in the process, and planar arrays will have no resolution in the perpendicular axis.
While there are some publicly available FOA impulse response datasets [33], they are insufficient in size for self-supervised learning. We utilise a statistics-based impulse response (IR) generation algorithm to produce a large dataset of FOA impulse responses. IR properties are controlled by specifying room dimensions (height, width, and length), source location, and RT60 parameters. In lieu of releasing the code used for IR generation, we release the dataset of 100 000 simulated impulse responses, generated using parameters given in table I.
We convert clean single-channel speech recordings into stationary FOA spatial speech by convolving with the generated impulse responses. Specifically, given a clean speech recording \(\mathbf{a}\) of length \(L\) samples, and an impulse response \(\mathbf{u}\) with a direction label \(l\) we set
\[\mathbf{a}^{\prime}=\mathbf{a}*\mathbf{u},\quad\mathbf{l}=\underbrace{(l,l,\cdots,l)}_{L\text { elements}},\]
where \(\mathbf{a}^{\prime}\) is the simulated multichannel speech, and \(\mathbf{l}\) is a sequence of DOA labels for each frame. Our impulse response generation method could not be easily extended to the case of moving sound sources. As a result, we simulate moving sound sources in a free field environment (no reverberation) by computing the FOA gains at each position along the trajectory.
We limit our simulations to linear trajectories, and restrict the velocity of the potential source. Specifically, with maximum initial distances of the source to the microphone array of \(m_{x},m_{y},m_{z}\) along the \(x,y,z\) axes respectively, and a minimum distance from the microphone array \(m_{\text{dist}}\), we first randomly sample \(x\sim\mathcal{U}(-m_{x},m_{x}),y\sim\mathcal{U}(-m_{y},m_{y}),z\sim\mathcal{ U}(-m_{z},m_{z})\) such that \(||(x,y,z)||>m_{\text{dist}}\), and set our start point \(s=(x,y,z)\). Next, we randomly sample a trajectory length \(|d|\sim\mathcal{U}(0,L\nu_{\text{max}}/f_{s})\), where \(\nu_{\text{max}}\) is the maximum source velocity. The trajectory direction is uniformly sampled on the surface of a unit sphere using the rejection method. That is, we sample \(d_{x},d_{y},d_{z}\sim\mathcal{U}(-1,1)\) until \(||d_{x},d_{y},d_{z}||\leq 1\), and set the trajectory direction
\[\overrightarrow{d}=\frac{(d_{x},d_{y},d_{z})}{||d_{x},d_{y},d_{z}||}.\]
We also reject samples where the trajectory extending from \(s\) along this direction would pass within \(m_{\text{dist}}\) meters of the microphones. That is, if
\[\frac{||s\times\overrightarrow{d}||}{||\overrightarrow{d}||}<m_{\text{dist}}\]
then we reject and re-sample the trajectory direction \(\overrightarrow{d}\). We set the trajectory end-point \(e=s+|d|\cdot\overrightarrow{d}\), and the full sampled trajectory at each sample \(i\) is given by
\[g_{i}=e\cdot\frac{i-1}{L-1}+s\cdot\frac{L-i}{L-1}\]
for each \(i\) from \(1\) to \(L\). The normalised direction label at sample \(i\) can be obtained from the trajectory as
\[l_{i}=\frac{g_{i}}{||g_{i}||}.\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Parameter** & **Description** & **Distribution** \\ \hline \(L\) & Room Length & \(L\sim U(3,6)\) \\ \hline \(W\) & Room Width & \(W\sim U(2,5)\) \\ \hline \(H\) & Room Height & \(H\sim U(3,4)\) \\ \hline \(x\) & & \(x\sim U(0.5,L)\) \\ \(y\) & Source Location & \(y\sim U(0.5,W)\) \\ \(z\) & & \(z\sim U(0.5,H)\) \\ \hline RT60 & Reverberation Time & RT60 \(\sim N(0.45,0.18)\) \\ \hline \end{tabular}
\end{table} TABLE I: Table of parameters used for IR generation
Finally, our spatial audio source at sample \(i\) is assigned the values
\[a_{i}^{\prime}=\frac{a_{i}d_{\min}}{||g_{i}||}(1,l_{i_{x}},l_{i_{y}},l_{i_{z}})\]
where \(\boldsymbol{a}=(a_{1},\ldots,a_{L})\) is a clean single-channel recording, \(l_{i_{x}},l_{i_{y}},l_{i_{z}}\) are the normalised \(x,y,z\) coordinates of the source at sample \(i\), \(d_{\min}=\min_{i}\left(||g_{i}||\right)\) is the distance of the closest point on the trajectory to the microphone array, and \(||g_{i}||\) is the distance of the source to the array at sample \(i\). The W channel simply receives the original recording, while the X, Y, and Z channels at each sample are additionally multiplied by the normalised co-ordinates of the source. The scaling factor of \(d_{\min}/||g_{i}||\) accounts for the change in intensity due to the change in distance between the source and microphones.
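A minimal numpy sketch of this free-field gain computation is given below; the trajectory is assumed to have already been drawn with the rejection procedure described above, and the function and variable names are ours.

```python
import numpy as np

def spatialise_free_field(a, trajectory):
    """a: (L,) mono speech; trajectory: (L, 3) source position per sample.
    Returns the (4, L) FOA channels (W, X, Y, Z) and the (L, 3) DOA labels."""
    dist = np.linalg.norm(trajectory, axis=1)       # ||g_i||
    labels = trajectory / dist[:, None]             # normalised directions l_i
    gain = a * dist.min() / dist                    # distance attenuation d_min / ||g_i||
    w = gain                                        # omnidirectional channel
    xyz = gain[None, :] * labels.T                  # X, Y, Z channels
    return np.vstack([w, xyz]), labels
```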
Our training data is made from a mixture of reverberant, stationary simulated sound sources using the generated impulse responses, and free field, moving sound sources simulated using the method described above. The proportion of the mixture is controlled with a fixed ratio \(p_{r}\). During training, with probability \(p_{r}\) we select the stationary source approach, and with probability \(1-p_{r}\) we select the moving source approach.
### _Model Architecture_
Figure 1 shows the overall model structure for Sp-HuBERT. Similarly to single-channel speech representations, the Sp-HuBERT model architecture consists of a convolutional feature encoder followed by a transformer encoder. The convolutional encoder takes a 4-channel input, and is built of 7 layers of temporal convolutions followed by a layer normalisation. Each layer has 1024 channels and uses a GELU activation [34], with strides of (5,2,2,2,2,2,2) and kernel widths of (10,3,3,3,3,2,2) respectively, resulting in frames approximately 25 ms wide with a 20 ms stride. Sp-HuBERT uses double the channel count of WavLM in each convolutional layer, to allow the encoder to represent cross-terms between channels in the input.
The transformer encoder uses the same structure as WavLM Base. It is comprised of 12 transformer layers, each with 12 attention heads and 768-dimensional hidden states, and utilises a gated relative position bias on the first layer.
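The convolutional block can be sketched as follows; the exact placement of the normalisation layers and other details (bias terms, dropout) in the actual implementation may differ, so this is only an illustrative outline.

```python
import torch.nn as nn

def conv_feature_encoder(in_channels=4, dim=1024):
    """Seven temporal conv layers mapping 16 kHz FOA audio to ~20 ms frames
    (total stride 320 samples, receptive field 400 samples = 25 ms)."""
    kernels = (10, 3, 3, 3, 3, 2, 2)
    strides = (5, 2, 2, 2, 2, 2, 2)
    layers, prev = [], in_channels
    for k, s in zip(kernels, strides):
        layers += [nn.Conv1d(prev, dim, kernel_size=k, stride=s),
                   nn.GroupNorm(1, dim),   # layer-norm style normalisation over channels
                   nn.GELU()]
        prev = dim
    return nn.Sequential(*layers)
```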
### _Training Objective_
As shown in figure 1, Sp-HuBERT utilises a two-part masked prediction loss, as described in section III-B. The first part aims to learn acoustic units by using pseudo-labels generated by K-means clustering the 6th layer of a 1st iteration trained HuBERT model, similarly to both the HuBERT Base model and the WavLM Base model.
In addition to the acoustic loss, there is also a spatial loss component to encourage learning spatial information. The spatial loss uses quantised direction labels generated from direction-of-arrival (DOA) information available from the spatialisation process described in section IV-A. DOA labels for each frame are converted into azimuth and elevation angles, and discrete labels are generated by a uniform segmentation in each dimension. Specifically, for frame \(t\) with a normalised position \((x,y,z)\), we assign it a discrete label \(\zeta_{t}\) as
\[\zeta_{t}=\left\lfloor\frac{n\theta}{\pi}\right\rfloor+n\left\lfloor\frac{m \phi}{2\pi}\right\rfloor\]
where \(\theta=\arccos(z)\) is the elevation of the source ranging from 0 to \(\pi\), \(\phi=\arctan(y,x)+\pi\) is the azimuth of the source ranging from 0 to \(2\pi\), \(n\) is the number of segments in elevation, and \(m\) is the number of segments in azimuth. This results in a total of \(nm\) discrete classes for the classification task.
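A small sketch of this quantisation is shown below; the edge clamping at \(\theta=\pi\) and \(\phi=2\pi\) is our addition to keep the label inside the \(nm\) classes, and with \(n=16\), \(m=32\) this yields the 512 classes used later in our experiments.

```python
import numpy as np

def doa_class(x, y, z, n=16, m=32):
    """Quantise a normalised direction (x, y, z) into one of n*m classes."""
    theta = np.arccos(np.clip(z, -1.0, 1.0))          # elevation in [0, pi]
    phi = np.arctan2(y, x) + np.pi                    # azimuth in [0, 2*pi]
    el_bin = min(int(n * theta / np.pi), n - 1)       # guard the theta = pi edge
    az_bin = min(int(m * phi / (2 * np.pi)), m - 1)   # guard the phi = 2*pi edge
    return el_bin + n * az_bin
```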
The total loss is a weighted sum of these two components. Specifically,
\[\mathcal{L}_{\text{acoustic}} =\sum_{t\in M}-\log p(z_{t}\mid\operatorname{MASK}(\boldsymbol{ y}),t)\] \[\mathcal{L}_{\text{spatial}} =\sum_{t\in M}-\log p(\zeta_{t}\mid\operatorname{MASK}( \boldsymbol{y}),t)\] \[\mathcal{L}_{\text{total}} =\mathcal{L}_{\text{acoustic}}+\lambda\mathcal{L}_{\text{spatial}}\]
where \(z_{t}\) and \(\zeta_{t}\) are the acoustic and spatial class labels respectively for frame \(t\), \(p\) is defined as in equation 1, and \(\lambda\) is a hyper-parameter that adjusts the weight of the spatial loss.
Sp-HuBERT also makes use of data augmentation akin to WavLM by mixing DNS noise and secondary speech into utterances during training. A similar utterance mixing protocol to WavLM [7, Alg. 1] is employed. For each batch of spatial speech signals, utterances are mixed with some probability \(p_{m}\). If mixing occurs, the interfering signal will be sampled from a DNS noise dataset with probability \(p_{n}\) and spatialised using the method given in section IV-A, or otherwise sampled from a secondary speech utterance from within the same batch. If the interference is speech, it is truncated to be at most half the length of the primary signal. The primary speech is mixed with the interference at a randomly selected SNR.
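The mixing step itself can be sketched as follows; computing the signal energies on the omnidirectional W channel and the function names are assumptions of this illustration.

```python
import numpy as np

def mix_at_snr(primary, interference, snr_db):
    """primary, interference: (4, L) FOA signals. The interference is rescaled
    so that the W-channel SNR equals snr_db, then added to the primary."""
    length = min(primary.shape[1], interference.shape[1])
    interference = interference[:, :length]
    p_energy = np.mean(primary[0, :length] ** 2) + 1e-12
    i_energy = np.mean(interference[0] ** 2) + 1e-12
    scale = np.sqrt(p_energy / (i_energy * 10 ** (snr_db / 10)))
    mixed = primary.copy()
    mixed[:, :length] += scale * interference
    return mixed
```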
Fig. 1: Sp-HuBERT model architecture
## V Experimental Setup
### _Upstream Training_
We train Sp-HuBERT using 960 hours of LibriSpeech audio [35], spatialised using simulated impulse responses and augmented with noise drawn from the DNS challenge dataset [28]. Unless specified otherwise, augmentation hyperparameters are set to \(p_{r}=0.5\), \(p_{m}=0.3\), \(p_{n}=0.5\), and the spatial loss weight \(\lambda=0.25\). We use 512 classes for the spatial loss, uniformly dividing azimuth into \(m=32\) segments, and elevation into \(n=16\) segments, resulting in an overall segmentation width of \(11.25\) degrees. The Sp-HuBERT model is trained on 4 GPUs for 300k steps, with a batch size of at most 140s of audio per GPU. An Adam optimizer is used with \(\beta=(0.9,0.98)\) and the learning rate ramps up linearly from zero to 3e-4 over the first 30k iterations before decaying linearly back to zero. We use the same masking configuration as HuBERT, with mask span set to 10 frames and \(8\%\) of frames chosen as mask starts.
We select a value of \(\lambda\) by comparing upstream validation losses. Table II shows the values of the acoustic and spatial losses at 200k and 300k iterations for 3 different values of \(\lambda\). It is clear from this table that increasing \(\lambda\) results in a reduction in the spatial loss, with the lowest values at \(\lambda=0.5\). For the acoustic loss however, we note that decreasing \(\lambda\) results in diminishing returns, with only a minimal improvement from \(\lambda=0.25\) to \(\lambda=0.125\). We prioritise acoustic performance over spatial performance, as the primary purpose of the model is to achieve better performance on acoustic-focused tasks in noisy environments, and therefore opt to use \(\lambda=0.25\) so as to minimise the spatial loss without compromising on the acoustic loss.
### _Downstream Evaluation_
We adapt a selection of tasks from various categories of the SUPERB benchmark to use both spatialisation and noise augmentation. From the speaker information category, we have chosen Speaker Identification (SID). From the content category, we have chosen Phoneme Recognition (PR) and Automatic Speech Recognition (ASR). Finally, we evaluate the Sp-HuBERT model on Emotion Recognition (ER) from the para-linguistic category. For all tasks, pre-trained upstream models are frozen, and the input to the downstream model is a trainable weighted sum of the transformer encoder layers.
For PR and ASR, we use the same task setup as the SUPERB benchmark. Both tasks are trained using a CTC loss, and performance is measured using Levenshtein distance on the phoneme sequence and word sequence respectively. The ASR task also uses the official LibriSpeech 4-gram model for language model decoding. For the SID and ER tasks, we change the downstream model from mean pooling to attentive pooling, to better accommodate the noisy setting. Both tasks are trained using a cross-entropy loss and performance is measured using classification accuracy. The four tasks are summarised in table IV.
For baseline comparisons, we also train downstream models for the WavLM Base and WavLM Base+ speech representations. Table III compares the model sizes, training times, and training set sizes of these representations to Sp-HuBERT. In terms of training time and dataset size, the closest comparison to our model is WavLM Base, while WavLM Base+ is the current state-of-the-art fully self-supervised single-channel representation model of a comparable size.
In addition to the acoustic tasks featured in the SUPERB Benchmark, Sp-HuBERT also learns spatial information through the spatial masked prediction loss. To evaluate the presence and accessibility of spatial information, we implement a Speech Localisation (SL) task using simulated data. The dataset is comprised of a subset of speech data taken from LibriLight [36], convolved with simulated FOA room impulse responses from our own dataset. Each simulated utterance contains exactly 10 seconds of audio from a stationary talker. We use a simple attention pooling downstream model for Sp-HuBERT, and train with an MSE loss on the normalised Cartesian co-ordinates of the speaker, as this was found to be the most effective method in [37]. We measure performance using geodesic angular distance.
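The evaluation metric can be sketched as below: the geodesic angular distance is the angle between the predicted and reference direction vectors (the conversion to degrees is our choice of units).

```python
import numpy as np

def angular_error_deg(pred, target):
    """Geodesic angle (degrees) between predicted and true direction vectors.
    pred, target: (..., 3) Cartesian coordinates, not necessarily unit norm."""
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    target = target / np.linalg.norm(target, axis=-1, keepdims=True)
    cos = np.clip(np.sum(pred * target, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))
```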
As in upstream training, \(p_{r}\) controls the proportion of sources that are reverberant, and \(p_{m}\) controls the proportion of utterances that are augmented. For downstream training, we always use \(p_{n}=1\), so that augmentation never uses secondary speech.
For each downstream task, we train models with several learning rates and choose the model that has the best validation set performance. The learning rates used are given in Table V.
Results for Sp-HuBERT, WavLM Base, and WavLM Base+ upstream models are shown in table VI. As expected, WavLM Base+ performs the best across all tasks in the clean setting due to its larger training corpus and duration, with 94000 hours of data and 1M gradient updates compared to Sp-HuBERT's 960 hours of data and 300k gradient updates. Sp-HuBERT significantly outperforms WavLM Base on Speaker ID, and shows comparable performance on ASR. In the noisy setting however, we see Sp-HuBERT offer a considerable performance improvement over both WavLM Base and Base+. With language model decoding, Sp-HuBERT achieves greater than 40% reduction in WER when compared to WavLM Base+, along with significant improvements in SID. Across the board, the degradation in performance arising from the introduction of noise is significantly higher for WavLM Base+ when compared to Sp-HuBERT.
### _Sensitivity to Noise_
Figure 2 shows the performance of each upstream model vs SNR on the PR and SID tasks. The solid lines show performance using the downstream model trained only on clean speech, while the dashed lines show performance using the downstream model trained with noise at 0-20dB SNR. On both tasks, Sp-HuBERT begins to outperform WavLM Base+ when the SNR drops below 15dB. At 5dB, Sp-HuBERT achieves an 8% reduction in phoneme error rate on Librispeech, and a 6% improvement in classification accuracy on Voxceleb1.
The difference between performance when training the downstream model on noisy data is another key point of interest here. We observe that on the phoneme recognition task, exposing the downstream model to noise during training has a minimal impact on performance, but on the speaker identification task, there is a significant improvement gained by training on noisy data, with an 8% increase in absolute accuracy at 10dB SNR when using Sp-HuBERT.
This difference in performance indicates that when exposed to noise during training, the downstream model is able to learn a more effective way to extract speaker information from the representation model. The mechanism behind this effect will be discussed further in section VI-E.
### _Sensitivity to Reverb_
Figure 3 shows the performance of each upstream model vs SNR on the ASR and SID tasks with different reverberation conditions, using the downstream model trained on noisy data. Dashed lines show the performance on test data with both reverberant speech and noise, while solid lines show the performance on free field speech and noise mixtures.
On both tasks, reverberation has a significant impact on the performance of the representations. For Sp-HuBERT, WER increases by 7% and SID accuracy decreases by 11% at 5dB SNR when introducing reverberation. However, the performance degradation is more severe for WavLM. On both tasks, even at high SNR Sp-HuBERT outperforms WavLM Base+ in reverberant conditions. Particularly on the ASR task, the performance of Sp-HuBERT degrades significantly slower than that of WavLM as the SNR decreases, with Sp-HuBERT offering a 16% WER improvement at 5dB in reverberant conditions.
### _Speech Localisation_
Similarly to the speech tasks in section VI-A, we train two downstream models to solve the Speech Localisation task. The clean trained model uses \(p_{r}=1,p_{m}=0\), and the noisy trained model uses \(p_{r}=1,p_{m}=1\) with SNRs randomly sampled between 0 and 20dB. We evaluate the performance of both models in free-field and reverberant settings. We do not compare to baseline representations for this task, as this is the first work to produce a spatial representation.
Figure 4 shows angular error vs SNR of both models in reverberant and free field testing scenarios. Firstly, we see that as SNR decreases, the presence of reverb significantly increases the difficulty of the task. On free field recordings, the performance at 5dB SNR is nearly the same as the performance at 30dB SNR, while in the reverberant recordings the average angular error increases by over 8 degrees. At high
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{4}{*}{Model} & \multicolumn{5}{c|}{Spatial SUPERB Clean} & \multicolumn{5}{c|}{Spatial SUPERB Noisy} \\ \cline{2-11} & Speaker & \multicolumn{3}{c|}{Content} & ParaL & Speaker & \multicolumn{3}{c|}{Content} & ParaL \\ \cline{2-11} & SID & PR & \multicolumn{2}{c|}{ASR (WER)} & ER & SID & PR & \multicolumn{2}{c|}{ASR (WER)} & ER \\ & Acc.\(\uparrow\) & PER\(\downarrow\) & LM\(\downarrow\) & No LM\(\downarrow\) & Acc.\(\uparrow\) & Acc.\(\uparrow\) & PER\(\downarrow\) & LM\(\downarrow\) & No LM\(\downarrow\) & Acc.\(\uparrow\) \\ \hline WavLM Base & 62.51 & 6.43 & 5.82 & 7.83 & 59.00 & 54.08 & 17.85 & 18.04 & 20.43 & 55.05 \\ \hline WavLM Base+ & **77.03** & **5.06** & **4.78** & **6.56** & **61.74** & 65.48 & 13.62 & 13.26 & 15.26 & 58.79 \\ \hline Sp-HuBERT & 73.10 & 7.25 & 5.70 & 7.87 & 60.86 & **69.43** & **9.58** & **7.84** & **10.46** & **59.77** \\ \hline \end{tabular}
\end{table} TABLE VI: Results of WavLM Base, WavLM Base+ and Sp-HuBERT on a spatial version of 4 tasks from the SUPERB benchmark
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{SID} & \multicolumn{2}{c|}{PR} & \multicolumn{2}{c|}{ASR} & \multicolumn{2}{c|}{ER} \\ \cline{2-9} & Clean & Noisy & Clean & Noisy & Clean & Noisy & Clean & Noisy \\ \hline WavLM Base & 2e-4 & 3e-4 & 2e-3 & 2e-3 & 1e-4 & 2e-4 & 1.5e-5 & 1.5e-5 \\ WavLM Base+ & 2e-4 & 3e-4 & 2e-3 & 2e-3 & 1e-4 & 2e-4 & 1.5e-5 & 1.5e-5 \\ Sp-HuBERT & 1e-4 & 2e-4 & 1e-3 & 1e-3 & 1e-4 & 1e-4 & 1.5e-5 & 1.5e-5 \\ \hline \end{tabular}
\end{table} TABLE V: A summary of the learning rates used in each downstream task by each model
SNRs however, the model appears to perform better under reverberant conditions. This is partially due to the fact that the downstream models were both trained with \(p_{r}=1\).
Next we compare training on clean data to training on noisy data. Firstly, we see that in reverberant environments, the noisy trained model consistently performs better than the clean trained version. In the free field test case however, we find that the model trained on clean data performs better at high SNRs, most likely because these conditions more closely match its training data.
We note that at high SNRs, localisation in free-field conditions on clean speech is a simple task in which traditional methods can easily obtain very high accuracy, but Sp-HuBERT averages around 8 degrees error at 30dB SNR. This is a significant limitation of the upstream model caused by the quantisation used during training, which separates both azimuth and elevation into segments with a width of 11.25 degrees. We hypothesise that using discrete DOA labels for upstream training restricts the resolution of the spatial information in the representation.
### _Layer Weight Analysis_
Following the approach of [7], we investigate the contribution of each layer of the transformer encoder to each of the
Fig. 3: A performance comparison between Sp-HuBERT, WavLM Base+ and WavLM Base at various SNRs for ASR on Librispeech and Speaker Identification on Voxceleb1. Solid lines show performance on free field signals, and dashed lines show performance on reverberant signals.
Fig. 2: A performance comparison between Sp-HuBERT, WavLM Base+ and WavLM Base at various SNRs for two tasks. Solid lines show performance when the downstream model is trained only on clean speech, and dashed lines show performance when the downstream model is trained on noisy speech with SNRs varying from 0dB to 20dB.
4 downstream tasks along with Speech Localisation for Sp-HuBERT. The input to each downstream model that we trained in our earlier experiments is a weighted-sum of the 13 layers of the transformer encoder, including the input layer. These weights indicate which layers provide the most information for the downstream models in each task. Figure 5 shows the weights learned for each task, both when trained on clean spatial speech and when trained on noisy data. Larger layer weights indicate greater contribution of the corresponding layer.
Figures 4(a) and 4(b) show that the weights learned for each of the 4 tasks are similar in both Sp-HuBERT and WavLM when trained on clean data. Consistent with the findings of [7, 38], we see that speaker information is most easily accessible from the earlier layers of the model, with the dominant weight at layer 5, while phoneme recognition and automatic speech recognition utilise layers closer to the end of the model. We also note that the layer weights for emotion recognition are near uniform, with all layers contributing very similar amounts. For the SL task, once again there is an increased contribution in the later layers, particularly layers 10 and 12.
Figures 4(c) and 4(d) show the weights learned when trained on noisy data for both Sp-HuBERT and WavLM, while figures 4(e) and 4(f) show the difference between the weights trained on noisy and clean data. For WavLM there are some subtle changes between the clean and noisy case, with an increase in the use of layer 0 for SID and an increase in the use of layer 11 for ASR. In contrast, there is a significant change in weights for Sp-HuBERT. For the SID task, the downstream model trained on noisy data is using layers 6 and 7 almost exclusively, indicating that the speaker information in these layers is far more robust to noise than that in layer 5. We also see a slight preference towards deeper layers in the ASR task, with a notable increase in the weight of layer 11 and a decrease in the weights of layers 8-10. This suggests that particularly in the case of Sp-HuBERT, later layers of the representation tend to be more robust to spatial noise than earlier layers.
This analysis also provides some insight on the performance improvements when training on noise that were previously observed in section VI-B. Through the layer weights, we see a clear difference in how the two downstream models extract information from the representations in each of the tasks. For both Sp-HuBERT and WavLM Base+, we see the most significant changes in layer weights between clean and noisy on the SID task, on which a substantial performance improvement was observed. In contrast, we see minimal change in layer weights on the PR task, on which only minimal performance improvements were observed. It appears that exposing the downstream model to noise during training allows it to select layers of the representation that contain the required speaker information, and are more robust to the noise sources. In the case of phonetic information however, it appears that no significant advantages can be found in other layers.
## VII Conclusion
This paper presents Spatial HuBERT, a self-supervised spatial speech representation model trained on a spatial speech dataset generated using simulated first order ambisonics impulse responses, which we release to the public for future development. Spatial HuBERT extends the masked prediction and denoising losses of HuBERT and WavLM with a spatial loss term and produces representations that are more robust to both noise and reverberation than state-of-the-art single channel models. Despite training on only 960 hours of data from LibriSpeech, Spatial HuBERT outperforms even WavLM Base+ on a variety of downstream tasks in noisy testing conditions. Additionally, the representations learned by Spatial HuBERT contain spatial information, enabling its use for speech localisation tasks.
For future work, we aim to increase the size of the training corpus and scale up the size of the model to enable comparisons with WavLM Large. Another potential avenue for improvement involves incorporating the loss terms from Cocktail HuBERT [23], to train the model to disentangle multiple simultaneous talkers in noisy spatial environments.
## Acknowledgments
The work for this paper was conducted as a Research Internship at Dolby Australia. We thank them for providing the compute resources required to conduct our experiments. We also thank Henry Chen and David McGrath for useful discussions, and Dylan Harper-Harris for his assistance in debugging our code.
|
2301.06801 | First thought on a high-intensity $K_S$ experiment | The $K \rightarrow \mu\mu$ decays have recently been identified as another
golden kaon physics mode alongside the rare $K \rightarrow\pi\nu\bar{\nu}$
processes. These golden modes provide precision tests of the Standard Model
with very high sensitivity to New Physics. The presented study is exploring the
possibility to address the $K_L - K_S \rightarrow \mu^+\mu^-$ interference
experimentally and outlines the challenges associated with such an ambitious
project for the far future. A next-generation experiment at the intensity
frontier is required that should be capable of collecting a large sample of
$\mathcal{O}(10^{14} - 10^{15})$ $K_L$ and $K_S$ decays. Challenges related to
the beamline design and detector technology need to be overcome if we want to
address this mode experimentally. A significant background suppression of $K_S
\rightarrow \pi^+\pi^-$ and radiative $K_L \rightarrow \mu^+\mu^-\gamma$ decays
is imperative for a few $\%$ measurement, which would require excellent
kinematic resolution and efficient photon detection. The first attempt at a
possible experimental setup to measure this effect is presented. Last but not
least, a huge number of neutral particles produced offers the possibility to
study a plethora of other rare $K_L$, $K_S$ decays as well as hyperon decays
enhancing the physics motivation for such an initiative. | Radoslav Marchevski | 2023-01-17T10:53:59Z | http://arxiv.org/abs/2301.06801v1 | # First thought on a high-intensity \(K_{s}\) experiment
###### Abstract
The \(K\rightarrow\mu\mu\) decays have recently been identified as another golden kaon physics mode alongside the rare \(K\rightarrow\pi\nu\bar{\nu}\) processes. These golden modes provide precision tests of the Standard Model with very high sensitivity to New Physics. The presented study is exploring the possibility to address the \(K_{L}-K_{S}\rightarrow\mu^{+}\mu^{-}\) interference experimentally and outlines the challenges associated with such an ambitious project for the far future. A next-generation experiment at the intensity frontier is required that should be capable of collecting a large sample of \(\mathcal{O}(10^{14}-10^{15})\)\(K_{L}\) and \(K_{S}\) decays. Challenges related to the beamline design and detector technology need to be overcome if we want to address this mode experimentally. A significant background suppression of \(K_{S}\rightarrow\pi^{+}\pi^{-}\) and radiative \(K_{L}\rightarrow\mu^{+}\mu^{-}\gamma\) decays is imperative for a few % measurement, which would require excellent kinematic resolution and efficient photon detection. The first attempt at a possible experimental setup to measure this effect is presented. Last but not least, a huge number of neutral particles produced offers the possibility to study a plethora of other rare \(K_{L}\), \(K_{S}\) decays as well as hyperon decays enhancing the physics motivation for such an initiative.
## 1 Introduction
Over the past few years, kaon physics has been attracting more and more attention. Recent experimental results on kaon Flavour Changing Neutral Currents (FCNCs) from NA62 [1, 2, 3] and LHCb [4] at CERN, and from KOTO [5, 6] in Japan, span a wide range of important measurements and have triggered broad theoretical interest. Recently it has been shown that a measurement of the interference between \(K_{L}\rightarrow\mu^{+}\mu^{-}\) and \(K_{S}\rightarrow\mu^{+}\mu^{-}\) decays can be used to extract the \(CP\) violation parameter \(\eta\) with a theoretical precision of 1% [7, 8]. The exclusive sensitivity of the interference to the short-distance \(CP\)-violating part of the \(K\rightarrow\mu^{+}\mu^{-}\) amplitude turns the \(K\rightarrow\mu^{+}\mu^{-}\) process into yet another golden rare kaon channel. This process can precisely probe the CKM structure of the SM, and at the same time offers large sensitivity to contributions from physics beyond the Standard Model. Measuring the \(K_{S}-K_{L}\) interference effect (\(BR_{eff}\sim 8\times 10^{-10}\)) with a precision of a few %, matching the theoretical knowledge of this process, is an essential flavour physics objective, and effort must be spent to address it. Such a measurement will be an extremely challenging task far beyond the reach of modern kaon physics experiments. It will only be possible at next-generation kaon experiments at the intensity frontier.
## 2 High-intensity neutral kaon experiment
The first attempt at an experimental setup to measure extremely rare neutral kaon decays is inspired by the design of the successful NA62 experiment[9]. The setup uses a neutral secondary beam instead of a charged one. The experiment will use a 400 GeV/\(c\) primary proton beam extracted from the CERN SPS accelerator impinging on a Beryllium target at a 12 mrad incident angle. The secondary beam opening angle is 1 mrad with respect to the center of the target defined by a 6 m long collimating system. The large incident angle of the primary beam will result in a soft kaon momentum spectrum and a geometrical acceptance of 30-40% for two-body kaon decays. The collimator is followed by a 60 m long decay region and a spectrometer, both located in the same vacuum tank. A charged hodoscope will allow precise timing measurements of the traversing charged particles. A calorimetric system complemented by a fast muon veto detector will provide particle identification capability to separate muons from pions and electrons. A sketch of the experimental layout is presented in Figure 1.
An option to increase signal acceptance and provide parallel tracks for two-body decays by adding a second magnet to the experimental setup can also be studied. This double-bend technique was successfully employed by the E871 experiment at the Brookhaven National Laboratory [10, 11] to reduce background from semileptonic kaon decays and can be an important improvement of the proposed experimental setup.
## 3 Sensitivity and charged particle rates
A toy simulation is developed to estimate the sensitivity of the setup. The differential momentum spectrum of the secondary kaon beam is generated using the Malensek parametrization [12]. The spectrum depends on the incident angle of the primary beam and the solid angle covered by the collimator opening. The signal process is generated according to the time-dependent rate [8] and includes three components: \(K_{S}\rightarrow\mu^{+}\mu^{-}\); \(K_{L}\rightarrow\mu^{+}\mu^{-}\); \(K_{S}-K_{L}\rightarrow\mu^{+}\mu^{-}\) interference. After the decay, the resulting muons are further propagated through the whole experimental setup to estimate the signal acceptance. The main kinematic quantity used is the di-muon invariant mass, \(M_{\mu\mu}=\sqrt{(P_{\mu 1}+P_{\mu 2})^{2}}\), where \(P_{\mu 1}\) and \(P_{\mu 2}\) are the momenta of the two muon tracks. The \(M_{\mu\mu}\) distribution for the signal is a peak centered at the neutral kaon mass. In the simulation, the momentum and angular resolution of the spectrometer for muon tracks are assumed to be the same as for the existing straw tube tracker of NA62. A smearing factor is applied, resulting in an invariant mass resolution of \(\sigma_{M}\sim 1.9\) MeV/\(c^{2}\) for di-muon events. The signal region is defined in the 492-504 MeV/\(c^{2}\) range. The geometrical signal selection results in 40% acceptance for the signal, which is quite encouraging. Further corrections are applied to account for additional background-suppressing conditions, projected DAQ, Trigger, and detector efficiencies, bringing the signal efficiency down to 15%. This number should represent a more realistic estimate of the true signal efficiency, but it relies on many assumptions and carries significant uncertainties. The signal yield of the experimental setup described above is between 75 and 300 interference events/year after applying the 15% signal efficiency. The expected number of interference events in the final experiment cannot be reliably computed because it depends heavily on the choice of a beam setup (incident angle, collimation scheme) and the strong phase, \(\varphi_{0}\), governing the size of the interference effect. Optimizations of the beamline are essential to determine the ultimate sensitivity of the experiment.
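To illustrate the structure of such a toy study, the sketch below samples proper decay times from a schematic intensity containing \(K_{S}\), \(K_{L}\) and interference terms, then smears the di-muon invariant mass before applying the 492-504 MeV/\(c^{2}\) signal window. The functional form is the generic one for \(K_{S}-K_{L}\) interference; all numerical inputs (lifetimes, mass difference, amplitude ratio, strong phase) are illustrative placeholders and not the values used in the study described above, and the beamline, acceptance and efficiency steps are omitted.

```python
import numpy as np

TAU_S, TAU_L = 0.09, 51.2      # K_S, K_L lifetimes [ns] (approximate, illustrative)
DELTA_M = 5.29                 # K_L - K_S mass difference [1/ns] (approximate)
M_K, SIGMA_M = 497.6, 1.9      # kaon mass and di-muon mass resolution [MeV/c^2]

def intensity(t, r=0.1, phi=0.0):
    """Schematic time-dependent decay intensity with an interference term."""
    gs, gl = 1.0 / TAU_S, 1.0 / TAU_L
    return (np.exp(-gs * t) + r**2 * np.exp(-gl * t)
            + 2.0 * r * np.cos(DELTA_M * t + phi) * np.exp(-0.5 * (gs + gl) * t))

rng = np.random.default_rng(0)
# Sample proper decay times by accept-reject against the schematic intensity
# (the value at t=0 bounds the function for phi=0 and small r).
t = rng.uniform(0.0, 5.0 * TAU_S, 200_000)
keep = rng.uniform(0.0, intensity(0.0), t.size) < intensity(t)
t = t[keep]

# Smear the reconstructed di-muon invariant mass and apply the signal window.
m_rec = rng.normal(M_K, SIGMA_M, t.size)
in_window = (m_rec > 492.0) & (m_rec < 504.0)
print(f"accepted decays: {t.size}, inside 492-504 MeV/c^2 window: {in_window.sum()}")
```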
The main background is produced by \(K_{S}\rightarrow\pi^{+}\pi^{-}\) (\(BR\approx 70\%\)) decays, either through a double \(\pi\rightarrow\mu\) misidentification or through two consecutive \(\pi^{+}\rightarrow\mu^{+}\nu_{\mu}\) decays. Both backgrounds must be suppressed by at least a factor of \(10^{11}\), which will present significant experimental challenges. These challenges can be addressed by strong particle identification and excellent kinematic resolution. Another source of background will be the \(K_{L}\rightarrow\mu^{+}\mu^{-}\gamma\) (\(BR\approx 3.6\times 10^{-7}\)) decay, where the presence of an additional photon and the small branching ratio should be sufficient to bring the background to the desirable level.

Figure 1: Sketch of a possible experimental layout for the high-intensity neutral kaon experiment.
A main challenge for the experiment will be the particle rate in the detectors, which is primarily generated by \(K_{S}\) and \(\Lambda\) decays. Assuming \(10^{19}\) POT the rate of charged particles at the first spectrometer station, located at a \(z=85\) m from the primary target is about 1 GHz over a surface area of 3.7 m\({}^{2}\). The rate is heavily dependent on the distance from the beam (see Figure 2). The high rate in the central parts of the detector is produced primarily by \(\Lambda\to p\pi^{-}\) decays, where the proton takes a larger part of the momentum and primarily traverses the central part of the detector. The highly non-uniform rate imposes technological challenges on the required detectors. While the charged particle rates are low at the outer parts of the detector (50-100 KHz/cm\({}^{2}\)) in the central parts the rates can reach up to 0.7-1 MHz/cm\({}^{2}\), an order of magnitude higher. This challenge can be solved by developing high-granularity detectors with different technology as a function of radius. However, the interface between the different detector materials will not be trivial. Solid-state detectors might be the solution and we can look for solutions required for detectors at the HL-LHC which impose similar requirements. Finally, the large rates also require a novel readout system. A standard TDAQ chain involving a low-level hardware trigger followed by an online software-based system might not be feasible. Solutions adopting a purely software-trigger system will be investigated.
## 4 Areas for future development
The presented sensitivity projections albeit quite crude are encouraging and warrant more serious feasibility studies. The number of expected interference events depends on the strong phase \(\varphi_{0}\) and values that produce constructive interference are favourable. Work on the SM estimate gives a value of \(\cos\varphi_{0}=0.978\pm 0.009\)[13] for the strong phase, indicating maximal constructive interference and better signal sensitivity.
A more realistic design of the beamline should be developed. Optimizing the incident angle of the primary beam onto the target, collimation, and muon shielding will determine if the necessary statistics of \(\mathcal{O}(10^{3})\) interference events can be collected. To collect such a large number of rare events implies large rates in the detectors downstream of the fiducial volume. A dedicated R&D program is required to provide tracking and calorimetry at the GHz regime. High-granularity detectors with a time resolution of about 100 ps or better are needed, which presents a technological challenge that must be tackled over the next years. The rate in the detectors will be highly non-uniform over the detection surface (see Figure 2). The large particle rates close to the beam pipe would require the development of hybrid detectors using detection techniques with different rate capabilities as a function of the distance from the beam axis.
Figure 2: Charged particle rate at the first spectrometer station as a function of the distance from the beam axis.
The detectors must also provide excellent spatial, momentum, and energy resolution to ensure the necessary suppression of background decays and accidentals. The sensitivity of the measurement can be heavily impacted by background contamination adding a systematic uncertainty to the extraction of the \(\eta\). The background from \(K_{S}\to\pi^{+}\pi^{-}\) and \(K_{L}\to\mu^{+}\mu^{-}\gamma\) processes produce the main background from kaon decays. The experimental design will be heavily geared towards precise kinematic reconstruction, fast timing, and hermetic photon detection that must provide the necessary background rejection. The large rate of muons generated from the beamline can lead to accidental di-muon pairs and can be a large source of background. This background has not been addressed so far and must be one of the main concerns during the beamline optimization process.
The large amount of \(\mathcal{O}(10^{14})\) neutral kaon decays and \(\mathcal{O}(10^{13})\)\(\Lambda\) baryon decays offers a great opportunity to discover and measure other very rare processes. Notable examples are the \(K_{L}\to\pi^{0}l^{+}l^{-}\) and \(K_{L}\to\mu e\) decays, which are of great theoretical interest because they can provide important constraints on various NP scenarios. Sensitivity studies for a broad range of processes must be performed and new ideas about interesting observables are welcome.
## 5 Conclusions
The study of \(K\to\mu^{+}\mu^{-}\) decays presents a golden opportunity to obtain a clean determination of the \(CP\)-violating parameter \(\eta\) from kaon physics, in addition to the golden \(K_{L}\to\pi^{0}\nu\bar{\nu}\) mode. The capabilities of the CERN facilities to deliver high-intensity kaon beams offer interesting prospects to measure the \(K_{S}-K_{L}\to\mu^{+}\mu^{-}\) interference with a few % precision in the future. If such an experiment can be constructed it will enable a very broad physics program complementary to the \(K\to\mu^{+}\mu^{-}\) studies. The large number of \(3-4\times 10^{13}\) kaon decays/year will allow significant improvement in the precision of a wide range of kaon observables, as well as very sensitive searches addressing a broad range of NP scenarios. The large particle rates in the detectors impose severe technical challenges, which might need at least a decade for its solution. Very fast and highly granular detectors will be imperative and require a dedicated R&D program. Innovative solutions can be found by exploring synergies with detector developments for experiments at the high-luminosity phase of the LHC, which will be essential for the next-generation kaon experiments at the intensity frontier.
|
2305.05627 | An Exploration of Encoder-Decoder Approaches to Multi-Label
Classification for Legal and Biomedical Text | Standard methods for multi-label text classification largely rely on
encoder-only pre-trained language models, whereas encoder-decoder models have
proven more effective in other classification tasks. In this study, we compare
four methods for multi-label classification, two based on an encoder only, and
two based on an encoder-decoder. We carry out experiments on four datasets --
two in the legal domain and two in the biomedical domain, each with two levels
of label granularity -- and always depart from the same pre-trained model, T5.
Our results show that encoder-decoder methods outperform encoder-only methods,
with a growing advantage on more complex datasets and labeling schemes of finer
granularity. Using encoder-decoder models in a non-autoregressive fashion, in
particular, yields the best performance overall, so we further study this
approach through ablations to better understand its strengths. | Yova Kementchedjhieva, Ilias Chalkidis | 2023-05-09T17:13:53Z | http://arxiv.org/abs/2305.05627v1 | # An Exploration of Encoder-Decoder Approaches to
###### Abstract
Standard methods for multi-label text classification largely rely on encoder-only pre-trained language models, whereas encoder-decoder models have proven more effective in other classification tasks. In this study, we compare four methods for multi-label classification, two based on an encoder only, and two based on an encoder-decoder. We carry out experiments on four datasets--two in the legal domain and two in the biomedical domain, each with two levels of label granularity-- and always depart from the same pre-trained model, T5. Our results show that encoder-decoder methods outperform encoder-only methods, with a growing advantage on more complex datasets and labeling schemes of finer granularity. Using encoder-decoder models in a non-autoregressive fashion, in particular, yields the best performance overall, so we further study this approach through ablations to better understand its strengths.
## 1 Introduction
Multi-label classification constitutes the task of predicting multiple labels for an input as opposed to a single (possibly binary) one. The labels are drawn from a set of up to several hundred classes, often with the added challenge of class imbalance. While the order in which labels are predicted is irrelevant, there can be interdependence between subsets of labels. The task is commonly approached with a classification model based on a pre-trained encoder followed by a multi-output classification head.
Encoder-decoder models, like T5 (Raffel et al., 2020), have taken over recent NLP literature with state-of-the-art results on various tasks, such as question-answering (QA), summarization, single-label classification, etc. Raffel et al. (2020) showed that any given NLP task could be reformulated as a _text-to-text_ task and solved with conditional generation, i.e., generating a text sequence that represents the desired output, be that a span of text in QA, a text summary, a label descriptor, etc. Liu et al. (2021) presented an alternative use of encoder-decoder models for classification tasks in particular, wherein T5's decoder is used in a non-autoregressive fashion to obtain output representations, which are then fed to a classification head.
The application of encoder-decoder methods to multi-label classification is currently limited to one experiment in the work of Liu et al. (2021), who compare a text-to-text approach and their non-autoregressive approach on a single dataset, including an encoder-only baseline built off of a different pre-trained model, BERT (Devlin et al., 2019). They obtain results favorable to the two encoder-decoder methods, but since the focus of their work is not multi-label classification in particular, their evaluation is insufficient to draw hard conclusions about this task, and analysis on the contribution of different model components to performance on the task is missing altogether.
In this work, we carry out an extensive study of encoder-decoder approaches to multi-label classification. To ensure the thorough and fair evaluation of all methods:
1. We experiment on four datasets from two different domains (legal and biomedical), each with two levels of label granularity.
2. We include four methods for multi-label classification, two encoder-only methods and two encoder-decoder methods.
3. We conduct preliminary development to determine the best configuration for the application of each method, e.g. choice of label descriptors for the text-to-text approach.
4. We explore how model size affects performance, by fine-tuning small, base, and large T5 models.
5. We ablate components of the best performing approach, the non-autoregressive encoder-decoder method of Liu et al. (2021), to better understand its strengths.
We release our code base to assure reproducibility and let others extend our study by experimenting with new methods and more datasets.1
Footnote 1: [https://github.com/coastalcph/Multi-Label-Classification-T5](https://github.com/coastalcph/Multi-Label-Classification-T5)
## 2 Related Work
Class imbalance is a critical issue in multi-label classification, with researchers searching for the best method to handle rare (less represented) labels.
**Encoder-only Approaches.** Snell et al. (2017) introduced the idea of a _prototype_ label vector, obtained by averaging over all instances of a given class and used to add inductive bias to their Prototypical Network for multi-label classification. In a similar vein, Mullenbach et al. (2018) developed the Label-Wise Attention Network (LWAN) architecture, in which label-wise document representations are obtained by learning to attend to the most informative input words for each label, using trainable label vectors as keys.
Chalkidis et al. (2020) systematically studied the effects of different language encoders (CNNs, BIGRUs, BERT) and several variants of LWAN with regard to the representation of prototype labels. Experimenting with three datasets (EURLEX, MIMIC-III, and AMAZON), they showed that better language encoders counteract the positive effect of the LWAN module, i.e., a standard BIGRU classifier outperforms CNN-based LWANs Mullenbach et al. (2018), and a standard BERT outperforms BIGRU-LWAN, respectively. Moreover, BERT-based LWANs offer minor overall improvements compared to a vanilla BERT classifier, wherein BERT's _CLS_ token representation is passed to a classification head Devlin et al. (2019).
Chalkidis et al. (2021) were the first to explore the use of a T5 model for multi-label classification, although they only considered an encoder-only classifier, disregarding the model's decoder. They followed the now standard approach of a classification head on top of the </s> token representation. In experiments with mT5 Xue et al. (2021), they showcased improved results compared to XLM-R Conneau et al. (2020) on a newly introduced multilingual dataset, MultiEURLEX.
**Encoder-Decoder Approaches.** Text-to-text approaches, which utilize the full encoder-decoder model, have proven effective for binary and single-label classification tasks Raffel et al. (2020); Chung et al. (2022). The key to such approaches are label verbalizers, words in natural language which verbalize the underlying semantics of a given class. Label verbalizers are represented in the embedding space of pre-trained models and in this way benefit from the model pre-training. This can be more optimal especially for few- and zero-shot labels, in comparison to head-based classification methods where randomly initialized parameters have to be learned from scratch.
Liu et al. (2021) presented an alternative use of the full T5 model for non-autoregressive tasks, e.g. single-label and multi-label classification, wherein the decoder is used to obtain label-wise representations informed by the input document, which in turn are fed to label-specific binary classification heads. Liu et al. (2021) performed one set of experiments on the EURLEX-57K dataset Chalkidis et al. (2019), in which they compared their non-autoregressive approach to a T5-based text-to-text approach and a standard BERT-based classifier. They found that both T5-based approaches outperformed the encoder-only classifier, the non-autoregressive method performing best. Nonetheless, the encoder-only classifier had less than half the parameters of the T5 model (110M vs 222M). Encoder-decoder approaches thus seem to carry potential for multi-label classification, still with insufficient empirical evidence, however.
## 3 Methods
We experiment with four methods for multi-label classification, _Encoder+Head_, _LWAN_, _Seq2Seq_, and _T5Enc_, basing their implementation on the T5 model Raffel et al. (2020). T5 is a transformer-based encoder-decoder model Vaswani et al. (2017), which encodes a string of input tokens and generates a string of output tokens.
All methods discussed below use T5's encoder to represent input documents, a document being denoted as \([x_{1},x_{2},\dots,x_{N}]\), where \(N\) is the document length in terms of T5 subword tokens. Some methods further use the model's decoder--we introduce decoder notation where needed.
**Encoder+Head.** In this case, we use only the encoder of T5 in the standard classification setting, as introduced by Devlin et al. (2019). We feed the
document to the encoder, and use the representation of the special \(<\)/s\(>\) token as document representation (\(d\in\mathbf{R}^{dim}\)). This representation is passed to \(L\) standard classification heads, one per label.
**LWAN.** In this case, we use a Label-Wise Attention Network (LWAN) (Mullenbach et al., 2018) on top of the T5 encoder, as done in Chalkidis et al. (2020). We feed the document to the encoder, and use one attention head per label to generate \(L\) label-wise document representations \(d_{l}\in\mathbf{R}^{dim}\), i.e., \(L\) weighted averages of the contextualized token representations. Intuitively, each head focuses on possibly different tokens of the document relevant to the corresponding label. LWAN employs \(L\) linear layers (\(o_{l}\in\mathbf{R}^{dim\times 1}\)) each operating on a different label-wise document representation \(d_{l}\), to produce \(L\) scores (logits), one per label.
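A minimal sketch of the single-head label-wise attention described above is given below. The tensor shapes, the initialisation, and the module name are our own choices; the token states are assumed to come from the T5 encoder.

```python
import torch
import torch.nn as nn

class LabelWiseAttention(nn.Module):
    """One attention query and one output layer per label."""

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_dim))
        self.label_outputs = nn.Parameter(torch.randn(num_labels, hidden_dim))
        self.label_bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, token_states, attention_mask):
        # token_states: (batch, seq_len, hidden); attention_mask: (batch, seq_len)
        scores = torch.einsum("bsh,lh->bls", token_states, self.label_queries)
        scores = scores.masked_fill(attention_mask[:, None, :] == 0, float("-inf"))
        alphas = scores.softmax(dim=-1)                        # label-wise attention weights
        label_docs = torch.einsum("bls,bsh->blh", alphas, token_states)
        logits = (label_docs * self.label_outputs).sum(-1) + self.label_bias
        return logits                                          # (batch, num_labels)
```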
**Seq2Seq.** In this case, we use T5 for conditional generation, which is the standard form of use, since T5 was trained in an autoregressive fashion. The target labels are formatted as a sequence of label descriptors, separated by a comma and a space, and ordered alphabetically, e.g., 'EU, finance'. We feed the document to the encoder and use the decoder to generate the tokenized output sequence, \([s_{1},s_{2},\dots,s_{M}]\). When we evaluate the trained model's performance at inference time, we split the generated sequences using comma as a delimiter, keeping only valid label descriptors, and treat them as a set (since their order does not matter for the task). We consider different options for the label descriptors, discussed in Section 5.2.
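The target construction and the parsing of generated sequences at inference time can be sketched as follows; this mirrors the description above, with function names of our own choosing.

```python
def encode_labels(labels):
    """Serialise a label set into the target string, e.g. {'finance', 'EU'} -> 'EU, finance'."""
    return ", ".join(sorted(labels))

def decode_labels(generated_text, label_set):
    """Split a generated sequence on commas and keep only valid label descriptors."""
    candidates = [part.strip() for part in generated_text.split(",")]
    return {c for c in candidates if c in label_set}

# Usage example with a toy label set
label_set = {"EU", "finance", "agriculture"}
assert encode_labels({"finance", "EU"}) == "EU, finance"
assert decode_labels("EU, finance, banana", label_set) == {"EU", "finance"}
```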
**T5Enc.** In this case, we follow the work of Liu et al. (2021), where they use T5 in a non-autoregressive fashion.2 We feed the document to the encoder, and use the decoder in non-autoregressive fashion, where its inputs are fixed (pre-populated), i.e., we feed the decoder with single-token label descriptors, \([d_{1},d_{2},...,d_{L}]\), where \(L\) is the size of the full label set. We then use a binary classification head (\(o_{l}\in\mathbf{R}^{dim\times 1}\)) per decoder output representation to produce \(L\) scores, one per label. This method can be seen as an advanced version of the LWAN method which builds label-wise representations (\(d_{l}\)) via attention. In this case, however, these representations are further co-attended (conditioned) via the standard decoder self-attention across many decoder layers.
Footnote 2: We keep the name T5Enc, as coined by the authors, for consistency, although the model actually uses both the encoder and the decoder of T5.
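A sketch of the T5Enc setup is shown below, using the Hugging Face T5Model so that the decoder can be fed a fixed sequence of label tokens and each output position can drive its own binary head. The per-label parameterisation, the choice of label_token_ids (single-token descriptors or added special tokens), and the decision to leave the decoder's stock (causal) self-attention mask untouched are assumptions of this sketch, not details confirmed by the original implementation.

```python
import torch
import torch.nn as nn
from transformers import T5Model

class T5EncClassifier(nn.Module):
    """Non-autoregressive use of T5: fixed label tokens as decoder input,
    one binary classification head per decoder output position."""

    def __init__(self, model_name: str, label_token_ids):
        super().__init__()
        self.t5 = T5Model.from_pretrained(model_name)
        self.register_buffer("label_token_ids", torch.tensor(label_token_ids))
        num_labels = len(label_token_ids)
        hidden = self.t5.config.d_model
        self.heads = nn.Parameter(torch.randn(num_labels, hidden) * 0.02)
        self.bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, input_ids, attention_mask):
        batch = input_ids.size(0)
        decoder_input_ids = self.label_token_ids.unsqueeze(0).expand(batch, -1)
        out = self.t5(input_ids=input_ids, attention_mask=attention_mask,
                      decoder_input_ids=decoder_input_ids)
        label_states = out.last_hidden_state                      # (batch, L, hidden)
        return (label_states * self.heads).sum(-1) + self.bias    # (batch, L) logits
```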
## 4 Datasets
We experiment with four datasets from the legal and biomedical domains, each with two different label granularities, i.e., label sets including more abstract or more specialized concepts.
**UKLEX.** United Kingdom (UK) legislation is publicly available as part of the United Kingdom's National Archives.3 Most of the laws have been categorized into thematic categories (e.g., healthcare, finance, education, transportation, planning), which are stated in the document preamble and are used for archival indexing purposes. The UKLEX dataset (Chalkidis and Sogaard, 2022) comprises
Figure 1: Depiction of the four task-specific methods for multi-label classification: encoder-only (_Encoder+Head_, _LWAN_), and encoder-decoder (_Seq2seq_, _T5Enc_). \(x\): input tokens, \(y\): label predictions, \(d\): label descriptors, \(N\): input sequence length, \(L\): label set size, \(M\): number of labels for the given input.
36.5k UK laws. The dataset is chronologically split in training (20k, 1975-2002), development (8k, 2002-2008), and test (8.5k, 2008-2018) sets.
**EURLEX.** European Union (EU) legislation is published on the EUR-Lex website. All EU laws are annotated by EU's Publications Office with multiple concepts from EuroVoc, a thesaurus maintained by the Publications Office.4 EuroVoc has been used to index documents in systems of EU institutions. We use the English part of the dataset of Chalkidis et al. (2021), which comprises 65k EU laws (documents). The dataset is chronologically split in training (55k, 1958-2010), development (5k, 2010-2012), and test (5k, 2012-2016) sets. It supports four different label granularities. We use the 1st and 2nd level of the EuroVoc taxonomy.
Footnote 4: [http://eurovoc.europa.eu/](http://eurovoc.europa.eu/)
**BIOASQ.** The BIOASQ (Task A) dataset consists of biomedical articles from PubMed,5 annotated with concepts from the Medical Subject Headings (MeSH) taxonomy (Tsatsaronis et al., 2015; Nentidis et al., 2021).6 MeSH is a hierarchically-organized vocabulary produced by the National Library of Medicine. The current version of MeSH contains more than 29k concepts referring to various aspects of biomedical research (e.g., diseases, chemicals and drugs). It is primarily used for indexing, cataloging, and searching of biomedical and health-related information. We subsample 100k documents from the period 2000-2021 in the latest version (v.2022) of the dataset, and split those chronologically for training (80k, 1964-2015), development (10k, 2015-2018), and testing (10k, 2018-2020). We use the 1st and 2nd levels of the MeSH taxonomy.
Footnote 5: [https://pubmed.ncbi.nlm.nih.gov](https://pubmed.ncbi.nlm.nih.gov)
Footnote 6: [https://www.nlm.nih.gov/mesh/](https://www.nlm.nih.gov/mesh/)
**MIMIC-III.** The MIMIC-III dataset (Johnson et al., 2017) contains approximately 50k discharge summaries from US hospitals. Each summary is annotated with one or more codes (labels) from the ICD-9 hierarchy, which has eight levels in total.7 The International Classification of Diseases, Ninth Revision (ICD-9) is the official system of assigning codes to diagnoses and procedures associated with hospital utilization in the United States. Documents in MIMIC-III have been anonymized to protect patient privacy, including chronological information (e.g., entry/discharge dates). Hence, it is not possible to split the data chronologically, so we split it randomly in train (30k), development (10k), and test (10k) sets. We use the 1st and 2nd level of the ICD-9 hierarchy.
Footnote 7: www.who.int/classifications/icd/en/
All four datasets come with label descriptors, e.g. 'Agriculture & Food', 'Immigration & Citizenship' (UKLEX), and 'Chemicals and Drugs', 'Skin and Connective Tissue Diseases' (BIOASQ).8 More details about the datasets are provided in Table 1. Notice that Level 2 label sets are considerably larger than Level 1 label sets, and that the number of label assignments per document do not grow proportionately from Level 1 to Level 2, which means Level 2 labels have less representation on average.
Footnote 8: See Appendix B for label descriptors across all datasets.
## 5 Experiments
### Experimental Setup
We use the original checkpoints of T5 released by Raffel et al. (2020) from the Hugging Face Hub.9 Following Raffel et al., for all four methods we use the Adafactor optimizer (Shazeer and Stern, 2018) with a fixed learning rate of 1e-4 after warm-up for one epoch.10 Seq2Seq models are trained with teacher forcing. We report results in terms of micro-F1 (\(\mu\)-F\({}_{1}\)), and macro-F1 (\(m\)-F\({}_{1}\)) scores, the former more indicative of performance on well-represented labels, the latter, of performance on rare labels. When fine-tuning models, we use early stopping based on validation micro-F1 scores. We run each experiment with 4 seeds, and report the mean and standard deviations across runs.
Footnote 9: [https://huggingface.co/t5-base](https://huggingface.co/t5-base)
Footnote 10: In preliminary experiments, we also considered the widely used AdamW optimizer (Loshchilov and Hutter, 2017), which led to lower performance in most cases.
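For completeness, the two evaluation metrics can be computed directly from binary label-indicator matrices, e.g. with scikit-learn; the tiny arrays below are only a usage illustration.

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])   # gold label indicators
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])   # predicted indicators

print("micro-F1:", f1_score(y_true, y_pred, average="micro"))  # favours frequent labels
print("macro-F1:", f1_score(y_true, y_pred, average="macro"))  # treats all labels equally
```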
### Preliminary Experiments
We conduct a series of preliminary experiments to identify the most promising setting for the examined methods. All results reported here are on the development split of respective datasets.
\begin{table}
\begin{tabular}{c|c|c c c|c c c} \hline \hline Dataset & Size & \(\|\)L\(\|\)1 & L/D & T/L & \(\|\)L\(\|\)2 & L/D & T/L \\ \hline UKLEX & 36.5k & 18 & 1.2 & 2.1 & 69 & 1.5 & 1.7 \\ EURLEX & 65k & 21 & 3.2 & 2.4 & 127 & 4.5 & 2.9 \\ BIOASQ & 100k & 16 & 5.6 & 3.4 & 116 & 8.9 & 4.0 \\ MIMIC-III & 50k & 19 & 6.0 & 7.8 & 184 & 10.1 & 8.4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of datasets in terms of size, number of labels on Level 1 (\(|\)L\(|\)1) and 2 (\(|\)L\(|\)2), average number of gold labels per document (L/D), and average number of tokens per label (T/L) in the T5 vocabulary.
**LWAN - Number of attention heads.** Previous work which employed the LWAN approach always used a single attention head in the label-wise attention mechanism. Here, we experiment with \(N\in[1,4,6,12]\). In Table 2, we report results on two datasets, UKLEX (L1) with 18 labels, and EURLEX (L2) with 127 labels. We observe that in the case of UKLEX (L1) increasing the number of attention heads does not improve results, while in the case of EURLEX (L2) it harms performance. It appears that the added expressivity from multi-head attention is either not needed, or it is not easily utilized, since it adds more randomly initialized parameters which have to be learned from scratch. In subsequent experiments, we thus use the standard single-head attention mechanism.
**Seq2Seq - Form of Label Descriptors.** We consider three alternative forms of label descriptors:
1. the _original_ label descriptors, which may include complex multi-word expressions, e.g., 'Anthropology, Education, Sociology, and Social Phenomena'
2. _simplified_ versions of the original label descriptors, manually curated to consist of single-token expressions (as per the T5 vocabulary), e.g., 'Anthropology' for the example above
3. _numbers_ arbitrarily assigned to labels, e.g. '1'.
In Table 3, we present results on two datasets, UKLEX (L1), where the original label descriptors are mostly single-word expressions that map onto T5 sub-word tokens, and MIMIC (L1), where the original label descriptors are multi-word expressions which are further tokenized into subwords.
We observe mixed rankings between the three forms of label descriptors across different metrics and datasets, with slight advantage for a lexical form over the arbitrary numerical one. This is in line with the intuition that the semantics of the label descriptors contribute to the learning of the task. In subsequent experiments, we use the original label descriptors across all datasets.
**Seq2Seq - Greedy Decoding vs. Beam Search.** Raffel et al. (2020) suggested using greedy decoding for single-label classification tasks but also found beam search decoding (N=4) to work better for tasks with long output sequences, as is the case in multi-label classification. In Table 4, we compare the two decoding strategies on UKLEX (L1) and MIMIC (L1). We find that the choice of decoding strategy has little effect on performance, likely because the output space in these tasks is constrained to a fixed set of valid labels, in a single permissible (alphabetical) order. In subsequent experiments, we use beam search (N=4), as it performs slightly better on average.
**T5Enc - Form of Label Descriptors.** We compare two forms of label tokens, lexical (using simplified descriptors, as they have to be single tokens), and pseudo descriptors, where we introduce special tokens to the vocabulary of T5 (e.g., <label_1>). Results on UKLEX (L1) and MIMIC (L1) are presented in Table 5. We observe that results are comparable for UKLEX, while simplified label descriptors perform slightly better for MIMIC. In subsequent experiments, we thus use simplified label descriptors for Level 1 datasets.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{**Decoding**} & \multicolumn{2}{c|}{**UKLEX (L1)**} & \multicolumn{2}{c}{**MIMIC (L1)**} \\ & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) \\ \hline Greedy & **84.3 \(\pm\) 0.0** & **81.6 \(\pm\) 0.2** & 72.9 \(\pm\) 0.2 & 69.4 \(\pm\) 0.4 \\ Beam & 84.2 \(\pm\) 0.0 & **81.6 \(\pm\) 0.2** & **73.2 \(\pm\) 0.1** & **70.3 \(\pm\) 0.2** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Greedy decoding vs. beam search for Seq2Seq.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{**No. Heads**} & \multicolumn{2}{c|}{**UKLEX (L1)**} & \multicolumn{2}{c}{**EURLEX (L2)**} \\ & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) \\ \hline N=1 & **83.3 \(\pm\) 0.2** & **79.3 \(\pm\) 0.7** & **76.3 \(\pm\) 0.3** & **55.5 \(\pm\) 0.8** \\ N=4 & 82.8 \(\pm\) 0.3 & 78.1 \(\pm\) 0.7 & 75.1 \(\pm\) 0.1 & 51.7 \(\pm\) 2.1 \\ N=6 & 83.2 \(\pm\) 0.3 & **79.3 \(\pm\) 0.5** & 75.1 \(\pm\) 0.3 & 54.1 \(\pm\) 0.6 \\ N=12 & 83.0 \(\pm\) 0.4 & 78.8 \(\pm\) 1.4 & 75.2 \(\pm\) 0.3 & 53.0 \(\pm\) 1.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Number of attention heads for LWAN.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{**Label**} & \multicolumn{2}{c|}{**UKLEX (L1)**} & \multicolumn{2}{c}{**MIMIC (L1)**} \\ & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) \\ \hline Original & 84.2 \(\pm\) 0.0 & **81.6 \(\pm\) 0.2** & 73.2 \(\pm\) 0.0 & **70.2 \(\pm\) 0.2** \\ Simplified & **84.8 \(\pm\) 0.2** & 78.7 \(\pm\) 0.3 & 73.1 \(\pm\) 0.1 & 70.1 \(\pm\) 0.1 \\ Numbers & 83.8 \(\pm\) 0.2 & 80.2 \(\pm\) 0.7 & **73.3 \(\pm\) 0.1** & 69.7 \(\pm\) 0.2 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Form of label descriptors for Seq2Seq.
For Level 2 datasets, we use pseudo labels, since we cannot manually curate simplified descriptors for hundreds of labels.
**Encoder-only Models.** Comparing encoder-only to encoder-decoder methods for multi-label text classification in a fair manner is non-trivial since inherently encoder-only pre-trained models like BERT Devlin et al. (2019) and RoBERTa Liu et al. (2019) are trained on different data and with a different objective than the encoder-decoder model T5. Using T5's encoder for encoder-only methods circumvents this problem but introduces another concern: that this encoder was trained in an encoder-decoder architecture and may thus be handicapped in comparison to encoders trained in an encoder-only architecture.
In Table 7, we present development results on UKLEX (L1) and BIOASQ (L2) for encoder-only classifiers trained from BERT, RoBERTa and T5's encoder.11 We observe mixed results, with BERT performing best on UKLEX (L1) and T5 performing best on BIOASQ (L2); the absolute differences between the three models are relatively small and, on average across the two datasets, favour T5. We thus conclude that T5's encoder makes for a fair and strong encoder-only baseline and use it in subsequent experiments.
Footnote 11: We use the prepended [CLS] token representation for BERT and RoBERTa.
### Main Results
In Table 6, we present test results for all methods trained from T5-Base.12 The overall best performing approach is T5Enc, followed by Seq2Seq, LWAN and then Encoder+Head. The trend is thus for encoder-decoder approaches (T5Enc and Seq2Seq) to outperform encoder-only approaches (LWAN and then Encoder+Head), which use just half the model parameters. This result corroborates and considerably substantiates the observations of Liu et al. (2021). We gain further insights through a breakdown by metric and label granularity.
Footnote 12: We present development results in Table 11 in Appendix A for completeness.
The advantage of encoder-decoder methods can be especially seen across macro-F1 scores, where both T5Enc and Seq2Seq outperform encoder-only approaches almost categorically (the one exception being UKLEX (L1)). This indicates that encoder-decoder approaches are particularly good at assigning less frequent labels, which is a key challenge in multi-label classification. This reading of the results is further reinforced by the observation that the performance gap increases from Level 1 datasets, which contain a smaller number of labels, to Level 2 datasets, which contain more and thus on average less frequent labels. The most striking performance gap we observe measures 7 p.p. between LWAN and Seq2Seq on MIMIC (L2).
Between the two encoder-decoder approaches, we see that the non-autoregressive use of the T5 decoder is more effective (T5Enc) than the conditional generation of labels (Seq2Seq), the gap
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**UKLEX (L1)**} & \multicolumn{2}{c|}{**EURLEX (L1)**} & \multicolumn{2}{c|}{**BIOASQ (L1)**} & \multicolumn{2}{c}{**MIMIC (L1)**} & \multicolumn{2}{c}{**Average**} \\ & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) \\ \hline Enc+Head & **80.8 \(\pm\) 0.5** & **77.2 \(\pm\) 0.4** & 78.9 \(\pm\) 0.4 & 67.9 \(\pm\) 1.1 & 86.4 \(\pm\) 0.0 & 76.8 \(\pm\) 0.1 & 72.2 \(\pm\) 0.2 & 66.3 \(\pm\) 0.7 & 79.6 & 72.1 \\ LWAN & 80.4 \(\pm\) 0.3 & 76.6 \(\pm\) 0.5 & 79.6 \(\pm\) 0.4 & 68.4 \(\pm\) 0.7 & 86.3 \(\pm\) 0.1 & 77.2 \(\pm\) 0.2 & 72.3 \(\pm\) 0.3 & 66.8 \(\pm\) 0.8 & 79.7 & 72.3 \\ \hline Seq2Seq & 79.6 \(\pm\) 0.6 & 76.4 \(\pm\) 0.6 & 78.8 \(\pm\) 0.2 & 69.1 \(\pm\) 0.3 & 86.0 \(\pm\) 0.1 & 77.8 \(\pm\) 0.2 & 72.9 \(\pm\) 0.1 & **69.7 \(\pm\) 0.2** & 79.3 & 73.3 \\ T5Enc & **80.8 \(\pm\) 0.4** & 77.1 \(\pm\) 0.5 & **80.0 \(\pm\) 0.3** & **70.5 \(\pm\) 0.4** & **86.6 \(\pm\) 0.0** & **77.9 \(\pm\) 0.4** & **73.4 \(\pm\) 0.3** & 68.8 \(\pm\) 1.4 & **80.2** & **73.6** \\ \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**UKLEX (L2)**} & \multicolumn{2}{c|}{**EURLEX (L2)**} & \multicolumn{2}{c|}{**BIOASQ (L2)**} & \multicolumn{2}{c}{**MIMIC (L2)**} & \multicolumn{2}{c}{**Average**} \\ & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) \\ \hline Enc+Head & 75.9 \(\pm\) 0.5 & 64.9 \(\pm\) 0.5 & 70.3 \(\pm\) 0.2 & 48.2 \(\pm\) 1.2 & 73.1 \(\pm\) 0.0 & 60.1 \(\pm\) 0.8 & 56.7 \(\pm\) 0.6 & 22.3 \(\pm\) 1.2 & 69.0 & 48.9 \\ LWAN & **76.6 \(\pm\) 0.2** & 65.0 \(\pm\) 0.8 & 70.3 \(\pm\) 0.3 & 49.0 \(\pm\) 0.7 & 73.0 \(\pm\) 0.1 & 59.7 \(\pm\) 0.9 & 57.2 \(\pm\) 0.4 & 24.2 \(\pm\) 0.3 & 69.3 & 49.5 \\ \hline Seq2Seq & 75.3 \(\pm\) 0.2 & 65.8 \(\pm\) 0.4 & 70.6 \(\pm\) 0.3 & 51.8 \(\pm\) 1.0 & 73.8 \(\pm\) 0.1 & 63.8 \(\pm\) 0.1 & 57.4 \(\pm\) 0.2 & **31.2 \(\pm\) 1.7** & 69.3 & 53.2 \\ T5Enc & 76.5 \(\pm\) 0.3 & **66.8 \(\pm\) 0.9** & **72.0 \(\pm\) 0.2** & **53.2 \(\pm\) 1.4** & **75.1 \(\pm\) 0.1** & **66.0 \(\pm\) 0.1** & **60.5 \(\pm\) 0.1** & 31.1 \(\pm\) 0.9 & **71.0** & **54.3** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Test results for encoder-only methods (Encoder+Head and LWAN) and encoder-decoder methods (Seq2Seq and T5Enc) trained from T5-Base.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{**Encoder**} & \multicolumn{2}{c|}{**UKLEX (L1)**} & \multicolumn{2}{c}{**BIOASQ (L2)**} \\ & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) \\ \hline BERT & **84.4 \(\pm\) 0.3** & **81.3 \(\pm\) 0.9** & 71.7 \(\pm\) 0.0 & 59.1 \(\pm\) 0.0 \\ RoBERTa & 84.3 \(\pm\) 0.6 & 81.1 \(\pm\) 1.1 & 73.0 \(\pm\) 0.0 & 59.8 \(\pm\) 0.0 \\ T5 & 84.3 \(\pm\) 0.3 & 80.7 \(\pm\) 0.8 & **73.2 \(\pm\) 0.1** & **60.8 \(\pm\) 0.8** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Encoder-only pre-trained models vs. T5’s encoder in Encoder+Head classification setups.
between the two methods growing from Level 1 to Level 2 datasets. In the case of T5Enc, the decoder serves to build representations for all labels relevant to a dataset and in this sense defines and constrains the output space for the task. Meanwhile, in the Seq2Seq approach the model has to learn the constraints on the output space during training, and as such it is likely more prone to errors.
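To make the distinction concrete, the sketch below shows one possible realization of the non-autoregressive T5Enc idea with the HuggingFace `T5Model`: a fixed decoder input of one position per label is processed in a single forward pass, and each decoder state is scored by a classification head. The class name, the choice of decoder input ids and the shared head are our own assumptions, not the authors' implementation.

```python
import torch
from torch import nn
from transformers import T5Model

class T5EncStyleClassifier(nn.Module):
    """Non-autoregressive label scoring with a T5 decoder (illustrative sketch)."""

    def __init__(self, model_name="t5-base", num_labels=100):
        super().__init__()
        self.t5 = T5Model.from_pretrained(model_name)
        # One decoder position per label; token ids 0..L-1 are used as placeholders.
        self.register_buffer("label_ids", torch.arange(num_labels).unsqueeze(0))
        self.head = nn.Linear(self.t5.config.d_model, 1)  # shared scoring head

    def forward(self, input_ids, attention_mask):
        decoder_input_ids = self.label_ids.expand(input_ids.size(0), -1)
        out = self.t5(input_ids=input_ids,
                      attention_mask=attention_mask,
                      decoder_input_ids=decoder_input_ids)
        # One decoder state per label -> one logit per label, no generation loop.
        return self.head(out.last_hidden_state).squeeze(-1)
```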
These main results give us a general idea of how the different approaches compare, indicating clearly that encoder-decoder approaches are superior. In subsequent sections we explore the source of performance and the limitations of encoder-decoder approaches further.
### Model Capacity
One possible explanation for the stronger performance of encoder-decoder methods is that they operate with twice as many parameters as encoder-only methods. Here, we test whether this alone is the source of their improved performance by training models from T5 checkpoints of three sizes: Small, Base and Large.13 Since we previously saw that trends in results are similar across L1 and L2 datasets, and more pronounced in the latter, we carry out this set of experiments on L2 datasets only. We include the stronger-performing encoder-only approach, LWAN, as well as both encoder-decoder approaches. Results on the micro-F1 metric are presented in Figure 2, and on the macro-F1 metric in Figure 3 in Appendix A.14
Footnote 13: T5-Small has 12 layers of d=512, T5-Base has 24 layers of d=768, T5-Large has 48 layers of d=1024, where half of the layers are in the encoder and half in the decoder.
Firstly, we note that T5Enc consistently outperforms the other approaches across different model sizes, in line with earlier findings (see Table 6). We also see that all methods appear to scale, with steady improvements in performance observed across increasing model sizes.
Comparing models of similar size (i.e., models with the same number of layers), we gain a more precise idea of how methods compare. Here, T5Enc still proves to be the superior approach, with T5Enc-Small outperforming LWAN-Base on 3 out of 4 datasets (UKLEX being the exception), and similarly T5Enc-Base outperforming LWAN-Large on 3 out of 4 datasets. Notice that in these comparisons, the T5Enc variants are even at a disadvantage, having the same number of layers as the LWAN variants, but lower dimensionality. Seq2Seq models, on the other hand, underperform similarly-sized LWAN models on most comparisons in terms of micro-F1, which indicates that this approach is overall less suitable for the task.15
Footnote 15: See Appendix A for a discussion of macro-F1 results.
### Ablations on T5Enc Decoder
Here, we analyse the contribution of different aspects of the T5Enc decoder through ablations on the decoder's depth, width and self-attention.
**Decoder Depth.** We train T5Enc models with a varying number of decoder layers, experimenting with \(N\in\{1,4,6,12\}\). In Table 8, we report results on two datasets, UKLEX (L1) and EURLEX (L2). We observe that greater depth in the decoder contributes to performance, with the full set of decoder layers (12) performing best.
**Decoder Width.** In this ablation, we are interested in establishing the importance of label-wise representations being built in the decoder, as opposed to using it to create a single output representation shared across the classification heads. To this end, we feed the decoder with a single token ID, e.g., the ID of the token _'label'_, and then pass its output representation (\(d\in\mathbb{R}^{dim}\)) to a set of standard classification heads to produce \(L\) scores (logits), similar to the Encoder+Head method. This method can be seen as an advanced version of the Encoder+Head method that utilizes the decoder via cross-attention.
Results for Level 2 datasets are shown in Table 9 under Single-step T5Enc (Level 1 results are shown in Table 11 in the Appendix). In comparison to the Encoder+Head baseline, Single-step T5Enc is superior across the board, likely because of the added number of parameters available to the model. Compared to the standard T5Enc approach, Single-step T5Enc works slightly better for UKLEX but on all other datasets it underperforms by a large gap. We observe the same pattern for L1 results in Table 11 and thus conclude that the additional computational
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \multirow{2}{*}{**Layers**} & \multicolumn{2}{c}{**UKLEX (L1)**} & \multicolumn{2}{c}{**EURLEX (L2)**} \\ & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) & \(\mu\)-F\({}_{1}\) & m-F\({}_{1}\) \\ \hline N=1 & 84.6 \(\pm\) 0.1 & 81.9 \(\pm\) 0.1 & 76.6 \(\pm\) 0.1 & 56.9 \(\pm\) 0.1 \\ N=4 & 84.7 \(\pm\) 0.1 & 81.8 \(\pm\) 0.1 & 76.9 \(\pm\) 0.1 & 58.1 \(\pm\) 1.1 \\ N=6 & **84.8 \(\pm\) 0.1** & **82.2 \(\pm\) 0.1** & 77.0 \(\pm\) 0.1 & 58.4 \(\pm\) 1.3 \\ N=12 & **84.8 \(\pm\) 0.2** & 81.9 \(\pm\) 0.5 & **77.1 \(\pm\) 0.1** & **58.8 \(\pm\) 1.4** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Development results for different numbers of decoder layers in T5Enc.
power of label-wise processing is important for the good overall performance of T5Enc.
**Attention Scheme.** The labels in multi-label classification are known to exhibit certain dependencies (Tenenboim et al., 2009; Bogatinovski et al., 2022). We measure the pair-wise dependency between labels in the four datasets included in this study using Fisher's exact test.16 In Table 10, we report the percentage of label pairs in Level 2 label sets for which a significant association (\(p<.001\)) was discovered (see Appendix A for Level 1 results). Based on the observed non-trivial rates of inter-label dependency, we hypothesize that self-attention in the T5 decoder is of key importance to the performance of T5Enc.
Footnote 16: The test determines whether the observed distribution of one variable is likely to be random given the observed distribution of another variable and vice-versa.
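A pair-wise dependency analysis of this kind can be sketched with SciPy as below; the 2x2 contingency table for each label pair counts documents by co-occurrence. This is our own illustrative code, not the authors' implementation, and it is quadratic in the number of labels.

```python
import numpy as np
from scipy.stats import fisher_exact

def significant_pair_rate(Y, alpha=0.001):
    """Y: binary document-by-label matrix. Returns the fraction of label pairs
    with a significant association under Fisher's exact test."""
    n_labels = Y.shape[1]
    significant, total = 0, 0
    for i in range(n_labels):
        for j in range(i + 1, n_labels):
            a = int(np.sum((Y[:, i] == 1) & (Y[:, j] == 1)))
            b = int(np.sum((Y[:, i] == 1) & (Y[:, j] == 0)))
            c = int(np.sum((Y[:, i] == 0) & (Y[:, j] == 1)))
            d = int(np.sum((Y[:, i] == 0) & (Y[:, j] == 0)))
            _, p = fisher_exact([[a, b], [c, d]])
            significant += int(p < alpha)
            total += 1
    return significant / total
```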
The decoder in T5 models uses _causal_ attention, wherein decoder inputs can only attend to the left context. We measure the contribution of this system component by ablating it, i.e. training T5Enc models with no self-attention. In Table 9, we report results on Level 2 datasets under _No attention_ (see Table 11 in Appendix A for Level 1 results). We observe that without self-attention, performance suffers considerably for all datasets, most notably so in terms of macro-F1 on MIMIC (\(\Delta=3.7\)). This result indicates that self-attention indeed has a key role, although its contribution does not prove to be proportional to the rate of significant pair-wise associations in the data (Table 10)--this may be due to higher-order label dependencies taking precedence over pair-wise ones.
Having confirmed the importance of modeling label dependency above, we next consider whether we can achieve even better performance with bidirectional (rather than causal) attention in the T5 decoder. In Table 9, under _Full attention_, we see that the contribution of bidirectional attention is negligible. Assuming that the model is able to adjust to the new attention scheme during the fine-tuning process, we take these results to indicate that modeling label dependency in just one direction is sufficient. Indeed, Fisher's exact test measures two-way association, disregarding the direction of the dependency.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline
**Level** & **UKLEX** & **EURLEX** & **BIOASQ** & **MIMIC** \\ \hline L2 & 39.5 & 39.7 & 71.2 & 21.3 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Percentage of Level 2 label pairs with significant association according to Fisher’s exact test.
Figure 2: Performance of the three strongest classification methods (_LWAN_, _Seq2Seq_, _T5Enc_) across three model sizes in terms of micro-F1 score. Dashed lines inside the boxes represent the mean performance across four seeds.
### Errors in Seq2Seq Models
The Seq2Seq approach can similarly model label dependency through self-attention and can even condition the prediction of labels on one another (in an autoregressive fashion), an ability which none of the other approaches included in this study possess. Yet, we find empirically that it underperforms T5Enc. Here, we investigate whether this finding can be explained in terms of the unconstrained output space in Seq2Seq models. Specifically, we analyse the models' predictions for the invention of novel labels.
Such errors occur for two out of the four datasets, EURLEX and UKLEX, but with extremely low frequency: the highest observed rate is 0.2% of novel labels generated for UKLEX (L2). Some examples include 'accommodation', 'domestic violence' and 'vulnerable persons'. Labels in UKLEX and EURLEX are phrased in common terms, compared to the rather technical, domain-specific labels in MIMIC and BIOASQ (see Appendix B for examples). Models trained on UKLEX and EURLEX therefore seem to interpret the output space as open-ended and on occasion generate novel labels. Still, the total number of novel labels generated is negligible, so this cannot explain the lower performance of this approach compared to T5Enc. The reason may instead lie with the fact that Seq2Seq models have to learn the bounds of the output space during training, whereas T5Enc models have that as a given via the fixed decoder input.
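Detecting such invented labels reduces to a set-difference check between the generated label strings and the known label inventory, along the lines of the following sketch (our own code, with made-up example labels):

```python
def novel_label_rate(predictions, label_set):
    """predictions: list of per-document lists of generated label strings.
    Returns the fraction of generated labels not present in the label set."""
    generated = [label for doc in predictions for label in doc]
    novel = [label for label in generated if label not in label_set]
    return len(novel) / max(len(generated), 1)

# Hypothetical example: one of the two generated labels is not in the inventory.
print(novel_label_rate([["immigration", "accommodation"]], {"immigration", "housing"}))
```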
## 6 Conclusions
In this work, we compared four approaches to multi-label classification, two based on an encoder only and two based on an encoder-decoder. We experimented with 4 datasets from 2 different domains (legal and biomedical), which support two different label granularities. We found that encoder-decoder methods outperform encoder-only methods, in line with findings in other NLP tasks. We further found that the non-autoregressive use of an encoder-decoder model performs better than using it for conditional generation. We found that decoder depth, width and self-attention are all key contributors to the success of this best approach.
In future work, we will consider prompt-based approaches as well, specifically instruction-based fine-tuned models Wei et al. (2022), currently limited by the excessive computational cost of encoding the full label set as part of the input string.
## Limitations
Recent work has shown that models of a certain size (upwards of 3B parameters) exhibit learning properties that cannot be observed in smaller models. Due to practical limitations and environmental concerns, in this study we chose not to train models larger than T5-Large. It is thus not possible to know how emergent properties in larger models may have affected the comparison between the different approaches compared here. We believe that our findings will nevertheless be useful to NLP practitioners who operate on a constrained compute budget and may thus opt for moderately-sized models anyway.
In this work, we compare encoder-only and encoder-decoder models for multi-label classification. Decoder-only (GPT-like) models Radford et al. (2019) are omitted since at present there are no decoder-only methods for label classification in the literature. While we could have adapted the Seq2Seq approach in our experiments to operate in a decoder-only context, we deem this unsuitable for the datasets we work with, as they contain long documents which will quickly cause problems for standard decoder-only models like GPT-2.
Our experiments consider datasets from the legal and biomedical domains first and foremost because there are publicly available datasets with hierarchical labelling in these domains, unlike others. Moreover, we believe that working in critical application domains is a worthy purpose and covering two such domains with two different datasets in each domain gives us a good view on how the examined methods are expected to work in such domains.
## Ethics Statement
The legal and biomedical fields are both highly sensitive and have high impact on human life. In this work, we have ensured that the data we work with is sourced in compliance with the relevant regulations and are fully anonymized where necessary. The application of multi-label classification to this data carries no obvious risk as it can ease the processing and categorization of documents in these domains, without having any direct impact on individuals involved in legal and medical matters.
## Acknowledgments
This work was fully funded by the Innovation Fund Denmark (IFD, [https://innovationsfonden.dk/en](https://innovationsfonden.dk/en)). |
2306.13515 | Binary domain generalization for sparsifying binary neural networks | Binary neural networks (BNNs) are an attractive solution for developing and
deploying deep neural network (DNN)-based applications in resource constrained
devices. Despite their success, BNNs still suffer from a fixed and limited
compression factor that may be explained by the fact that existing pruning
methods for full-precision DNNs cannot be directly applied to BNNs. In fact,
weight pruning of BNNs leads to performance degradation, which suggests that
the standard binarization domain of BNNs is not well adapted for the task. This
work proposes a novel more general binary domain that extends the standard
binary one that is more robust to pruning techniques, thus guaranteeing
improved compression and avoiding severe performance losses. We demonstrate a
closed-form solution for quantizing the weights of a full-precision network
into the proposed binary domain. Finally, we show the flexibility of our
method, which can be combined with other pruning strategies. Experiments over
CIFAR-10 and CIFAR-100 demonstrate that the novel approach is able to generate
efficient sparse networks with reduced memory usage and run-time latency, while
maintaining performance. | Riccardo Schiavone, Francesco Galati, Maria A. Zuluaga | 2023-06-23T14:32:16Z | http://arxiv.org/abs/2306.13515v1 | # Binary domain generalization for sparsifying binary neural networks+
###### Abstract
Binary neural networks (BNNs) are an attractive solution for developing and deploying deep neural network (DNN)-based applications in resource constrained devices. Despite their success, BNNs still suffer from a fixed and limited compression factor that may be explained by the fact that existing pruning methods for full-precision DNNs cannot be directly applied to BNNs. In fact, weight pruning of BNNs leads to performance degradation, which suggests that the standard binarization domain of BNNs is not well adapted for the task. This work proposes a novel more general binary domain that extends the standard binary one that is more robust to pruning techniques, thus guaranteeing improved compression and avoiding severe performance losses. We demonstrate a closed-form solution for quantizing the weights of a full-precision network into the proposed binary domain. Finally, we show the flexibility of our method, which can be combined with other pruning strategies. Experiments over CIFAR-10 and CIFAR-100 demonstrate that the novel approach is able to generate efficient sparse networks with reduced memory usage and run-time latency, while maintaining performance.
Keywords: Binary neural networks · Deep neural networks · Pruning · Sparse representation.
## 1 Introduction
The increasing number of connected Internet-of-Things (IoT) devices, now surpassing the number of humans connected to the internet [6], has led to a sensor-rich world, capable of addressing real-time applications in multiple domains, where both accuracy and computational time are crucial [1]. Deep neural networks (DNNs) have the potential of enabling a myriad of new IoT applications, thanks to their ability to process large complex heterogeneous data and to extract patterns needed to make autonomous decisions with high reliability [20].
However, DNNs are known for being resource-greedy, in terms of required computational power, memory, and energy consumption [4], whereas most IoT devices are characterized by limited resources. They usually have limited processing power, small storage capabilities, they are not GPU-enabled and they are powered with batteries of limited capacity, which are expected to last over 10 years without being replaced or recharged. These constraints represent an important bottleneck towards the deployment of DNNs in IoT applications [40].
A recent and notable example to enable the usage of DNNs in limited resource devices are binary neural networks (BNNs) [15]. BNNs use binary weights and activation functions that allow them to replace computationally expensive multiplication operations with low-cost bitwise operations during forward propagation. This results in faster inference and better compression rates, while maintaining an acceptable accuracy for complex learning tasks [10, 25]. For instance, BNNs have achieved over 80% classification accuracy on ImageNet [10, 31]. Despite the good results, BNNs have a fixed and limited compression factor compared to full-precision DNNs, which may be insufficient for certain size and power constraints of devices [22].
A way to further improve BNNs' compression capacity is through network pruning, which seeks to control a network's sparsity by removing parameters and shared connections [12]. Pruning BNNs, however, is a more challenging task than pruning full-precision neural networks and it is still a challenge with many open questions [38]. Current attempts [9, 19, 28, 32, 37, 36, 38] often rely on training procedures that require more training stages than standard BNNs, making learning more complex. Moreover, these methods fail in highly pruned scenarios, showing severe accuracy degradation over simple classification problems.
In this work, we introduce sparse binary neural network (SBNN), a more robust pruning strategy to achieve sparsity and improve the performance of BNNs. Our strategy relies on entropy to optimize the network to be largely skewed to one of the two possible weight values, i.e. having a very low entropy. Unlike BNNs that use symmetric values to represent the network's weights, we propose a more general binary domain that allows the weight values to adapt to the asymmetry present in the weights distribution. This enables the network to capture valuable information, achieve better representation, and, thus better generalization. The main contributions of our work can be summarized as follows: 1) We introduce a more general binary domain w.r.t. the one used by BNNs to quantize real-valued weights; 2) we derive a closed-form solution for binary values that minimizes quantization error when real-valued weights are mapped to the proposed domain; 3) we enable the regularization of the BNNs weights distribution by using entropy constraints; 4) we present efficient implementations of the proposed algorithm, which reduce the number of bitwise operations in the network proportionally to the entropy of the weight distribution; and 5) we demonstrate SBNN's competitiveness and flexibility through benchmark evaluations.
The remainder of this work is organized as follows. Section 2 discusses previous related works. The core of our contributions is described in Section 3. In Section 4, we study the properties of the proposed method and assess its performance,
in terms of accuracy and operation reduction at inference, through a set of experiments using the CIFAR-10, CIFAR-100 [18] and ImageNet [31] datasets. Finally, a discussion of the results and the main conclusions are drawn in Section 5.
## 2 Related Work
We first provide an overview of BNNs. Next, we review sparsification through pruning [2, 12, 27, 34] and quantization [11, 16, 39, 41], the two network compression strategies this work relies on. A broad review covering further network compression and speed-up techniques can be found in [21].
**Binary Neural Networks.** BNNs [15] have gained attention in recent years due to their computational efficiency and improved compression. Subsequent works have extended [15] to improve its accuracy. For instance, [30] introduced a channel-wise scaling coefficient to decrease the quantization error. ABC-Net adopts multiple binary bases [23], and Bi-Real [26] recommends short residual connection to reduce the information loss and a smoother gradient for the signum function. Recently, ReActNet [25] generalized the traditional sign(\(\cdot\)) and PReLU activation functions to extend binary network capabilities, achieving an accuracy close to full-precision ResNet-18 [13] and MobileNet V1 [14] on ImageNet [31]. By adopting the RSign, the RPReLU along with an attention formulation Guo et al. [10] surpassed the 80% accuracy mark on ImageNet. Although these works have been successful at increasing the performance of BNNs, few of them consider the compression aspect of BNNs.
**Network Sparsification.** The concept of sparsity has been well studied beyond quantized neural networks as it reduces a network's computational and storage requirements and it prevents overfitting. Methods to achieve sparsity either explicitly induce it during learning through regularization (e.g. \(L_{0}\)[27] or \(L_{1}\)[12] regularization), or do it incrementally by gradually augmenting small networks [2]; or by post hoc pruning [8, 33, 34].
BNNs pruning is particularly challenging because weights in the \(\{\pm 1\}\) domain cannot be pruned based only on their magnitude. Existing methods include removing unimportant channels and filters from the network [9, 28, 37, 38], but optimum metrics are still unclear; quantizing binary kernels to a smaller bit size than the kernel size [36]; or using the \(\{0,\pm 1\}\) domains [19, 32]. Although these works suggest that the standard \(\{\pm 1\}\) binary domain has severe limitations regarding compression, BNNs using the \(\{0,\pm 1\}\) domain have reported limited generalization capabilities [19, 32]. In our work, we extend the traditional binary domain to a more general one, that can be efficiently implemented via sparse operations. Moreover, we address sparsity explicitly with entropy constraints, which can be formulated as magnitude pruning of the generic binary weight values mapping them in the \(\{0,1\}\) domain. In our proposed domain, BNNs are more robust to pruning strategies and show better generalization properties than other pruning techniques for the same sparsity levels.
**Quantization.** Network quantization allows the use of fixed-point arithmetic and a smaller bit-width to represent network parameters w.r.t the full-precision
counterpart. Representing the values using only a finite set requires a quantization function that maps the original elements to the finite set. The quantization can be done after training the model, using parameter sharing techniques [11], or during training by quantizing the weights in the forward pass, as ternary neural networks (TNNs) [17], BNNs [5] and other quantized networks do [16, 39]. Our work builds upon the strategy of BNNs by introducing a novel quantization function that maps weights to a binary domain that is more general than the \(\{\pm 1\}\) domain used in most state-of-the-art BNNs. This broader domain significantly reduces the distortion-rate curves of BNNs across various sparsity levels, enabling us to achieve greater compression.
## 3 Method
The proposed SBNN achieves network pruning via sparsification by introducing a novel quantization function that extends standard BNNs weight domain \(\{\pm 1\}\) to a more generic binary domain \(\{\alpha,\beta\}\) and a new penalization term in the objective loss controlling the entropy of the weight distribution and the sparsity of the network (Section 3.2). We derive in Section 3.3 the optimum SBNN's \(\{\alpha,\beta\}\) values, i.e. the values that minimize the quantization loss when real-valued weights are quantized in the proposed domain. In Section 3.4, we use BNN's state-of-the-art training algorithms for SBNN training by adding the sparsity regularization term to the original BNN's objective loss. Section 3.5 describes the implementation details of the proposed SBNN to illustrate their speed-up gains w.r.t BNNs.
### Preliminaries
The training of a full-precision DNN can be seen as a loss minimization problem:
\[\operatorname*{arg\,min}_{\widetilde{\mathbf{W}}}\mathcal{L}(y,\hat{y}) \tag{1}\]
where \(\mathcal{L}(\cdot)\) is a loss function between the true labels \(y\) and the predicted values \(\hat{y}=f(\mathbf{x};\widetilde{\mathbf{W}})\), which are a function of the data input \(\mathbf{x}\) and the network's full precision weights \(\widetilde{\mathbf{W}}=\{\widetilde{\mathbf{w}}^{\ell}\}\), with \(\widetilde{\mathbf{w}}^{\ell}\in\mathbb{R}^{N^{\ell}}\) the weights of the \(\ell^{th}\) layer, and \(N=\sum_{\ell}N^{\ell}\) the total number of weights in the DNN. We denote the \(i^{th}\) weight element of \(\widetilde{\mathbf{w}}^{\ell}\) as \(\widetilde{w}_{i}^{\ell}\).
A BNN [15] uses a modified signum function as quantization function that maps full precision weights \(\widetilde{\mathbf{W}}\) and activations \(\widetilde{\mathbf{a}}\) to the \(\{\pm 1\}\) binary domain, enabling the use of low-cost bitwise operations in the forward propagation, i.e.
\[\overline{\mathbf{W}}=\operatorname*{sign}(\widetilde{\mathbf{W}})\,,\qquad\frac{\partial g(\overline{w}_{i})}{\partial\widetilde{w}_{i}}=\left\{\begin{array}{ll}\frac{\partial g(\overline{w}_{i})}{\partial\overline{w}_{i}}&\text{, if }-1\leq\widetilde{w}_{i}\leq 1\\ 0&\text{, otherwise,}\end{array}\right.\]
where \(\text{sign}(\cdot)\) denotes the modified sign function over a vector, \(g(\cdot)\) is a differentiable function, \(\overline{\mathbf{W}}\) the network's weights in the \(\{\pm 1\}\) binary domain, \(\overline{w}_{i}\) a given weight in the binary domain, and \(\widetilde{w_{i}}\) the associated full-precision weight.
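In practice, this quantization with a straight-through gradient is typically implemented as a custom autograd function; the sketch below is our own minimal PyTorch illustration (mapping sign(0) to +1 is an assumption about the modified sign function):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass, straight-through gradient in the backward pass."""

    @staticmethod
    def forward(ctx, w_real):
        ctx.save_for_backward(w_real)
        w_bin = torch.sign(w_real)
        w_bin[w_bin == 0] = 1.0  # modified sign: map exact zeros to +1
        return w_bin

    @staticmethod
    def backward(ctx, grad_output):
        (w_real,) = ctx.saved_tensors
        # Propagate the gradient only where the real-valued weight lies in [-1, 1].
        return grad_output * (w_real.abs() <= 1).to(grad_output.dtype)

w = torch.randn(8, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()  # w.grad is 1 inside [-1, 1] and 0 outside
```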
### Sparse Binary Neural Network (SBNN) Formulation
Given \(\Omega^{\ell}=\{\alpha^{\ell},\beta^{\ell}\}\) a general binary domain, with \(\alpha^{\ell},\beta^{\ell}\in\mathbb{R}\), and \(\alpha^{\ell}<\beta^{\ell}\), let us define a SBNN, such that, for any given layer \(\ell\),
\[w_{i}^{\ell}\in\Omega^{\ell}\qquad\forall\ i, \tag{2}\]
with \(w_{i}^{\ell}\) the \(i^{th}\) weight element of the weight vector, \(\mathbf{w}^{\ell}\), and \(\mathbf{w}=\left\{\mathbf{w}^{\ell}\right\}\) the set of weights for all the SBNN.
We denote \(S_{\alpha^{\ell}}\) and \(S_{\beta^{\ell}}\) the indices of the weights with value \(\alpha^{\ell}\), \(\beta^{\ell}\) in \(\mathbf{w}^{\ell}\)
\[S_{\alpha^{\ell}}=\{i\,|\,1\leq i\leq N^{\ell},w_{i}^{\ell}=\alpha^{\ell}\}, \qquad S_{\beta^{\ell}}=\{i\,|\,1\leq i\leq N^{\ell},w_{i}^{\ell}=\beta^{\ell}\}.\]
Since \(\alpha^{\ell}<\beta^{\ell}\;\forall\;\ell\), it is possible to estimate the number of weights taking the lower and upper values of the general binary domain over all the network:
\[L^{\ell}=|S_{\alpha^{\ell}}|,\qquad U^{\ell}=|S_{\beta^{\ell}}|,\qquad L=\sum_{ \ell}L^{\ell},\qquad U=\sum_{\ell}U^{\ell}, \tag{3}\]
with \(L+U=N\), the total number of SBNN network weights. In the remainder of the manuscript, for simplicity and without loss of generality, we drop the layer index \(\ell\) from the weight notation.
To express the SBNN weights \(\mathbf{w}\) in terms of binary \(\{0,1\}\) weights, we now define a mapping function \(r:\{0,1\}\longrightarrow\{\alpha,\beta\}\) that allows \(\mathbf{w}\) to be expressed as:
\[w_{i}=r\left(w_{\{0,1\},i}\right)=\left(w_{\{0,1\},i}+\xi\right)\cdot\eta \tag{4}\]
with
\[\alpha=\xi\cdot\eta,\qquad\beta=(1+\xi)\cdot\eta, \tag{5}\]
and \(w_{\{0,1\},i}\in\{0,1\}\), the \(i^{th}\) weight of a SBNN, when restricted to the binary set \(\{0,1\}\). Through this mapping, 0-valued weights are pruned from the network, making the SBNN sparse.
The bit-width of a SBNN is measured with the binary entropy \(h()\) of the distribution of \(\alpha\)-valued and \(\beta\)-valued weights,
\[h(p)=-p\log_{2}(p)-(1-p)\log_{2}(1-p)\qquad\left[\text{bits/weight}\right], \tag{6}\]
with \(p=U/N\). Achieving network compression using a smaller bit-width than that of standard BNN weights (1 bit/weight) is equivalent to constraining the SBNN's entropy to be less than or equal to a desired value \(h^{*}\), i.e.
\[h(U/N)\leq h^{*}. \tag{7}\]
Given \(h^{-1}()\) the inverse binary entropy function for \(0\leq p\leq 1/2\), it is straightforward to derive such constraint, \(U\leq M\) where
\[M\triangleq N\cdot h^{-1}(h^{*}). \tag{8}\]
From Eq. (7) and (8), this implies that the constraint corresponds to restricting the maximum number of \(1s\) in the network, and thus the sparsity of the network. Thus, the original full-precision DNN loss minimization problem (Eq. (1)) can be reformulated as:
\[\begin{array}{ll}\operatorname*{arg\,min}&\mathcal{L}(y,\hat{y})\\ \text{s.t.}&\mathbf{w}_{\{0,1\}}\in\{0,1\}^{N},\\ &U\leq M<N.\end{array} \tag{9}\]
The mixed optimization problem in Eq. (9) can be simplified by relaxing the sparsity constraint on \(U\) through the introduction of a non-negative function \(g(\cdot)\), which penalizes the weights when \(U>M\):
\[\begin{array}{ll}\operatorname*{arg\,min}&\mathcal{L}(y,\hat{y})+\lambda g (\mathbf{W}_{\{0,1\}})\\ \text{s.t.}&\mathbf{W}_{\{0,1\}}\in\{0,1\}^{N}\end{array} \tag{10}\]
and \(\lambda\) controls the influence of \(g(\cdot)\). A simple, yet effective function \(g(\mathbf{W}_{\{0,1\}})\) is the following one:
\[g\left(\mathbf{W}_{\{0,1\}}\right)=\text{ReLU}\left(U/N-\text{EC}\right), \tag{11}\]
where \(\text{EC}=M/N\) represents the fraction of expected connections, which is the fraction of 1-valued weights in \(\mathbf{W}_{\{0,1\}}\) over the total number of weights of \(\mathbf{W}_{\{0,1\}}\).
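Numerically, EC can be obtained from a target bit-width \(h^{*}\) by inverting the binary entropy of Eq. (6), e.g. by bisection, after which the penalty of Eq. (11) is a plain ReLU of the excess fraction of 1-valued weights. The snippet below is our own sketch of this computation:

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def inverse_binary_entropy(h_star, tol=1e-9):
    """Return p in [0, 1/2] with h(p) = h_star, found by bisection (Eq. 8)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if binary_entropy(mid) < h_star else (lo, mid)
    return (lo + hi) / 2

def sparsity_penalty(w01, h_star):
    """g(W_{0,1}) = ReLU(U/N - EC) with EC = h^{-1}(h*) (Eq. 11)."""
    ec = inverse_binary_entropy(h_star)
    return max(np.mean(w01) - ec, 0.0)

# A budget of 0.5 bits/weight allows roughly 11% of the weights to be 1-valued.
print(inverse_binary_entropy(0.5))
```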
Eq. (9) allows us to compare the proposed SBNN with the standard BNN formulation. Setting \(\xi=-1/2\) and \(\eta=2\), for which \(\alpha=-1\) and \(\beta=+1\) (Eq. (4)), and removing the constraint on \(U\) leads to the standard formulation of a BNN. This implies that any BNN can be represented using the \(\{0,1\}\) domain and can perform sparse operations. However, in practice, when \(U\) is not constrained to be \(\leq M\), then \(U\approx N/2\) and \(h(1/2)=1\) bit/weight, which means that standard BNNs cannot be compressed further.
### Weight Optimization
In this section, we derive the value of \(\Omega=\{\alpha,\beta\}\) which minimizes the quantization error when real-valued weights are quantized using it.
The minimization of the quantization error amounts to minimizing the binarization loss, \(\mathcal{L}_{B}\), which is the optimal estimator when \(\widetilde{\mathbf{W}}\) is mapped to \(\mathbf{W}\)[30]. This minimization is equivalent to finding the values of \(\alpha\) and \(\beta\) which minimize \(\mathcal{L}_{B}\). To simplify the derivation of the optimum \(\alpha\) and \(\beta\) values, we minimize
over two variables in one-to-one correspondence with \(\alpha\) and \(\beta\). To achieve this, as in Eq. 4-5, we map \(w_{i}\in\Omega\) to \(\overline{w}_{i}\in\{-1,+1\}\), i.e.
\[w_{i}=\tau\overline{w}_{i}+\phi,\]
where \(\tau\) and \(\phi\) are two real-valued variables, and \(\alpha=-\tau+\phi\) and \(\beta=\tau+\phi\). As a result, \(\alpha\) and \(\beta\) are in one-to-one correspondence with \(\tau\) and \(\phi\), and the minimization of \(\mathcal{L}_{B}\) can be formulated as
\[\tau^{*},\phi^{*}=\arg\min_{\tau,\phi}\mathcal{L}_{B}=\arg\min_{\tau,\phi}\left\| \widetilde{\mathbf{w}}-(\tau\overline{\mathbf{w}}+\phi\mathbf{1})\right\|_{2} \tag{12}\]
where \(\|\cdot\|_{2}\) is the \(\ell_{2}\)-norm and \(\mathbf{1}\) is the all-one entries matrix.
By first expanding the \(\ell_{2}\)-norm term and using the fact that \(\text{sum}(\overline{\mathbf{w}})=N^{\ell}(2p-1)\), it is straightforward to reformulate Eq. 12 as a function of the sum of real-valued weights, their \(\ell_{1}\)-norm, the fraction of \(+1\)-valued binarized weights and the two optimization parameters. In this case, the gradient \(\nabla\mathcal{L}_{B}\) is
\[\nabla\mathcal{L}_{B}=\begin{pmatrix}\frac{\partial\mathcal{L}_{B}}{\partial\tau}\\ \frac{\partial\mathcal{L}_{B}}{\partial\phi}\end{pmatrix}=2\begin{pmatrix}-\|\widetilde{\mathbf{w}}\|_{1}+N^{\ell}\big{(}\tau+\phi(2p-1)\big{)}\\ -\text{sum}(\widetilde{\mathbf{w}})+N^{\ell}\big{(}\phi+\tau(2p-1)\big{)}\end{pmatrix}. \tag{13}\]
Solving to find the optimal values \(\tau\) and \(\phi\) we obtain
\[\tau^{*}=\frac{\|\widetilde{\mathbf{w}}\|_{1}}{N^{\ell}}-\phi^{*}(2p-1)\,,\ \ \ \phi^{*}=\frac{\text{sum}(\widetilde{\mathbf{w}})}{N^{\ell}}-\tau^{*}(2p-1). \tag{14}\]
When \(p=0.5\), like in standard BNNs, it gives the classical value of \(\tau^{*}=\|\widetilde{\mathbf{w}}\|_{1}/N^{\ell}\) as in [30]. By substituting \(\phi^{*}\) in Eq. (12), we obtain the closed-form solution
\[\tau^{*}=\frac{\|\widetilde{\mathbf{w}}\|_{1}-(2p-1)\text{sum}(\widetilde{ \mathbf{w}})}{N^{\ell}(1-(2p-1)^{2})}\,,\ \ \ \phi^{*}=\frac{\text{sum}(\widetilde{\mathbf{w}})-(2p-1)\|\widetilde{\mathbf{w}} \|_{1}}{N^{\ell}(1-(2p-1)^{2})}. \tag{15}\]
As the gradient (Eq. 13) is linear in \(\phi\) and \(\tau\), there is a unique critical point. Moreover, an analysis of the Hessian matrix confirms that \(\mathcal{L}_{B}\) is convex and that the local minimum is a global minimum. The derivation is omitted here as it is straightforward.
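The closed-form solution of Eq. (15) can be transcribed into a few lines of NumPy; the sketch below (our own code) computes \(\tau^{*}\) and \(\phi^{*}\) for a random weight vector and a chosen fraction \(p\) of \(+1\)-valued binarized weights, and maps them back to the \(\{\alpha,\beta\}\) domain:

```python
import numpy as np

def optimal_tau_phi(w_real, p):
    """Closed-form minimizers of ||w_real - (tau * w_bar + phi * 1)||_2 (Eq. 15)."""
    n = w_real.size
    s, l1, m = np.sum(w_real), np.sum(np.abs(w_real)), 2 * p - 1
    tau = (l1 - m * s) / (n * (1 - m ** 2))
    phi = (s - m * l1) / (n * (1 - m ** 2))
    return tau, phi

rng = np.random.default_rng(0)
w_real = rng.normal(size=1000)
tau, phi = optimal_tau_phi(w_real, p=0.05)   # 5% of binarized weights are +1
alpha, beta = -tau + phi, tau + phi          # back to the {alpha, beta} domain
print(alpha, beta)
```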
### Network Training
The SBNN training algorithm builds upon state-of-the-art BNN training algorithms [3, 15, 25], while introducing network sparsification. To profit from the BNN training scheme, we replace \(\mathbf{W}_{\{0,1\}},\xi\) and \(\eta\) (Eq. (10)) with \(\overline{\mathbf{W}},\tau\) and \(\phi\). Doing so, \(\mathcal{L}(y,\hat{y})\) corresponds to the loss of BNN algorithms, \(\mathcal{L}_{\text{\tiny BNN}}\). SBNN training also requires adding the penalization term from Eq. (11) to account for sparsity. To account for \(\overline{\mathbf{W}}\), the regularization function \(g(\mathbf{W}_{\{0,1\}})\) (Eq. (11)) is redefined according to
\[j(\overline{\mathbf{W}})=\text{ReLU}\left(\left(\sum_{i}\frac{\overline{w}_{i }+1}{2N}\right)-\text{EC}\right), \tag{16}\]
and the SBNN objective loss can be expressed as
\[\mathcal{L}_{\text{\tiny SBNN}}=\mathcal{L}_{\text{\tiny BNN}}+\lambda\,j(\overline{\mathbf{W}}). \tag{17}\]
During training, we modulate the contribution of the regularization term \(j(\overline{\mathbf{W}})\) by imposing, at every training iteration, that it be equal to a fraction of \(\mathcal{L}_{\text{\tiny SBNN}}\), i.e.
\[\gamma=\frac{\lambda\,j(\overline{\mathbf{W}})}{\mathcal{L}_{\text{\tiny SBNN }}}. \tag{18}\]
The hyperparameter \(\gamma\) is set to a fixed value throughout the training process. Since \(\mathcal{L}_{\text{\tiny SBNN}}\) changes at every iteration, this forces \(\lambda\) to adapt, thus modulating the influence of \(j(\overline{\mathbf{W}})\) proportionally to the changes in the loss. The lower \(\gamma\) is set, the less influence \(j(\overline{\mathbf{W}})\) has on the total loss. This means that network sparsification will be slower, but convergence will be achieved faster. In the opposite case (high \(\gamma\)), the training will favor sparsification.
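In training code, this schedule amounts to solving Eq. (18) for \(\lambda\) at every iteration. The following PyTorch-style sketch is our own schematic reading of Eqs. (16)-(18); the names and the use of `detach()` are assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def sbnn_loss(loss_bnn, w_bar, ec, gamma=0.01, eps=1e-12):
    """L_SBNN = L_BNN + lambda * j(W_bar), with lambda set so that the penalty
    equals a fixed fraction gamma of the total loss (Eqs. 16-18)."""
    ones_fraction = ((w_bar + 1.0) / 2.0).mean()   # fraction of +1-valued weights
    j = F.relu(ones_fraction - ec)                 # Eq. (16)
    # gamma = lambda*j / (L_BNN + lambda*j)  =>  lambda = gamma*L_BNN / ((1-gamma)*j)
    lam = gamma * loss_bnn.detach() / ((1.0 - gamma) * j.detach() + eps)
    return loss_bnn + lam * j
```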
### Implementation Gains
We discuss the speed-up gains of the proposed SBNN through its efficient implementation using linear layers in the backbone architecture. Its extension to convolutional layers (Fig. 1) is straightforward, thus we omit it for the sake of brevity.
We describe the use of sparse operations, as it can be done on an FPGA device [7, 36]. Instead, when implemented on CPUs, SBNNs can take advantage of pruned layers, kernels and filters for acceleration [9, 28, 37, 38]. Moreover, for kernels with only a single binary weight equal to 1 there is no need to perform a convolution, since the kernels remove some elements from the corner of their input.
Figure 1: BNNs vs. SBNNs operations in a convolutional layer using \(c_{out}\) filters and input of \(c_{in}\) dimensions. BNNs’ (\(c_{out}\cdot c_{in}\)) convolutional kernels are dense and require all computations. SBNNs’ kernels are sparse, allowing to skip certain convolutions and sum operations. The removed filters are indicated by a dashed contour and no fill. Both BNNs and SBNNs perform convolutions using XNOR and popcount operations, while the sum is replaced by popcount operations.
The connections in a SBNN are the mapped one-valued weights, i.e. the set \(S_{1}\). Therefore, SBNNs do not require any XNOR operation on FPGA, being popcount the only bitwise operation needed during the forward pass. The latter, however, is performed only in a layer's input bits connected through the one-valued weights rather than the full input.
For any given layer \(\ell\), the number of binary operations of a BNN is \(\mathcal{O}_{\textsc{BNN}}=2N^{\ell}\)[3], \(N^{\ell}\) XNOR operations and \(N^{\ell}\) popcounts. A rough estimate of the implementation gain in terms of the number of binary operations of SBNNs w.r.t. BNNs can be expressed in terms of the EC as
\[\frac{\mathcal{O}_{\textsc{BNN}}}{\mathcal{O}_{\textsc{SBNN}}}\approx\frac{2N^{\ell}}{\text{EC}\cdot N^{\ell}}\approx\frac{2}{\text{EC}}, \tag{19}\]
which indicates that the lower the EC fraction, the higher the gain w.r.t. BNNs.
Binary operations are not the only ones involved in the inference of SBNN layers. After the sparse \(\{0,1\}\) computations, the mapping operations to the \(\{\alpha,\beta\}\) domain take place, also benefiting from implementation gains. To analyze these, let us now denote \(\mathbf{x}\) the input vector to any layer and \(\mathbf{z}=\mathbf{w}\,\mathbf{x}\) its output. Using Eq. (4), \(\mathbf{z}\) can be computed as
\[\mathbf{z}=\eta\,\mathbf{z}^{\prime}+\xi\,\eta\,\mathbf{q}, \tag{20}\]
where \(\mathbf{z}^{\prime}=\mathbf{w}_{\{\mathbf{0},\mathbf{1}\}}\,\mathbf{x}\) is the result of sparse operations (Fig. 1), \(\mathbf{q}=\mathbf{1}\,\mathbf{x}\), and \(\mathbf{1}\) the all-ones matrix.
All the elements in \(\mathbf{q}\) take the value \(2\cdot\text{popcount}(\mathbf{x})-|\mathbf{x}|\), with \(|\mathbf{x}|\) the size of \(\mathbf{x}\). Therefore, they are computed only once, for each row of \(\mathbf{1}\). Since \(\xi\) and \(\eta\) are known at inference time, they can be used to precompute the threshold in the threshold comparison stage of the implementation of the batchnorm and sign operations following the estimation of \(\mathbf{z}\)[35]. Thus, SBNNs require \(|\mathbf{x}|\) binary operations, one real product and \(|\mathbf{x}|\) real sums to obtain \(\mathbf{z}\) from \(\mathbf{z}^{\prime}\).
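The equivalence between the dense \(\{\alpha,\beta\}\) product and this sparse \(\{0,1\}\) formulation can be verified numerically with a few lines of NumPy (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
xi, eta = -0.3, 1.7                      # example mapping parameters
w01 = rng.integers(0, 2, size=(4, 16))   # sparse {0,1} weight matrix
x = rng.choice([-1.0, 1.0], size=16)     # binarized input activations

w = (w01 + xi) * eta                     # weights in the {alpha, beta} domain (Eq. 4)
z_dense = w @ x                          # reference dense product

z_sparse = w01 @ x                       # z': computable with sparse operations
q = np.full(4, x.sum())                  # q = 1 x, identical for every output row
z = eta * z_sparse + xi * eta * q        # Eq. (20)
print(np.allclose(z, z_dense))           # True
```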
## 4 Experiments and Results
We first run a set of ablation studies to analyze the properties of the proposed method (Section 4.1). Namely, we analyze the generalization of SBNNs in a standard binary domain and the proposed generic binary domain; we study the role of the quantization error in the network's performance; and the effects of sparsifying binary kernels. Next, we compare our proposed method to other state-of-the-art techniques using the well established CIFAR-10 and CIFAR-100 [18] datasets. Preliminary results on ImageNet [31] are also discussed. All our code has been made publicly available3.
Footnote 3: github.com/robustml-eurecom/SBNN
### Ablation Studies
**Experimental setup.** We use a ResNet-18 binarized model trained on CIFAR-10 as backbone architecture. We train the networks for 300 epochs, with batch
size of \(512\), learning rate of \(1e-3\), and standard data augmentation techniques (random crops, rotations, horizontal flips and normalization). We use an Adam optimizer and the cosine annealer for updating the learning rate as suggested in [24] and we follow the binarization strategy of IR-Net [29].
#### 4.1.3 Generalization properties.
We compare the performance of the proposed generic binary domain to other binary domains used by BNNs by assessing the networks' generalization capabilities when the sparsity ratio is \(95\%\). For this experiment, we use the \(\{-\beta,+\beta\}\) domain from [30] with no sparsity constraints as the baseline. Additionally, we consider the same domain with a \(95\%\) sparsity constraint and the \(\{\alpha,\beta\}\) domain obtained by optimizing \(\tau\) and \(\phi\) according to Eq. (15) with the \(95\%\) sparsity constraint. Table 1 reports the obtained results in terms of top-1 accuracy and accuracy loss w.r.t. the BNN baseline model (\(\Delta\)). When we impose the \(95\%\) sparsity constraint with the \(\{-\beta,+\beta\}\) domain, the accuracy drop w.r.t. the baseline is \(2.98\%\). Using the \(\{\alpha,\beta\}\) domain, the loss goes down to \(2.47\%\), nearly \(0.5\%\) better than the \(\{-\beta,+\beta\}\) domain. The results suggest that a more general domain leads to improved generalization capabilities.
#### 4.1.4 Impact of the quantization error
We investigate the impact of the quantization error in the SBNN generalization. To this end, we compare the proposed quantization technique (Sec. 3.3) with the strategy of learning \(\Omega\) via back-propagation. We denote this approach Learned \(\{\alpha,\beta\}\) (Table 1). The obtained results show that with the learning of the parameters the accuracy loss w.r.t. the BNN baseline decreases down to \(-0.09\%\), thus \(2.38\%\) better than when \(\tau\) and \(\phi\) are analytically obtained with Eq. (15). This result implies that the quantization error is one of the sources of accuracy degradation when mapping real-valued weights to any binary domain, but it is not the only source. Indeed, activations are also quantized. Moreover, errors are propagated throughout the network. Learning \(\Omega\) can partially compensate for these other error sources.
#### 4.1.5 Effects of network sparsification
We investigate the effects of network sparsification and how they can be leveraged to reduce the binary operations (BOPs) required in SBNNs. In Section 4.1.4, we showed that our binary domain is more adept at learning sparse network representations compared to the standard binary domain. This allows us to increase the sparsity of SBNNs while maintaining a desired level of accuracy. When the sparsity is sufficiently high, some
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Domain & Sparsity constraint & Top-1 Accuracy & \(\Delta\) \\ \hline \hline Baseline & & 88.93\% & / \\ \(\{-\beta,+\beta\}\)[30] & 95\% & 85.95\% & -2.98\% \\ \(\{\alpha,\beta\}\) & 95\% & 86.46\% & -2.47\% \\ Learned \(\{\alpha,\beta\}\) & 95\% & 88.84\% & -0.09\% \\ \hline \end{tabular}
\end{table}
Table 1: Role of the binary domain and the quantization error when sparsifying BNNs. Experiments performed on CIFAR-10 with a binarized ResNet-18 model.
convolutional kernels can be entirely removed from the network, which further reduces the BOPs required for SBNNs. Additionally, convolutional kernels with only a single binary weight equal to 1 do not require a convolution to be performed, as these kernels simply remove certain elements from the input.
To illustrate this effect, we plotted the distribution of binary kernels for the 5th, 10th, and 15th layers of a binarized ResNet-18 model (Fig. 2). The first column shows the distribution when no sparsity constraints are imposed, while the second and third columns show the distribution for sparsity levels of 95% and 99%, respectively. The kernels are grouped based on their Hamming weights, which is the number of non-zero elements in each \(\{0,1\}^{3\times 3}\) kernel. The plots suggest that increasing the sparsity of SBNNs results in a higher number of kernels with Hamming weights of 0 and 1.
### Benchmark
**CIFAR-10.** We compare our method against state-of-the-art methods over a binarized ResNet-18 model using CIFAR-10. Namely, we consider: STQ [28], Slimming [37], Dual-P [7], Subbit [36], IR-Net [29] and our method with learned \(\tau\) and \(\phi\), for different sparsity constraints. We use the IR-Net as BNN baseline to be compressed. We use the experimental setup described in Sec. 4.1 with some modifications. We extend the epochs to 500 as in [36], and we use a MixUp strategy [42]. In the original IR-Net formulation [29], the training setup is missing. We use our setup to train it, achieving the same accuracy as in [29].
Table 2 reports the obtained results in terms of accuracy (Acc.), accuracy loss w.r.t. the IR-Net model (\(\Delta\)), and BOPs reduction (BOPs PR). For our SBNN, we estimate BOPs PR by counting the number of operations which are not computed from the convolutional kernels with Hamming weight 0 and 1. For
Figure 2: Percentage of binary kernels for various Hamming weights of a binarized Resnet-18 model over CIFAR-10 for different sparsity constraints. The 5-th, 10-th and 15-th layers are shown in the top, middle and bottom rows, respectively.
other methods, we refer the reader to the original publications. We assess our method at different levels of sparsity, in the range 50 to 99%. For SBNNs we also report the percentage of SBNN's convolutional kernels with Hamming weight 0 (\(K_{0}\)) and with Hamming weight 1 (\(K_{1}\)).
The results suggest that our method is competitive with other more complex pruning strategies. Moreover, our method reports similar accuracy drops w.r.t. state-of-the-art Subbit and Dual-P for similar BOPs PR. However, we need to point out that Subbit and Dual-P results refer to BOPs PR on FPGA, where SBNN can take advantage of sparse operations (Section 3.5) also for the kernels with larger Hamming weights than 0 and 1, because on FPGA all operations involving 0-valued weights can be skipped. For instance, the use of sparse operations on the SBNN 95% allows to remove \(\approx\) 84.9% BOPs.
**CIFAR-100.** We compare our method in the more challenging setup of CIFAR-100, with 100 classes and 500 images per class, against two state-of-the-art methods: STQ [28], and Subbit [36]. We use ReActNet-18 [25] as the backbone architecture, using a single training step and no teacher. We train for 300 epochs with the same setup used for CIFAR-10 with Mixup augmentation. As no previous results for this setup have been reported for ReActNet-18 and Subbit, for a fair comparison, we trained them from scratch using our setup. We report the same metrics used for CIFAR-10, plus the the reduction of binary parameters (BParams PR). For our SBNN, we estimate BParams PR as follows. For each kernel we use 2 bits to differentiate among zero Hamming weight kernels, one Hamming weight kernels and all the other kernels. Then, we add 4 bits to the kernels with Hamming weight 1 to represent the index position of their 1-valued bit, whereas we add 9 bits for all the other kernels with Hamming weight larger
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Method** & **Acc.** & \(\mathbf{\Delta}\) & **BOPs PR** & \(\mathbf{K_{0}}\) & \(\mathbf{K_{1}}\) \\ \hline IR-Net & 91.50\% & / & / & / & / \\ STQ & 86.56\% & -5.50\% & -40.0\% & / & / \\ Slimming & 89.30\% & -2.20\% & -50.0\% & / & / \\ Dual-P (2\(\rightarrow\)1) & 91.02\% & -0.48\% & -70.0\% & / & / \\ Dual-P (3\(\rightarrow\)1) & 89.81\% & -1.69\% & -80.6\% & / & / \\ Dual-P (4\(\rightarrow\)1) & 89.43\% & -2.07\% & -85.4\% & / & / \\ Subbit 0.67-bits & 91.00\% & -0.50\% & -47.2\% & / & / \\ Subbit 0.56-bits & 90.60\% & -0.90\% & -70.0\% & / & / \\ Subbit 0.44-bits & 90.10\% & -1.40\% & -82.3\% & / & / \\ SBNN 50\% **[our]** & 91.70\% & +0.20\% & -11.1\% & 5.6\% & 6.8\% \\ SBNN 75\% **[our]** & 91.71\% & +0.21\% & -24.5\% & 30.7\% & 15.9\% \\ SBNN 90\% **[our]** & 91.16\% & -0.24\% & -46.5\% & 61.8\% & 15.5\% \\ SBNN 95\% **[our]** & 90.94\% & -0.56\% & -63.2\% & 77.1\% & 11.8\% \\ SBNN 96\% **[our]** & 90.59\% & -0.91\% & -69.7\% & 81.0\% & 10.1\% \\ SBNN 97\% **[our]** & 90.71\% & -0.79\% & -75.7\% & 84.8\% & 8.7\% \\ SBNN 98\% **[our]** & 89.68\% & -1.82\% & -82.5\% & 89.3\% & 6.5\% \\ SBNN 99\% **[our]** & 88.87\% & -2.63\% & -88.7\% & 94.6\% & 3.3\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation of kernel removal for different pruning targets using a binarized Resnet-18 model on CIFAR-10.
than 1, which are their original bits. For the other methods, please refer to their work for their estimate of BParams PR.
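Under this encoding, the stored size of a layer follows directly from the Hamming-weight histogram of its 3×3 kernels; the helper below is our own illustrative sketch of that bookkeeping (the exact tag layout is an assumption):

```python
def sbnn_kernel_bits(hamming_weights):
    """Estimate stored bits for 3x3 binary kernels given each kernel's Hamming
    weight: 2 tag bits per kernel, 4 index bits for single-one kernels, and the
    raw 9 bits for kernels with more than one 1-valued weight."""
    bits = 0
    for hw in hamming_weights:
        bits += 2                    # tag: all-zero / single-one / other
        if hw == 1:
            bits += 4                # position of the single 1-valued weight
        elif hw > 1:
            bits += 9                # store the kernel explicitly
    return bits

# Hypothetical layer: 60 empty kernels, 25 single-one kernels, 15 denser kernels.
print(sbnn_kernel_bits([0] * 60 + [1] * 25 + [3] * 15))
```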
Table 3 reports the obtained results for the different methods and our SBNN for various sparsity targets. We can see that our pruning method is more effective than Subbit in reducing both the BOPs and the parameters. It allows removing 79.2% of kernels, while increasing the original accuracy by 0.79% w.r.t. the ReActNet-18 baseline. In contrast, we observe a nearly 1% accuracy drop for a Subbit network with a similar BOPs reduction. Moreover, our method allows removing nearly 15% more binary parameters.
**ImageNet.** We assess our proposed SBNN trained with target sparsity of 75% and 90% on ImageNet. We compare them with state-of-the-art BNNs, namely: XNOR-Net [30], Bi-RealNet-18 [26] and ReActNet-18, ReActNet-A [25] and Subbit [36]. Moreover, we also report the accuracy of the full-precision ResNet-18 [13] and MobileNetV1 [14] models, as a reference. We use a ReActNet-A [25] as SBNN's backbone with its MobileNetV1 [14] inspired topology and with the distillation procedure used in [25], whereas in Subbit [36] they used ReActNet-18 as backbone. One of the limitations of Subbit [36] is that their method cannot be applied to the pointwise convolutions of MobileNetV1 [14]. Due to GPUs limitations, during our training, we decreased the batch size to 64. For a fair comparison, we retrained the original ReActNet-A model with our settings.
Table 4 reports the results in terms of accuracy (Acc). We also include the number of operations (OPs) to be consistent with other BNNs assessment on ImageNet. For BNNs, OPs are estimated by the sum of floating-point operations (FLOPs) plus BOPs rescaled by a factor 1/64 [30, 26, 25]. We assume sparse operations on FPGA to estimate BOPs for SBNN.
We observe that BOPs are the main contributors to ReActNet-A's OPs (Table 4), thus decreasing them largely reduces the OPs. This, instead, does not
\begin{table}
\begin{tabular}{c c c c c c c} \hline
**Method** & **Acc.** & \(\mathbf{\Delta}\) & **BOPs PR** & **BParams PR** & \(\mathbf{K_{0}}\) & \(\mathbf{K_{1}}\) \\ \hline ReActNet-18\({}^{*}\) & 62.79\% & / & / & / & / & / \\ STQ & 57.72\% & -5.05\% & -36.1\% & -36.1\% & - & - \\ Subbit 0.67-bits\({}^{*}\) & 62.60\% & -0.19\% & -47.2\% & -33.3\% & / & / \\ Subbit 0.56-bits\({}^{*}\) & 62.07\% & -0.72\% & -70.0\% & -44.4\% & / & / \\ Subbit 0.44-bits\({}^{*}\) & 61.80\% & -0.99\% & -82.3\% & -55.6\% & / & / \\ SBNN 50\% **[our]** & 63.03\% & +0.24\% & -11.1\% & / & 5.6\% & 6.8\% \\ SBNN 95\% **[our]** & 63.33\% & +0.54\% & -66.2\% & -59.9\% & 72.9\% & 16.6\% \\ SBNN 96\% **[our]** & 63.04\% & +0.25\% & -67.3\% & -63.7\% & 78.9\% & 12.6\% \\ SBNN 97\% **[our]** & 62.41\% & -0.38\% & -73.4\% & -66.8\% & 82.9\% & 11.1\% \\ SBNN 98\% **[our]** & 63.58\% & +0.79\% & -79.2\% & -70.3\% & 88.1\% & 8.0\% \\ SBNN 99\% **[our]** & 62.23\% & -0.57\% & -87.8\% & -74.0\% & 93.6\% & 4.7\% \\ \hline \multicolumn{7}{l}{\({}^{*}\) our implementation.} \\ \end{tabular}
\end{table}
Table 3: Evaluation of kernel removal for different pruning targets using a ReActNet-18 model on CIFAR-100.
hold for ReActNet-18, which may explain why Subbit is not effective in reducing OPs of its baseline. Our method instead is effective even for less severe pruning targets and it requires less than \(3.4\times\) OPs w.r.t. state-of-the-art ReActNet-A model, while incurring in an acceptable generalization loss between \(1.9-3.4\%\).
## 5 Conclusions
We have presented sparse binary neural network (SBNN), a novel method for sparsifying BNNs that is robust to simple pruning techniques by using a more general binary domain. Our approach involves quantizing weights into a general \(\Omega=\{\alpha,\beta\}\) binary domain that is then expressed as 0s and 1s at the implementation stage. We have formulated the SBNN method as a mixed optimization problem, which can be solved using any state-of-the-art BNN training algorithm with the addition of two parameters and a regularization term to control sparsity.
Our experiments demonstrate that SBNN outperforms other state-of-the-art pruning methods for BNNs by reducing the number of operations, while also improving the baseline BNN accuracy for severe sparsity constraints. Future research can investigate the potential of SBNN as a complementary pruning technique in combination with other pruning approaches. In summary, our proposed SBNN method provides a simple yet effective solution to improve the efficiency of BNNs, and we anticipate that it will be a valuable addition to the field of binary neural network pruning.
|
2307.04340 | Crystal Structure Generation with Autoregressive Large Language Modeling | The generation of plausible crystal structures is often the first step in
predicting the structure and properties of a material from its chemical
composition. Quickly generating and predicting inorganic crystal structures is
important for the discovery of new materials, which can target applications
such as energy or electronic devices. However, most current methods for crystal
structure prediction are computationally expensive, slowing the pace of
innovation. Seeding structure prediction algorithms with quality generated
candidates can overcome a major bottleneck. Here, we introduce CrystaLLM, a
methodology for the versatile generation of crystal structures, based on the
autoregressive large language modeling (LLM) of the Crystallographic
Information File (CIF) format. Trained on millions of CIF files, CrystaLLM
focuses on modeling crystal structures through text. CrystaLLM can produce
plausible crystal structures for a wide range of inorganic compounds unseen in
training, as demonstrated by ab initio simulations. The integration with
predictors of formation energy permits the use of a Monte Carlo Tree Search
algorithm to improve the generation of meaningful structures. Our approach
challenges conventional representations of crystals, and demonstrates the
potential of LLMs for learning effective 'world models' of crystal chemistry,
which will lead to accelerated discovery and innovation in materials science. | Luis M. Antunes, Keith T. Butler, Ricardo Grau-Crespo | 2023-07-10T04:48:40Z | http://arxiv.org/abs/2307.04340v3 | # Crystal Structure Generation with Autoregressive Large Language Modeling
###### Abstract
The generation of plausible crystal structures is often an important step in the computational prediction of crystal structures from composition. Here, we introduce a methodology for crystal structure generation involving autoregressive large language modeling of the Crystallographic Information File (CIF) format. Our model, CrystaLLM, is trained on a comprehensive dataset of millions of CIF files, and is capable of reliably generating correct CIF syntax and plausible crystal structures for many classes of inorganic compounds. Moreover, we provide general and open access to the model by deploying it as a web application, available to anyone over the internet. Our results indicate that the model promises to be a reliable and efficient tool for both crystallography and materials informatics.
## 1 Introduction
The _in silico_ search for new materials often involves the exploration of a space of compositions in a chemical system, and the investigation of various predicted structural phases in that space (see [1] and [2] for examples). To predict the structures of unknown materials, a Crystal Structure Prediction (CSP) approach is often employed, which attempts to derive the ground state crystal structure for a given chemical composition under specific physical conditions. CSP approaches are relatively computationally expensive, typically involving _ab initio_ techniques. They often begin with the generation of candidate structures. Examples are the AIRSS [3, 4] and USPEX [5] approaches. Initializing the search space with sensible structures increases the likelihood of success, and decreases the amount of computation required. It is therefore expected that effective Crystal Structure Generation (CSG) tools would help accelerate the prediction of structures using CSP methods.
Increasingly, techniques from Machine Learning (ML) and data science are being used to solve problems in materials science. [6] In particular, generative modelling approaches based on autoencoder architectures and generative adversarial networks (GANs) [7] have been used to generate crystal structures. [8, 9, 10] Indeed, generative modelling has become commonplace, an outcome catalyzed by astounding advancements in the computational
generation of images, audio and natural language over the last several years. [11] The Large Language Model (LLM), backed by the Transformer architecture [12], is the approach behind state-of-the-art performance on natural language processing tasks. This approach begins with a generative pre-training step, which is autoregressive in nature, involving the unsupervised task of predicting the next token given a sequence of preceding tokens. [13] When such models are scaled to billions of parameters, their effectiveness becomes quite remarkable, as tools such as ChatGPT [14] demonstrate.
The LLM approach has recently been used in the context of materials science. [15, 16, 17] However, these attempts have been focused on either training and tuning the model for natural language tasks, and utilizing the model in natural language generation scenarios involving chemical subject matter, or training the model on a corpus of expanded chemical compositions for the purposes of generating unseen compositions. An alternate perspective, which we present here, is to train the model on textual representations of inorganic crystal structures, such as the Crystallographic Information File (CIF) format, rather than on corpora of natural language, or chemical compositions alone.
The motivation for this perspective originates from two conjectures: The first states that a sequence of symbols (i.e. tokens) is an appropriate representation modality for many predictive tasks (including those involving chemical structure). The idea of representing any domain with a sequence of tokens may at first seem counter-intuitive. However, consider that even images can be represented this way, and be subject to the autoregressive language modelling of pixels [18]. This challenges the notion that domain-specific representations, such as graphs for chemical structure, are necessary for superior performance. The second conjecture states that LLMs learn more than simply "surface statistics" and the conditional probability distribution of tokens. Indeed, autoregressive pre-training involving next-token prediction may result in learning an effective _world model_: an internalized causal model of the processes generating the target phenomena. A model which simply learns spurious correlations in the data is less desirable, as it may have greater difficulty in generalizing beyond the training distribution. Recent studies have demonstrated that LLMs trained on sequences of board game play (e.g. Chess and Othello) do indeed track the state of the board, and probes of the internal activations of the model reveal the existence of representations of various abstract concepts specific to the domain. [19, 20] We therefore asked whether a model trained to predict the 3-dimensional coordinates of atoms, digit-by-digit, could learn the chemistry implicit in crystal structures, and generate unseen structures, borrowing from its model of the world of atoms.
As such, we herein describe the CrystaLLM model, a tool for CSG trained on an extensive corpus of CIF files representing the structures of millions of inorganic solid-state materials. Unlike small molecule organic compounds, the generative modelling of inorganic crystals presents unique challenges: the structures are complex and periodic, are not readily described by simple graphs, and are imbued with different forms of symmetry. Moreover, they can be constructed from more than 100 different elements. Even so, the model is capable of reliably generating correct CIF syntax and physically plausible crystal structures for many classes of inorganic compounds.
## 2 Methods
The following terminology is used in the remainder of the document:
A _formula_, or _reduced composition_, refers to the empirical formula, or formula unit, which is the simplest, whole-number ratio of atoms in the compound. An example of a formula is Ba\({}_{2}\)MnCr.
A _cell composition_ is a chemical formula referring to the total number of atoms of each type in the unit cell of a crystal. It represents the chemical formula of the compound as it would appear in the crystal structure, which might contain multiple formula units. An example of a cell composition is Ba\({}_{6}\)Mn\({}_{3}\)Cr\({}_{3}\).
### 2.1 Dataset
The dataset was assembled by obtaining structures from the Materials Project [21], the OQMD [22], and NOMAD [23], which were originally optimized using density functional theory (DFT) simulations. In total, approximately 3.6 million structures were obtained. This dataset consists of compounds containing anywhere from 1 to 10 elements, with most consisting of 3 or 4 elements. The elements up to and including atomic number 94 are present, with the exception of polonium, astatine, radon, francium, and radium. The dataset contains roughly 800,000 unique formulas, and 1.2 million unique cell compositions. When paired with space groups, there are 2.3 million unique cell composition-space group pairs. To choose between duplicate structures containing the same cell composition and space group, the structure with the lowest volume per formula unit was selected. The 2.3 million structures in this dataset were converted to CIF files using the pymatgen library [24], and were used for training. The CIF files were created with the pymatgen option for symmetry finding tolerance set to 0.1 Å. All floating point numbers in the files were rounded to 4 decimal places. The dataset was split randomly into train, validation, and test sets, such that the training set consisted of about 2.2 million CIF files, the validation set 35,000 CIF files, and the test set 10,000 CIF files.
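The released data pipeline is not reproduced here, but the deduplication and CIF-writing steps described above can be sketched with pymatgen roughly as follows; the grouping key and function names are illustrative assumptions rather than the authors' actual code.

```python
from pymatgen.io.cif import CifWriter
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer

def deduplicate(structures):
    """Keep, for each (cell composition, space group) pair, the structure with the
    lowest volume per formula unit (illustrative sketch)."""
    best = {}
    for s in structures:
        spg = SpacegroupAnalyzer(s, symprec=0.1).get_space_group_number()
        key = (s.composition.formula, spg)  # cell composition + space group
        _, n_fu = s.composition.get_reduced_composition_and_factor()
        vol_per_fu = s.volume / n_fu
        if key not in best or vol_per_fu < best[key][0]:
            best[key] = (vol_per_fu, s)
    return [s for _, s in best.values()]

def write_cif(structure, path):
    # symprec=0.1 mirrors the symmetry-finding tolerance quoted above
    CifWriter(structure, symprec=0.1).write_file(path)
```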
### 2.2 Tokenization
The dataset of CIF files was tokenized prior to training. The vocabulary consisted of CIF tags, space group symbols, element symbols, numeric digits, and various punctuation symbols, for a total of 371 symbols. After tokenization, the training set consisted of 768 million tokens.
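A minimal sketch of how such a tokenizer might be written is shown below; the tag list and the regular expression are illustrative assumptions, not the exact 371-symbol CrystaLLM vocabulary.

```python
import re

# A few representative CIF tags; the real vocabulary also contains space group
# symbols, element symbols, digits and punctuation.
CIF_TAGS = ["data_", "loop_", "_cell_length_a", "_cell_length_b", "_cell_length_c",
            "_symmetry_space_group_name_H-M", "_atom_site_fract_x"]

def tokenize(cif_text):
    # Longest-match-first alternation: CIF tags, then element symbols,
    # then single digits and punctuation/whitespace.
    tags = sorted(CIF_TAGS, key=len, reverse=True)
    pattern = "|".join(map(re.escape, tags)) + r"|[A-Z][a-z]?|\d|[().'/\-_ \n]"
    return re.findall(pattern, cif_text)
```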
### 2.3 Generative Pre-training
The generative pre-training step requires a vocabulary, \(\mathcal{V}\), and an ordered list of tokens \(\mathcal{U}=(u_{1},...,u_{n})\), with \(u_{i}\in\mathcal{V}\). We want to maximize the following likelihood:
\[\mathcal{L}(\theta;\mathcal{U})=\sum_{i}\log P(u_{i}|u_{i-c},...,u_{i-1};\theta) \tag{1}\]
where \(c\) is the size of a context window, \(P\) is the conditional probability distribution to be modelled, and \(\theta\) the parameters of a neural network. We therefore minimize \(\mathcal{J}(\theta;\mathcal{U})=-\mathcal{L}\), using stochastic gradient descent to adjust the parameters. We use a multi-layer
Transformer decoder [25] for the neural network, as described in [13]. Our model consists of 25 million parameters, with 8 layers, 8 attention heads, and an embedding size of 512. We decay the learning rate from \(10^{-3}\) to \(10^{-4}\) over the course of training, and use a batch size of 32.
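Concretely, minimizing \(\mathcal{J}(\theta;\mathcal{U})\) amounts to the standard next-token cross-entropy loss; a minimal PyTorch-style sketch of one training step (assuming `model` is a decoder-only Transformer that maps token ids to logits) is given below.

```python
import torch
import torch.nn.functional as F

def training_step(model, token_ids, optimizer):
    """token_ids: LongTensor of shape (batch, seq_len) holding tokenized CIF text."""
    x, y = token_ids[:, :-1], token_ids[:, 1:]           # predict token i from tokens < i
    logits = model(x)                                    # (batch, seq_len-1, vocab_size)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()                                      # negative log-likelihood gradient
    optimizer.step()
    return loss.item()
```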
### 2.4 Evaluation
To evaluate the generative capabilities of the model, we define two scenarios where the model is tasked with generating the compounds of the held-out test set. The first scenario, which we name the Cell Composition-only scenario, involves prompting the model with each cell composition in the test set, and having it generate up to a maximum of 3000 tokens. The model is prompted with only the first line of a CIF file, which consists of the data block header, containing the cell composition of the structure specified in the rest of the file. The second scenario, which we name the Cell Composition+Space Group scenario, is similar to the first, except that the model is prompted with both the cell composition and space group, for each entry in the test set. Moreover, we perform the generation 3 separate times for each entry.
To assess how well the model performed in the first scenario, we check if a generated CIF file is consistent in terms of space group, if it is consistent in terms of the atom site multiplicity, and if the generated bond lengths are reasonable. To check if the generated structure is consistent with the printed space group, we use the SpacegroupAnalyzer class of the pymatgen library, which uses the spglib library [26]. To check if bond lengths are reasonable, we first use a Voronoi-based nearest-neighbour algorithm in pymatgen to define which atoms are bonded together; then, we establish expected bond lengths based on the electronegativity difference between the bonded atoms, and their ionic or covalent radii. We classify a structure as having reasonable bond lengths if all the detected bond lengths are within 30% of the corresponding expected bond lengths.
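A rough sketch of these two checks with pymatgen is shown below. The 30% tolerance follows the text; the `expected_length` callback (the electronegativity- and radius-based reference length) and the exact neighbour settings are assumptions of this sketch rather than the authors' script.

```python
from pymatgen.core import Structure
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer
from pymatgen.analysis.local_env import VoronoiNN

def space_group_consistent(cif_string, declared_number):
    """Does the symmetry detected from the coordinates match the printed space group?"""
    structure = Structure.from_str(cif_string, fmt="cif")
    return SpacegroupAnalyzer(structure, symprec=0.1).get_space_group_number() == declared_number

def bond_lengths_reasonable(structure, expected_length, tol=0.30):
    """expected_length(species_a, species_b) -> reference bond length in Å (assumed helper)."""
    nn = VoronoiNN()  # Voronoi-based nearest-neighbour determination
    for i, site in enumerate(structure):
        for info in nn.get_nn_info(structure, i):
            d = structure.get_distance(i, info["site_index"], jimage=info["image"])
            ref = expected_length(site.specie, info["site"].specie)
            if abs(d - ref) > tol * ref:
                return False
    return True
```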
The goal of the second evaluation scenario is to establish how often the model can recover the unseen structures of the test set, when prompted with a cell composition and space group. To determine whether a generated structure matches the structure in the test set, we use the pymatgen StructureMatcher class, which performs a structural similarity assessment of two crystals. We use a fractional length tolerance of 0.2, a site tolerance of 0.3 Å, and an angle tolerance of 5 degrees, which are the default values in pymatgen. Both structures are reduced to primitive cells before matching, and are scaled to equivalent volume.
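In pymatgen this comparison can be written directly with the tolerances quoted above (a sketch, not necessarily the exact evaluation script):

```python
from pymatgen.analysis.structure_matcher import StructureMatcher

# ltol, stol and angle_tol below are the pymatgen defaults quoted in the text;
# primitive_cell=True and scale=True reduce to primitive cells and equalize volumes.
matcher = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
                           primitive_cell=True, scale=True)

def is_match(generated, reference):
    return matcher.fit(generated, reference)
```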
### 2.5 DFT Calculations
For the pyrochlore case study, a small number of DFT calculations were performed using VASP, following as closely as possible the settings used in the OQMD project (where most of the pyrochlore structures seen in training were taken from). For example, the recommended PAW potential was used for each element: Zr_sv for zirconium, Hf_pv for hafnium, Lu_3 for lutetium, Pr_3 for praseodymium, Ce_3 for cerium (for the remaining elements, the name of the PAW potential simply matched the element's symbol). The Perdew-Burke- Ernzerhof (PBE) exchange-correlation functional [27], in the generalized-gradient approximation, was used in all calculations. Hubbard (PBE+U) corrections were
applied for transition metal elements with unfilled d levels (\(U_{\mathrm{eff}}\)=3.8 eV for Mn and 3.1 eV for V). Although the cell parameters reported here correspond to the conventional cubic cell with 8 formula units, the DFT calculations were performed using the primitive cell with two formula units, and sampling of the reciprocal space corresponding to that primitive cell was performed using a 7x7x7 grid, as done for all pyrochlore calculations in the OQMD project.
## 3 Results
### 3.1 Assessment of Generation Quality
To assess the quality of the model's generated structures, we considered two scenarios, as discussed in section 2.4. The Cell Composition-only scenario involves prompting the model with the first line of the test set CIF file only (which specifies the cell composition), whereas the Cell Composition+Space Group scenario involves prompting the model from the first line of the test set CIF file to the line specifying the space group (inclusive). The fraction of generated structures that are consistent in terms of space group, atom site multiplicity, and have reasonable bond lengths are presented in Table 1.
The generated CIF files of the Cell Composition+Space Group scenario were compared to the corresponding CIF files of the test set using a structure matching algorithm (as discussed in section 2.4). The fraction of matching structures is presented in Table 2. The _Reduced Unseen_ column represents the results for formulas that were not seen in training with any \(Z\).
We further examined how closely the generated cell parameters resembled the actual cell parameters, for the cases where there was a structural match.
| | Cell Composition-only | Cell Composition+Space Group |
| --- | --- | --- |
| Space group consistent | 98.7% | 98.8% |
| Atom site multiplicity consistent | 99.1% | 99.2% |
| Bond lengths reasonable | 76.1% | 75.6% |

Table 1: Quality assessment metrics for the generated CIF files that used the cell compositions of the test set as the basis for the prompts. Note that the Cell Composition+Space Group scenario involved 3 generations per test set entry, and thus represents 3 times as many generated CIF files.
| | All | Reduced Unseen |
| --- | --- | --- |
| At least 1 match within 3 attempts | 88.1% | 86.3% |
| All 3 attempts matching | 67.4% | 70.0% |
| Matched on 1st attempt | 78.4% | 78.7% |

Table 2: Structure matching results for the Cell Composition+Space Group scenario.
We took the first matching structure for samples that had at least one generated structure matching the test set structure, and measured the \(R^{2}\) and mean absolute error (MAE) for the true versus generated cell lengths, the true versus generated (i.e. printed) volume, and the implied (from cell parameters) versus generated volume. The results are presented in Table 3 and Figure 1.
### 3.2 Generalizing to Unseen Scenarios
To further examine the model's ability to generalize to unseen scenarios, we prompted the model with various formulas, and examined its output. The results are presented in Figure 2.
An example of the model generalizing to a formula that had been seen in training, but with different space groups, is presented in Figure 2a. The formula, Ba\({}_{2}\)MnCr, was in the held-out test set, with the _R\(\bar{3}\)m_ space group. That combination of formula and space group had not been seen in training. The model generated a structure matching the one in the test set on the first attempt, when the space group was provided.
| Unit Cell Parameters | R\({}^{2}\) | MAE |
| --- | --- | --- |
| cell lengths (vs true) | 0.994 | 0.125 |
| volume (vs true) | 0.937 | 7.173 |
| volume (vs implied) | 1.000 | 0.6497 |

Table 3: Comparison of generated cell parameters to ground truth for matching structures of the Cell Composition+Space Group scenario.
Figure 1: True vs. generated cell lengths for matching structures of the Cell Composition+Space Group scenario.
The model also demonstrated the ability to generate plausible structures for formulas not seen in training with any \(Z\). An example is the quaternary compound CsCuTePt. This compound was not in the training set, but was in the held-out test set (with \(Z\)=4). The model generated a structure matching the one in the test set, in the _F\(\bar{4}\)3m_ space group, on the third attempt when the space group was provided. The generated structure is presented in Figure 2b.
Finally, in Figure 2c is the generated structure of YbMn\({}_{6}\)Sn\({}_{6}\)[28], an example of the model generalizing to structural motifs with atoms not seen in training. This formula was not seen in training for any \(Z\), and was not in the held-out test set. However, ZrMn\({}_{6}\)Sn\({}_{6}\) was seen in training, in the _P6/mmm_ space group. The model generated a structure in the same space group on the first attempt, without the space group being provided. The generated structure matched the ZrMn\({}_{6}\)Sn\({}_{6}\) structure, with Yb substituted for Zr, and with cell parameters and atomic coordinates adjusted accordingly. This demonstrates the model performing a structure prediction by analogy procedure, as commonly used by materials scientists for discovery [29, 30], despite never having been provided with the procedure to do this.
### 3.3 Generating Known Structural Classes
The CrystaLLM model was trained on an extensive collection of the various structural classes known to inorganic chemistry. We thus investigated its ability to generate unseen members of these classes. We focused on classes of binary, ternary and quaternary compounds.
#### 3.3.1 Rutiles
Rutiles are a class of binary compounds that adopt a tetragonal unit cell, in the \(P4_{2}\)/mnm space group (\(Z\)=2), as is seen in TiO\({}_{2}\), from which this class of materials adopts its name. The general formula for rutile oxides is MO\({}_{2}\), where M is a metallic species in the +4 oxidation state. Rutile fluorides are also known, where the metal is in the +2 oxidation state.
The model's training dataset consisted of essentially all of the rutiles one might expect to be able to find in nature. Therefore, to test the model's ability to generate unseen rutiles, we requested the generation of theoretically possible, but unlikely compounds, such as AuO\({}_{2}\). With gold in a highly unlikely +4 oxidation state, AuO\({}_{2}\) is not expected to be formed under most conditions. However, the model was able to imagine what the structure of such a compound might be (when the space group is provided). While TiO\({}_{2}\) has cell parameters \(a\)=4.594 Å, \(c\)=2.959 Å, the generated rutile gold variant has \(a\)=4.838 Å, \(c\)=3.429 Å, reflecting the increased volume occupied by the larger gold atoms (Figure 3a).
#### 3.3.2 Spinels
The spinels are a group of ternary compounds with the general formula AB\({}_{2}\)X\({}_{4}\), where A is a cation in the +2 oxidation state, B is a cation in the +3 oxidation state, and X, normally a chalcogen, is an anion. Spinels form cubic close-packed structures, with eight tetrahedral, and four octahedral sites, normally in the _Fd\(\bar{3}\)m_ space group.
To explore the model's ability to generate unseen spinels, we selected two samarium spinels: Sm\({}_{2}\)BO\({}_{4}\), which was present in the held out test set, and the thiospinel Sm\({}_{2}\)BS\({}_{4}\), which was absent from both the training and test sets. The model was able to generate the expected spinel structures for both compounds when the cell composition and space group were provided (Figures 3b and 3c). During training, the model encountered a number of different oxy-, thio-, and selenospinels, and this likely contributed to its ability to generate these two compounds.
#### 3.3.3 Elpasolites
The elpasolites are quaternary compounds with the general formula ABC\({}_{2}\)X\({}_{6}\). The A and C species are typically alkali metal cations in the +1 oxidation state, B is usually a transition metal cation in the +3 oxidation state, and X is a halogen anion. The elpasolites are often referred to as "double perovskites", since their structures are related to perovskites by the doubling of their unit cell dimensions, and the replacement of the M\({}^{2+}\) cation with alternating M\({}^{+}\) and M\({}^{3+}\) cations. Elpasolites crystallize in the _Fm\(\bar{3}\)m_ space group, and are the most common quaternary crystal system reported in the Inorganic Crystal Structure Database (ICSD) [31]. We wondered if the CrystaLLM model could generate elpasolites not seen during training.
We selected two elpasolites from the held-out test set that were not seen in training: the fluoride KRb\({}_{2}\)TiF\({}_{6}\) and the iodide K\({}_{2}\)AgMoI\({}_{6}\). The model was able to generate the correct elpasolite structure when the cell composition and space group were provided (Figures 3d and 3e).
Figure 2: **a** The generated structure of Ba\({}_{2}\)MnCr. Color scheme: Ba = green, Mn = purple, Cr = blue. Cell parameters: \(a\): 3.778 Å, \(b\): 3.778 Å, \(c\): 27.503 Å, \(\alpha\): 90.0\({}^{\circ}\), \(\beta\): 90.0\({}^{\circ}\), \(\gamma\): 120.0\({}^{\circ}\). **b** The generated structure of CsCuTePt. Color scheme: Cs = purple, Cu = blue, Te = gold, Pt = white. Cell parameters: \(a\): 7.153 Å, \(b\): 7.153 Å, \(c\): 7.153 Å, \(\alpha\): 90.0\({}^{\circ}\), \(\beta\): 90.0\({}^{\circ}\), \(\gamma\): 90.0\({}^{\circ}\). **c**, The generated structure of YbMn\({}_{6}\)Sn\({}_{6}\). Color scheme: Yb = green, Mn = magenta, Sn = grey. Cell parameters: \(a\): 5.488 Å, \(b\): 5.488 Å, \(c\): 8.832 Å, \(\alpha\): 90.0\({}^{\circ}\), \(\beta\): 90.0\({}^{\circ}\), \(\gamma\): 120.0\({}^{\circ}\). ZrMn\({}_{6}\)Sn\({}_{6}\) possessed the same structure, but with the following cell parameters: \(a\): 5.364 Å, \(b\): 5.364 Å, \(c\): 8.933 Å, \(\alpha\): 90.0\({}^{\circ}\), \(\beta\): 90.0\({}^{\circ}\), \(\gamma\): 120.0\({}^{\circ}\).
#### 3.3.4 Pyrochlores
The general formula for the pyrochlores is A\({}_{2}\)B\({}_{2}\)O\({}_{7}\), where A, a trivalent cation, and B, a tetravalent cation, are either rare-earths or transition metals (other oxidation states, e.g. combining monovalent and pentavalent cations, are also possible, but we focus here on the trivalent/tetravalent pyrochlores). Pyrochlores crystallize in the _Fd\(\bar{3}\)m_ space group
(\(Z\)=8). There are many combinations of A and B that are possible for this structure, by using lanthanide ions, actinide ions, and Y(III) for the A species, and various transition metal ions, as well as Ti(IV), Zr(IV), and Hf(IV) for the B species. We investigated whether CrystaLLM could generate valid pyrochlore structures for any unseen combinations, and whether it could estimate reasonable cell parameters in line with the trends observed for the pyrochlore series, as the cell parameters are expected to be correlated with the ionic radii of the A and B cations.
We created a space of pyrochlores consisting of 144 compounds by producing different combinations of A and B species. Of these, 54 were seen in training. We selected 10 compounds from among the 90 not seen in training, and attempted 3 generations with the model, for each. The cell composition and space group were included in the prompt. All generations resulted in valid pyrochlore structures (Table 4).
We subsequently performed DFT relaxation calculations on the first generated structure for each of the 10 compounds. One case, Ce\({}_{2}\)V\({}_{2}\)O\({}_{7}\), was problematic and was excluded from further analysis. This result is not very surprising, since both Ce and V are pathological elements in DFT settings. The DFT-derived value of the cell parameter for each of the remaining nine compounds is plotted against the mean generated value in Figure 4. A good agreement exists between the DFT-derived and generated cell lengths, with an \(R^{2}\) of 0.62 and MAE of 0.08 Å being exhibited.
| Formula | Cell Length (Å) |
| --- | --- |
| Ce\({}_{2}\)Hf\({}_{2}\)O\({}_{7}\) | 10.751 ± 0.072 |
| Ce\({}_{2}\)Mn\({}_{2}\)O\({}_{7}\) | 10.500 ± 0.215 |
| Ce\({}_{2}\)V\({}_{2}\)O\({}_{7}\) | 10.533 ± 0.093 |
| La\({}_{2}\)Mn\({}_{2}\)O\({}_{7}\) | 10.210 ± 0.069 |
| La\({}_{2}\)V\({}_{2}\)O\({}_{7}\) | 10.482 ± 0.059 |
| Lu\({}_{2}\)Hf\({}_{2}\)O\({}_{7}\) | 10.298 ± 0.084 |
| Lu\({}_{2}\)Zr\({}_{2}\)O\({}_{7}\) | 10.454 ± 0.119 |
| Pr\({}_{2}\)Mn\({}_{2}\)O\({}_{7}\) | 10.398 ± 0.085 |
| Pr\({}_{2}\)V\({}_{2}\)O\({}_{7}\) | 10.514 ± 0.058 |
| Pr\({}_{2}\)Hf\({}_{2}\)O\({}_{7}\) | 10.798 ± 0.058 |

Table 4: Values of mean generated cell length for the selected pyrochlores not seen in training, over 3 generation attempts.
### 3.4 Problematic Cases
While the model seems capable of generating structures for many different classes of inorganic crystals, it does nonetheless have difficulty in certain cases. All of the cases appear to involve systems that are rare, and under-represented in the training dataset. For example, the model was generally unable to generate a structure for Mg\({}_{7}\)Pt\({}_{4}\)Ge\({}_{4}\), the structure of which was reported recently to exist in the \(P6_{3}mc\) space group (\(Z\)=2). [32] In this case, there were only 38 examples of 7:4:4 systems in the training dataset, none contained Mg or Pt, and none were in the \(P6_{3}mc\) space group.
The current version of the model also seems to struggle with generating phosphates, sulfates, carbonates, and organic-inorganic hybrid structures. Examples include carbonate hydroxide minerals, such as Co\({}_{2}\)CO\({}_{3}\)(OH)\({}_{2}\)[33] and Cu\({}_{2}\)CO\({}_{3}\)(OH)\({}_{2}\) (malachite). While present in the dataset, they belong to a group of analogous structures for which there are only a handful of examples. While the model can generate Ca\({}_{5}\)(PO\({}_{4}\))\({}_{3}\)(OH) (hydroxyapatite), it generally fails to generate a valid structure for Mn\({}_{4}\)(PO\({}_{4}\))\({}_{3}\). A common theme is the appearance of multiple oxyanions, which can give rise to more complex arrangements of atoms, for which the model may not have seen enough examples. In contrast, the model can generate compounds of the perovskite class reliably. However, over 5,000 examples of the ABX\({}_{3}\) (X=O,F) system in the \(Pm\bar{3}\,m\) space group were seen in training.
Future versions of the model will consider strategies for addressing these occurrences of class imbalance.
Figure 4: The generated vs. DFT-derived value of the cell parameter \(a\) for selected pyrochlores not in the dataset. The error bars represent the \(\pm\) standard deviation of the value of the \(a\) cell parameter for the three generation attempts (all of which resulted in the pyrochlore structure), while the \(y\)-coordinate of the points represents the mean value of the cell parameter across the three attempts.
### 3.5 The CrystaLLM.com Web Application
To allow for general and open access to the CrystaLLM model, we make it available through a web application, available at [https://crystallm.com](https://crystallm.com). The user of the application is presented with a text field requiring a formula to be entered. Optionally, they may provide the number of formula units (\(Z\)) and the desired space group (Figure 5). Once they press the Generate button, a request is sent to a GPU server which has the model in memory. The request is converted into a prompt, and the generated contents are returned to the user. If no \(Z\) is provided, we scan through \(Z\) values of 1, 2, 3, 4, 6, and 8, and return the first valid structure generated by the model. We validate the generated structure using the same procedure described in the Methods section, checking that the generated structure is consistent in terms of the printed space group, and other elements of the CIF file. If no valid structure can be found, the user is presented with an informative error message, including the option to view the generated content. Requests typically take several seconds to process, but can take longer if no \(Z\) is provided and the model has trouble finding an appropriate \(Z\) value. Generated structures are displayed in a web browser-based 3D structure viewer provided by the Crystal Toolkit framework, upon which the front-end of the web application is built. [34]
By making the model easily accessible, we hope to contribute a potentially useful tool to the materials structure research community. We also hope to receive feedback from users that may help improve future versions of the model.
Figure 5: A screenshot of the crystallm.com web application.
## 4 Discussion & Conclusion
Here, we have shown that LLMs of the CIF format are able to generate inorganic crystal structures for a variety of known classes. Indeed, the model is able to produce valid and sensible arrangements of atoms in 3-dimensional space by generating _xyz_ coordinates digit-by-digit. The model also seems to have captured the relationship between space group symbols and the symmetries inherent in the structures it generates.
We chose to build a language model of the CIF format (instead of a simplified format, for example, which might include a minimal vocabulary) for several reasons. First, the CIF format is not particularly verbose. The model learns the grammatical structure of the format fairly quickly. We can thus avoid having to devise an intermediate format that requires inter-conversion between more common formats, which could also be error prone. Second, we believe that having the model learn to generate the more redundant parts of the CIF format, such as the cell volume, and \(Z\), which are inferable from prior inputs, helps the model to perform better overall.
While the model can generate sensible structures, this does not by itself make it suitable, as is, for CSP. Just as natural language LLMs, such as GPT-3 and -4, are not suitable chatbots without further fine-tuning, the CrystaLLM model will also need to be fine-tuned for more advanced tasks. Fine-tuning involves an additional and separate training step, where the model's parameters are adjusted in the context of a different task. This may also involve altering the model's output layer, such as to make it suitable for a regression task, for example. Models can be fine-tuned using a variety of techniques, but supervised learning and reinforcement learning [35] are most common. One might use reinforcement learning, for example, when a task is not clearly defined as a supervised learning problem. When fine-tuning natural language LLMs for chatbot applications, it is common to use Reinforcement Learning from Human Feedback (RLHF). [36, 37] With RLHF, the idea is to gather data from human annotators to be used to train a reward model, which scores generated text according to its desirableness. The reward model is then used as part of a reinforcement learning-based tuning of the LLM. In CSP, one would like to produce ground-state structures (for some given physical conditions). One could thus imagine an analogous procedure where CrystaLLM is fine-tuned for the goal of generating low-energy structures, via feedback from an external evaluator of the generated structure's energy. We call this _Reinforcement Learning from Thermodynamic Feedback_ (RLTF). This procedure would also require a reward model, and such a model should ideally provide a timely estimate of a structure's energy. This excludes time-consuming approaches such as DFT. A viable approach could make use of a separate machine learning-based model of formation energy, such as one based on ALIGNN. [38] Indeed, neural network potentials have been used to accelerate the prediction of crystal structures. [39]
There are several limitations with the current approach. First, none of the structures of the dataset have site-occupancy disorder (fractional site occupancies). Therefore, CrystaLLM cannot generate disordered structures, and may not successfully generate structures for combinations of cell composition and space group that imply a disordered structure. An example is K\({}_{2}\)NaTiOF\({}_{5}\), which is reported to be an elpasolite, in the _Fm\(\bar{3}\)m_ space group (\(Z\)=4), with F and O species sharing the same crystal site [40]. Another limitation is that the CIF files of the dataset were not all created using the same level
of theory. The training set is derived from a combination of DFT sources using different settings, functionals, etc., which may make it difficult for the model, in some instances, to learn a consistent relationship between cell composition and detailed structure. [41]
Nevertheless, we believe that CrystaLLM will be a useful tool for CSG and materials informatics. We plan to explore fine-tuning the model for physical property prediction tasks, such as the prediction of lattice thermal conductivity, where experimental data is relatively scarce. [42] The architecture of the model allows it to be fine-tuned for either composition-based or structure-based prediction tasks. This implies that CrystaLLM may be the basis for a general-purpose materials informatics model, which can be used for generative tasks, and fine-tuned for property prediction tasks that require either composition or structure. If the model is able to transfer what it has learned about the world of atoms to these various predictive problems, it may prove to be a quite flexible tool relevant to many aspects of materials chemistry.
## 5 Note
During development of the CrystaLLM model, we became aware of a pre-print by Flam-Shepherd and Aspuru-Guzik that describes the use of autoregressive large language modelling for molecular and crystal structure generation. [43] While the fundamental idea of generating the coordinates of atomic systems token-by-token is the same, our work differs in the following ways: 1, we focus exclusively on the generation of the crystal structures of inorganic materials; 2, we train the model directly on CIF files and CIF syntax, with a vocabulary consisting of CIF tags and space group symbols, in addition to atomic symbols and numeric digits; 3, we use a much larger and custom dataset consisting of millions of CIF files for training the model; 4, our model is symmetry-aware, and supports the generation of structures in specified space groups and for specific numbers of formula units. In summary, we develop a model specifically for the purposes of material structure generation, which produces syntactically valid and physically sensible CIF files as an output.
## 6 Data Availability
The structures used in the experiments described in this work were obtained from the Materials Project ([https://materialsproject.org/](https://materialsproject.org/)), the OQMD ([https://oqmd.org/](https://oqmd.org/)), and NOMAD ([https://nomad-lab.eu/](https://nomad-lab.eu/)). All structures were made available by those sources under the Creative Commons Attribution 4.0 License. [44]
## 7 Acknowledgements
This work was partially supported by computational resource donations from Amazon Web Services through the AWS Activate program, obtained with assistance from the Communitech Hub. For the DFT calculations, we used the Young supercomputer facility via the UK Materials and Molecular Modelling Hub, which is partially funded by EPSRC (EP/T022213/1, EP/W032260/1).
## 8 Author Contributions
L.M.A. conceived the project, performed the experiments, and drafted the manuscript. L.M.A. and R.G.-C. designed the experiments. R.G.-C. carried out the DFT calculations for the pyrochlore case study. R.G.-C. and K.T.B. supervised and guided the project. All authors reviewed, edited and approved the manuscript.
|
2304.12714 | Nonuniqueness of solutions to the $L_p$ chord Minkowski problem | This paper explores the nonuniqueness of solutions to the $L_p$ chord
Minkowski problem for negative $p.$ The $L_p$ chord Minkowski problem was
recently posed by Lutwak, Xi, Yang and Zhang, which seeks to determine the
necessary and sufficient conditions for a given finite Borel measure such that
it is the $L_p$ chord measure of a convex body, and it includes the chord
Minkowski problem and the $L_p$ Minkowski problem. | Yuanyuan Li | 2023-04-25T10:56:42Z | http://arxiv.org/abs/2304.12714v1 | # Nonuniqueness of solutions to the \(L_{p}\) chord Minkowski problem
###### Abstract
This paper explores the nonuniqueness of solutions to the \(L_{p}\) chord Minkowski problem for negative \(p\). The \(L_{p}\) chord Minkowski problem was recently posed by Lutwak, Xi, Yang and Zhang, which seeks to determine the necessary and sufficient conditions for a given finite Borel measure such that it is the \(L_{p}\) chord measure of a convex body, and it includes the chord Minkowski problem and the \(L_{p}\) Minkowski problem.
## 1 Introduction
The central objects of study in convex geometry are convex bodies. A convex body in \(n\)-dimensional Euclidean space \(\mathbb{R}^{n}\) is a compact convex set with non-empty interior. The Brunn-Minkowski theory is a study of convex bodies which centers around geometric functionals and the differentials of these functionals. When geometric invariants arise as geometric functionals of convex bodies, geometric measures are often viewed as differentials of geometric invariants. One of the cornerstones of the Brunn-Minkowski theory is the Minkowski problem. It is a problem of prescribing a geometric measure generated by convex bodies, which is concerned with necessary and sufficient conditions for a given measure such that it arises as the measure generated by a convex body. The most studied Minkowski-type problem is the classical Minkowski problem, which focuses on the surface area measures of convex bodies. For a comprehensive discussion of the Minkowski problem and its resolution, we refer readers to Pogorelov [20] and Cheng-Yau [8].
Recently, a new family of geometric measures was introduced by Lutwak, Xi, Yang and Zhang [16] through the study of a variational formula for integral geometric invariants of convex bodies called chord integrals. Let \(K\in\mathcal{K}^{n}\), where \(\mathcal{K}^{n}:=\{\text{all convex bodies in }\mathbb{R}^{n}\}\); the \(q\)th chord integral \(I_{q}(K)\) is defined by
\[I_{q}(K)=\int_{\mathcal{L}^{n}}|K\cap\ell|^{q}d\ell,\]
where \(\mathcal{L}^{n}\) denotes the Grassmannian of \(1\)-dimensional affine subspace of \(\mathbb{R}^{n},\)\(|K\cap\ell|\) denotes the length of the chord \(K\cap\ell,\) and the integration is with respect to Haar measure on the affine Grassmannian \(\mathcal{L}^{n},\) which is normalized to be a probability measure when restricted to rotations and to be \((n-1)\)-dimensional Lebesgue measure when restricted to parallel translations.
In particular,

\[I_{1}(K)=V(K),\quad I_{0}(K)=\frac{\omega_{n-1}}{n\omega_{n}}S(K),\quad I_{n+1}(K)=\frac{n+1}{\omega_{n}}V(K)^{2},\]
where \(\omega_{n}\) denotes the volume of the \(n\)-dimensional unit ball. Note that \(I_{q}(B_{n})=\frac{2^{q}\omega_{n}\omega_{n+q-1}}{\omega_{q}},\) where \(B_{n}\) is the \(n\)-dimensional unit ball. One can see from the above formulas that the chord integrals include the convex body's volume and surface area as two special cases. These are Crofton's volume formula, Cauchy's integral formula for surface area, and the Poincaré-Hadwiger formula, respectively (see [[21], [25]]).
The chord measures and the Minkowski problems associated with chord measures were posed in [16]. They showed that the chord measures are the differentials of chord integrals and completely solved the chord Minkowski problem except for the critical case of the Christoffel-Minkowski problem. The \(q\)th chord measure is a finite Borel measure on \(\mathbb{S}^{n-1}\) defined by
\[F_{q}(K,\eta)=\frac{2q}{\omega_{n}}\int_{\nu_{K}^{-1}(\eta)}\tilde{V}_{q-1}(K,z)d\mathcal{H}^{n-1}(z),\text{ Borel }\eta\subset\mathbb{S}^{n-1},\]
where \(\widetilde{V}_{q-1}(K,z)\) is the \(q-1\) th dual quermassintegral with respect to \(z.\)(See (2.1).)
\[F_{0}(K,\cdot)=\frac{(n-1)\omega_{n-1}}{n\omega_{n}}S_{n-2}(K,\cdot),\quad F_ {1}(K,\cdot)=S_{n-1}(K,\cdot),\]
where \(S_{i}(K,\cdot)\) is the \(i\)th order area measure of \(K.\) Once chord measures are constructed, the \(L_{p}\) chord measures follow naturally by extensions. For \(K\in\mathcal{K}_{o}^{n}\) and \(p\in\mathbb{R},\) the \(L_{p}\) chord measures are defined by
\[F_{p,q}(K,\eta)=\frac{2q}{\omega_{n}}\int_{\nu_{K}^{-1}(\eta)}(z\cdot\nu_{K}( z))^{1-p}\tilde{V}_{q-1}(K,z)d\mathcal{H}^{n-1}(z),\text{ Borel }\eta\subset\mathbb{S}^{n-1}.\]
When \(p=0,\) it is the cone-chord measure. When \(q=1,\)\(F_{p,1}(K,\cdot)\) is the \(L_{p}\) surface area measure. When \(q=0,\)\(F_{p,0}(K,\cdot)\) is the \(L_{p}-(n-2)\)th area measure.
The \(L_{p}\)-Minkowski problem was first formulated and studied by Lutwak in [17]. It has been rapidly attracting much attention; Lutwak introduced the important \(L_{p}\) surface area measure and its associated Minkowski problem in the \(L_{p}\) Brunn-Minkowski theory. Many cases of the \(L_{p}\) Minkowski problem have been solved. The logarithmic Minkowski problem is one of the most central Minkowski type problems and is the problem of characterizing the cone-volume measure; see Böröczky, Lutwak, Yang and Zhang [[2], [4]], Zhu [[30],[3]],
Stancu [[22], [23]], Gage [12], Xi and Leng [28], Firey [11], Andrews [1], Chen, Huang, Li and Liu [9], Chen, Feng, Liu [10], [[29], [5]] and reference therein. The centro-affine Minkowski problem is unsolved, see [7]. For more classical Brunn-Minkowski theory and its recent developments, we suggest readers to Schneider's book [24].
The \(L_{p}\) chord Minkowski problem posed by Lutwak, Xi, Yang and Zhang [16] is a problem of prescribing the \(L_{p}\) chord measure: given a finite Borel measure \(\mu\) on \(\mathbb{S}^{n-1},p\in\mathbb{R},\) and \(q\geqslant 0,\) what are the necessary and sufficient conditions on \(\mu\) such that \(\mu\) is the \(L_{p}\) chord measure of a convex body \(K\in\mathcal{K}_{o}^{n},\) namely
\[F_{p,q}(K,\cdot)=\mu \tag{1.1}\]
When \(p=1,\) it is the chord Minkowski problem. When \(q=1,\) it is the \(L_{p}\) Minkowski problem. When \(\mu\) has a density \(f\) that is an integrable nonnegative function on \(\mathbb{S}^{n-1},\) equation (1.1) becomes a new type of Monge-Ampère equation on \(\mathbb{S}^{n-1}\):
\[\det(\nabla^{2}h+hI)=\frac{h^{p-1}f}{\widetilde{V}_{q-1}([h],\bar{\nabla}h)}, \text{ on }\mathbb{S}^{n-1}, \tag{1.2}\]
where \(\nabla^{2}h\) is the covariant differentiation of \(h\) with respect to an orthonormal frame on \(\mathbb{S}^{n-1},\) we look for a solution \(h\) which is the support function for some nondegenerate convex body. We can extend \(h\) to \(\mathbb{R}^{n}\) via homogeneity and \(\bar{\nabla}h\) is the Euclidean gradient of \(h\) in \(\mathbb{R}^{n},\) and \(\widetilde{V}_{q-1}([h],\bar{\nabla}h)\) is the \((q-1)\)th dual quermassintegral of the Wulff-shape \([h]\) of \(h\) with repect to the point \(\bar{\nabla}h.\)
In their paper [16], Lutwak, Xi, Yang and Zhang gave a sufficient condition for the symmetric case of the chord log-Minkowski problem by studying the delicate concentration properties of cone-chord measures. Shortly thereafter, Xi, Yang, Zhang and Zhao [26] solved the \(L_{p}\) chord Minkowski problem for \(p>1\) and for \(0<p<1\) under the symmetric condition, where the origin symmetry played a crucial role in the case of \(0\leqslant p<1.\) More recently, Xi, Guo and Zhao solved the \(L_{p}\) chord Minkowski problem when \(0\leqslant p<1,\) without any symmetry assumptions. We solved the \(L_{p}\) chord Minkowski problem when \(p<0,\) without any symmetry assumptions: in the case of discrete measures whose supports are in general position, for negative \(p\) and \(q>0;\) and, for general Borel measures with a density, under the additional restrictions \(p\in(-n,0)\) and \(n+1>q\geqslant 1.\)
The aim of this paper is to establish some nonuniqueness for the \(L_{p}\) chord Minkowski problem for \(p<0<q.\) As far as we know, this is the first result towards uniqueness and nonuniqueness of solution to the \(L_{p}\) chord Minkowski problem. Our main theorem is as following:
**Theorem 1.1**.: _For \(p<0,2<q<n+1,\) there exists a positive function \(f\in C^{\infty}(\mathbb{S}^{n-1})\) such that (1.2) admits at least two different solutions._
**Remark 1.2**.: _We have to note that the method in this paper is inspired by X.-J. Wang et al. [14] and Q.-R. Li et al. [18]. But the situation in our case is more complicated, since the dual quermassintegral \(\widetilde{V}_{q-1}(K,z)\) is a nonlocal term in the integrand of \(I_{q}(K).\) And we also note that the \(q\)th dual quermassintegral \(\widetilde{V}_{q}(K,z)\) of \(K\) with respect to \(z\in\partial K\) is more delicate than the \(q\)th dual quermassintegral \(\widetilde{V}_{q}(K)\) of \(K\in\mathcal{K}_{o}^{n}.\)_
To prove Theorem 1.1, we need to find at least two different solutions. One is constructed from the solution of the classical Minkowski problem with a special right-hand side, such that the solution to (1.2) obtained in this way has its \(q\)th chord integral as small as we want. The other solution comes from a new existence result for equation (1.2) obtained by the variational method. Before we state the result, we first need to introduce some notation.
**Definition 1.3**.: _A function \(f:\mathbb{S}^{n-1}\rightarrow\mathbb{R}\) is called rotationally symmetric if it satisfies_
\[f(Ax^{\prime},x_{n})=f(x^{\prime},x_{n}),\quad\forall x=(x^{\prime},x_{n})\in \mathbb{S}^{n-1}\text{ and }A\in O(n-1),\]
_where \(x^{\prime}=(x_{1},\cdots,x_{n-1})\) and \(O(\cdot)\) denotes the orthogonal group._
We can now state our new existence result for solutions to (1.2).
**Theorem 1.4**.: _For \(p<0,1\leqslant q<n+1,\) and \(\alpha,\beta\) satisfying_
\[\alpha>\max\{1-n,1-n+\frac{2-n-q}{n+q-1}p\},\] \[\beta>\max\{-1,-1-\frac{p}{n+q-1}\}.\]
_If \(f\) is a nonnegative, rotationally symmetric, even function on \(\mathbb{S}^{n-1}\) and satisfies_
\[f\leqslant C\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta},\|f\|_{ L^{1}(\mathbb{S}^{n-1})}>0 \tag{1.3}\]
_for some positive constant \(C.\) Then there exists a rotationally symmetric even solution to (1.2). Moreover, we have its chord integral as follows:_
\[I_{q}(K)\geqslant c>0,\]
_where \(c\) depends only on \(n,p,q,\alpha,\beta.\)_
**Remark 1.5**.: _In our previous paper, we solved the \(L_{p}\) chord Minkowski problem for general Borel measures with a density \(f\), without any symmetry assumptions, but required \(p\in(-n,0)\) and \(n+1>q\geqslant 1.\) Here, when \(f\) satisfies the conditions in Theorem 1.4, we can obtain the existence result for all \(p<0.\) While the proofs of the two existence results follow similar approaches, we would like to emphasize that the proof of the \(C^{0}\) estimate is totally different and more difficult._
The remainder of this paper is structured as follows. In Section 2, we present fundamental concepts in the theory of convex bodies and integral geometry. In Section 3, we will construct a smooth, positive function \(f,\) which is rotationally symmetric in \(x_{1},\cdots,x_{n-1}\) and even, such that (1.2) has a solution with small chord integral. In Section 4, we establish a variational solution and prove Theorem 1.4. In Section 5, we prove Theorem 1.1 based on the existence result in Theorem 1.4 and the solution constructed in Section 3.
## 2 Preliminaries
In this section, our objective is to establish notations and gather relevant results from the literature that will be necessary for the subsequent analysis.
We denote \(x\cdot y\) as the standard inner product of \(x,y\in\mathbb{R}^{n},\) and write \(|x|=\sqrt{x\cdot x}\) for the Euclidean norm of \(x.\) We write \(\mathbb{S}^{n-1}\) as \((n-1)\)-dimension unit sphere of \(\mathbb{R}^{n},\) and denote \(\mathcal{H}^{n-1}\) as the \((n-1)\)-dimensional spherical Lebesgue measure. Denote \(\mathcal{K}^{n}\) for the collection of all convex bodies in \(\mathbb{R}^{n}\) and \(\mathcal{K}_{o}^{n}\) for the subset of \(\mathcal{K}^{n}\) that contains the origin in the interior.
Let \(\Omega\subset\mathbb{S}^{n-1}\) be a closed set of the unit sphere, not lying in a closed hemisphere, and a positive continuous function \(h:\mathbb{S}^{n-1}\rightarrow\mathbb{R}\) is given.(Only the values of h on \(\Omega\) will be needed, but without loss of generality we may assume that \(h\) is defined on all of \(\mathbb{S}^{n-1}.\)) The Wulff shape of \(h\) is defined by
\[[h]=\{x\in\mathbb{R}^{n}:x\cdot u\leqslant h(u)\text{ for all }u\in\mathbb{S}^{n-1}\}.\]
For \(K\in\mathcal{K}^{n},\) \(h(v)=h_{K}(v)=\max\{v\cdot x:x\in K\}\) and \(\rho(u)=\rho_{K}(u)=\max\{\lambda:\lambda u\in K\}\) denote the support function and the radial function of the convex body \(K,\) both defined on \(\mathbb{S}^{n-1}.\) We write the support hyperplane of \(K\) with the outer unit normal \(v\) as
\[H_{K}(v)=\left\{x\in\mathbb{R}^{n}:x\cdot v=h(v)\right\},\]
the half-space \(H^{-}(K,v)\) in direction \(v\) is defined by
\[H_{K}^{-}(v)=\left\{x\in\mathbb{R}^{n}:x\cdot v\leqslant h(v)\right\}.\]
Denote \(\partial K\) as the boundary of \(K,\) that is, \(\partial K=\{\rho_{K}(u)u:u\in\mathbb{S}^{n-1}\}.\) The spherical image \(\nu=\nu_{K}:\partial K\rightarrow\mathbb{S}^{n-1}\) is given by
\[\nu(x)=\{v\in\mathbb{S}^{n-1}:x\in H_{K}(v)\},\]
let \(\sigma_{K}\subset\partial K\) denote the set of all points \(x\in\partial K,\) such that the set \(\nu_{K}(x)\) contains more than one element. Fortunately, we have \(\mathcal{H}^{n-1}(\sigma_{K})=0\) (see [24, page 84] ) and the radial
Gauss image \(\alpha=\alpha_{K}\) and the reverse radial Gauss image \(\alpha^{*}=\alpha_{K}^{*}\) are respectively defined by
\[\alpha(\omega)=\{\nu(\rho_{K}(u)u):u\in\omega\},\qquad\alpha^{*}(\omega)=\{u\in\mathbb{S}^{n-1}:\nu(\rho_{K}(u)u)\in\omega\}.\]
Let \(K\in{\mathcal{K}}^{n}\), for \(z\in\operatorname{int}K\) and \(q\in{\mathbb{R}}\), the \(q\) th dual quermassintegral \(\widetilde{V}_{q}(K,z)\) of \(K\) with respect to \(z\) is defined by
\[\widetilde{V}_{q}(K,z)=\frac{1}{n}\int_{S^{n-1}}\rho_{K,z}(u)^{q}\ \mathrm{d}u \tag{2.1}\]
where \(\rho_{K,z}(u)=\max\{\lambda>0:z+\lambda u\in K\}\) is the radial function of \(K\) with respect to \(z\). When \(z\in\partial K,\widetilde{V}_{q}(K,z)\) is defined in the way that the integral is only over those \(u\in S^{n-1}\) such that \(\rho_{K,z}(u)>0\). In another word,
\[\widetilde{V}_{q}(K,z)=\frac{1}{n}\int_{\rho_{K,z}(u)>0}\rho_{K,z}(u)^{q}\ \mathrm{d}u,\,\text{whenever}\ z\in\partial K.\]
In this case, for \({\mathcal{H}}^{n-1}\)-almost all \(z\in\partial K\), we have
\[\widetilde{V}_{q}(K,z)=\frac{1}{2n}\int_{S^{n-1}}X_{K}(z,u)^{q}\ \mathrm{d}u\]
where the parallel \(X\)-ray of \(K\) is the nonnegative function on \({\mathbb{R}}^{n}\times S^{n-1}\) defined by
\[X_{K}(z,u)=|K\cap(z+{\mathbb{R}}u)|,\quad z\in{\mathbb{R}}^{n},\quad u\in S^{ n-1}.\]
When \(q>0\), the dual quermassintegral is the Riesz potential of the characteristic function, that is,
\[\widetilde{V}_{q}(K,z)=\frac{q}{n}\int_{K}|x-z|^{q-n}\ \mathrm{d}x\]
Note that this immediately allows for an extension of \(\widetilde{V}_{q}(K,\cdot)\) to \({\mathbb{R}}^{n}\). An equivalent definition via radial function can be found in [16]. By a change of variables, we obtain:
\[\widetilde{V}_{q}(K,z)=\frac{q}{n}\int_{K-z}|y|^{q-n}\ \mathrm{d}y\]
since when \(q>0\), the integrand \(|y|^{q-n}\) being locally integrable, it can be inferred that the dual quermassintegral \(\widetilde{V}_{q}(K,z)\) is continuous in \(z\). Let \(K\in{\mathcal{K}}^{n}\). The \(X\)-ray \(X_{K}(x,u)\) and the radial function \(\rho_{K,z}(u)\) are related as follows:
\[X_{K}(x,u)=\rho_{K,z}(u)+\rho_{K,z}(-u),\quad\text{ when }\quad K\cap(x+{ \mathbb{R}}u)=K\cap(z+{\mathbb{R}}u)\neq\varnothing.\]
When \(z\in\partial K\), then either \(\rho_{K,z}(u)=0\) or \(\rho_{K,z}(-u)=0\) for almost all \(u\in S^{n-1}\), and thus
\[X_{K}(z,u)=\rho_{K,z}(u),\quad\text{ or }X_{K}(z,u)=\rho_{K,z}(-u),\quad z\in \partial K,\]
for almost all \(u\in S^{n-1}\). Then, the chord integral \(I_{q}(K)\) can be represented as follows:
\[I_{q}(K)=\frac{1}{n\omega_{n}}\int_{S^{n-1}}\int_{u^{\perp}}X_{K}(x,u)^{q}\ \mathrm{d}x\ \mathrm{d}u,\quad q\geq 0.\]
An elementary property of the functional \(I_{q}\) is its homogeneity. If \(K\in\mathcal{K}^{n}\), \(q\geq 0\), and \(t>0\), then

\[I_{q}(tK)=t^{n+q-1}I_{q}(K).\]
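Indeed, this can be checked directly from the integral representation above: since \(X_{tK}(x,u)=t\,X_{K}(x/t,u)\), the substitution \(x=ty\) over the \((n-1)\)-dimensional subspace \(u^{\perp}\) gives

\[I_{q}(tK)=\frac{1}{n\omega_{n}}\int_{S^{n-1}}\int_{u^{\perp}}\bigl(t\,X_{K}(x/t,u)\bigr)^{q}\,\mathrm{d}x\,\mathrm{d}u=\frac{t^{q}\,t^{n-1}}{n\omega_{n}}\int_{S^{n-1}}\int_{u^{\perp}}X_{K}(y,u)^{q}\,\mathrm{d}y\,\mathrm{d}u=t^{n+q-1}I_{q}(K).\]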
By compactness of \(K\), it is simple to see that the chord integral \(I_{q}(K)\) is finite whenever \(q\geq 0\). Let \(K\in\mathcal{K}^{n}\) and \(q>0\); the chord measure \(F_{q}(K,\cdot)\) is a finite Borel measure on \(S^{n-1}\), which can be expressed as:
\[F_{q}(K,\eta)=\frac{2q}{\omega_{n}}\int_{\nu_{K}^{-1}(\eta)}\widetilde{V}_{q-1}(K,z)\,\mathrm{d}\mathcal{H}^{n-1}(z),\quad\text{ for each Borel }\eta\subset S^{n-1}.\]
The Gauss map \(\nu_{K}\) of \(K\) is defined almost everywhere on \(\partial K\) with respect to the \((n-1)\)-dimensional Hausdorff measure, owing to the convexity of \(K\). The chord measure \(F_{q}(K,\cdot)\) is significant as it is obtained by differentiating the chord integral \(I_{q}\) in a certain sense, as shown in (2.2). It is evident that the chord measure \(F_{q}(K,\cdot)\) is absolutely continuous with respect to the surface area measure \(S_{n-1}(K,\cdot)\). In [16, Theorem 4.3], it was demonstrated that:
\[I_{q}(K)=\frac{1}{n+q-1}\int_{S^{n-1}}h_{K}(v)\,\mathrm{d}F_{q}(K,v).\]
When \(q>0\), a useful integral formula demonstrated in [16, Lemma 5.3] is
\[2n\int_{\partial K}\widetilde{V}_{q-1}(K,z)g\left(\nu_{K}(z)\right)\mathrm{d}\mathcal{H}^{n-1}(z)=\int_{S^{n-1}}\int_{\partial K}X_{K}(z,u)^{q-1}g\left(\nu_{K}(z)\right)\mathrm{d}\mathcal{H}^{n-1}(z)\,\mathrm{d}u,\]
for any \(g\in C\left(S^{n-1}\right)\). Therefore, for each \(K\in\mathcal{K}^{n}\), we have
\[\int_{S^{n-1}}g(v)\,\mathrm{d}F_{q}(K,v)=\frac{q}{n\omega_{n}}\int_{S^{n-1}}\int_{\partial K}X_{K}(z,u)^{q-1}g\left(\nu_{K}(z)\right)\mathrm{d}\mathcal{H}^{n-1}(z)\,\mathrm{d}u=\frac{q}{n\omega_{n}}\int_{S^{n-1}}\int_{S^{n-1}}X_{K}\left(\rho_{K}(w)w,u\right)^{q-1}h_{K}\left(\alpha_{K}(w)\right)^{-1}\rho_{K}(w)^{n}g\left(\alpha_{K}(w)\right)\mathrm{d}w\,\mathrm{d}u.\]
Here, we denote \(\rho_{K}=\rho_{K,o}\). For each \(p\in\mathbb{R}\) and \(K\in\mathcal{K}_{o}^{n}\), the \(L_{p}\) chord measure \(F_{p,q}(K,\cdot)\) is defined as follows:
\[\mathrm{d}F_{p,q}(K,v)=h_{K}(v)^{1-p}\ \mathrm{d}F_{q}(K,v)\]
and we have an important property of \(F_{p,q}\), its homogeneity, namely
\[F_{p,q}(tK,\cdot)=t^{n+q-p-1}F_{p,q}(K,\cdot)\]
for each \(t>0\).
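This exponent can be read off from the definitions: \(h_{tK}=t\,h_{K}\), the dual quermassintegral \(\widetilde{V}_{q-1}\) is homogeneous of degree \(q-1\), and the \((n-1)\)-dimensional Hausdorff measure on the boundary scales with degree \(n-1\), so that \(F_{q}(tK,\cdot)=t^{n+q-2}F_{q}(K,\cdot)\) and, schematically,

\[\mathrm{d}F_{p,q}(tK,v)=h_{tK}(v)^{1-p}\,\mathrm{d}F_{q}(tK,v)=t^{1-p}\,t^{n+q-2}\,h_{K}(v)^{1-p}\,\mathrm{d}F_{q}(K,v)=t^{n+q-p-1}\,\mathrm{d}F_{p,q}(K,v).\]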
From Theorem 2.2 in [26], we know that if \(K_{i}\in\mathcal{K}_{o}^{n}\to K_{0}\in\mathcal{K}_{o}^{n},\) then the chord measure \(F_{q}(K_{i},\cdot)\) converges to \(F_{q}(K,\cdot)\) weakly. Hence, one can immediately obtain that
\[F_{p,q}(K_{i},\cdot)\to F_{p,q}(K,\cdot)\text{ weakly.}\]
It was shown in [16] that the differential of the chord integral \(I_{q}\) with respect to the \(L_{p}\) Minkowski combinations leads to the \(L_{p}\) chord measure: for \(p\neq 0,\)
\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{|}_{t=0}I_{q}\left(K+_{p}t\cdot L\right)= \frac{1}{p}\int_{S^{n-1}}h_{L}^{p}(v)\mathrm{d}F_{p,q}(K,v),\]
where \(K+_{p}t\cdot L\) is the \(L_{p}\) Minkowski combination between \(K\) and \(L.\)
Since we will use a variational method to solve the \(L_{p}\) chord Minkowski problem, the variational formula for chord integral is crucial and it is the key to transforming the Minkowski problem into the Lagrange equation of an optimization problem.
**Theorem 2.1** (Theorem 5.5 in [16]).: _Let \(q>0,\) and let \(\Omega\) be a compact subset of \(\mathbb{S}^{n-1}\) that is not contained in any closed hemisphere. Suppose that \(h_{0}:\Omega\rightarrow(0,\infty)\) and \(g:\Omega\rightarrow\mathbb{R}\) are continuous, and that \(h_{t}:\Omega\rightarrow(0,\infty)\) is a family of continuous functions given by_
\[h_{t}=h_{0}+tg+o(t,\cdot),\]
_for each \(t\in(-\delta,\delta)\) for some \(\delta>0.\) Here \(o(t,\cdot)\in C(\Omega)\) and \(o(t,\cdot)/t\) tends to \(0\) uniformly on \(\Omega\) as \(t\to 0.\) Let \(K_{t}\) be the Wulff shape generated by \(h_{t}\) and \(K\) be the Wulff shape generated by \(h_{0}.\) Then,_
\[\frac{d}{dt}\big{|}_{t=0}I_{q}(K_{t})=\int_{\Omega}g(v)dF_{q}(K,v). \tag{2.2}\]
See also [26, Theorem 2.1].
To solve the maximization problem posed in Section 4, delicate estimates for chord integrals are needed. We collect the following lemma obtained in [16].
**Lemma 2.2** (lemma 7.3 [16]).: _Suppose \(q\in(1,n+1)\) is not an integer. If \(E\) is the ellipsoid in \(\mathbb{R}^{n}\) given by_
\[E=E\left(a_{1},\ldots,a_{n}\right)=\left\{x\in\mathbb{R}^{n}:\frac{(x\cdot e_{ 1})^{2}}{a_{1}^{2}}+\cdots+\frac{(x\cdot e_{n})^{2}}{a_{n}^{2}}\leq 1\right\}\]
_with \(0<a_{1}\leq a_{2}\leq\cdots\leq a_{n}\leq 1\), then for any real \(q\) and integer \(m\) such that \(1\leq m<q<m+1\leq n+1\),_
\[I_{q}(E)\leq c_{q,m,n}\left(a_{1}\cdots a_{m}\right)^{2}a_{m}^{q-m-1}a_{m+1} \cdots a_{n}\]
_where \(c_{q,m,n}\) is a constant that depends only on \(q\) and \(n\) (since \(m=\lfloor q\rfloor\) ) and is given by_
\[c_{q,m,n}=\begin{cases}\frac{2^{q-n+2}q(q-1)\omega_{n-1}^{2}}{(q-n)(q-n+1)n \omega_{n}}&m=n\\ \frac{2^{n-m+3}q(q-1)(n-m)\omega_{m-1}^{2}\omega_{n-m}^{2}}{(m+1-q)(q-m)(q-m+1 )n\omega_{n}}&m<n.\end{cases}\]
As for \(q\in\{1,\cdots,n\}\), we shall deduce the same form of estimate. First, we recall a significant inequality presented in the following lemma, which was obtained in [16]:
**Lemma 2.3** (claim 8.1 [16]).: _If \(K\in\mathcal{K}_{o}^{n}\) and \(1\leqslant r<s\), then_
\[I_{r}(K)\leqslant c(s,r)V(K)^{1-\frac{r-1}{s-1}}I_{s}(K)^{\frac{r-1}{s-1}},\]
_with \(c(s,r)=rs^{-\frac{r-1}{s-1}}\)._
This inequality follows from a simple argument using Jensen's inequality; a sketch is given below. From it, one immediately obtains the desired estimate for chord integrals when \(q\) is an integer.
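For the reader's convenience, the Jensen step can be sketched as follows (we do not track the normalizing constant \(c(s,r)\) here; see [16] for the precise argument). By Fubini's theorem, \(\mathrm{d}\mu=X_{K}(x,u)\,\mathrm{d}x\,\mathrm{d}u/(n\omega_{n}V(K))\) defines a probability measure, and since \(t\mapsto t^{(s-1)/(r-1)}\) is convex for \(1<r<s\),

\[\frac{I_{r}(K)}{V(K)}=\int X_{K}^{\,r-1}\,\mathrm{d}\mu\leqslant\Bigl(\int X_{K}^{\,s-1}\,\mathrm{d}\mu\Bigr)^{\frac{r-1}{s-1}}=\Bigl(\frac{I_{s}(K)}{V(K)}\Bigr)^{\frac{r-1}{s-1}},\]

which rearranges to an inequality of the stated form.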
## 3 Structural solution
Let \(0<\epsilon<\frac{1}{2}\), \(M_{\epsilon}\in GL(n)\) be given by
\[M_{\epsilon}=diag(\epsilon,\cdots,\epsilon,1)=\left(\begin{array}{cc} \epsilon I&0\\ 0&1\end{array}\right),\]
where \(I\) is the unit \((n-1)\times(n-1)\) matrix.
Consider the following equation:
\[\det\left(\nabla^{2}h+hI\right)(x)=\left|x^{\prime}\right|^{\alpha}\left|x_{n }\right|^{\beta}\left|M_{\epsilon}x\right|^{\gamma-\beta},\quad x\in\mathbb{ S}^{n-1}. \tag{3.1}\]
We can choose appropriate indices \(\alpha\), \(\beta\) and \(\gamma\): in this section we require \(2<q\leqslant n+1\), \(-1<\gamma<-1-\frac{p}{n+q-1}\), and \(\alpha\) and \(\beta\) nonnegative and satisfying the assumptions of Theorem 1.4. As the right-hand side of (3.1) is even with respect to the origin and satisfies the necessary condition
\[\int_{S^{n-1}}x_{k}\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta}\left|M_{\epsilon}x\right|^{\gamma-\beta}\,\mathrm{d}x=0\]
for all \(1\leqslant k\leqslant n\), this classical Minkowski problem admits a solution \(h_{\epsilon}\), which is unique up to translation, by [8].
Let \(h_{\epsilon}\) be the unique solution such that its associated convex body \(K_{h_{\epsilon}}\) is centred at the origin. We have to note that \(K_{h_{\epsilon}}\) is rotationally symmetric and even, and that the positive constants \(C,\tilde{C}\) and \(c_{i},C_{i}\) in the following context depend only on \(n,p,q,\alpha,\beta\), and \(\gamma\) but are independent of \(\epsilon\).
**Lemma 3.1**.: _Let \(\gamma>-1.\) There exists a positive constant \(C\), independent of \(\epsilon\in(0,\frac{1}{2})\), such that_
\[C^{-1}\leq h_{\epsilon}\leq C\text{ on }\quad\mathbb{S}^{n-1}. \tag{3.2}\]
Proof.: By (3.1) and \(-1<\gamma\), one has
\[\operatorname{area}\left(\partial K_{h_{\epsilon}}\right) =\int_{\mathbb{S}^{n-1}}\det\left(\nabla^{2}h+hI\right)(x)dx\] \[=\int_{\mathbb{S}^{n-1}}\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta}\left(\epsilon^{2}\left|x^{\prime}\right|^{2}+\left|x_{n}\right|^{2}\right)^{\frac{\gamma-\beta}{2}}dx\] \[\leqslant\int_{\mathbb{S}^{n-1}}\left(\epsilon^{2}\left|x^{\prime}\right|^{2}+\left|x_{n}\right|^{2}\right)^{\gamma/2}dx\] \[\leqslant\left\{\begin{array}{ll}\int_{\mathbb{S}^{n-1}}\left|x_{n}\right|^{\gamma}\;\;\mathrm{d}x&\text{ when }-1<\gamma<0\\ \int_{\mathbb{S}^{n-1}}\;\;\mathrm{d}x&\text{ when }0\leqslant\gamma\end{array}\right.\] \[\leq C,\]
where \(C\) is a positive constant depending on \(n,\gamma\) but independent of \(\epsilon\). By John's lemma,
\[\frac{1}{n}E_{\epsilon}\subset K_{h_{\epsilon}}\subset E_{\epsilon}\]
where \(E_{\epsilon}\) is the minimum ellipsoid of \(K_{h_{\epsilon}}\). Then, this implies that
\[\frac{1}{n}h_{E_{\epsilon}}\leq h_{\epsilon}\leq h_{E_{\epsilon}}\quad\text{ on }\quad\mathbb{S}^{n-1},\]
where \(h_{E_{\epsilon}}\) is the support function of \(E_{\epsilon}\). Since \(K_{h_{\epsilon}}\) is rotationally symmetric and even, we have that \(E_{\epsilon}\) is also rotationally symmetric and even and centred at the origin. Let \(r_{1\epsilon},\cdots,r_{n\epsilon}\) be the lengths of the semi-axes of \(E_{\epsilon}\) along the \(x_{1},\cdots,x_{n}\) axes. Then, \(r_{1\epsilon}=\cdots=r_{n-1;\epsilon}\) and
\[h_{E_{\epsilon}}(x)=\sqrt{r_{1\epsilon}^{2}\left|x^{\prime}\right|^{2}+r_{n \epsilon}^{2}x_{n}^{2}},\quad\forall x\in\mathbb{S}^{n-1}.\]
By (3.1), one has
\[\operatorname{Vol}(K_{h_{\epsilon}}) =\frac{1}{n}\int_{\mathbb{S}^{n-1}}h_{\epsilon}\det\left(\nabla ^{2}h_{\epsilon}+h_{\epsilon}I\right)(x)dx\] \[=\frac{1}{n}\int_{\mathbb{S}^{n-1}}h_{\epsilon}\left|x^{\prime} \right|^{\alpha}\left|x_{n}\right|^{\beta}\left(\epsilon^{2}\left|x^{\prime} \right|^{2}+\left|x_{n}\right|^{2}\right)^{\frac{\gamma-\beta}{2}}dx\] \[\geqslant\frac{1}{n}\int_{\mathbb{S}^{n-1}}h_{\epsilon}\left|x^{ \prime}\right|^{\alpha}\left|x_{n}\right|^{\beta}\left(\epsilon^{2}\left|x^{ \prime}\right|^{2}+\left|x_{n}\right|^{2}\right)^{\frac{\gamma}{2}}dx\] \[\geqslant\frac{1}{n}\left\{\begin{array}{ll}\int_{\mathbb{S}^ {n-1}}h_{\epsilon}(x)\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta }\mathrm{d}x&\text{ when }\quad-1<\gamma<0,\\ \int_{\mathbb{S}^{n-1}}h_{\epsilon}(x)\left|x^{\prime}\right|^{\alpha}\left|x _{n}\right|^{\beta+\gamma}\;\;\mathrm{d}x&\text{ when }\quad\gamma\geqslant 0.\end{array}\right.\]
Since
\[h_{\epsilon}(x)\geq\frac{1}{\sqrt{2}n}\left(r_{1\epsilon}\left|x^{\prime} \right|+r_{n\epsilon}\left|x_{n}\right|\right),\quad\forall x\in\mathbb{S}^{n- 1}.\]
Recall that \(\alpha,\beta\) are nonnegative, then we have
\[\operatorname{Vol}(K_{h_{\epsilon}}) \geqslant\frac{1}{n}\left\{\begin{array}{ll}\int_{\mathbb{S}^ {n-1}}\frac{1}{\sqrt{2}n}\left(r_{1\epsilon}\left|x^{\prime}\right|+r_{n \epsilon}\left|x_{n}\right|\right)\left|x^{\prime}\right|^{\alpha}\left|x_{n} \right|^{\beta}\mathrm{d}x&\text{ when }\quad-1<\gamma<0,\\ \int_{\mathbb{S}^{n-1}}\frac{1}{\sqrt{2}n}\left(r_{1\epsilon}\left|x^{\prime} \right|+r_{n\epsilon}\left|x_{n}\right|\right)\left|x^{\prime}\right|^{ \alpha}\left|x_{n}\right|^{\beta+\gamma}\;\;\mathrm{d}x&\text{ when }\quad\gamma\geqslant 0.\end{array}\right.\] \[\geqslant c(r_{1\epsilon}+r_{n\epsilon}),\]
therefore, by the isoperimetric inequality, we obtain that
\[\operatorname{vol}\left(K_{h_{\epsilon}}\right)\leq C_{n}\operatorname{area} \left(\partial K_{h_{\epsilon}}\right)^{\frac{n}{n-1}}\leq C,\]
and hence
\[\max h_{\epsilon}<r_{1\epsilon}+r_{n\epsilon}\leqslant c\text{Vol}(K_{h_{ \epsilon}})\leqslant C. \tag{3.3}\]
On the other hand, we have
\[\operatorname{vol}\left(K_{h_{\epsilon}}\right) \leq\operatorname{vol}\left(E_{\epsilon}\right)\] \[=\kappa_{n}r_{1\epsilon}^{n-1}r_{n\epsilon}\] \[\leq C_{n}\left(\max h_{\epsilon}\right)^{n-1}\cdot\min h_{\epsilon}\]
where \(\kappa_{n}\) is the volume of the unit ball in \(\mathbb{R}^{n}\). Since
\[\max h_{\epsilon}\leqslant c\text{Vol}(K_{h_{\epsilon}})\leq C\left(\max h_{ \epsilon}\right)^{n-1}\cdot\min h_{\epsilon}\]
namely
\[1\leq C\left(\max h_{\epsilon}\right)^{n-2}\cdot\min h_{\epsilon}\]
By (3.3), we obtain the lower bound of \(h_{\epsilon}\). This completes the proof of the lemma.
Define
\[H_{\epsilon}(x):=\epsilon^{\frac{n-p-4-\gamma+q}{n-p+q-1}}\left|M_{\epsilon}^ {-1}x\right|\cdot h_{\epsilon}\left(\frac{M_{\epsilon}^{-1}x}{\left|M_{ \epsilon}^{-1}x\right|}\right),\quad x\in\mathbb{S}^{n-1}. \tag{3.4}\]
**Lemma 3.2**.: _The function \(H_{\epsilon}\) satisfies the equation_
\[\text{det}(\nabla^{2}H_{\epsilon}+H_{\epsilon})=\frac{H_{\epsilon}^{p-1}f_{ \epsilon}}{\bar{V}_{q-1}([H_{\epsilon}],\bar{\nabla}H_{\epsilon})},\text{ on }\mathbb{S}^{n-1},\]
_where_
\[f_{\epsilon}(x):=h_{\epsilon}\left(x_{\epsilon}\right)^{1-p}\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta}\left|N_{\epsilon}x\right|^{-\gamma-\alpha-n-p}\frac{1}{n}\int_{S^{n-1}}\left|N_{\epsilon}y\right|^{q-1-n}\rho_{K_{h_{\epsilon}},\bar{\nabla}h_{\epsilon}(x_{\epsilon})}^{q-1}(y)dy,\]
_and_
\[x_{\epsilon}=\frac{M_{\epsilon}^{-1}x}{\left|M_{\epsilon}^{-1}x\right|}\quad \text{ and }\quad N_{\epsilon}=\epsilon M_{\epsilon}^{-1}=\left(\begin{array}{cc}I& 0\\ 0&\epsilon\end{array}\right).\]
Proof.: Let
\[u_{\epsilon}(x):=\left|M_{\epsilon}^{-1}x\right|\cdot h_{\epsilon}\left( \frac{M_{\epsilon}^{-1}x}{\left|M_{\epsilon}^{-1}x\right|}\right)\]
By the invariance of the quantity \(h_{\epsilon}^{n+1}\det\left(\nabla^{2}h_{\epsilon}+h_{\epsilon}I\right)\) under linear transformations, see [7, Proposition 7.1] or formula (2.12) in [19], we have
\[\det\left(\nabla^{2}u_{\epsilon}+u_{\epsilon}I\right)(x)=\det\left(\nabla^{2}h_ {\epsilon}+h_{\epsilon}I\right)\left(\frac{M_{\epsilon}^{-1}x}{\left|M_{ \epsilon}^{-1}x\right|}\right)\cdot\frac{\left(\det M_{\epsilon}^{-1}\right)^{ 2}}{\left|M_{\epsilon}^{-1}x\right|^{n+1}}. \tag{3.5}\]
Note that
\[x_{\epsilon}=\frac{M_{\epsilon}^{-1}x}{\left|M_{\epsilon}^{-1}x\right|}=\frac{ \left(\epsilon^{-1}x^{\prime},x_{n}\right)}{\left|M_{\epsilon}^{-1}x\right|}= \frac{\left(x^{\prime},\epsilon x_{n}\right)}{\left|N_{\epsilon}x\right|}.\]
By (3.1), we then have
\[\det\left(\nabla^{2}h_{\epsilon}+h_{\epsilon}I\right)\left(\frac{ M_{\epsilon}^{-1}x}{\left|M_{\epsilon}^{-1}x\right|}\right) =\frac{\left|x^{\prime}\right|^{\alpha}}{\left|N_{\epsilon}x\right| ^{\alpha}}\cdot\frac{\left|\epsilon x_{n}\right|^{\beta}}{\left|N_{\epsilon} x\right|^{\beta}}\cdot\left(\frac{1}{\left|M_{\epsilon}^{-1}x\right|}\right)^{ \gamma-\beta}\] \[=\frac{\left|x^{\prime}\right|^{\alpha}}{\left|N_{\epsilon}x \right|^{\alpha}}\cdot\frac{\left|\epsilon x_{n}\right|^{\beta}}{\left|N_{ \epsilon}x\right|^{\beta}}\cdot\left(\frac{\epsilon}{\left|N_{\epsilon}x\right| }\right)^{\gamma-\beta}\] \[=\epsilon^{\gamma}\left|x^{\prime}\right|^{\alpha}\left|x_{n} \right|^{\beta}\left|N_{\epsilon}x\right|^{-\gamma-\alpha}.\]
Applying this into (3.5), we obtain
\[\det\left(\nabla^{2}u_{\epsilon}+u_{\epsilon}I\right)(x) =\epsilon^{\gamma}\left|x^{\prime}\right|^{\alpha}\left|x_{n} \right|^{\beta}\left|N_{\epsilon}x\right|^{-\gamma-\alpha}\cdot\frac{\left( \epsilon^{1-n}\right)^{2}\epsilon^{n+1}}{\left|N_{\epsilon}x\right|^{n+1}}\] \[=\epsilon^{\gamma+3-n}\left|x^{\prime}\right|^{\alpha}\left|x_{n }\right|^{\beta}\left|N_{\epsilon}x\right|^{-\gamma-\alpha-n-1}.\]
By the definition of \(u_{\epsilon}\), we have
\[u_{\epsilon}(x)=\epsilon^{-1}\left|N_{\epsilon}x\right|\cdot h_{ \epsilon}\left(x_{\epsilon}\right),\] \[\left(\bar{\nabla}u_{\epsilon}\right)(x)=M_{\epsilon}^{-T}\left( \bar{\nabla}h_{\epsilon}\right)(x_{\epsilon})=\epsilon^{-1}N_{\epsilon} \left(\bar{\nabla}h_{\epsilon}\right)(x_{\epsilon})\]
then
\[\rho_{K_{u_{\epsilon}},\bar{\nabla}u_{\epsilon}(x)}(y) =\max\{\lambda\in\mathbb{R}:\lambda y\in K_{u_{\epsilon}}-\bar{ \nabla}u_{\epsilon}(x)\}\] \[=\max\{\lambda\in\mathbb{R}:\lambda y\in M_{\epsilon}^{-1}K_{h_{ \epsilon}}-M_{\epsilon}^{-1}M_{\epsilon}\bar{\nabla}u_{\epsilon}(x)\}\] \[=\max\{\lambda\in\mathbb{R}:M_{\epsilon}^{-1}M_{\epsilon}\lambda y \in M_{\epsilon}^{-1}(K_{h_{\epsilon}}-M_{\epsilon}\bar{\nabla}u_{\epsilon}(x))\}\] \[=\rho_{K_{h_{\epsilon}},M_{\epsilon}(\bar{\nabla}u_{\epsilon}(x)) }(M_{\epsilon}(y))\] \[=\rho_{K_{h_{\epsilon}},\bar{\nabla}h_{\epsilon}(x_{\epsilon})) }(M_{\epsilon}(y))\]
Hence
\[\widetilde{V}_{q-1}\left(K_{u_{\epsilon}},\bar{\nabla}u_{\epsilon} (x)\right) =\frac{1}{n}\int_{\mathbb{S}^{n-1}}\left(\rho_{K_{u_{\epsilon}}, \bar{\nabla}u_{\epsilon}(x)}(u)\right)^{q-1}du\] \[=\frac{1}{n}\int_{\mathbb{S}^{n-1}}\left(|M_{\epsilon}u|^{-1} \cdot\rho_{K_{h_{\epsilon}},\bar{\nabla}h_{\epsilon}(x_{\epsilon})}\left( \frac{M_{\epsilon}(u)}{|M_{\epsilon}(u)|}\right)\right)^{q-1}du\] \[=\frac{\epsilon^{2-q}}{n}\int_{\mathbb{S}^{n-1}}|N_{\epsilon}y|^{ q-1-n}\,\rho_{K_{h_{\epsilon}},\bar{\nabla}h_{\epsilon}(x_{\epsilon})}^{q-1}(y)dy,\]
where we apply the integration by substitution
\[y=\frac{M_{\epsilon}(u)}{|M_{\epsilon}(u)|}=\frac{\epsilon u^{ \prime},u_{n}}{|M_{\epsilon}(u)|}\] \[u=\frac{M_{\epsilon}^{-1}y}{\left|M_{\epsilon}^{-1}y\right|}= \frac{\frac{1}{\epsilon}(y^{\prime},\epsilon y_{n})}{\left|M_{\epsilon}^{-1}y \right|}.\]
then
\[u_{\epsilon}^{1-p}\widetilde{V}_{q-1}([u_{\epsilon}],\bar{\nabla }u_{\epsilon})\text{det}(\nabla^{2}u_{\epsilon}+u_{\epsilon}) =\epsilon^{p-q+\gamma-n+4}\left|N_{\epsilon}(x)\right|^{-\gamma- \alpha-n-p}h_{\epsilon}^{1-p}(x_{\epsilon})\left|x^{\prime}\right|^{\alpha}|x _{n}|^{\beta}\] \[\frac{1}{n}\int_{\mathbb{S}^{n-1}}\left|N_{\epsilon}y\right|^{q- 1-n}\rho_{K_{h_{\epsilon}},\bar{\nabla}h_{\epsilon}(x_{\epsilon})}^{q-1}(y)dy,\]
Let \(H_{\epsilon}(x):=\epsilon^{\frac{n-p-4-\gamma+q}{n-p+q-1}}u_{\epsilon}\), we have
\[H_{\epsilon}^{1-p}\widetilde{V}_{q-1}([H_{\epsilon}],\bar{ \nabla}H_{\epsilon})\text{det}(\nabla^{2}H_{\epsilon}+H_{\epsilon}I) =\left|N_{\epsilon}(x)\right|^{-\gamma-\alpha-n-p}h_{\epsilon}^{1 -p}(x_{\epsilon})\left|x^{\prime}\right|^{\alpha}|x_{n}|^{\beta}\] \[\frac{1}{n}\int_{\mathbb{S}^{n-1}}\left|N_{\epsilon}y\right|^{q- 1-n}\rho_{K_{h_{\epsilon}},\bar{\nabla}h_{\epsilon}(x_{\epsilon})}^{q-1}(y)dy,\]
Since \(|y^{\prime}|\leqslant|N_{\epsilon}y|=\sqrt{|y^{\prime}|^{2}+\epsilon^{2}|y_{n}|^{2}}\leqslant 1\) and \(q\leqslant 1+n\), we have
\[\frac{1}{n}\int_{\mathbb{S}^{n-1}}\left|N_{\epsilon}y\right|^{q- 1-n}\rho_{K_{h_{\epsilon}},\bar{\nabla}h_{\epsilon}(x_{\epsilon})}^{q-1}(y)dy \geqslant\frac{1}{n}\int_{\mathbb{S}^{n-1}}\rho_{K_{h_{\epsilon}}, \bar{\nabla}h_{\epsilon}(x_{\epsilon})}^{q-1}(y)dy\] \[\geqslant c.\]
The second inequality is due to the uniform boundedness of \(h_{\epsilon}\), (3.2). Indeed, since
\[\int_{\mathbb{S}^{n-1}}\rho_{K_{h_{\epsilon}},\bar{\nabla}h_{\epsilon}(x_{ \epsilon})}^{q-1}(y)dy=\frac{1}{2}\int_{S^{n-1}}X_{K_{h_{\epsilon}}}(\bar{ \nabla}h_{\epsilon}(x_{\epsilon}),y)^{q-1}dy\]
where the parallel \(X\)-ray of \(K_{h_{\epsilon}}\) is the nonnegative function on \(\mathbb{R}^{n}\times S^{n-1}\) defined by
\[X_{K_{h_{\epsilon}}}(z,u)=|K_{h_{\epsilon}}\cap(z+\mathbb{R}u)|,\quad z\in\mathbb{R}^{n},\quad u\in S^{n-1}.\]
We can choose a Borel set \(Z\subset\mathbb{S}^{n-1}\) satisfying \(|Z|\geqslant c\), here, \(c\) is a universal constant independent with \(\epsilon\), and \(\forall y\in Z\) we have
\[X_{K_{h_{\epsilon}}}(\bar{\nabla}h_{\epsilon}(x_{\epsilon}),y)\geqslant c\]
for some uniform constant \(c\). Indeed, in dimension \(2\), since \(K_{h_{\epsilon}}\) is pinched between two bounded balls, for every \(z\in\partial K_{h_{\epsilon}}\) there is a Borel set \(Z\subset\mathbb{S}^{1}\) with \(\arcsin(\frac{\sqrt{3}c}{2C})\leqslant|Z|\leqslant\frac{2\pi}{3}\) such that \(X_{K_{h_{\epsilon}}}(z,y)\geqslant c\) for all \(y\in Z\); here the constant \(c\) (resp. \(C\)) is the radius of the inner (resp. outer) ball of \(K_{h_{\epsilon}}.\) The higher dimensional case is analogous, up to a rotation.
Then, combining with (3.2), we have \(f_{\epsilon}(x)\geqslant c_{1}\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta}\left|N_{\epsilon}x\right|^{-\gamma-\alpha-n-p}.\) Hence
\[\int_{\mathbb{S}^{n-1}}f_{\epsilon}(x)dx \geqslant c_{1}\int_{\mathbb{S}^{n-1}}\left|x^{\prime}\right|^{ \alpha}\left|x_{n}\right|^{\beta}\left|N_{\epsilon}x\right|^{-\gamma-\alpha-n- p}dx\] \[=2c_{1}\omega_{n-2}\int_{0}^{\frac{\pi}{2}}\sin\theta^{\alpha} \cos\theta^{\beta}\left(\sin\theta^{2}+\epsilon^{2}\cos\theta^{2}\right)^{- \frac{\gamma+\alpha+n+p}{2}}\sin\theta^{n-2}d\theta\] \[\geqslant c_{2}\int_{0}^{\frac{\pi}{4}}\theta^{\alpha+n-2}\left( \theta^{2}+\epsilon^{2}\right)^{-\frac{\gamma+\alpha+n+p}{2}}d\theta\] \[+c_{2}\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}(\frac{\pi}{2}-\theta )^{\beta}\left(1+\epsilon^{2}(\frac{\pi}{2}-\theta)^{2}\right)^{-\frac{\gamma +\alpha+n+p}{2}}d\theta\] \[\geqslant c_{2}\int_{0}^{\frac{\pi}{4}}t^{\beta}\left(1+\epsilon^ {2}t^{2}\right)^{-\frac{\gamma+\alpha+n+p}{2}}dt\] \[\geqslant c_{2}\left\{\begin{array}{ll}\int_{0}^{\frac{\pi}{4}}t^ {\beta}2^{-\frac{\gamma+\alpha+n+p}{2}}dt&\text{ when }0\leqslant\gamma+\alpha+n+p\\ \int_{0}^{\frac{\pi}{4}}t^{\beta}dt&\text{ when }0>\gamma+\alpha+n+p \end{array}\right.\] \[\geqslant c,\]
where the third inequality is due to the substitution \(t=\frac{\pi}{2}-\theta\), after dropping the first term; the fourth inequality holds because \(0<\epsilon<1/2\) and \(1\leqslant 1+\epsilon^{2}t^{2}<2\).
On the other hand, by (3.2), we have
\[f_{\epsilon}(x)\leqslant C\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta}\left|N_{\epsilon}x\right|^{-\gamma-\alpha-n-p}\frac{1}{n}\int_{S^{n-1}}\left|N_{\epsilon}y\right|^{q-1-n}\rho_{K_{h_{\epsilon}},\bar{\nabla}h_{\epsilon}(x_{\epsilon})}^{q-1}(y)dy.\]
First, we estimate the integral. Since \(|y^{\prime}|\leqslant|N_{\epsilon}y|=\sqrt{|y^{\prime}|^{2}+\epsilon^{2}|y_{n}|^{2}}\leqslant 1\), and by (3.2), we have
\[\int_{S^{n-1}}\left|N_{\epsilon}y\right|^{q-1-n}\rho_{K_{h_{ \epsilon}},\bar{\nabla}h_{\epsilon}(x_{\epsilon})}^{q-1}(y)dy \leqslant C\int_{S^{n-1}}\left|N_{\epsilon}y\right|^{q-1-n}dy\] \[\leqslant C2\omega_{n-2}\int_{0}^{\frac{\pi}{2}}\left(\sin\theta ^{2}+\epsilon^{2}\cos\theta^{2}\right)^{\frac{q-1-n}{2}}\sin\theta^{n-2}d\theta,\]
Since
\[\int_{0}^{\frac{\pi}{2}}\left(\sin\theta^{2}+\epsilon^{2}\cos \theta^{2}\right)^{\frac{q-1-n}{2}}\sin\theta^{n-2}d\theta \leqslant C\int_{0}^{\frac{\pi}{4}}\theta^{n-2}\left(\theta^{2}+ \epsilon^{2}\right)^{(q-1-n)/2}d\theta\] \[+C\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\left(1+\epsilon^{2}(\frac {\pi}{2}-\theta)^{2}\right)^{(q-1-n)/2}d\theta\] \[\leqslant C\left[\int_{0}^{\epsilon}\theta^{n-2}\epsilon^{q-1-n}d \theta+\int_{\epsilon}^{\frac{\pi}{4}}\theta^{q-3}d\theta+\frac{\pi}{4}\right]\] \[\leqslant C\left\{\begin{array}{ll}\frac{\epsilon^{q-2}}{n-1}+ \frac{\pi}{4}+C&\text{ when }q>2\\ \frac{1}{n-1}+\frac{\pi}{4}+|\log\epsilon|&\text{ when }q=2\\ c\epsilon^{q-2}&\text{ when }q<2\end{array}\right.\] \[\leqslant C\left\{\begin{array}{ll}C&\text{ when }q>2\\ |\log\epsilon|&\text{ when }q=2\\ \epsilon^{q-2}&\text{ when }q<2\end{array}\right.\]
Since we have \(2<q\leqslant n+1\), then
\[f_{\epsilon}(x) \leqslant C\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta}\left|N_{\epsilon}x\right|^{-\gamma-\alpha-n-p}\] \[\leqslant C\left\{\begin{array}{ll}\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta}&\text{when }0>\gamma+\alpha+n+p\\ \left|x^{\prime}\right|^{-\gamma-p-n}\left|x_{n}\right|^{\beta}&\text{when }0\leqslant\gamma+\alpha+n+p\end{array}\right.\]
and \(\left\|f_{\epsilon}\right\|_{L^{1}(\mathbb{S}^{n-1})}\geqslant c.\) When \(0\leqslant\gamma+\alpha+n+p\), we need \(-1<\gamma<-1-\frac{p}{n+q-1}\), then \(-\gamma-p-n\) satisfies the condition of \(\alpha\) in Theorem 1.4. Hence, \(f_{\epsilon}\) satisfies (1.3). Now, we need to estimate the chord integral of \(K_{H_{\epsilon}}\).
**Lemma 3.3**.: _We have the estimate for the \(q-\)th chord integral of \(K_{H_{\epsilon}}\) as follows:_
\[I_{q}(K_{H_{\epsilon}})\leqslant C\left\{\begin{array}{ll}\epsilon^{2-\frac {3+\gamma}{n-p+q-1}(q+n-1)}&\text{when }q>2\\ \epsilon^{2-\frac{3+\gamma}{n-p+1}(n+1)}|\log\epsilon|&\text{when }q=2\\ \epsilon^{q-\frac{3+\gamma}{n-p+q-1}(q+n-1)}&\text{when }q<2\end{array}\right.\]
Proof.: Since
\[I_{q}(K_{H_{\epsilon}})=\frac{2q}{(n+q-1)\omega_{n}}\int_{\partial K_{H_{ \epsilon}}}H_{\epsilon}(\nu(z))\widetilde{V}_{q-1}(K_{H_{\epsilon}},\bar{ \nabla}H_{\epsilon}(\nu(z)))\mathrm{d}\mathcal{H}^{n-1}(z),\]
and recalling the definition (3.4) of \(H_{\epsilon}\) and the calculation above, we have
\[\widetilde{V}_{q-1}\left(K_{H_{\epsilon}},\bar{\nabla}H_{\epsilon }(x)\right) =\epsilon^{\frac{n-p-4-\gamma+q}{n-p+q-1}(q-1)}\widetilde{V}_{q-1 }\left(K_{u_{\epsilon}},\bar{\nabla}u_{\epsilon}(x)\right)\] \[=\frac{\epsilon^{2-q+\frac{n-p-4-\gamma+q}{n-p+q-1}(q-1)}}{n}\int _{\mathbb{S}^{n-1}}\left|N_{\epsilon}y\right|^{q-1-n}\rho_{K_{h_{\epsilon}}, \bar{\nabla}h_{\epsilon}(x_{\epsilon}))}^{q-1}(y)dy\] \[\leqslant C\epsilon^{2-q+\frac{n-p-4-\gamma+q}{n-p+q-1}(q-1)} \left\{\begin{array}{ll}1&\text{when }q>2\\ \left|\log\epsilon\right|&\text{when }q=2\\ \epsilon^{q-2}&\text{when }q<2\end{array}\right.\]
and combining the definition of \(u_{\epsilon}\), we have
\[\text{Vol}(K_{H_{\epsilon}})=\epsilon^{\frac{n-p-4-\gamma+q}{n-p+q-1}n+1-n} \text{Vol}(K_{h_{\epsilon}})\]
Since \(\text{Vol}(K_{h_{\epsilon}})\leqslant C\), we have
\[\text{Vol}(K_{H_{\epsilon}})\leqslant C\epsilon^{\frac{n-p-4-\gamma+q}{n-p+q-1 }n+1-n}.\]
Now, we can estimate the chord integral
\[I_{q}(K_{H_{\epsilon}}) \leqslant C\epsilon^{2-q+\frac{n-p-4-\gamma+q}{n-p+q-1}(q-1)} \text{Vol}(K_{H_{\epsilon}})\left\{\begin{array}{ll}1&\text{when }q>2\\ \left|\log\epsilon\right|&\text{when }q=2\\ \epsilon^{q-2}&\text{when }q<2\end{array}\right.\] \[\leqslant C\left\{\begin{array}{ll}\epsilon^{2-\frac{3+\gamma}{ n-p+q-1}(q+n-1)}&\text{when }q>2\\ \epsilon^{2-\frac{3+\gamma}{n-p+1}(n+1)}|\log\epsilon|&\text{when }q=2\\ \epsilon^{q-\frac{3+\gamma}{n-p+q-1}(q+n-1)}&\text{when }q<2\end{array}\right.\]
The proof of this lemma is completed.
**Remark 3.4**.: _When \(q>2\), choose \(-1<\gamma<-\frac{2p}{n+q-1}-1.\) Then_
\[I_{q}(K_{H_{\epsilon}})\leqslant\epsilon^{2-\frac{3+\gamma}{n-p+q-1}(q+n-1)} \to 0,\text{ as }\epsilon\to 0^{+}.\]
_When \(q=2\), choose \(-1<\gamma<-\frac{2p}{n+1}-1.\) Then_
\[I_{q}(K_{H_{\epsilon}})\leqslant\epsilon^{2-\frac{3+\gamma}{n-p+1}(n+1)}|\log \epsilon|\to 0,\text{ as }\epsilon\to 0^{+}.\]
_When \(0<q<2\), \(p\in(-\infty,-\frac{(2-q)(n+q-1)}{q}),\) choose \(-1<\gamma<-\frac{qp}{n+q-1}+q-3.\) Then_
\[I_{q}(K_{H_{\epsilon}})\leqslant\epsilon^{q-\frac{3+\gamma}{n-p+q-1}(q+n-1)} \to 0,\text{ as }\epsilon\to 0^{+}.\]
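The role of the parameter ranges in Remark 3.4 is purely arithmetic: they guarantee that the exponent of \(\epsilon\) in Lemma 3.3 is positive. The short script below (our own illustration, with arbitrarily chosen admissible parameters) makes this explicit; the logarithmic factor for \(q=2\) is ignored.

```python
def exponent(n, p, q, gamma):
    """Exponent of epsilon in the bound of Lemma 3.3 (the |log eps| factor for q = 2 is ignored)."""
    lead = 2.0 if q >= 2 else q
    return lead - (3.0 + gamma) * (q + n - 1.0) / (n - p + q - 1.0)

# q > 2: choose gamma in (-1, -2p/(n+q-1) - 1)
n, p, q = 4, -3.0, 3.0
gamma = 0.5 * (-1.0 + (-2.0 * p / (n + q - 1) - 1.0))    # midpoint of the admissible interval
print(exponent(n, p, q, gamma) > 0)                      # expected: True

# 0 < q < 2 with p < -(2-q)(n+q-1)/q: choose gamma in (-1, -qp/(n+q-1) + q - 3)
n, p, q = 4, -20.0, 1.5
gamma = 0.5 * (-1.0 + (-q * p / (n + q - 1) + q - 3.0))
print(exponent(n, p, q, gamma) > 0)                      # expected: True
```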
## 4 Variational solution
This section is dedicated to solving the \(L_{p}\) chord Minkowski problem by a variational method. Let \(C_{re}^{+}(\mathbb{S}^{n-1})\) denote the set of rotationally symmetric, even, and positive continuous functions on \(\mathbb{S}^{n-1}.\) Let \(f\in C_{re}^{+}(\mathbb{S}^{n-1})\) satisfy (1.3), and let the real numbers \(\alpha,\beta\) be as in Theorem 1.4. We consider the maximization problem
\[\sup_{h\in C_{re}^{+}(\mathbb{S}^{n-1})}\left\{\Phi_{p}(h):=\int_{\mathbb{S}^{n-1}}f(x)h(x)^{p}dx,\ I_{q}(K_{h})=1\right\}.\]
**Lemma 4.1**.: _Let \(h_{i}\in C_{re}^{+}(\mathbb{S}^{n-1})\) be a maximizing sequence, then there exists some uniform constant \(C\) such that_
\[\frac{1}{C}\leqslant h_{i}\leqslant C\text{ as }i\rightarrow+\infty. \tag{4.1}\]
Proof.: Since \(h_{i}\in C_{re}^{+}(\mathbb{S}^{n-1})\) is a maximizing sequence; that is \(I_{q}(K_{i})=1\) and
\[\lim_{i\rightarrow\infty}\Phi_{p}(h_{i})=\sup_{h\in C_{re}^{+}(\mathbb{S}^{n- 1})}\left\{\Phi_{p}(h):I_{q}(K_{h})=1\right\}\]
where \(K_{i}\) is the convex body uniquely determined by \(h_{i}.\) By John's lemma, we have \(\frac{1}{n}E_{i}\subset K_{i}\subset E_{i}.\) Since \(K_{i}\) is rotationally symmetric and even, \(E_{i}\) is also rotationally symmetric and even. Therefore, the centre of \(E_{i}\) is at the origin and there exists a unique rotationally symmetric matrix \(A_{i}\) of the form
\[A_{i}=\left(\begin{array}{ccccc}r_{i}a_{i}^{\frac{1}{n}}&&&\\ &\ddots&&\\ &&r_{i}a_{i}^{\frac{1}{n}}&\\ &&&r_{i}a_{i}^{\frac{1-n}{n}}\end{array}\right),\text{ where }r_{i}>0,a_{i}>0 \text{ are constants},\]
such that
\[h_{E_{i}}(x) =\left|A_{i}x\right|\quad\text{ on }\mathbb{S}^{n-1},\] \[\rho_{E_{i}}(u) =\left|A_{i}^{-1}u\right|^{-1}\text{ on }\mathbb{S}^{n-1}\] \[\text{Vol}(E_{i})=\kappa_{n}r_{i}^{n}.\]
Suppose to the contrary that
\[\max_{x\in\mathbb{S}^{n-1}}h_{i}(x)\rightarrow+\infty\text{ or }\min_{x\in \mathbb{S}^{n-1}}h_{i}(x)\to 0,\]
as \(i\rightarrow+\infty.\) Since we have
\[\frac{1}{n}\left|A_{i}x\right|\leq h_{i}(x)\leq\left|A_{i}x\right|,\quad\forall x\in\mathbb{S}^{n-1}\]
which implies that
\[r_{i}a_{i}^{\frac{1}{n}}\rightarrow+\infty\text{ or }r_{i}a_{i}^{\frac{1}{n}} \to 0\text{ or }r_{i}a_{i}^{\frac{1-n}{n}}\rightarrow+\infty\text{ or }r_{i}a_{i}^{\frac{1-n}{n}}\to 0\]
as \(i\rightarrow+\infty,\) which must cause one of the four cases
\[r_{i}\rightarrow+\infty\quad r_{i}\to 0\quad a_{i}\rightarrow+\infty \quad a_{i}\to 0.\]
Since
\[h_{i}(x)\geq\frac{1}{n}h_{E_{i}}(x)=\frac{1}{n}\left|A_{i}x\right|\text{ on }\mathbb{S}^{n-1},\]
and \(p<0,\) from the assumption of \(f\) we have
\[\Phi_{p}(h_{i}) =\int_{\mathbb{S}^{n-1}}f(x)h_{i}(x)^{p}dx\] \[\leqslant n^{-p}\int_{\mathbb{S}^{n-1}}f(x)\left|A_{i}x\right|^{p}dx\] \[\leqslant Cn^{-p}\int_{\mathbb{S}^{n-1}}\left|x^{\prime}\right|^{ \alpha}\left|x_{n}\right|^{\beta}\left|A_{i}x\right|^{p}dx\] \[:=Cn^{-p}F(A_{i})\]
we compute in the spherical coordinates as follows:
\[F(A_{i})= r_{i}^{p}a_{i}^{\frac{p}{n}}\int_{\mathbb{S}^{n-1}}\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta}\left(\left|x^{\prime}\right|^{2}+a_{i}^{-2}x_{n}^{2}\right)^{\frac{p}{2}}\text{ d}x\] \[= 2r_{i}^{p}a_{i}^{\frac{p}{n}}\omega_{n-2}\int_{0}^{\frac{\pi}{2}}\sin^{\alpha}\theta\cos^{\beta}\theta\left(\sin^{2}\theta+a_{i}^{-2}\cos^{2}\theta\right)^{\frac{p}{2}}\sin^{n-2}\theta\text{d}\theta\] \[\leq Cr_{i}^{p}a_{i}^{\frac{p}{n}}\int_{0}^{\frac{\pi}{4}}\theta^{\alpha+n-2}\left(\theta^{2}+a_{i}^{-2}\right)^{\frac{p}{2}}\text{ d}\theta\] \[+Cr_{i}^{p}a_{i}^{\frac{p}{n}}\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\left(\frac{\pi}{2}-\theta\right)^{\beta}\left(1+a_{i}^{-2}\left(\frac{\pi}{2}-\theta\right)^{2}\right)^{\frac{p}{2}}\text{ d}\theta.\]
Case 1: When \(a_{i}>3\). We can further estimate these following integrals:
\[\int_{0}^{\frac{\pi}{4}}\left(\theta^{2}+a_{i}^{-2}\right)^{\frac{p }{2}}\theta^{\alpha+n-2}\ \mathrm{d}\theta \leq\int_{0}^{\frac{1}{a_{i}}}a_{i}^{-p}\theta^{\alpha+n-2}\ \mathrm{d}\theta+\int_{\frac{1}{a_{i}}} ^{\frac{\pi}{4}}\theta^{p}\theta^{\alpha+n-2}\ \mathrm{d}\theta\] \[=\frac{a_{i}^{-p-\alpha-n+1}}{\alpha+n-1}+\int_{\frac{1}{a_{i}}} ^{\frac{\pi}{4}}\theta^{p+\alpha+n-2}\ \mathrm{d}\theta\] \[\leq C\begin{cases}1,&\text{if }p+\alpha+n-1>0,\\ \log a_{i},&\text{if }p+\alpha+n-1=0,\\ a_{i}^{-p-\alpha-n+1},&\text{if }p+\alpha+n-1<0,\end{cases}\]
and applying that \(\beta>-1\), we have
\[\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\left(\frac{\pi}{2}-\theta \right)^{\beta}\left(1+a_{i}^{-2}\left(\frac{\pi}{2}-\theta\right)^{2}\right) ^{\frac{p}{2}}\ \mathrm{d}\theta =\int_{0}^{\frac{\pi}{4}}t^{\beta}\left(1+a_{i}^{-2}t^{2}\right) ^{\frac{p}{2}}\ \mathrm{d}t\] \[\leq C\int_{0}^{\frac{\pi}{4}}t^{\beta}\mathrm{d}t\] \[\leq C.\]
Then we obtain
\[F(A_{i})\leq Cr_{i}^{p}a_{i}^{\frac{p}{n}}\begin{cases}1,&\text{if }p+\alpha+n-1>0, \\ \log a_{i},&\text{if }p+\alpha+n-1=0,\\ a_{i}^{-p-\alpha-n+1},&\text{if }p+\alpha+n-1<0.\end{cases}\]
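The three regimes in the first integral above are easy to confirm numerically. The sketch below (ours, with arbitrarily chosen exponents in each regime and using SciPy) estimates the empirical growth rate in \(a_{i}\) and compares it with the predicted one; here \(m\) plays the role of \(\alpha+n-2\).

```python
import numpy as np
from scipy.integrate import quad

def J(a, m, p):
    """Numerical value of the integral of t^m (t^2 + a^{-2})^{p/2} over (0, pi/4)."""
    return quad(lambda t: t**m * (t**2 + a**-2)**(p / 2),
                0.0, np.pi / 4, points=[1.0 / a], limit=200)[0]

# the critical quantity is p + m + 1
for m, p in [(3.0, -2.0), (1.0, -2.0), (0.5, -3.0)]:   # p+m+1 > 0, = 0, < 0
    v1, v2 = J(1e3, m, p), J(1e4, m, p)
    # empirical exponent per decade: ~0 (bounded), small (log growth), ~ -(p+m+1)
    print(p + m + 1, np.log10(v2 / v1))
```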
Case 2: When \(1/3\leqslant a_{i}\leqslant 3.\) Since \(|A_{i}x|=r_{i}a_{i}^{\frac{1}{n}}(|x^{\prime}|^{2}+a_{i}^{-2}x_{n}^{2})^{ \frac{1}{2}},\) by the assumption of \(a_{i}\) and \(\alpha>1-n,\beta>-1\), we have
\[Cr_{i}\leqslant|A_{i}x|\leqslant\tilde{C}r_{i},\quad\forall x\in\mathbb{S}^{n-1}.\]
Then
\[Cr_{i}^{p}\leqslant F(A_{i})\leqslant\tilde{C}r_{i}^{p}.\]
Case 3: When \(a_{i}<1/3\), since we have
\[F(A_{i})=2r_{i}^{p}a_{i}^{\frac{p}{n}-p}\omega_{n-2}\int_{0}^{\frac{\pi}{2}} \sin^{\alpha}\theta\cos^{\beta}\theta\left(a_{i}^{2}\sin^{2}\theta+\cos^{2} \theta\right)^{\frac{p}{2}}\sin^{n-2}\theta\mathrm{d}\theta.\]
Then, applying \(\alpha+n-1>0\), we have
\[F(A_{i}) \leq Cr_{i}^{p}a_{i}^{\frac{p}{n}-p}\left[\int_{0}^{\frac{\pi}{4} }\theta^{\alpha+n-2}\left(a_{i}^{2}\theta^{2}+1\right)^{\frac{p}{2}}\ \mathrm{d}\theta+\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}\left(\frac{\pi}{2}- \theta\right)^{\beta}\left(a_{i}^{2}+\left(\frac{\pi}{2}-\theta\right)^{2} \right)^{\frac{p}{2}}\ \mathrm{d}\theta\right]\] \[\leq Cr_{i}^{p}a_{i}^{\frac{p}{n}-p}\left[1+\int_{0}^{\frac{\pi}{ 4}}t^{\beta}\left(a_{i}^{2}+t^{2}\right)^{\frac{p}{2}}\ \mathrm{d}t\right].\]
\[\int_{0}^{\frac{\pi}{4}}t^{\beta}\left(a_{i}^{2}+t^{2}\right)^{\frac{p}{2}}\ \mathrm{d}t \leq\int_{0}^{a_{i}}t^{\beta}a_{i}^{p}\ \mathrm{d}t+\int_{a_{i}}^{\frac{\pi}{4}}t^{\beta}t^{p}\ \mathrm{d}t\] \[=\frac{a_{i}^{p+\beta+1}}{\beta+1}+\int_{a_{i}}^{\frac{\pi}{4}}t^{ \beta+p}\ \mathrm{d}t\] \[\leq C\begin{cases}1,&\text{ if }\beta+p+1>0,\\ |\log a_{i}|,&\text{ if }\beta+p+1=0,\\ a_{i}^{\beta+p+1,}&\text{ if }\beta+p+1<0.\end{cases}\]
Then we obtain
\[F(A_{i})\leq Cr_{i}^{p}a_{i}^{\frac{p}{n}-p}\begin{cases}1,&\text{ if }\beta+p+1>0,\\ |\log a_{i}|,&\text{ if }\beta+p+1=0,\\ a_{i}^{\beta+p+1},&\text{ if }\beta+p+1<0.\end{cases}\]
**Claim 4.2**.: _If one of the four cases \(r_{i}\to+\infty\), \(r_{i}\to 0\), \(a_{i}\to+\infty\), \(a_{i}\to 0\) occurs, then \(F(A_{i})\to 0\)._
Proof.: Since \(I_{q}(K_{i})=1\) and \(\frac{1}{n}E_{i}\subset K_{i}\subset E_{i}\), we have
\[(\frac{1}{n})^{n+q-1}I_{q}(E_{i})=I_{q}(\frac{1}{n}E_{i})\leqslant I_{q}(K_{i })\leqslant I_{q}(E_{i}).\]
If we set \(\Lambda=n^{q+n-1}\), then
\[\Lambda^{-1}\leq I_{q}(E_{i})\leq\Lambda\]
By Lemma 2.2, when \(q\in(1,n+1)\) is not an integer, we have
\[I_{q}(E_{i}) =\left\{\begin{array}{ll}\left(r_{i}a_{i}^{\frac{1}{n}}\right) ^{n+q-1}I_{q}\left(\left(r_{i}a_{i}^{\frac{1}{n}}\right)^{-1}E_{i}\right)& \text{ when }a_{i}>1\\ \left(r_{i}a_{i}^{\frac{1-n}{n}}\right)^{n+q-1}I_{q}\left(\left(r_{i}a_{i}^{ \frac{1-n}{n}}\right)^{-1}E_{i}\right)&\text{ when }a_{i}\leqslant 1\end{array}\right.\] \[\leqslant\left\{\begin{array}{ll}c_{q,m,n}\left(r_{i}a_{i}^{ \frac{1}{n}}\right)^{n+q-1}a_{i}^{-2}&\text{ when }a_{i}>1\\ c_{q,m,n}\left(r_{i}a_{i}^{\frac{1-n}{n}}\right)^{n+q-1}a_{i}^{n+q-2}&\text{ when }a_{i}\leqslant 1\end{array}\right.\] \[\leqslant c_{q,m,n}\left(r_{i}a_{i}^{\frac{1}{n}}\right)^{n+q-1}a_{ i}^{-1}\]
where the last inequality is due to \(a_{i}>1\), which leads to \(a_{i}^{-2}<a_{i}^{-1}\).
When \(q\in(1,n+1)\) is an integer, we can choose a proper \(q^{\prime}\in(q,q+1)\); by Lemma 2.3, we have
\[I_{q}(E_{i}) \leqslant c(q^{\prime},q)V(E_{i})^{1-\frac{q-1}{q^{\prime}-1}}I_{q^ {\prime}}(E_{i})^{\frac{q-1}{q^{\prime}-1}}\] \[\leqslant\omega_{n}c(q^{\prime},q)c_{q^{\prime},q,n}\left(r_{i}a_{ i}^{\frac{1}{n}}\right)^{n+q-1}a_{i}^{-1}.\]
On the other hand,
\[I_{q}(E_{i})=\frac{1}{n\omega_{n}}\int_{S^{n-1}}\int_{E_{i}|u^{\perp}}X_{E_{i}}(x, u)^{q}\ \mathrm{d}x\ \mathrm{d}u,\quad q\geq 0.\]
where \(E_{i}|u^{\perp}\) denotes the projection of \(E_{i}\) onto \(u^{\perp}.\) For \(q>1,\) we have
\[I_{q}(E_{i})\geqslant\frac{1}{n\omega_{n}}\int_{S^{n-1}}V(E_{i})^{q}V_{n-1}(E _{i}|u^{\perp})^{1-q}du\]
Indeed, Hölder's inequality gives
\[\frac{1}{V_{n-1}(E_{i}|u^{\perp})}\int_{E_{i}|u^{\perp}}X_{E_{i}}(x,u)^{q}\ \mathrm{d}x\geqslant\left(\frac{1}{V_{n-1}(E_{i}|u^{\perp})}\int_{E_{i}|u^{ \perp}}X_{E_{i}}(x,u)\ \mathrm{d}x\right)^{q}=\left(\frac{V(E_{i})}{V_{n-1}(E_{i}|u^{\perp})} \right)^{q}.\]
Recall that \(V(E_{i})=\kappa_{n}r_{i}^{n}.\)
If \(a_{i}>1,\)\(r_{i}a_{i}^{\frac{1}{n}}>r_{i}a_{i}^{\frac{1-n}{n}},\) it follows that \(V_{n-1}(E_{i}|u^{\perp})\leqslant\kappa_{n-1}\left(r_{i}a_{i}^{\frac{1}{n}} \right)^{n-1}.\)
Hence,
\[I_{q}(E_{i}) \geqslant\frac{\kappa_{n-1}^{1-q}}{n\kappa_{n}^{1-q}}\left(r_{i} a_{i}^{\frac{1}{n}}\right)^{n-1}\left(r_{i}a_{i}^{\frac{1-n}{n}}\right)^{q}\] \[=\frac{\kappa_{n-1}^{1-q}}{n\kappa_{n}^{1-q}}\left(r_{i}a_{i}^{ \frac{1}{n}}\right)^{q+n-1}a_{i}^{-q}\]
If \(a\leqslant 1,\)\(r_{i}a_{i}^{\frac{1}{n}}\leqslant r_{i}a_{i}^{\frac{1-n}{n}},\) it follows that \(V_{n-1}(E_{i}|u^{\perp})\leqslant\kappa_{n-1}\left(r_{i}a_{i}^{\frac{1}{n}} \right)^{n-2}r_{i}a_{i}^{\frac{1-n}{n}}.\)
Hence,
\[I_{q}(E_{i})\geqslant\frac{\kappa_{n-1}^{1-q}}{n\kappa_{n}^{1-q}}\left(r_{i} a_{i}^{\frac{1}{n}}\right)^{q+n-1}a_{i}^{-1}.\]
In conclusion,
\[\left(r_{i}a_{i}^{\frac{1}{n}}\right)^{q+n-1}a_{i}^{-q}\leqslant C,\left(r_{i }a_{i}^{\frac{1}{n}}\right)^{q+n-1}a_{i}^{-1}\geqslant c\quad a>1\]
\[c\leqslant\left(r_{i}a_{i}^{\frac{1}{n}}\right)^{q+n-1}a_{i}^{-1}\leqslant C \quad a\leqslant 1\]
where the constants \(c,C\) only depend on \(n,q.\) Therefore, we shall prove the claim case by case as follows:
In case 1: \(a>3,\) then \(\left(r_{i}a_{i}^{\frac{1}{n}}\right)^{p}\leqslant ca^{\frac{p}{n+q-1}}.\) Then we obtain
\[F(A)\leq C\begin{cases}a^{\frac{p}{n+q-1}},&\text{ if }p+\alpha+n-1>0,\\ a^{\frac{p}{n+q-1}}\log a,&\text{ if }p+\alpha+n-1=0,\\ a^{\frac{p}{n+q-1}-p-\alpha-n+1},&\text{ if }p+\alpha+n-1<0.\end{cases}\]
By the assumptions in Theorem 1.4, we observe that the power of \(a\) is negative. If one of \(r_{i}\to+\infty\quad r_{i}\to 0\quad a_{i}\to+\infty\) occurs, then \(F(A_{i})\to 0.\)
In case 2: If \(1/3\leqslant a\leqslant 1\), then \(ca^{\frac{1}{n+q-1}-\frac{1}{n}}\leqslant r_{i}\leqslant Ca^{\frac{1}{n+q-1}- \frac{1}{n}};\) If \(1\leqslant a\leqslant 3\), then \(ca^{\frac{1}{n+q-1}-\frac{1}{n}}\leqslant r_{i}\leqslant Ca^{\frac{q}{n+q-1}- \frac{1}{n}}.\) That is \(c\leqslant r_{i}\leqslant C.\) Then we obtain
\[F(A)\leqslant\tilde{C}a^{\frac{p}{n+q-1}-\frac{p}{n}}\leqslant C.\]
None of the four cases \(r_{i}\rightarrow+\infty\quad r_{i}\to 0\quad a_{i}\rightarrow+\infty\quad a _{i}\to 0\) can occur.
In case 3: \(a<1/3\), then \(\left(r_{i}a_{i}^{\frac{1}{n}}\right)^{p}a_{i}^{-p}\leqslant ca^{\frac{p}{n+q- 1}-p}.\) Then we obtain
\[F(A)\leq C\begin{cases}a^{\frac{p}{n+q-1}-p},&\text{if }\beta+p+1>0,\\ a^{\frac{p}{n+q-1}-p}|\log a|,&\text{if }\beta+p+1=0,\\ a^{\frac{p}{n+q-1}-p+\beta+p+1},&\text{if }\beta+p+1<0.\end{cases}\]
By the assumptions in Theorem 1.4, we observe that the power of \(a\) is positive. If one of \(r_{i}\rightarrow+\infty\quad r_{i}\to 0\quad a_{i}\to 0\) occurs, then \(F(A_{i})\to 0.\)
This claim means that if one of the four cases \(r_{i}\rightarrow+\infty,r_{i}\to 0,a_{i}\rightarrow+\infty,a_{i}\to 0\) occurs, we have that
\[\Phi_{p}(h_{i})\to 0,\quad\text{ as }\quad i\rightarrow+\infty\]
However, take \(h\equiv r_{0}\) for some \(r_{0}\) such that \(I_{q}(B_{r_{0}})=1.\) Indeed, since \(I_{q}(B_{r_{0}})=r_{0}^{n+q-1}I_{q}(B_{1}),\) we can choose \(r_{0}=\frac{\omega_{q}}{2^{q}\omega_{n}\omega_{n+q-1}}.\) Hence, we have
\[\sup_{h\in C_{re}^{+}(\mathbb{S}^{n-1})}\left\{\Phi_{p}(h):I_{q}( K_{h})=1\right\} \geqslant \int_{\mathbb{S}^{n-1}}fr_{0}^{p}\] \[\geqslant C_{1}\|f\|_{L^{1}(\mathbb{S}^{n-1})}>0,\]
which is a contradiction. Therefore, \(\{h_{i}\}\) has uniformly positive upper and lower bounds.
**Lemma 4.3**.: _The maximization problem has a solution \(h.\)_
Proof.: Let \(h_{i}\in C_{re}^{+}(\mathbb{S}^{n-1})\) be a maximizing sequence; that is \(I_{q}(K_{i})=1\) and
\[\lim_{i\rightarrow\infty}\Phi_{p}(h_{i})=\sup_{h\in C_{re}^{+}(\mathbb{S}^{n- 1})}\left\{\Phi_{p}(h):I_{q}(K_{h})=1\right\}\]
where \(K_{i}\) is the convex body uniquely determined by \(h_{i}.\) By Lemma 4.1 we have \(c\leqslant h_{i}\leqslant C\) for \(i\) large enough. By the Blaschke selection theorem, there is a subsequence of \(\{h_{i}\}\) that uniformly converges to some support function \(h\) on \(\mathbb{S}^{n-1}\). Note that \(h\) is also rotationally symmetric and even on \(\mathbb{S}^{n-1}\), satisfying (4.1), \(I_{q}(K_{h})=1\), and
\[\Phi_{p}(h)=\lim_{i\rightarrow+\infty}\Phi_{p}(h_{i})=\sup_{h\in C_{re}^{+}( \mathbb{S}^{n-1})}\left\{\Phi_{p}(h):I_{q}(K_{h})=1\right\}\]
Hence, \(h\) is a solution to the maximization problem.
**Theorem 4.4**.: _Let \(h\) be the maximizer obtained from Lemma 4.3. Then \(h\) is a generalized solution of_
\[\text{det}(\nabla^{2}h+hI)=\frac{fh^{p-1}}{C\widetilde{V}_{q-1}([h],\bar{\nabla} h)},\]
_where \(C=\frac{2q}{(n+q-1)\omega_{n}}\int fh^{p}.\)_
Proof.: Let \(h\) be the maximizer obtained from Lemma 4.3; we have \(1/C\leqslant h\leqslant C\) by (4.1). For any given rotationally symmetric and even \(g\in C(\mathbb{S}^{n-1}),\) let
\[K_{t}=\{x\in\mathbb{R}^{n}:x\cdot u\leqslant(h+tg)(u)\ \text{ for all }u\in\mathbb{S}^{n-1}\}\]
for \(|t|<\delta\) with \(\delta>0\) sufficiently small, so that \(h+tg>0\) and \(h_{t}\in C_{re}^{+}(\mathbb{S}^{n-1}),\) where \(h_{t}\) is the support function of \(K_{t}.\) Note that \(h_{0}=h,K_{0}=K.\) Let \(\lambda(t)=I_{q}(K_{t})^{-\frac{1}{n+q-1}},\) then \(I_{q}(\lambda(t)K_{t})=1,\) and \(\lambda^{\prime}(0)=-\frac{1}{n+q-1}\int_{\mathbb{S}^{n-1}}g(v)dF_{q}(K,v).\) Denote
\[\Psi_{p}(t)=\Phi_{p}(\lambda(t)h_{t}).\]
As \(h\) is a maximizer, the function \(t\mapsto\Psi_{p}(t)\) attains its maximum at \(t=0\). However, \(h_{t}\) may not be differentiable at \(t=0.\) Let
\[\psi_{p}(t)=\Phi_{p}(\lambda(t)(h+tg)).\]
Since \(K_{t}\) is the Wulff shape of \(h+tg,\) we have \(h_{t}\leqslant h+tg.\) Since \(f\) is nonnegative, \(\lambda(t)>0\), \(p<0\), and
\[\lambda(t)h_{t}\leqslant\lambda(t)(h+tg),\]
thus we have
\[\Psi_{p}(0)\geqslant\Psi_{p}(t)\geqslant\psi_{p}(t).\]
Since \(\Psi_{p}(0)=\psi_{p}(0),\)\(\psi_{p}(t)\) also attains its maximum at \(t=0.\) Hence we have
\[\lim_{t_{k}\to 0^{+}}\frac{\psi_{p}(t_{k})-\psi_{p}(0)}{t_{k}}\leqslant 0,\]
for any convergent subsequence \(\{t_{k}\}.\) Therefore, we have
\[\int_{S^{n-1}}ph(x)^{p-1}f(x)\left(\lambda^{\prime}(0)h+g\right)\leqslant 0,\]
since \(p<0\) and we can also replace \(g\) by \(-g,\) it follows that
\[\int fh(x)^{p-1}g=\frac{1}{n+q-1}\int fh(x)^{p}\int gdF_{q}(K,\cdot).\]
Since \(g\in C(\mathbb{S}^{n-1})\) is arbitrary, we conclude that
\[\text{det}(\nabla^{2}h+hI)=\frac{fh^{p-1}}{C\widetilde{V}_{q-1}([h],\bar{ \nabla}h)},\]
where \(C=\frac{2q}{(n+q-1)\omega_{n}}\int fh^{p}.\)
Now we can prove Theorem 1.4. Let \(h\) be the maximizer obtained from Lemma 4.3; after a suitable scaling, \(\alpha h\) solves (1.2) with \(\alpha=C^{\frac{1}{n+q-p-1}}\). Denote \(h_{0}=\alpha h\) and let \(K_{0}\) be its corresponding convex body. We already know that \(h_{0}\) is a generalized solution to (1.2). Now we want to estimate the chord integral of \(K_{0}.\) Since \(I_{q}(K_{0})=C^{\frac{n+q-1}{n+q-p-1}}I_{q}(K)=C^{\frac{n+q-1}{n+q-p-1}}\), we only need to bound \(C\) from below. By (4.1), \(h\) has a uniform upper bound. Hence, since \(p<0\), we have
\[C =\frac{2q}{(n+q-1)\omega_{n}}\int fh^{p}\] \[\geqslant c\frac{2q}{(n+q-1)\omega_{n}}\|f\|_{L^{1}(\mathbb{S}^{n -1})}\] \[>0.\]
The proof of Theorem 1.4 is completed.
## 5 Proof to Theorem 1.1
In this section, we aim to prove Theorem 1.1. It is a result which follows easily from Sections 3 and 4.
Proof.: When \(p<0\) and \(2<q<n+1\), for any given \(\epsilon\in(0,1/2)\), let
\[f_{\epsilon}(x):=h_{\epsilon}\left(x_{\epsilon}\right)^{1-p}\left|x^{\prime}\right|^{\alpha}\left|x_{n}\right|^{\beta}\left|N_{\epsilon}x\right|^{-\gamma-\alpha-n-p}\frac{1}{n}\int_{S^{n-1}}|N_{\epsilon}y|^{q-1-n}\rho_{K_{h_{\epsilon}},\bar{\nabla}h_{\epsilon}(x_{\epsilon})}^{q-1}(y)dy,\]
where \(h_{\epsilon},x_{\epsilon},\alpha,\beta,\gamma,N_{\epsilon}\) are as in Section 3. We already know that \(H_{\epsilon}\) is a generalized solution to equation (1.2) with \(f\) replaced by \(f_{\epsilon}.\) From the analysis of \(f_{\epsilon}\) in Section 3, we know that \(f_{\epsilon}\) satisfies the condition (1.3). Hence, by Theorem 1.4, we obtain a variational solution \(h_{0}\), which is also a generalized solution to equation (1.2) with \(f\) replaced by \(f_{\epsilon},\) and the \(q\)-th chord integral of its corresponding convex body has a uniform positive lower bound. That is
\[I_{q}(K_{h_{0}})\geqslant c>0,\]
where \(c\) depends only on \(n,p,q,\alpha,\beta\) and \(\gamma\) and is independent of \(\epsilon.\)
However, by Remark 3.4, we can choose \(\epsilon\) small enough such that
\[I_{q}(K_{H_{\epsilon}})<\frac{c}{2},\]
here \(c\) is the same as above. Hence, \(H_{\epsilon}\) and \(h_{0}\) are different solutions to equation (1.2).
Therefore, the proof of Theorem 1.1 is completed. |
2310.01191 | Unveiling Symmetries Patterns: A Study of Circular and Linear Harmonic
Oscillator Chains | The purpose of this article is the study of the symmetries in a circular and
linear harmonic oscillator chains system, and consequently use them as a means
to find the eigenvalues of these configurations. Furthermore, a hidden
$\mathbb{Z}_2$ group structure arises in both problems, showing how a
degenerate spectrum in the circular case is attributable to the specific
geometry producing a $\mathbb{Z}_N$ symmetry. | Edoardo Spezzano, Alberto Iommi | 2023-10-02T13:28:43Z | http://arxiv.org/abs/2310.01191v1 | # Unveiling Symmetries Patterns: A Study of Circular and Linear Harmonic Oscillator Chains
###### Abstract
The purpose of this article is the study of the symmetries in a circular and linear harmonic oscillator chains system, and consequently use them as a means to find the eigenvalues of these configurations. Furthermore, a hidden \(\mathbb{Z}_{2}\) group structure arises in both problems, showing how a degenerate spectrum in the circular case is attributable to the specific geometry producing a \(\mathbb{Z}_{N}\) symmetry.
## 1 Introduction
The investigation of normal modes in complex physical systems has long fascinated physicists and mathematicians alike. These normal modes represent natural oscillatory configurations, shedding light on a system's dynamic behavior and underlying symmetries. This article delves into two fundamental problems in classical physics, each offering profound insights into the world of oscillations.
The objective of this work is to explore and analyze the frequency spectra associated with two classical systems: the harmonic oscillator chains in both circular and linear set-ups. In each scenario, identical masses are interconnected by springs, all possessing the same spring constant. Despite their apparent differences, these configurations share common ground in the context of oscillation theory.
By analysing the frequency spectra of these systems, we aim to deepen our understanding of their intrinsic behavior, unveil the underlying symmetries, and explore their significance in the broader landscape of classical physics.
Problem Statement
Before we delve into the analysis of frequency spectra, we have to establish the fundamental mathematical equations and boundary conditions valid for both problems. We begin by examining the system's Lagrangian [1, 2]:
\[\mathcal{L}=\sum_{i=1}^{N}\frac{1}{2}m\dot{x}_{i}^{2}-\mathcal{U}_{\text{int}} \tag{1}\]
where \(x_{i}\) represent the coordinates of the respective particles and \(\mathcal{U}_{\text{int}}\) represents the interaction potential. We can obtain the equations of motion using the Euler-Lagrange equations, resulting in:
\[m\ddot{x}_{i}=-\frac{\partial\mathcal{U}_{\text{int}}}{\partial x_{i}} \tag{2}\]
Assuming the potential energy to be quadratic in terms of \(x_{i}\), we can simplify the problem to the following form:
\[\ddot{x}_{i}=\omega_{0}^{2}H_{ij}x_{j},\quad\omega_{0}^{2}\equiv\frac{k}{m} \tag{3}\]
with \(k\) representing the spring constant and \(H\) a real \(N\times N\) matrix. The solutions to this equation are linear combinations of the normal modes, which correspond to oscillations at well-defined frequencies. In fact, we can express them in the form \(x_{i}(t)=x_{i}^{0}e^{i\omega t}\), which, when substituted into the previous equation, yields an eigenvalue equation:
\[\lambda x_{i}^{0}=H_{ij}x_{j}^{0},\quad\lambda\equiv-\frac{\omega^{2}}{\omega _{0}^{2}} \tag{4}\]
In the following sections of this article we will delve into the methods for finding the eigenvalues \(\lambda\).
## 3 Theoretical Methods and Analytical Framework
As we have seen, the key to our analysis lies on finding the eigenvalues of the matrix \(H\). We will try to find them on both cases, using a group theory approach.
### Normal Modes in a Circular Chain
The matrix \(H_{c}\) for a circular chain in \(N\) dimension takes the following form:
\[H_{c}=\begin{pmatrix}-2&1&0&\cdots&0&0&1\\ 1&-2&1&0&\cdots&0&0\\ 0&1&-2&1&0&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&0&1&-2&1&0\\ 0&0&\cdots&0&1&-2&1\\ 1&0&0&\cdots&0&1&-2\end{pmatrix}_{N\times N} \tag{5}\]
We can express the matrix \(H_{c}\) as:
\[H_{c}=T+T^{-1}-2\mathbb{1} \tag{6}\]
where \(T\) is defined as:
\[T=\begin{pmatrix}0&0&0&\cdots&0&0&1\\ 1&0&0&\cdots&0&0&0\\ 0&1&0&\cdots&0&0&0\\ \vdots&\ddots&\ddots&\ddots&\vdots&\vdots&\vdots\\ 0&\cdots&0&1&0&0&0\\ 0&0&\cdots&0&1&0&0\\ 0&0&0&\cdots&0&1&0\end{pmatrix}_{N\times N} \tag{7}\]
We can observe that \(T\) is the matrix that shifts the mass positions by 1. Indeed, this operator corresponds to a symmetry of the problem, as demonstrated by the equation:
\[[H_{c},T]=0 \tag{8}\]
Furthermore, it is evident that this matrix constitutes a representation of the group \(\mathbb{Z}_{N}\)[3] (also commonly referred to as the _regular_ representation in the literature). Consequently, its eigenvalues are determined by \(\lambda^{N}=1\), resulting in \(\lambda_{k}=e^{i\frac{2\pi k}{N}}\) for \(k=0,\ldots,N-1\). Thus, the spectrum is given by:
\[\lambda=\lambda_{k}+\frac{1}{\lambda_{k}}-2 \tag{9}\]
and substituting the \(\lambda_{k}\) expression, we obtain:
\[\lambda=2\cos\frac{2k\pi}{N}-2=-4\sin^{2}\frac{k\pi}{N} \tag{10}\]
who leads to the well-known solution:
\[\omega(k)=2\omega_{0}\sin\frac{k\pi}{N},\quad k\in[-N+1,N-1] \tag{11}\]
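A quick numerical check of (11) (an illustrative sketch of ours, not part of the derivation): build \(H_{c}\) for a small \(N\) and compare its eigenvalues with \(-4\sin^{2}(k\pi/N)\), \(k=0,\dots,N-1\).

```python
import numpy as np

N = 8
Hc = -2 * np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
Hc[0, -1] = Hc[-1, 0] = 1                      # periodic boundary closes the ring

numeric = np.sort(np.linalg.eigvalsh(Hc))
analytic = np.sort([-4 * np.sin(k * np.pi / N) ** 2 for k in range(N)])
print(np.allclose(numeric, analytic))          # expected: True
```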
We can observe that the formula implies a degeneracy in the spectrum, caused by the presence of an additional symmetry in the problem. Specifically, one can see that the so-called exchange matrix [4]:
\[J=\begin{pmatrix}0&0&\cdots&0&1\\ 0&\cdots&0&1&0\\ \vdots&\cdots&\cdots&0&\vdots\\ 0&1&\cdots&\vdots&0\\ 1&0&\cdots&0&0\end{pmatrix}_{N\times N} \tag{12}\]
satisfies \([H_{c},J]=0\), but \([J,T]\neq 0\)1, showing that, as just said, the spectrum is necessarily degenerate (see Theorem [5]). Additionally, another intriguing property of this operator is that \(J^{2}=\mathbb{1}\), so it constitutes a representation of the group \(\mathbb{Z}_{2}\). Taking all these results into account, we can see that the symmetry group in the case of a circular chain is \(\mathbb{Z}_{N}\times\mathbb{Z}_{2}\).
Footnote 1: More precisely, this occurs for \(N\geq 3\). Indeed for the case of \(N=2\), we have \(J=T\).
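The commutation relations underlying this discussion can be verified numerically as well (again an illustrative sketch of ours):

```python
import numpy as np

N = 8
Hc = -2 * np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
Hc[0, -1] = Hc[-1, 0] = 1
T = np.roll(np.eye(N), 1, axis=0)              # cyclic shift of the mass labels
J = np.fliplr(np.eye(N))                       # exchange (reflection) matrix

print(np.allclose(Hc @ T, T @ Hc))             # [H_c, T] = 0, prints True
print(np.allclose(Hc @ J, J @ Hc))             # [H_c, J] = 0, prints True
print(np.allclose(J @ T, T @ J))               # prints False: J and T do not commute for N >= 3
```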
### Normal Modes in a Linear Chain
In the case of a linear chain, the matrix \(H_{l}\) takes the following form:
\[H_{l}=\begin{pmatrix}-2&1&0&\cdots&0&0&0\\ 1&-2&1&0&\cdots&0&0\\ 0&1&-2&1&0&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&0&1&-2&1&0\\ 0&0&\cdots&0&1&-2&1\\ 0&0&0&\cdots&0&1&-2\end{pmatrix}_{N\times N} \tag{13}\]
Similarly to the circular example, we can inquire whether the system exhibits symmetries or not. It is not difficult to see that in this case too, we have \([H_{l},J]=0\), indicating that the system possesses at least a \(\mathbb{Z}_{2}\) symmetry. To understand the full symmetry group present, initially notice that the following result holds:
**Lemma**.: _Given a \(N\times N\) matrix \(\mathcal{O}\), it holds:_
\[[H_{l},\mathcal{O}]=0\Leftrightarrow\mathcal{O}=\sum_{n=0}^{N-1}c_{n}U_{n} \left(\frac{\mathcal{H}}{2}\right),\quad\mathcal{H}=H_{l}+2\mathbb{1}\,, \tag{14}\]
_where the functions \(U_{n}(x)\) are the Chebyshev polynomials of the second kind [6]._
For a proof, refer to the Appendix.
As a corollary of this lemma, we see that the symmetry group is certainly abelian. Therefore, by the classification theorem for finite abelian groups [7], our attention is drawn to \(\mathbb{Z}_{n}\). However, it becomes evident that we cannot have a symmetry group with \(n\geq 3\) because \([H_{l},T]\neq 0\). Thus, the only symmetry group is generated by \(J\), which is \(\mathbb{Z}_{2}\).
In conclusion, let us explore a method to determine the spectrum. This involves deriving the following recurrence relation satisfied by the characteristic polynomials:
\[P_{N}(\lambda)=\lambda P_{N-1}(\lambda)-P_{N-2}(\lambda) \tag{15}\]
This recurrence relation is similar to the one satisfied by the Chebyshev polynomials cited above. Specifically, the connection between them is as follows:
\[P_{N}(x)=U_{N}\left(\frac{x}{2}\right) \tag{16}\]
From the above equation, we notice that the solutions of the secular equation \(P_{N}(x)=0\) coincide with the roots of the Chebyshev polynomials, which are given by:
\[x_{k}=\cos\left(\frac{k\pi}{N+1}\right),\quad k=1,\ldots,N \tag{17}\]
Hence, the eigenvalues of our problem are determined as:
\[\lambda=2\left[\cos\left(\frac{k\pi}{N+1}\right)-1\right]=-4\sin^{2}\left( \frac{k\pi}{2(N+1)}\right) \tag{18}\]
leading to:
\[\omega(k)=2\omega_{0}\sin\left(\frac{k\pi}{2(N+1)}\right),\quad k=1,\ldots,N \tag{19}\]
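As before, (19) can be checked numerically; in the sketch below (ours) the modes are indexed by \(k=1,\dots,N\), matching the roots of \(U_{N}\).

```python
import numpy as np

N = 8
Hl = -2 * np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)

numeric = np.sort(np.linalg.eigvalsh(Hl))
analytic = np.sort([-4 * np.sin(k * np.pi / (2 * (N + 1))) ** 2 for k in range(1, N + 1)])
print(np.allclose(numeric, analytic))          # expected: True
```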
It is worth noting that the spectrum is very similar to that obtained in the case of the circular chain. This similarity is discussed in more detail in the next section.
### Note on the Anti-Commutator
Notably, the diagonal matrix:
\[S=\begin{pmatrix}1&0&0&0&\cdots&0\\ 0&-1&0&0&\cdots&0\\ 0&0&1&0&\cdots&0\\ 0&0&0&-1&\ddots&\vdots\\ \vdots&\vdots&\vdots&\ddots&\ddots&0\\ 0&0&0&\cdots&0&(-1)^{N+1}\end{pmatrix}_{N\times N} \tag{20}\]
in the case of a linear chain satisfies the following relationship:
\[\{H_{l},S\}=-4S. \tag{21}\]
So, due to the anti-commutation property, if \(\lambda\) is an eigenvalue of \(H\), then \(-4-\lambda\) is necessarily also an eigenvalue, as expressed by:
\[H_{l}(S\left|n\right\rangle)=(-\lambda-4)(S\left|n\right\rangle) \tag{22}\]
This relationship is also valid in the case of the circular chain with \(N\) even. In fact, it can be easily shown that \(\{H_{c,2N},S_{2N}\}=-4S_{2N}\). This result is anticipated by examining the two spectra given by the equations (10) and (18).
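Both the anti-commutation relations and the induced pairing \(\lambda\mapsto-4-\lambda\) of eigenvalues are easily confirmed numerically (illustrative sketch):

```python
import numpy as np

N = 8                                          # even, so the circular case applies as well
Hl = -2 * np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
Hc = Hl.copy()
Hc[0, -1] = Hc[-1, 0] = 1
S = np.diag([(-1.0) ** i for i in range(N)])   # alternating-sign diagonal matrix

print(np.allclose(Hl @ S + S @ Hl, -4 * S))    # {H_l, S} = -4S, prints True
print(np.allclose(Hc @ S + S @ Hc, -4 * S))    # {H_{c,2N}, S_{2N}} = -4S_{2N}, prints True
lam = np.sort(np.linalg.eigvalsh(Hl))
print(np.allclose(lam, np.sort(-4 - lam)))     # spectrum invariant under lambda -> -4 - lambda
```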
## 4 Conclusions
As for the circular chain, we identified a \(\mathbb{Z}_{N}\times\mathbb{Z}_{2}\) symmetry, simplifying the resolution of the eigenvalue problem. This symmetry facilitated a more streamlined approach, employing tailored mathematical tools. In the case of the linear chain, we uncovered a \(\mathbb{Z}_{2}\) symmetry and determined its spectrum, highlighting the crucial role of Chebyshev polynomials.
An intriguing feature emerges from the eigenvalues due to the inherent anti-symmetry relationship between the Hamiltonian operator (\(H\)) and the operator \(S\), introducing distinctive patterns into the eigenvalue spectra. We observed an analogy between the linear chain and the even case of the circular chain, arising from the anti-commutation.
## Appendix: Proof of the Lemma
To demonstrate (14), we first observe that proving that a given operator \(M\) commutes with \(H_{l}\) is equivalent to proving that it commutes with \(\mathcal{H}=H_{l}+2\mathbb{1}\). This equivalence becomes evident from the following relation:
\[[\mathcal{H},M]=0\iff[H_{l},M]=0 \tag{23}\]
Now, considering the form \(\mathcal{H}_{ij}=\delta_{i-1,j}+\delta_{i+1,j}\) and the equation above, it follows that the matrix \(M\) must satisfy:
\[M_{i-1,j}+M_{i+1,j}=M_{i,j-1}+M_{i,j+1} \tag{24}\]
This property is noteworthy, in particular it implies that:
* \(M_{ij}=M_{ji}\), symmetry with respect to the main diagonal
* \(M_{ij}=M_{N+1-j,N+1-i}\), symmetry with respect to the antidiagonal
Both of these conditions can be proven by examining the previous equations at the respective matrix vertices.
Given these conditions, it is evident that the dimension of the vector space is significantly smaller than the space of symmetric matrices; in particular, it is easy to see that it is equal to \(N\).
At this point, the idea is to construct a basis for this vector space. It becomes evident that the previous equations impose strong constraints on the construction of matrices, leaving only one possible approach. For instance, when attempting to construct the first element of the basis as follows:
\[\begin{pmatrix}1&0&0&\cdots&0\\ *&*&*&*&*\\ *&*&*&*&*\\ *&*&*&*&*\end{pmatrix} \tag{25}\]
it is necessary to construct the identity matrix. The same holds true for the second element; in fact, aiming to construct:
\[\begin{pmatrix}0&1&0&\cdots&0\\ *&*&*&*&*\\ *&*&*&*&*\\ *&*&*&*&*\end{pmatrix} \tag{26}\]
one is constrained to create the matrix \(\mathcal{H}\). In general, it is not difficult to realize that, given \(N\), a potential basis is provided by the following set: \(\{P_{0}(\mathcal{H}),P_{1}(\mathcal{H}),\ldots,P_{N-1}(\mathcal{H})\}\), where
\[P_{N}(x)=\sum_{k=0}^{\lfloor N/2\rfloor}(-1)^{k}\binom{N-k}{k}x^{N-2k} \tag{27}\]
It turns out that this is actually a way to represent the Chebyshev polynomials of the second kind, and the connection is expressed as \(P_{n}(2x)=U_{n}(x)\). These polynomials are known to be linearly independent of each other.
In summary, if one wishes to construct a matrix \(M\) that commutes with \(\mathcal{H}\), it must necessarily be written as:
\[M=\sum_{n=0}^{N-1}a_{n}P_{n}(\mathcal{H}) \tag{28}\]
This completes the proof in the \(\Rightarrow\) direction. The proof in the \(\Leftarrow\) direction is straightforward.
It is worth noting that \(P_{N}(\mathcal{H})=0\) as a consequence of the Cayley-Hamilton theorem [8] (see (15)), providing a consistency check to the lemma.
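Both facts, together with the linear independence of the basis, can also be checked numerically; in the sketch below (ours) the matrices \(P_{n}(\mathcal{H})\) are built directly from the recurrence (15).

```python
import numpy as np

N = 7
Hl = -2 * np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
H = Hl + 2 * np.eye(N)                         # the shifted matrix from the lemma

# P_0 = 1, P_1 = x, P_{n+1}(x) = x P_n(x) - P_{n-1}(x), evaluated at x = H
P = [np.eye(N), H.copy()]
for _ in range(2, N + 1):
    P.append(H @ P[-1] - P[-2])

print(np.allclose(P[N], 0))                                              # P_N(H) = 0 (Cayley-Hamilton)
print(all(np.allclose(Hl @ M, M @ Hl) for M in P[:N]))                   # each P_n(H) commutes with H_l
print(np.linalg.matrix_rank(np.stack([M.ravel() for M in P[:N]])) == N)  # the basis is independent
```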
## Acknowledgments
We would like to extend our heartfelt gratitude to Daniel Loni for meticulously reading the article and offering invaluable feedback and insights. His careful review significantly enhanced the quality of our work.
|
2304.05268 | An Entity-based Claim Extraction Pipeline for Real-world Biomedical
Fact-checking | Existing fact-checking models for biomedical claims are typically trained on
synthetic or well-worded data and hardly transfer to social media content. This
mismatch can be mitigated by adapting the social media input to mimic the
focused nature of common training claims. To do so, Wuehrl & Klinger (2022)
propose to extract concise claims based on medical entities in the text.
However, their study has two limitations: First, it relies on gold-annotated
entities. Therefore, its feasibility for a real-world application cannot be
assessed since this requires detecting relevant entities automatically. Second,
they represent claim entities with the original tokens. This constitutes a
terminology mismatch which potentially limits the fact-checking performance. To
understand both challenges, we propose a claim extraction pipeline for medical
tweets that incorporates named entity recognition and terminology normalization
via entity linking. We show that automatic NER does lead to a performance drop
in comparison to using gold annotations but the fact-checking performance still
improves considerably over inputting the unchanged tweets. Normalizing entities
to their canonical forms does, however, not improve the performance. | Amelie Wührl, Lara Grimminger, Roman Klinger | 2023-04-11T15:07:24Z | http://arxiv.org/abs/2304.05268v1 | # An Entity-based Claim Extraction Pipeline for
###### Abstract
Existing fact-checking models for biomedical claims are typically trained on synthetic or well-worded data and hardly transfer to social media content. This mismatch can be mitigated by adapting the social media input to mimic the focused nature of common training claims. To do so, Wuhrl and Klinger (2022) propose to extract concise claims based on medical entities in the text. However, their study has two limitations: First, it relies on gold-annotated entities. Therefore, its feasibility for a real-world application cannot be assessed since this requires detecting relevant entities automatically. Second, they represent claim entities with the original tokens. This constitutes a terminology mismatch which potentially limits the fact-checking performance. To understand both challenges, we propose a claim extraction pipeline for medical tweets that incorporates named entity recognition and terminology normalization via entity linking. We show that automatic NER does lead to a performance drop in comparison to using gold annotations but the fact-checking performance still improves considerably over inputting the unchanged tweets. Normalizing entities to their canonical forms does, however, not improve the performance.
## 1 Introduction
Fact-checking models trained on synthetic, well-worded and atomic claims struggle to transfer to colloquial content Kim et al. (2021). There are multiple ways to address this problem: We can build custom datasets and models that verify medical content shared online Saakyan et al. (2021); Mohr et al. (2022); Sarrouti et al. (2021) and tackle related tasks Sundriyal et al. (2022); Dougrez-Lewis et al. (2022). Alternatively, we can adapt the input before addressing other fact-checking tasks. Bhatnagar et al. (2022) create claim summaries and find that this improves the detection of previously fact-checked claims. Similarly, Wuhrl and Klinger (2022) extract concise claims from user-generated text in an effort to mimic the focused, well-structured nature of the claims the fact-checking models were originally trained on. They find that this improves the accuracy of pretrained evidence-based fact-checking models in the biomedical domain.
However, the study by Wuhrl and Klinger (2022) is limited in two ways: (1) Their claim extraction method relies on gold-annotated, claim-related entities. For a realistic evaluation, such an oracle needs to be replaced by an entity recognizer. Only then it is possible measure the impact of potential error propagation which may ultimately render the method unfeasible. (2) The claim entities are represented by the original token sequence. This is problematic as medical mentions on Twitter potentially contain imprecise, abbreviated, or colloquial terminology. This is in contrast to the terminology in the original model input as well as the documents that we provide as evidence (cf. Table 1). We hypothesize that for a successful fact-check we need to close this gap by normalizing medical terminology in the input. Previous work suggested leveraging entity linking for evidence retrieval Nooralahzadeh and Ovrelid (2018); Taniguchi et al. (2018); Hanselowski et al. (2018) leading us to believe that it could also be beneficial for aligning claim and evidence.
We address both limitations and evaluate a real-world, fully-automatic claim extraction pipeline for
\begin{table}
\begin{tabular}{l l l} \hline \hline & Claim & Evidence \\ \hline original entities & medicines causes blood cts & \\ normalized entities & pharmaceutical preparations & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Example claim represented with original and normalized entities together with evidence.
medical tweets which incorporates an entity recognizer. It only relies on the original text as input that contains the claim. We further evaluate the impact of an entity linker for normalizing entity mentions to canonical forms based on the Unified Medical Language System (UMLS, Bodenreider, 2004). Our pipeline improves the fact-checking performance over tasking models to check unchanged tweets. Normalizing entities to overcome the terminology mismatch does not improve fact-checking, potentially due to limitations of biomedical entity linking for social media.
## 2 Methods
Figure 1 visualizes our pipeline. It takes text as input and performs _named entity recognition_ and optionally term _normalization_ via entity linking. Each unique entity pair forms the building blocks for a potential claim (_claim candidate generation_). The _main claim detection_ identifies the core claim among the candidates that presumably represents the most important aspect of the text. The resulting claim is the input to the fact-checker. In our setting, we assume this to be a frozen pre-trained fact-checking model. We describe the modules in the following and the fact-checker in Section 3.2.
**NER.** We use the spaCy environment1 to train a custom NER model that detects medical entities. This framework relies on a transition-based parser (Lample et al., 2016) to predict entities in the input. In a preliminary study, we found that relying on an off-the-shelf model for biomedical NER, i.e., scispaCy (Neumann et al., 2019), does not transfer to medical texts from social media. Refer to Appendix B.1 for a comparison of the two models.
Footnote 1: [https://spacy.io/api/architectures#TransitionBasedParser](https://spacy.io/api/architectures#TransitionBasedParser)
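At inference time, applying the trained recognizer reduces to standard SpaCy usage; the model path below is a hypothetical output of SpaCy's training CLI, not a released artifact.

```python
import spacy

# The model is assumed to be produced by SpaCy's standard training CLI, e.g.
#   python -m spacy train config.cfg --output models/bear_ner
# "models/bear_ner" is a hypothetical path used only for illustration.
nlp = spacy.load("models/bear_ner/model-best")

def medical_entities(text):
    """Return character-level spans of the predicted medical entities."""
    doc = nlp(text)
    return [(ent.start_char, ent.end_char, ent.label_, ent.text) for ent in doc.ents]
```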
**Claim candidate generation.** Wuhrl and Klinger (2022a) propose two extraction methods, i.e., \(\text{condense}_{\text{seq}}\) and \(\text{condense}_{\text{triple}}\). The first represents the claim as the token sequence from the first entity to the last entity, while the second relies on gold-annotated causal relations which they use to build the claims. We use the sequence method \(\text{condense}_{\text{seq}}\) in our pipeline because both methods show on-par performance (a difference of 1 pp F\({}_{1}\)) and, in contrast to \(\text{condense}_{\text{triple}}\), it does not require relation classification.
Following the \(\text{condense}_{\text{seq}}\) method, we therefore extract the sequence from the character onset of the first entity to the character offset of the second entity for all pairs of entities found by the NER module.
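A minimal sketch of this pairing step (pure Python, assuming character-level entity spans from the NER module):

```python
from itertools import combinations

def condense_seq_candidates(text, entity_spans):
    """One claim candidate per entity pair: the span from the character onset of the
    earlier entity to the character offset of the later one.
    entity_spans: iterable of (start_char, end_char) tuples from the NER module."""
    spans = sorted(entity_spans)
    return [text[a[0]:b[1]] for a, b in combinations(spans, 2)]
```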
Entity linking. To normalize entities, we use the _EntityLinking_ component in ScispaCy (Neumann et al., 2019). This model compares an entity mention to concepts in an ontology and creates a ranked list of candidates, based on an approximate nearest neighbor search. For text normalization, we retrieve the canonical name of the top concept. For entities which could not be linked, we use the original mention instead. As the knowledge base, we use UMLS (Bodenreider, 2004).
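A sketch of this normalization step, assuming a recent ScispaCy version in which the linker is registered as the `scispacy_linker` pipe backed by the UMLS knowledge base:

```python
import spacy
from scispacy.linking import EntityLinker  # noqa: F401  (registers the pipe factory)

nlp = spacy.load("en_core_sci_sm")
nlp.add_pipe("scispacy_linker",
             config={"resolve_abbreviations": True, "linker_name": "umls"})
linker = nlp.get_pipe("scispacy_linker")

def normalize(mention):
    """Replace a medical mention by the canonical UMLS name of its top-ranked concept;
    fall back to the original surface string if nothing can be linked."""
    doc = nlp(mention)
    for ent in doc.ents:
        if ent._.kb_ents:                               # ranked (CUI, score) candidates
            cui, _score = ent._.kb_ents[0]
            return linker.kb.cui_to_entity[cui].canonical_name
    return mention
```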
Main claim detection. For tweets with more than two predicted entities, claim generation produces multiple claim candidates. To identify the claim to be passed to the fact-checking module, we train a text classifier to detect the main claim for a given input. We build on RoBERTArg2, a RoBERTa-based text classification model trained to label input texts as argument or non-argument. We fine-tune this model to classify texts as claim vs. non-claim and to fit the social media health domain. At inference time, the claim candidate with the highest probability for the claim class constitutes the main claim. We refer to this as _ner+core-claim_.
Footnote 2: [https://huggingface.co/chkla/roberta-argument](https://huggingface.co/chkla/roberta-argument)
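At inference time, selecting the main claim amounts to scoring all candidates with the fine-tuned classifier and keeping the most confident one. In the sketch below, the checkpoint path is a placeholder for the fine-tuned model, and label index 1 is assumed to correspond to the claim class.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "models/claim-detector" is a placeholder for the RoBERTArg model after fine-tuning
# on the claim/non-claim data; label index 1 is assumed to be the claim class.
tokenizer = AutoTokenizer.from_pretrained("models/claim-detector")
model = AutoModelForSequenceClassification.from_pretrained("models/claim-detector")
model.eval()

def main_claim(candidates):
    """Return the candidate with the highest predicted probability of being a claim."""
    inputs = tokenizer(candidates, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    claim_prob = torch.softmax(logits, dim=-1)[:, 1]
    return candidates[int(claim_prob.argmax())]
```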
Figure 1: Overview of the claim extraction pipeline. Input documents go through entity recognition (NER), normalization, claim candidate generation, main claim detection and fact-checking. Colored boxes represent the entities which we use to extract claim candidates. Note that we evaluate the normalization module separately from the evaluation of the rest of the pipeline (see §3).
## 3 Experiments
### Data
CoVert. We use the CoVert dataset (Mohr et al., 2022) to test our pipeline. It consists of medical tweets labeled with fact-checking verdicts (Supports, Refutes, not enough information) and associated evidence texts. We follow the same filtering and preprocessing as Wuhrl and Klinger (2022), which leaves us with 264 tweets. For 13 tweets, the NER model predicts only one or no entities. In these cases, we cannot generate claim candidates; thus, we can only consider 251 claims.
Bear. We require an independent dataset to train the NER component. We find the Bear dataset (Wuhrl and Klinger, 2022) to be closest in domain and text type to the target data from CoVert. Bear provides 2100 tweets with a total of 6324 annotated medical entities from 14 entity classes. We use 80% of the data for training and 20% for testing the model.
Causal Claims. To build a classifier that identifies the core claims, we use the Causal Claims data from SemEval-2023 Task 8, Subtask 1.3 It consists of medical Reddit posts and provides span-level annotations for _Claim_, _Experience_, _Experience based claim_ and _Question_. Our goal is to differentiate claims from non-claims. Consequently, we extract all spans labeled as _Claim_ and _Experience based claim_ as positive instances for the claim class and use the remaining text spans as negative examples. This leads to 1704 claim and 6870 non-claim spans. We use a train/test split of 90/10%.
Footnote 3: [https://causalclaims.github.io/](https://causalclaims.github.io/)
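A minimal sketch of this preparation step; the structure of the annotated spans (a list of text/label pairs) is an assumption for illustration and does not reflect the task's actual file format.

```python
from sklearn.model_selection import train_test_split

# `annotated_spans` is assumed to be a list of (span_text, label) pairs.
POSITIVE_LABELS = {"Claim", "Experience based claim"}

def build_claim_dataset(annotated_spans, seed=42):
    texts = [text for text, _ in annotated_spans]
    labels = [1 if label in POSITIVE_LABELS else 0 for _, label in annotated_spans]
    # 90/10 train/test split, stratified on the binary claim label.
    return train_test_split(texts, labels, test_size=0.1,
                            random_state=seed, stratify=labels)
```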
### Evaluation
The fact-checking module serves as a by-proxy evaluation for the claim representations. Provided with a claim-evidence pair, the system predicts a fact-checking verdict that indicates if the evidence Supports or Refutes the claim. We assume that the fact-checker is a frozen model for which we adapt the claim input. To gauge the checkability of a particular input, we compare the performance for predicting the correct verdict when the model is presented with claims of this type. This follows the evaluation in Wuhrl and Klinger (2022).
The fact-checking models we employ stem from the MultiVerS architecture (Wadden et al., 2022).4 This framework is designed for scientific fact-verification and provides five models (_fever, fever_sci, scifact, covidfact, healthcare_), differing in training data. We report precision, recall and F\({}_{1}\) for predicting the correct fact-checking verdict (Supports, Refutes, not enough information) for a given claim-evidence pair.
Footnote 4: [https://github.com/dwadden/multivers](https://github.com/dwadden/multivers)
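Schematically, the evaluation scores the verdicts predicted by a frozen fact-checker against the gold labels. The sketch below is one straightforward way to compute such scores; the label strings are placeholders, and the exact aggregation is not tied to that of prior work.

```python
from sklearn.metrics import precision_recall_fscore_support

def evaluate(fact_checker, claim_evidence_pairs, gold_verdicts,
             labels=("SUPPORT", "REFUTE", "NEI")):
    """Score a frozen fact-checker on (claim, evidence) pairs against gold verdicts.
    The label strings are placeholders for the dataset's actual verdict names."""
    predictions = [fact_checker(claim, evidence)
                   for claim, evidence in claim_evidence_pairs]
    p, r, f1, _ = precision_recall_fscore_support(
        gold_verdicts, predictions, labels=list(labels), zero_division=0)
    return {label: {"precision": p[i], "recall": r[i], "f1": f1[i]}
            for i, label in enumerate(labels)}
```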
### Exp. 1: Impact of NER
In Exp. 1, we aim to understand the impact of automatic NER and main claim detection in the pipeline, instead of relying on gold-labeled entities.
Table 2 reports the results for our fully automatic claim extraction pipeline. Each column reports the performance for a specific type of input claim. _Full tweets_ is the performance as reported by Wuhrl and Klinger (2022) for the unchanged input tweets. The results denoted with \(\text{condense}_{\text{seq}}\) describe their results with gold annotations, to which we compare.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{Input Claim} \\ \cline{2-13} & \multicolumn{4}{c}{Gold entities} & \multicolumn{6}{c}{Fully automatic (Ours)} \\ \cline{2-13} & \multicolumn{4}{c}{\(\text{condense}_{\text{seq}}\)} & \multicolumn{4}{c}{full tweets} & \multicolumn{4}{c}{ner\(+\)rand-ent-seq} & \multicolumn{4}{c}{ner\(+\)core-claim} \\ \cline{2-13} model & P & R & F\({}_{1}\) & \(\Delta_{\text{full}}\) & P & R & F\({}_{1}\) & P & R & F\({}_{1}\) & \(\Delta_{\text{full}}\) & P & R & F\({}_{1}\) & \(\Delta_{\text{full}}\) \\ \hline fever & 83.3 & 1.9 & 3.7 & +3.7 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & +0 & 100 & 0.4 & 0.8 & +0.8 \\ fever\_sci & 87.2 & 15.5 & 26.4 & +18.4 & 91.7 & 4.2 & 8.0 & 92.3 & 4.7 & 9.0 & +1.0 & 82.4 & 5.6 & 10.4 & +2.4 \\ scifact & 90.9 & 7.6 & 14.0 & +13.2 & 100 & 0.4 & 0.8 & 100 & 2.4 & 4.6 & +3.8 & 100 & 2.4 & 4.7 & +3.9 \\ covidfact & 55.6 & 28.4 & 37.6 & +29.7 & 30.8 & 4.5 & 7.9 & 53.3 & 9.4 & 16.1 & +8.2 & 58.1 & 14.3 & 23.0 & +15.1 \\ healthcare & 85.9 & 48.5 & 62.0 & +16.8 & 82.8 & 31.1 & 45.2 & 75.6 & 23.2 & 35.5 & -9.7 & 77.4 & 28.7 & 41.9 & \(-3.3\) \\ average & 80.6 & 20.4 & 28.7 & +16.3 & 61.1 & 8.0 & 12.4 & 64.2 & 7.9 & 13.0 & +0.6 & 83.6 & 10.1 & 16.2 & +3.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance (precision, recall and F\({}_{1}\)) of MultiVerS-based models (_fever, fever_sci, scifact, covidfact, healthcare_) on CoVert data. Model inputs are the full tweets, the entity-based sequence claims (\(\text{condense}_{\text{seq}}\)(Wührl and Klinger, 2022)), and claims from the fully automatic pipeline, _ner\(+\)rand-ent-seq_ and _ner\(+\)core-claim_. \(\Delta_{\text{full}}\) : difference in F\({}_{1}\) between the full tweet and performance for the respective input claim. We report the average across all models in the last row.
Our main results are in the last column (_ner\(+\)core-claim_). To understand the impact of the main claim detection, we compare against a purely random selection of the main claim from all candidates in the tweet (_ner\(+\)rand-ent-seq_).
The rows correspond to the various fact-checking models. \(\Delta\) columns report the difference in \(\mathrm{F}_{1}\) between the performance of checking the full tweet and the respective claim representation.
_ner\(+\)core-claim_ shows an average performance of \(\mathrm{F}_{1}=16.2\). The performance varies across the models. The _healthVer_ model performs best (41.9 \(\mathrm{F}_{1}\)). The average is considerably higher than using the full tweets (\(\Delta\)=3.8 pp \(\mathrm{F}_{1}\)). This improvement is consistent across all models, except for _healthVer_, presumably because it already shows a high performance for the original texts. To better understand the model behavior, we provide an analysis of its predictions in Appendix B.3. We see a particularly strong impact for the _covidfact_ model, with \(\Delta\)=15.1 pp. Despite this positive result, we see a performance drop when integrating entity recognition instead of building claim extraction on gold entity annotations. This decrease is not surprising since we expect some error propagation from an imperfect entity recognizer. Nevertheless, the results show that entity-based claim extraction also increases the fact-checking performance even under some error propagation throughout the real-world pipeline.
We further see that main claim detection is a required module - the performance for a randomly selected claim (_ner\(+\)rand-ent-seq_) is substantially lower. This indicates that using the same evidence and fact-checking model, not all potential claims in a tweet would receive the same verdict.
### Exp. 2: Impact of Entity Normalization
In Exp. 2, we investigate if it is beneficial to assimilate the linguistic realizations of medical mentions to the expected input of the fact-checking models. More specifically, we suggest normalizing entity strings in the input. In contrast to Exp. 1, in which we evaluate the overall pipeline, we focus on the aspect of the entities here and therefore do not make use of the core claim detection method or the entity recognizer. Instead we build on top of gold annotations and, consequently, employ \(\mathrm{condense}_{\text{triple}}\) described in Section 2.
We use entity linking for term normalization and use ScispaCy's entity linking functionality with _en_core_sci_sm_ as the underlying model (Neumann et al., 2019). For each (gold) entity, we use the canonical name of the concept with the highest linking score. Subsequently, we follow the \(\mathrm{condense}_{\text{triple}}\) method to represent claims.
Table 3 reports the results for claims built with non-normalized (_surface string_) vs. normalized entities (_normalized ent._). The results indicated as \(\mathrm{condense}_{\text{triple}}\)_surface string_ are analogous to the results in Wuhrl and Klinger (2022). We see that normalization does not have the desired effect: The verdict prediction performance drops across all of the fact-checking models (from 29.7 to 22.6 in avg. \(\mathrm{F}_{1}\)). We assume that this is, to a considerable extent, due to entity linking being a challenging task, which leads to limited performance of the employed linking module. We present an error analysis in Appendix B.4.
## 4 Conclusion & Future Work
We propose a fully automatic claim extraction pipeline that is capable of handling real-world medical content. We show that entity-based claim extraction has a positive effect on the performance of multiple fact-checking models - even after replacing the entity oracle with automatic NER. While we observe a negative impact of error propagation from NER and a performance drop as a result, fact-checking the extracted claims is more successful than checking unchanged tweets. Future research may therefore focus on improving the pipeline components as this clearly has the potential to further strengthen the verdict prediction performance. In particular, we expect an improved entity recognizer to have a considerable impact.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{\(\mathrm{condense}_{\text{triple}}\) Claims} \\ \cline{2-7} & \multicolumn{3}{c}{surface string} & \multicolumn{3}{c}{normalized ent.} \\ \cline{2-7} model & P & R & \(\mathrm{F}_{1}\) & P & R & \(\mathrm{F}_{1}\) \\ \hline fever & 81.8 & 3.4 & 6.5 & 75.0 & 1.1 & 2.2 \\ fever\_sci & 89.8 & 20.1 & 32.8 & 93.9 & 11.7 & 20.9 \\ scifact & 86.4 & 7.2 & 13.3 & 94.4 & 6.4 & 12.1 \\ covidfact & 65.0 & 30.3 & 41.3 & 61.8 & 20.8 & 31.2 \\ healthver & 79.7 & 41.7 & 54.7 & 85.7 & 31.8 & 46.4 \\ average & 80.5 & 20.5 & 29.7 & 82.2 & 14.4 & 22.6 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance (precision, recall and \(\mathrm{F}_{1}\)) of MultiVerS-based fact-checking models (_fever_, fever_sci, scifact, covidfact, healthver_) on CoVert claims built with non-normalized (surface string) vs. normalized entities. We report the average across all models in the last row.
Our work focuses on the biomedical domain and builds upon the assumption by Wuhrl and Klinger (2022) that claims in this domain are strongly centered around entities. Claims from other domains may share this property which could make entity-based claim extraction applicable for such claims as well. We leave the evaluation for future work.
We find that normalizing entity mentions does not improve the fact-checking performance. However, our analysis shows that the off-the-shelf linking module might be too unreliable. To fully gauge the potential of normalizing entities, future work needs to ensure correct mappings (creating gold links or building a reliable linker) before evaluating the downstream fact-checking performance.
## Acknowledgments
This research has been conducted as part of the FIBISS project which is funded by the German Research Council (DFG, project number: KL 2869/5-1). We thank the anonymous reviewers for their valuable feedback.
## Limitations
Our work focused on evaluating the impact of putting together a set of components to achieve a real-world system for fact-checking. For answering the research question at hand, the components offered themselves as appropriate choices. This being said, to some degree, the particular selection may limit the expressiveness of the experiments.
By instantiating the pipeline components with the set of models and underlying data that we chose, our findings are limited to this setting. However, the analysis that we provide in Appendix B dissects the pipeline results and allows us to draw more general conclusions about the impact of replacing individual components.
We propose that main claim detection receive more attention in future research. This may mitigate the issue that this module is potentially the least transparent component. Compared to the NER, this task can be modeled in various ways. We rely on the output probabilities to identify the claim candidate the model is most confident about. While this is a straightforward approach and we show that it works as intended, prediction probabilities - especially for deep models - may not always be a distinctive indicator of model confidence. To overcome this limitation, alternative ways of detecting the main claim should be evaluated.
## Ethical Considerations
A real-world fact-checking pipeline presents itself as a valuable tool. However, we advise against using the pipeline fully automatically at this point in time. Unless they are used hand-in-hand with a human expert performing or supervising the fact-check, such systems are not reliable enough yet.
Potential issues are the result of the inherent opaqueness of sophisticated automatic analysis pipelines. In the system that we propose, it is important that the impact of each module is transparent to the user. While there is recent work on explainability, particularly in the area of fact-checking, this work did not yet focus on entity-based approaches. It is important that a user can clearly understand which claim in a statement is checked and which risks potential error propagation might introduce. Therefore, before deploying such systems for fully automatic filtering or labeling of problematic messages in social media content, there needs to be more research on explainability and transparency of such systems.
|
2307.04426 | The Brezis-Nirenberg problem in 4D | The problem
\begin{equation}
\label{bn}
-\Delta u=|u|^{4\over n-2}u+\lambda V u\ \hbox{in}\ \Omega,\ u=0\ \hbox{on}\
\partial\Omega
\end{equation}
where $\Omega$ is a bounded regular domain in $\mathbb R^n$, $\lambda\in
\mathbb R$ and $V\in C^0(\overline \Omega)$, was introduced by Brezis and
Nirenberg in their famous paper, where they address the existence of positive
solutions in the autonomous case, i.e. the potential $V$ is constant. Since
then, a huge amount of work has been done. In the following we will give a
brief history, highlighting the results which are closest to the problem we
wish to study in the present paper. | Angela Pistoia, Serena Rocci | 2023-07-10T09:02:44Z | http://arxiv.org/abs/2307.04426v1 | # The Brezis-Nirenberg problem in 4D
###### Abstract.
We address the existence of blowing-up solutions for the Brezis-Nirenberg problem in 4D.
Key words and phrases: Brezis-Nirenberg problem, blow-up solutions, Ljapunov-Schmidt construction. 2020 Mathematics Subject Classification: Primary 35J25; Secondary 35B09. The authors are partially supported by the group GNAMPA of the Istituto Nazionale di Alta Matematica (INdAM).
## 1. Introduction
More precisely, if \(\tau\) denotes the Robin function, our main result reads as follows.
**Theorem 1.1**.: _Let \(\xi_{0}\in\Omega\) be a non-degenerate critical point of \(f(\xi):=\frac{\tau(\xi)}{V(\xi)}\) with \(V(\xi_{0})>0\). If \(\varepsilon\) is small enough, there exists a solution of problem (1.2) which blows up at the point \(\xi_{0}\) as \(\varepsilon\to 0\)._
The proof relies on a classical Ljapunov-Schmidt procedure. The setting of the problem (see Section 2) and the reduction process (see Section 3) can be carried out as usual. However, the last step in the procedure needs new ideas. In fact, the rate of the error term is not small enough to argue as in the higher dimensional case and the reduced problem is solved using some local Pohozaev identities (see Section 4).
Remarkably, in contrast with solutions with one blow-up point, in the case of multiple blow-up points it is harder to derive the concentration speeds. Indeed, it seems that they appear only in the second-order expansion, so a more accurate description of the ansatz is needed to capture them. This will be the topic of a forthcoming paper in collaboration with Monica Musso.
## 2. Setting of the problem
### The bubbles
All the positive solutions to the limit problem
\[-\Delta U=U^{3}\ \text{in}\ \mathbb{R}^{4}\]
are the so called _bubbles_ (see [2, 20])
\[U_{\delta,\xi}(x)=\frac{1}{\delta}U\left(\frac{x-\xi}{\delta}\right),\ x,\xi \in\mathbb{R}^{4},\ \delta>0\]
where
\[U(x):=\mathfrak{c}\frac{1}{1+|x|^{2}},\ \mathfrak{c}:=2\sqrt{2}.\]
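As a quick sanity check (not part of the original argument), one can verify symbolically that this radial profile indeed solves the limit equation, using the radial form of the Laplacian in \(\mathbb{R}^{4}\):

```python
import sympy as sp

# Check: U(r) = c/(1+r^2) with c = 2*sqrt(2) solves -Delta U = U^3 in R^4,
# where the radial Laplacian in dimension 4 is U'' + (3/r) U'.
r = sp.symbols("r", positive=True)
c = 2 * sp.sqrt(2)
U = c / (1 + r**2)
lap_U = sp.diff(U, r, 2) + 3 * sp.diff(U, r) / r
assert sp.simplify(-lap_U - U**3) == 0
```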
It is useful to introduce the projection of the bubble \(U_{\delta,\xi}\) onto \(H^{1}_{0}(\Omega)\), namely the solution of the problem
\[-\Delta PU_{\delta,\xi}=-\Delta U_{\delta,\xi}\ \ \text{in}\ \Omega,\ PU_{\delta,\xi}=0\ \ \text{on}\ \partial\Omega.\]
Let \(G\) be the Green function of \(-\Delta\) on \(H^{1}_{0}(\Omega)\) and \(H\) be its regular part, i.e.
\[G(x,y)=\frac{1}{2\omega}\frac{1}{|x-y|^{2}}-H(x,y)\]
where \(\omega\) denotes the measure of the unit sphere \(S^{3}\subset\mathbb{R}^{4}\). The Robin function is defined as \(\tau(x)=H(x,x)\).
It is well known that
\[PU_{\delta,\xi}(x)=U_{\delta,\xi}(x)-\mathfrak{C}\delta H(x,\xi)+\mathcal{O} (\delta^{3}),\ \mathfrak{C}:=2\mathfrak{c}\omega\]
uniformly with respect to \(x\in\Omega\) and \(\xi\) in compact sets of \(\Omega\) and
\[PU_{\delta,\xi}(x)=\mathfrak{C}\delta G(x,\xi)+\mathcal{O}(\delta^{3})\]
uniformly with respect to \(x\) in compact sets of \(\Omega\setminus\{\xi\}\) and \(\xi\) in compact sets of \(\Omega\)
### Some background material
Let \(H^{1}_{0}(\Omega)\) be the Hilbert space equipped with the usual inner product and the usual norm
\[\langle u,v\rangle=\int_{\Omega}\nabla u\cdot\nabla v\text{ and }\|u\|:=\|u\|_{H^{1}_ {0}(\Omega)}=\left(\int_{\Omega}|\nabla u|^{2}\right)^{1/2}.\]
For \(r\in[1,+\infty)\) the space \(L^{r}(\Omega)\) is also equipped with the standard norm
\[\|u\|_{r}=\left(\int_{\Omega}|u|^{r}\right)^{\frac{1}{r}}.\]
Now, let us introduce \(\mathtt{i}^{*}:L^{4/3}(\Omega)\to H^{1}_{0}(\Omega)\) as the adjoint operator of the embedding \(\mathtt{i}:H^{1}_{0}(\Omega)\hookrightarrow L^{4}(\Omega)\), i.e. \(u=\mathtt{i}^{*}(f)\) if and only if
\[\langle u,\varphi\rangle=\int_{\Omega}f(x)\varphi(x)dx\text{ for all }\varphi \in H^{1}_{0}(\Omega)\]
or equivalently
\[-\Delta u=f\text{ in }\Omega,\ u=0\text{ on }\partial\Omega\]
The operator \(\mathtt{i}^{*}:L^{4/3}(\Omega)\to H^{1}_{0}(\Omega)\) is continuous as
\[\|\mathtt{i}^{*}(f)\|_{H^{1}_{0}(\Omega)}\leqslant S^{-1}\|f\|_{4/3}\]
where \(S\) is the best constant for the Sobolev embedding.
Therefore, the problem (1.2) can be rewritten as
\[u=\mathtt{i}^{*}(u^{3}+\varepsilon Vu),\quad u\in H^{1}_{0}(\Omega). \tag{2.1}\]
### The ansatz
We look for a solution to (1.2) of the form
\[u=W_{\delta,\xi}+\phi, \tag{2.2}\]
where \(W_{\delta,\xi}:=PU_{\delta,\xi}\) has blow-up rate \(\delta=\delta(\varepsilon)\to 0\) and blow-up point \(\xi=\xi(\varepsilon)\in\Omega\). Moreover, the lower order term \(\phi\) satisfies a set of orthogonality conditions. More precisely, let us consider the linear problem
\[-\Delta\psi=3U^{2}\psi\text{ in }\mathbb{R}^{4}.\]
It is well known [3] that the set of solutions is a \(5-\)dimensional linear space spanned by
\[\psi^{0}(x):=U(x)+\frac{1}{2}\nabla U(x)\cdot x=-\mathfrak{c}\frac{1-|x|^{2}} {(1+|x|^{2})^{2}}\]
and
\[\psi^{j}(x):=\frac{\partial U}{\partial x_{j}}(x)=-2\mathfrak{c}\frac{x_{j}}{ (1+|x|^{2})^{2}},\ j=1,\ldots,4.\]
For \(j=0,1,2,3,4\) we set
\[\psi^{j}_{\delta,\xi}(x)=\frac{1}{\delta}\psi^{j}\left(\frac{x-\xi}{\delta} \right).\]
We introduce their projections \(P\psi^{j}_{\delta,\xi}=\mathtt{i}^{*}(3U^{2}_{\delta,\xi}\psi^{j}_{\delta,\xi})\) onto \(H^{1}_{0}(\Omega)\), namely the solutions to the problem
\[-\Delta P\psi^{j}_{\delta,\xi}=-\Delta\psi^{j}_{\delta,\xi}=3U^{2}_{\delta, \xi}\psi^{j}_{\delta,\xi}\text{ in }\Omega,\ P\psi^{j}_{\delta,\xi}=0\text{ in }\partial\Omega.\]
Finally, we introduce the linear space
\[K_{\delta,\xi}=\text{span}\{P\psi^{j}_{\delta,\xi}\ |\ j=0,\cdots,4\}\]
and its orthogonal space
\[K^{\perp}_{\delta,\xi}=\{\phi\in H^{1}_{0}(\Omega):\langle\phi,P\psi^{j}_{ \delta,\xi}\rangle=0,\text{ for all }j=0,\cdots,4\}.\]
The function \(\phi\) belongs to the space \(K^{\perp}_{\delta,\xi}\).
It is useful to remind the well known properties
\[P\psi^{j}_{\delta,\xi}=\psi^{j}_{\delta,\xi}-\delta^{2}\mathfrak{C}\partial_{ \xi_{j}}H(x,\xi)+\mathcal{O}(\delta^{3}),\quad j=1,\ldots,4\]
uniformly for \(x\in\Omega\) and \(\xi\) in compact sets of \(\Omega\) and
\[P\psi^{0}_{\delta,\xi}=\psi^{0}_{\delta,\xi}-\delta\mathfrak{C}H(x,\xi)+ \mathcal{O}(\delta^{2})\]
uniformly for \(x\) in compact sets of \(\Omega\setminus\{\xi\}\) and \(\xi\) in compact sets of \(\Omega\).
### An equivalent system
Let us introduce the linear projection \(\Pi_{\delta,\xi}:K\to K\) and \(\Pi^{\perp}_{\delta,\xi}:K^{\perp}\to K^{\perp}\) which are defined by
\[\Pi_{\delta,\xi}(\phi)=\sum_{j=0,\cdots,4}\langle\phi,P\psi^{j}_{\delta,\xi} \rangle P\psi^{j}_{\delta,\xi}\qquad\text{and}\qquad\Pi^{\perp}_{\delta,\xi}( \phi)=\phi-\Pi_{\delta,\xi}(\phi).\]
Equation (2.1) can be rewritten as the following system of two equations
\[\Pi^{\perp}_{\delta,\xi}\big{[}\mathcal{L}_{\delta,\xi}(\phi)-\mathcal{E}_{ \delta,\xi}-\mathcal{N}_{\delta,\xi}(\phi)\big{]}=0 \tag{2.3}\]
and
\[\Pi_{\delta,\xi}\big{[}\mathcal{L}_{\delta,\xi}(\phi)-\mathcal{E}_{\delta, \xi}-\mathcal{N}_{\delta,\xi}(\phi)\big{]}=0, \tag{2.4}\]
where the linear operator \(\mathcal{L}_{\delta,\xi}\) is
\[\mathcal{L}_{\delta,\xi}(\phi)=\phi-\mathtt{i}^{*}\left(3\phi W^{2}_{\delta, \xi}+\varepsilon\phi\right),\]
the error term \(\mathcal{E}_{\delta,\xi}\) is
\[\mathcal{E}_{\delta,\xi}=\mathtt{i}^{*}\left(W^{3}_{\delta,\xi}+\varepsilon W _{\delta,\xi}\right)-W_{\delta,\xi}\]
and the nonlinear term \(\mathcal{N}_{\delta,\xi}\) is
\[\mathcal{N}_{\delta,\xi}(\phi)=\mathtt{i}^{*}\left(\phi^{3}+3\phi^{2}W_{ \delta,\xi}\right).\]
## 3. Solving equation (2.3)
Given \(\rho>0\) small, let \(\mathcal{O}_{\rho}=\left\{\xi\in\Omega\ |\ \operatorname{dist}(\xi,\partial\Omega)\geq\rho \right\}.\)
First of all, we estimate the error term \(\mathcal{E}_{\delta,\xi}.\)
**Lemma 3.1**.: _For any \(\rho>0\) small enough there exist \(\varepsilon_{0}>0\) and \(c>0\) such that for any \(\xi\in\mathcal{O}_{\rho}\) and for any \(\varepsilon,\delta\in(0,\varepsilon_{0})\) it holds_
\[\|\mathcal{E}_{\delta,\xi}\|\lesssim|\delta|^{2}+\varepsilon|\delta|.\]
Proof.: First of all, we remark that \(W_{\delta,\xi}=\mathtt{i}^{*}\left(U_{\delta,\xi}^{3}\right).\) Then by a straightforward computation
\[\|\mathcal{E}_{\delta,\xi}\|\lesssim\|(PU_{\delta,\xi})^{3}-U_{\delta,\xi}^{3 }\|_{\frac{4}{3}}+\varepsilon\|V\|_{\infty}\|PU_{\delta,\xi}\|_{\frac{4}{3}}\]
where
\[\|(PU_{\delta,\xi})^{3}-U_{\delta,\xi}^{3}\|_{\frac{4}{3}}\lesssim\delta\|U_{\delta,\xi}^{2}\|_{\frac{4}{3}}+\delta^{2}\|U_{\delta,\xi}\|_{\frac{4}{3}}+\delta^{3}\lesssim\delta^{2}.\]
and \(\|PU_{\delta,\xi}\|_{\frac{4}{3}}=\mathcal{O}(\delta).\)
Next, we state the invertibility of the linear operator \(\mathcal{L}_{\delta,\xi}.\)
**Lemma 3.2**.: _For any \(\rho>0\) small enough there exist \(\varepsilon_{0}>0\) and \(c>0\) such that for any \(\varepsilon,\delta\in(0,\varepsilon_{0})\) and for any \(\xi\in\mathcal{O}_{\rho}\)_
\[\|\left(\Pi_{\delta,\xi}^{\perp}\circ\mathcal{L}_{\delta,\xi}\right)(\phi)\| \geq c\|\phi\|\text{ for all }\phi\in K_{\delta,\xi}^{\perp}.\]
_Furthermore, the operator \(\Pi_{\delta,\xi}^{\perp}\circ\mathcal{L}_{\delta,\xi}\) is invertible and its inverse is continuous._
Proof.: We omit the proof because it is enough to apply the arguments used in the proof of Lemma 1.7 in [15] to the \(4-\)dimensional case.
Finally, using a classical fixed point argument, we can solve equation (2.3).
**Proposition 3.3**.: _For any \(\rho>0\) small enough there exists \(\varepsilon_{0}>0\) such that for any \(\varepsilon,\delta\in(0,\varepsilon_{0})\) and \(\xi\in\mathcal{O}_{\rho}\), there exists a unique \(\phi=\phi_{\delta,\xi}\in K_{\delta,\xi}^{\perp}\) solving (2.3), which also satisfies_
\[\|\phi\|\lesssim\delta^{2}+\varepsilon\delta. \tag{3.1}\]
## 4. Solving equation (2.4)
Let \(u=W_{\delta,\xi}+\phi\) (see (2.2)). By (2.3) we deduce that
\[-\Delta u-u^{3}-\epsilon Vu=\sum_{j=0,\ldots,4}c^{j}U_{\delta,\xi}^{2}\psi_{ \delta,\xi}^{j} \tag{4.1}\]
for some real numbers \(c^{j}=c^{j}(\delta,\xi)\). Then solving equation (2.4) is equivalent to find \((\delta,\xi)\) such that the \(c^{j}\)'s are zero.
First of all, we prove a sufficient condition which ensures that all the \(c^{j}\)'s in (4.1) are zero. Set \(\partial_{j}u:=\frac{\partial u}{\partial x_{j}}.\)
**Lemma 4.1**.: _If_
\[\int\limits_{\Omega}(-\Delta u-u^{3}-\epsilon Vu)\psi^{0}_{\delta,\xi}=0 \tag{4.2}\]
_and for some \(\eta>0\)_
\[\int\limits_{B(\xi,\eta)}(-\Delta u-u^{3}-\epsilon Vu)\partial_{j}u=0,\quad\text { for all }j=1,2,3,4 \tag{4.3}\]
_then \(c^{\ell}=0\) for any \(\ell=0,1,\ldots,4\)._
Proof.: By (4.2) and (4.3)
\[\sum\limits_{\ell=0}^{4}c^{\ell}\int\limits_{\Omega}U^{2}_{\delta,\xi}\psi^{ \ell}_{\delta,\xi}\psi^{0}_{\delta,\xi}=\sum\limits_{\ell=0}^{4}c^{\ell}\int \limits_{B(\xi,\eta)}U^{2}_{\delta,\xi}\psi^{\ell}_{\delta,\xi}\partial_{j}u= 0\quad\text{ for all }j=0,\ldots,4\]
and the linear system in the \(c^{\ell}\)'s is diagonally dominant. Indeed
\[\int\limits_{\Omega}U^{2}_{\delta,\xi}\psi^{\ell}_{\delta,\xi}\psi^{0}_{ \delta,\xi}=\begin{cases}a+\mathcal{O}(\delta^{4})&\text{if }\ell=0\\ \mathcal{O}(\delta^{5})&\text{if }\ell=1,\cdots,4\end{cases}\]
for some constant \(a\neq 0.\) Moreover,
\[\int\limits_{B(\xi,\eta)}U^{2}_{\delta,\xi}\psi^{\ell}_{\delta,\xi}\partial_{j }u=\begin{cases}\frac{1}{\delta}b+\mathcal{O}(\varepsilon)&\text{if }\ell=j\\ \mathcal{O}(\varepsilon)&\text{if }\ell\neq j\end{cases}\]
for some constant \(b\neq 0\), because by (2.2) and (3.1),
\[\int\limits_{B(\xi,\eta)}U^{2}_{\delta,\xi}\psi^{\ell}_{\delta,\xi}\partial_{j}\phi=\mathcal{O}(\|\phi\|)\left(\int_{B(\xi,\eta)}|U_{\delta,\xi}\psi^{\ell}_{\delta,\xi}|^{2}\right)^{1/2}=\mathcal{O}(\varepsilon)\quad\text{for all }\ell=0,\cdots,4\]
and
\[\int\limits_{B(\xi,\eta)}U^{2}_{\delta,\xi}\psi^{\ell}_{\delta,\xi}\partial_{j }PU_{\delta,\xi}=\begin{cases}\frac{1}{\delta}b+\mathcal{O}(\delta^{2})&\text {if }\ell=j\\ \mathcal{O}(\delta^{2})&\text{if }\ell\neq j\end{cases}\]
Next, we write the first order term of (4.2).
**Lemma 4.2**.: _For any \(\rho>0\) small enough_
\[\int\limits_{\Omega}(-\Delta u-u^{3}-\epsilon V(x)u)\psi^{0}_{\delta,\xi}= \mathfrak{C}^{2}\delta^{2}\tau(\xi)+\varepsilon\delta^{2}\ln\delta\mathfrak{c }^{2}\omega V(\xi)+o(\delta^{2}), \tag{4.4}\]
_as \(\varepsilon,\delta\to 0\), uniformly respect to \(\xi\in\mathcal{O}_{\rho}\)._
Proof.: We point out that
\[\int\limits_{\Omega}(-\Delta u-u^{3}-\epsilon Vu)\psi^{0}_{\delta, \xi} =\underbrace{\int\limits_{\Omega}\left(-\Delta U_{\delta,\xi}-U^{3}_{ \delta,\xi}-\epsilon U_{\delta,\xi}\right)\psi^{0}_{\delta,\xi}}_{=:I_{1}}\] \[+\underbrace{\int\limits_{\Omega}\left(-\Delta\phi-3U^{2}_{ \delta,\xi}\phi\right)\psi^{0}_{\delta,\xi}}_{=:I_{2}}\] \[+\underbrace{\int\limits_{\Omega}\left(\phi^{3}+3U_{\delta,\xi} \phi^{2}\right)\psi^{0}_{\delta,\xi}}_{=:I_{3}}\]
First of all, let us prove that
\[I_{1}=\delta^{2}\mathfrak{C}^{2}H(\xi,\xi)+\varepsilon\delta^{2}\ln\delta \mathfrak{c}^{2}\omega V(\xi)+o(\delta^{2}). \tag{4.5}\]
We observe that
\[I_{1} =\int_{\Omega}\left[U^{3}_{\delta,\xi}-PU^{3}_{\delta,\xi}- \varepsilon V(x)PU_{\delta,\xi}\right]\psi^{0}_{\delta,\xi}\] \[=\int_{\Omega}(U^{3}_{\delta,\xi}-PU^{3}_{\delta,\xi})\psi^{0}_ {\delta,\xi}-\varepsilon\int_{\Omega}V(x)PU_{\delta,\xi}\psi^{0}_{\delta,\xi}.\]
Now
\[\int_{\Omega}(U^{3}_{\delta,\xi}-PU^{3}_{\delta,\xi}) \psi^{0}_{\delta,\xi}=3\int_{B(\xi,\rho)}U^{2}_{\delta,\xi} \underbrace{\left(U_{\delta,\xi}-PU_{\delta,\xi}\right)}_{\delta\mathfrak{C}H( x,\xi)+\mathcal{O}(\delta^{3})}\psi^{0}_{\delta,\xi}+\mathcal{O}(\delta^{3})\] \[=3\delta\mathfrak{C}\int_{\Omega}H(x,\xi)U^{2}_{\delta,\xi}\psi^{ 0}_{\delta,\xi}+\mathcal{O}(\delta^{3})\] \[=3\delta^{4}\mathfrak{C}\mathfrak{c}^{3}\int_{B(\xi,\rho)}H(x, \xi)\frac{|x-\xi|^{2}-\delta^{2}}{\left(\delta^{2}+|x-\xi|^{2}\right)^{4}}+o( \delta^{2})\] \[=3\delta^{2}\mathfrak{C}\mathfrak{c}^{3}\int_{B(0,\rho/\delta)} \underbrace{H(\xi+\delta t,\xi)}_{H(\xi,\xi)+\mathcal{O}(\delta)}\frac{|t|^{2 }-1}{(1+|t|^{2})^{4}}+o(\delta^{2})\] \[=\delta^{2}\mathfrak{C}^{2}H(\xi,\xi)+o(\delta^{2})\]
because
\[\int_{\mathbb{R}^{4}}\frac{|t|^{2}-1}{(1+|t|^{2})^{4}}=\frac{\omega}{12},\]
and
\[\begin{split}\varepsilon\int_{\Omega}V(x)PU_{\delta,\xi}\psi^{0}_{ \delta,\xi}&=\varepsilon\delta^{2}\mathfrak{c}^{2}\int_{B(\xi, \rho)}V(x)\frac{|x-\xi|^{2}-\delta^{2}}{(\delta^{2}+|x-\xi|^{2})^{3}}+ \mathcal{O}(\delta^{3})\\ &=\varepsilon\delta^{2}\mathfrak{c}^{2}\int_{B(0,\rho/\delta)}V( \delta t+\xi)\frac{|y|^{2}-1}{(1+|y|^{2})^{3}}+\mathcal{O}(\delta^{3})\\ &=-\varepsilon\delta^{2}\ln\delta\mathfrak{c}^{2}\omega V(\xi)+o (\delta^{2}\varepsilon\ln\delta).\end{split}\]
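Both radial integrals entering these computations can be checked symbolically; the verification below is not part of the paper and only uses polar coordinates in \(\mathbb{R}^{4}\), where the volume element is \(\omega r^{3}dr\).

```python
import sympy as sp

r, R = sp.symbols("r R", positive=True)

# int_{R^4} (|t|^2-1)/(1+|t|^2)^4 dt = omega * int_0^oo (r^2-1) r^3/(1+r^2)^4 dr = omega/12
I1 = sp.integrate((r**2 - 1) * r**3 / (1 + r**2)**4, (r, 0, sp.oo))
assert sp.simplify(I1 - sp.Rational(1, 12)) == 0

# int_0^R (r^2-1) r^3/(1+r^2)^3 dr = log(R) + O(1): with R = rho/delta this is the
# source of the eps*delta^2*log(delta) term above.
I2 = sp.integrate((r**2 - 1) * r**3 / (1 + r**2)**3, (r, 0, R))
assert sp.limit(I2 - sp.log(R), R, sp.oo) == -1
```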
Let us estimate the other terms. It is important to point out the estimate
\[\int_{\partial\Omega}|\partial_{\nu}\phi|^{2}=o(\delta^{2})\]
proved in [19]. Then, recalling that \(-\Delta\psi^{0}_{\delta,\xi}=3U_{\delta,\xi}^{2}\psi^{0}_{\delta,\xi}\), we have
\[\begin{split}\int_{\Omega}(-\Delta\phi)\psi^{0}_{\delta,\xi}&=\underbrace{\int_{\Omega}\phi(-\Delta\psi^{0}_{\delta,\xi})}_{=3\int_{\Omega}\phi U_{\delta,\xi}^{2}\psi^{0}_{\delta,\xi}}+\int_{\partial\Omega}\underbrace{\phi}_{=0}\nabla\psi^{0}_{\delta,\xi}\cdot\nu-\int_{\partial\Omega}\underbrace{\psi^{0}_{\delta,\xi}}_{=\mathcal{O}(\delta)}\nabla\phi\cdot\nu\\ &=3\int_{\Omega}\phi U_{\delta,\xi}^{2}\psi^{0}_{\delta,\xi}+o(\delta^{2}).\end{split}\]
Therefore,
\[\begin{split}|I_{2}|&\lesssim 3\int_{\Omega}|\phi||PU_{\delta,\xi}^{2}-U_{\delta,\xi}^{2}||\psi^{0}_{\delta,\xi}|+\varepsilon\int_{\Omega}|\phi||\psi^{0}_{\delta,\xi}|+o(\delta^{2})\\ &\lesssim\|\phi\|_{4}\|(PU_{\delta,\xi}^{2}-U_{\delta,\xi}^{2})\psi^{0}_{\delta,\xi}\|_{4/3}+\varepsilon\|\phi\|_{4}\|\psi^{0}_{\delta,\xi}\|_{4/3}=o\left(\delta^{2}\right).\end{split} \tag{4.6}\]
and
\[\begin{split}|I_{3}|&\leqslant\int_{\Omega}\left[| \phi|^{3}+3|\phi|^{2}|PU_{\delta,\xi}|\right]|\psi^{0}_{\delta,\xi}|\\ &\lesssim\|\phi\|^{3}\|\psi^{0}_{\delta,\xi}\|_{4}+\|\phi\|^{2} \|PU_{\delta,\xi}\psi^{0}_{\delta,\xi}\|_{2}=o(\delta^{2}).\end{split} \tag{4.7}\]
Finally, (4.4) follows by (4.5), (4.6) and (4.7).
Finally, we write the first order term of (4.3).
**Lemma 4.3**.: _For any \(\rho>0,\) there exists \(\eta>0\) such that_
\[\int\limits_{B(\xi,\eta)}(-\Delta u-u^{3}-\epsilon Vu)\partial_{j}u=-\frac{1}{2}\delta^{2}\left[\mathfrak{c}^{2}\partial_{j}\tau(\xi)+(\varepsilon\ln\delta)\omega\mathfrak{c}^{2}\partial_{j}V(\xi)\right]+o\left(\delta^{2}\right),\]
_as \(\varepsilon,\delta\to 0\), uniformly respect to \(\xi\in\mathcal{O}_{\rho}\)._
Proof.: First of all, we point out that
\[\int\limits_{B(\xi,\eta)}(-\Delta u-u^{3})\partial_{j}u=\int\limits_{\partial B(\xi,\eta)}\left(-\partial_{\nu}u\partial_{j}u+\frac{1}{2}|\nabla u|^{2}\nu_{j}-\frac{1}{4}u^{4}\nu_{j}\right) \tag{4.8}\]
By Lemma 4.4 and (3.1), we choose \(\eta\) such that
\[\int_{\partial B(\xi,\eta)}\left(|\nabla\phi|^{2}+|\phi|^{4}\right)\lesssim\delta^{4}. \tag{4.9}\]
Now, by (2.2)
\[W_{\delta,\xi}(x)=\frac{\delta\mathfrak{c}}{|x-\xi|^{2}}-\mathfrak{C}\delta H(x,\xi)+\mathcal{O}\left(\delta^{2}\right) \tag{4.10}\]
\(C^{1}-\)uniformly on \(\partial B(\xi,\eta).\) It is crucial to point out that the function \(H(x,\xi)\) is harmonic. Therefore, by (4.8), (4.9) and (4.10)
\[\int\limits_{B(\xi,\eta)}(-\Delta u-u^{3})\partial_{j}u\] \[= \delta^{2}\int\limits_{\partial B(\xi,\eta)}\left[-\partial_{ \nu}\left(\frac{\mathfrak{c}}{|x-\xi|^{2}}+\mathfrak{C}H(x,\xi)\right) \partial_{j}\left(\frac{\mathfrak{c}}{|x-\xi|^{2}}+\mathfrak{C}H(x,\xi)\right)\right.\] \[\left.+\delta^{2}\frac{1}{2}\left|\nabla\left(\frac{\mathfrak{c} }{|x-\xi_{i}|^{2}}+\mathfrak{C}H(x,\xi)\right)\right|^{2}\nu_{j}\right]+o \left(|\delta|^{2}\right)\] \[= -\delta^{2}\int_{\partial B(\xi,\eta)}\nabla\left(\frac{ \mathfrak{c}}{|x-\xi|^{2}}\right)\cdot\nu\partial_{x_{j}}H(x,\xi)+o(\delta^{2})\] \[= 2\mathfrak{C}\mathfrak{C}\delta^{2}\frac{1}{|\eta|^{3}}\int \limits_{\partial B(\xi,\eta)}\partial_{j}H(x,\xi)+o\left(\delta^{3}\right)= \mathfrak{C}^{2}\delta^{2}\partial_{j}H(x,\xi)\big{|}_{x=\xi}+o\left(\delta^{ 2}\right)\]
because \(H(x,\xi)\) is harmonic and also \(2\mathfrak{c}\omega=\mathfrak{C}\).
Finally, since \(\tau(x)=H(x,x)\) is the Robin function, by
\[\partial_{\xi_{j}}\tau(\xi)=\left(\partial_{x_{j}}H(x,y)+\partial_{y_{j}}H(x, y)\right)|_{(x,y)=(\xi,\xi)}=2\partial_{x_{j}}H(x,y)|_{(x,y)=(\xi,\xi)}\]
follows that
\[\int_{B(\xi,\eta)}(-\Delta u-u^{3})\partial_{j}u=\frac{1}{2}\mathfrak{C}^{2} \delta^{2}\partial_{j}\tau(\xi)+o(\delta^{2}). \tag{4.11}\]
Arguing in a similar way, we also have
\[-\varepsilon\int_{B(\xi,\eta)}V(x)u\partial_{j}u =\varepsilon\frac{1}{2}\int_{B(\xi,\eta)}\left(\partial_{j}V(x) \right)u^{2}-\varepsilon\frac{1}{2}\int_{\partial B(\xi,\eta)}V(x)u^{2}\nu_{j} \tag{4.12}\] \[=-\frac{1}{2}\varepsilon\delta^{2}\omega\mathfrak{c}^{2}\partial _{j}V(\xi)\ln\delta+o(\delta^{2})\]
By (4.11) and (4.12) the claim follows.
**Lemma 4.4**.: _If there exists \(C_{1}>0\) such that_
\[\int_{B(\xi,\eta_{2})\setminus B(\xi,\eta_{1})}|f(x)|dx\leqslant C_{1}, \tag{4.13}\]
_then there exist \(C_{2}>0\) and \(\bar{\eta}\in(\eta_{1},\eta_{2})\) such that_
\[\int_{\partial B(\xi,\bar{\eta})}|f(x)|dx\leqslant C_{2}.\]
Proof.: Assume that for any \(\eta\in(\eta_{1},\eta_{2})\) and \(C>0\)
\[\int_{\partial B(\xi,\eta)}|f(x)|dx>C.\]
Then by the coarea formula
\[\int_{B(\xi,\eta_{2})\setminus B(\xi,\eta_{1})}|f(x)|dx=\int_{\eta_{1}}^{\eta_{2}}\left(\int_{\partial B(\xi,\eta)}|f(x)|dx\right)d\eta>C(\eta_{2}-\eta_{1}),\]
and a contradiction arises.
### Proof of Theorem 1.1: completed
If we set \(\delta=e^{-\frac{t}{\varepsilon}}\), with \(t>0\), then by Lemma 4.1, Lemma 4.2 and Lemma 4.3 the problem reduces to finding \(t>0\) and \(\xi\in\Omega\) such that
\[\begin{cases}c\tau(\xi)-tV(\xi)+o(1)=0\,\\ c\nabla\tau(\xi)-t\nabla V(\xi)+o(1)=0.\end{cases} \tag{4.14}\]
with \(c:=\frac{\mathfrak{C}^{2}}{\omega\mathfrak{c}^{2}}.\) Let \(\xi_{0}\in\Omega\) be a non-degenerate critical point of the function \(f(\xi):=\frac{\tau(\xi)}{V(\xi)}\) with \(V(\xi_{0})>0\). Then the point \((t_{0},\xi_{0})\), \(t_{0}:=c\frac{\tau(\xi_{0})}{V(\xi_{0})}\), is an isolated zero of the function \(F(t,\xi):(0,+\infty)\times\Omega\to\mathbb{R}\times\mathbb{R}^{4}\) defined by
\[F(t,\xi):=\Big{(}c\tau(\xi)-tV(\xi),c\nabla\tau(\xi)-t\nabla V(\xi)\Big{)}.\]
We claim that the local degree
\[\texttt{degloc}\Big{(}F,(t_{0},\xi_{0})\Big{)}\neq 0. \tag{4.15}\]
Indeed,
\[F^{\prime}(t_{0},x_{0})=\left(\begin{array}{cc}-t_{0}V(\xi_{0})&-t_{0}\nabla V (\xi_{0})\\ c\nabla\tau(\xi_{0})-t_{0}\nabla V(\xi_{0})&c\mathcal{D}^{2}\tau(\xi_{0})-t_{0 }\mathcal{D}^{2}V(\xi_{0})\end{array}\right)\]
where
\[c\nabla\tau(\xi_{0})-t_{0}\nabla V(\xi_{0})=-c\tau(\xi_{0})\nabla f(\xi_{0})=0,\]
so
\[\texttt{det}\ F^{\prime}(t_{0},x_{0}) =c^{2}\frac{\tau(\xi_{0})}{V(\xi_{0})}\left(-V(\xi_{0})\mathcal{D }^{2}\tau(\xi_{0})+\tau(\xi_{0})\mathcal{D}^{2}V(\xi_{0})\right)\] \[=-c^{2}V^{3}(\xi_{0})\tau(\xi_{0})\texttt{det}\ \mathcal{D}^{2}f(\xi_{0})\neq 0.\]
Finally, (4.15) implies that if \(\epsilon\) is small enough the system (4.14) has a solution \(t_{\epsilon},\xi_{\epsilon}\) such that \(t_{\epsilon}\to t_{0}\) and \(\xi_{\epsilon}\to\xi_{0}\) as \(\epsilon\to 0.\) That concludes the proof. |
2305.15352 | Optimal Rates for Bandit Nonstochastic Control | Linear Quadratic Regulator (LQR) and Linear Quadratic Gaussian (LQG) control
are foundational and extensively researched problems in optimal control. We
investigate LQR and LQG problems with semi-adversarial perturbations and
time-varying adversarial bandit loss functions. The best-known sublinear regret
algorithm of \cite{gradu2020non} has a $T^{\frac{3}{4}}$ time horizon
dependence, and its authors posed an open question about whether a tight rate
of $\sqrt{T}$ could be achieved. We answer in the affirmative, giving an
algorithm for bandit LQR and LQG which attains optimal regret (up to
logarithmic factors) for both known and unknown systems. A central component of
our method is a new scheme for bandit convex optimization with memory, which is
of independent interest. | Y. Jennifer Sun, Stephen Newman, Elad Hazan | 2023-05-24T17:02:30Z | http://arxiv.org/abs/2305.15352v3 | # Optimal Rates for Bandit Nonstochastic Control
###### Abstract
Linear Quadratic Regulator (LQR) and Linear Quadratic Gaussian (LQG) control are foundational and extensively researched problems in optimal control. We investigate LQR and LQG problems with semi-adversarial perturbations and time-varying adversarial bandit loss functions. The best-known sublinear regret algorithm of Gradu et al. (2020) has a \(T^{\frac{3}{4}}\) time horizon dependence, and its authors posed an open question about whether a tight rate of \(\sqrt{T}\) could be achieved. We answer in the affirmative, giving an algorithm for bandit LQR and LQG which attains optimal regret (up to logarithmic factors) for both known and unknown systems. A central component of our method is a new scheme for bandit convex optimization with memory, which is of independent interest.
## 1 Introduction
Linear-Quadratic Regulator (LQR) and the more general Linear-Quadratic-Gaussian (LQG) control problems have been extensively studied in the field of control theory due to their wide range of applications and the analytical solutions provided by the seminal works of Bellman (1954) and Kalman (1960). LQR and LQG control problems study the design of a feedback control policy for a linear dynamical system with the goal of minimizing cumulative, possibly time-varying quadratic costs. The discrete version of the problem studies the control of the following linear dynamical system governed by dynamics \((A,B,C)\)1:
Footnote 1: The LQR/LQG dynamics can be generalized to time-varying linear dynamical systems. Here we restrict ourselves to linear time-invariant systems for simplicity.
\[\mathbf{x}_{t+1}=A\mathbf{x}_{t}+B\mathbf{u}_{t}+\mathbf{w}_{t}\;,\;\mathbf{ y}_{t}=C\mathbf{x}_{t}+\mathbf{e}_{t}\,,\]
where at time \(t\), \(\mathbf{x}_{t}\) represents the system's state, \(\mathbf{u}_{t}\) represents the control exerted on the system, and \(\{\mathbf{w}_{t}\}_{t=1}^{T}\) represents a sequence of i.i.d. centered Gaussian perturbations injected to the system. In the generality of LQG, the system's states are not accessible. Instead, the algorithm has access to a linear function of state noised by a sequence \(\{\mathbf{e}_{t}\}_{t=1}^{T}\) of i.i.d. centered Gaussian noises. Costs are a function of both observed state and the control exerted. The goal in LQR/LQG problems is to find a control policy \(\pi\) that minimizes the cumulative cost over a finite time horizon \(T\). With \(\mathbf{y}_{t}^{\pi},\mathbf{u}_{t}^{\pi}\) denoting the observation and control at time \(t\) resulted from executing policy \(\pi\), the objective is formally given by
\[\operatorname*{minimize}_{\pi}\;\;J_{T}(\pi)\stackrel{{\text{ def}}}{{=}}\sum_{t=1}^{T}c_{t}(\mathbf{y}_{t}^{\pi},\mathbf{u}_{t}^{\pi})= \sum_{t=1}^{T}{\mathbf{y}_{t}^{\pi}}^{\top}Q_{t}\mathbf{y}_{t}^{\pi}+\mathbf{ u}_{t}^{\pi}{}^{\top}R_{t}\mathbf{u}_{t}^{\pi}.\]
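To make the interaction model concrete, the following minimal Python sketch rolls out a partially observed linear system and accumulates the quadratic cost \(J_{T}\); the system matrices, noise scales, and the random placeholder controller are illustrative assumptions rather than anything specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dx, du, dy = 100, 3, 2, 2

# Illustrative system (assumption): rescale A so its spectral radius is 0.9 (stable).
A = rng.standard_normal((dx, dx))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))
B, C = rng.standard_normal((dx, du)), rng.standard_normal((dy, dx))

x, total_cost = np.zeros(dx), 0.0
for t in range(T):
    w, e = 0.1 * rng.standard_normal(dx), 0.1 * rng.standard_normal(dy)
    y = C @ x + e                      # observation y_t = C x_t + e_t
    u = 0.1 * rng.standard_normal(du)  # placeholder control policy (assumption)
    Q, R = np.eye(dy), np.eye(du)      # quadratic costs; time-varying in general, fixed here
    total_cost += y @ Q @ y + u @ R @ u
    x = A @ x + B @ u + w              # x_{t+1} = A x_t + B u_t + w_t

print(f"cumulative cost J_T = {total_cost:.2f}")
```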
Variations of this problem have garnered considerable interest. In the recent literature of online nonstochastic control, several setting-based generalizations to the linear-control framework have been explored, including
* Adversarially chosen cost functions that are not known in advance (Agarwal et al. (2019)). This generalization is important for a variety of real-world applications with model-external negative feedback, including zero-sum game-playing and defending against adversarial learning (Lowd and Meek, 2005) in applications.
* Adversarial perturbations in the dynamics (Agarwal et al. (2019)), which permit the modeling of misspecification and nonstochastic noise (Ghai et al. (2022)).
* The more challenging case of _bandit control_ (Gradu et al. (2020), Cassel and Koren (2020)), where only the cost incurred may be observed, rather than its gradient. Recently, bandit control has found applications in model-free RL and meta-optimization (Chen and Hazan (2023)).
Taken together, these settings give rise to a general setting in differentiable reinforcement learning that strictly contains a variety of classical problems in optimal and robust control. Naturally, when adversarial costs and perturbations are considered, an optimal solution is not defined a priori. Instead, the primary performance metric is _regret_: the difference between the total cost of a control algorithm and that of the best controller from a specific policy class in hindsight.
This general setting of bandit online control was considered in the recent work of Gradu et al. (2020), whose proposed Bandit Perturbation Controller (BPC) algorithm has a provable regret guarantee of \(\tilde{O}(T^{\frac{3}{4}})\) when compared with the policy class of disturbance action controllers for fully observed systems. A similar setting has also been studied by Cassel and Koren (2020), who established an optimal regret (up to logarithmic factors) of \(\tilde{O}(\sqrt{T})\) for fully observable systems under stochastic perturbations and adversarially chosen cost functions. However, bandit control for partially observable systems (e.g. LQG) is less understood. Thus, these developments in the search for efficient, low-regret bandit online control algorithms leave a central open question (also stated by Gradu et al. (2020)):
Can we achieve optimal regret \(O(\sqrt{T})\) with bandit LQG and nonstochastic noise?
Our work answers this question up to logarithmic factors. Our novel Ellipsoidal Bandit Perturbation Controller (EBPC) achieves a \(\tilde{O}(\sqrt{T})\) regret guarantee in the presence of semi-adversarial perturbations in bandit LQG problems, with the additional generality of possibly unknown system dynamics. By Shamir (2013), this is asymptotically optimal up to logarithmic factors, as bandit optimization over quadratics reduces to bandit control with quadratic losses under \(A=0,B=I\). Our work therefore resolves the upper-bound/lower-bound gap for this generalization of LQR/LQG. The following table gives a comprehensive comparison between the regret guarantee of EBPC and existing results in literature.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Algorithm & Noise & Observation & Feedback & System & Regret \\ \hline Agarwal et al. (2019) & Adversarial & full & full & known & \(O(\sqrt{T})\) \\ \hline Agarwal et al. (2019) & Stochastic & full & full & known & \(\tilde{O}(1)\) \\ \hline Foster and Simchowitz (2020) & Adversarial & full & full & known & \(\tilde{O}(1)\) \\ \hline Simchowitz et al. (2020) & Semi-Adv. & partial & full & known & \(\tilde{O}(1)\) \\ \hline Simchowitz et al. (2020) & Semi-Adv. & partial & full & unknown & \(O(\sqrt{T})\) \\ \hline Gradu et al. (2020) & Adversarial & full & bandit & unknown & \(\tilde{O}(T^{\frac{3}{4}})\) \\ \hline Cassel and Koren (2020) & Stochastic & full & bandit & known & \(\tilde{O}(\sqrt{T})\) \\ \hline Cassel and Koren (2020) & Adversarial & full & bandit & known & \(\tilde{O}(T^{\frac{3}{4}})\) \\ \hline
**Theorem 4.1** & **Semi-Adv.** & **partial** & **bandit** & **known** & \(\tilde{O}(\sqrt{T})\) \\ \hline
**Theorem 4.2** & **Semi-Adv.** & **partial** & **bandit** & **unknown** & \(\tilde{O}(\sqrt{T})\) \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of previous results to our contributions.

### Related work

Online Nonstochastic Control and Online LQR. In the last decade, much research has been devoted to the intersection of learning and control. Abbasi-Yadkori and Szepesvari (2011) and Ibrahimi et al. (2012) considered the problem of learning a controller in LQR for known quadratic cost functions and stochastic/martingale-difference perturbation sequences when the dynamics of the system are unknown, and achieved \(\tilde{O}(\sqrt{T})\)-regret in this case. Dean et al. (2018) provided the first provable low-regret, efficient algorithm to solve LQR problems with known cost functions and stochastic perturbations. Cohen et al. (2018) extended this result to changing quadratic costs with stochastic perturbations and provided a regret guarantee of \(O(\sqrt{T})\). Dale et al. (2021) considered the LQG problem with stochastic noise and unknown systems.
More recently, interest has turned to _nonstochastic control_, in which the cost functions and the perturbations can be adversarially chosen Agarwal et al. (2019). A broad spectrum of control problems were reconsidered from the nonstochastic perspective, and several different generalizations were derived. To highlight a few:
* Agarwal et al. (2019) showed \(O(\mathrm{poly}(\log T))\)-regret for adversarially chosen strongly convex cost functions and stochastic noises, Hazan et al. (2020) extended the setting of Agarwal et al. (2019) to unknown systems and achieved \(O(T^{\frac{2}{3}})\)-regret, and Simchowitz (2020) tightened this bound to \(\tilde{O}(\sqrt{T})\) and \(O(\mathrm{poly}(\log T))\) for known systems. These approaches in studying the control of an unknown system depend on oracle access to a linear stabilizing controller.
* Chen and Hazan (2021) relaxed this assumption and provided the first efficient, low-regret algorithm for online nonstochastic control under the assumption that the system is controllable.
* Cassel et al. (2020); Simchowitz and Foster (2020); Plevrakis and Hazan (2020); Cassel and Koren (2021) showed an \(\Omega(\sqrt{T})\) regret lower bound for unknown systems in LQR with full cost feedback.
See Hazan and Singh (2022) for a comprehensive text detailing these results.
Online Bandit Convex Optimization with Memory. A classical approach to control of stable/stabilizable linear dynamical systems is to reduce control problems to online convex optimization with memory. In our setting, the learner iteratively plays a decision \(x_{t}\) in a convex set \(\mathcal{K}\subseteq\mathbb{R}^{d}\) and suffers an adversarially chosen loss \(F_{t}(x_{t-H+1:t})\), where \(x_{t-H+1:t}\) is the sequence of points \(x_{t-H+1},...,x_{t}\). In particular, the loss depends on the last \(H\) points played by the algorithm, and the only information revealed to the learner is the scalar loss that they incurred. The goal is to minimize _regret_, the difference between the loss actually suffered and the loss suffered under the best single play in hindsight:
\[\text{Regret}_{T}\stackrel{{\text{def}}}{{=}}\sum_{t=H}^{T}F_{t} (x_{t-H+1:t})-\min_{x\in\mathcal{K}}\sum_{t=H}^{T}F_{t}(x,\dots,x).\]
Since the loss function is unknown to the learner in the bandit setting, online bandit convex optimization algorithms make use of low-bias estimators of the true gradient or Hessian. Therefore, it is standard to measure the algorithm's performance by _expected regret_ over the stochasticity injected when creating such estimators.
Online convex optimization with memory in the full information setting, where the loss function is known to the learner, was proposed by Anava et al. (2015). The work of Agarwal et al. (2019) was the first to connect this to control, and to give a regret bound for online control with adversarial perturbations.
In the bandit setting, Gradu et al. (2020) used bandit convex optimization with memory to derive regret bounds for online control with bandit feedback. Their work builds upon the bandit convex optimization method of Flaxman et al. (2005) to obtain a \(\tilde{O}(T^{\frac{3}{4}})\) regret bound for general convex loss functions.
We focus on the bandit LQR/LQG setting, where the loss functions are strongly convex and smooth. It is thus natural to use the techniques of Hazan and Levy (2014), who obtained a \(\tilde{O}(\sqrt{T})\) regret guarantee for bandit convex optimization without memory. This bound is tight up to logarithmic factors as proved by Shamir (2013).
Online Learning with Delay. One technical difficulty in extending OCO with memory to the bandit setting arises from the requirement of independence between every play and the noises injected from
the recent \(H\) steps. We resolve this issue by adapting online learning with delay to the subroutine algorithm used in our BCO algorithm. Online learning with delay was introduced by Quanrud and Khashabi (2015). In particular, Flaspohler et al. (2021) relates online learning with delay to online learning with optimism and established a sublinear regret guarantee for mirror descent algorithms. A similar delay scheme was seen in (Gradu et al., 2020).
### Notations and organization
Notation. For convenience, we denote \(\bar{H}\stackrel{{\text{def}}}{{=}}H-1\). We use lowercase bold letters (e.g. \(\mathbf{x},\mathbf{y},\mathbf{u},\mathbf{w},\mathbf{e}\)) to denote the states, observations, controls, and noises of the dynamical system, and \(d_{\mathbf{x}},d_{\mathbf{u}},d_{\mathbf{y}}\) to denote their corresponding dimensions. For a differentiable function \(F:(\mathbb{R}^{n})^{H}\to\mathbb{R}\), we denote the gradient of \(F\) with respect to its \(i\)th argument vector by \(\nabla_{i}F(\cdot)\). \(\rho(\cdot)\) acting on a square matrix measures the spectral radius of the matrix. For a sequence \(M=(M^{[i]})_{i\in I}\), we use \(\|M\|_{\ell_{1},\mathrm{op}}\) to denote the sum of the operator norms: \(\|M\|_{\ell_{1},\mathrm{op}}\stackrel{{\text{def}}}{{=}}\sum_{i\in I}\|M^{[i]}\|_{\mathrm{op}}\). We use \(O(\cdot)\) to hide all universal constants, \(\tilde{O}(\cdot)\) to hide \(\mathrm{poly}(\log T)\) terms, and \(\mathcal{O}(\cdot)\) to hide all natural parameters.
Organization. Our method has two main components: a novel algorithm for BCO with memory (EBCO-M), and its application to building a novel bandit perturbation controller (EBPC). Section 2 describes our problem setting. Section 3 gives EBCO-M and its near-optimal regret guarantee. Section 4 introduces EBPC and its regret guarantees for both known and unknown systems.
## 2 The Bandit LQG Problem
In this section we provide necessary background and describe and formalize the main problem of interest. We consider control of linear time-invariant dynamical systems of the form
\[\mathbf{x}_{t+1}=A\mathbf{x}_{t}+B\mathbf{u}_{t}+\mathbf{w}_{t}\,\ \mathbf{y}_{t}=C \mathbf{x}_{t}+\mathbf{e}_{t}\.\]
with dynamics matrices \(A\in\mathbb{R}^{d_{\mathbf{x}}\times d_{\mathbf{x}}},B\in\mathbb{R}^{d_{ \mathbf{x}}\times d_{\mathbf{u}}},C\in\mathbb{R}^{d_{\mathbf{y}}\times d_{ \mathbf{x}}}\). Here, consistent with previous notations, \(\mathbf{x}_{t}\in\mathbb{R}^{d_{\mathbf{x}}}\) is the state of the system at time \(t\), \(\mathbf{u}_{t}\in\mathbb{R}^{d_{\mathbf{u}}}\) is the control applied at time \(t\), and \(\mathbf{w}_{t}\in\mathbb{R}^{d_{\mathbf{x}}},\mathbf{e}_{t}\in\mathbb{R}^{d_{ \mathbf{y}}}\) are the system and measurement perturbations. At each timestep, the learner may observe \(\mathbf{y}_{t}\in\mathbb{R}^{d_{\mathbf{y}}}\), which usually represents a possibly noisy projection of the state \(\mathbf{x}_{t}\) onto some low-dimensional space.
In the online bandit setting, the learner is asked to perform a control \(\mathbf{u}_{t}\) at time \(t\). After the control is performed, the adversary chooses a quadratic cost function \(c_{t}(\mathbf{y}_{t},\mathbf{u}_{t})\). The learner observes the scalar \(c_{t}(\mathbf{y}_{t},\mathbf{u}_{t})\in\mathbb{R}_{+}\) and the signal \(\mathbf{y}_{t}\), but no additional information about \(c_{t}(\cdot,\cdot)\), \(\mathbf{x}_{t}\), \(\mathbf{w}_{t}\), or \(\mathbf{e}_{t}\). The goal of the learner is to minimize _expected regret_. _Regret_ against the controller class \(\Pi\) is defined as
\[\text{Regret}_{T}\stackrel{{\text{def}}}{{=}}\sum_{t=1}^{T}c_{t}( \mathbf{y}_{t},\mathbf{u}_{t})-\min_{\pi\in\Pi}\sum_{t=1}^{T}c_{t}(\mathbf{y}_ {t}^{\pi},\mathbf{u}_{t}^{\pi})\]
where \(\mathbf{u}_{t}^{\pi}\) is the control exerted at time \(t\) by policy \(\pi\) and \(\mathbf{y}_{t}^{\pi}\) is the time-\(t\) observation that would have occurred against the same costs/noises if the control policy \(\pi\) were carried out from the beginning.
In controlling linear dynamical systems with partial observations, we often make use of the system's counterfactual signal had no controls been performed since the beginning of the instance:
**Definition 2.1** (Nature's \(y\)).: _Nature's \(y\) at time \(t\), denoted by \(\mathbf{y}_{t}^{\mathbf{nat}}\), is the signal that the system would have generated at time \(t\) under \(\mathbf{u}_{1:t}=0\). We may compute this as_
\[\mathbf{x}_{t+1}^{\mathbf{nat}}=A\mathbf{x}_{t}^{\mathbf{nat}}+\mathbf{w}_{t} \,\ \ \mathbf{y}_{t}^{\mathbf{nat}}=C\mathbf{x}_{t}^{\mathbf{nat}},\]
_or, equivalently, \(\mathbf{y}_{t}^{\mathbf{nat}}=\mathbf{e}_{t}+\sum_{i=1}^{t-1}CA^{t-i-1} \mathbf{w}_{i}\)._
Critically, this may be calculated via the Markov operator:
**Definition 2.2** (Markov operator).: _The Markov operator \(G=[G^{[i]}]_{i\geq 0}\) is a sequence of matrices in \(\mathbb{R}^{d_{\mathbf{y}}\times d_{\mathbf{u}}}\) as \(G^{[i]}\stackrel{{\text{def}}}{{=}}CA^{i-1}B,\ G^{[0]}\stackrel{{ \text{def}}}{{=}}\mathbf{0}_{d_{\mathbf{y}}\times d_{\mathbf{u}}}\)._
It follows immediately that \(\mathbf{y}_{t}^{\mathbf{nat}}\) may be computed from observations as \(\mathbf{y}_{t}^{\mathbf{nat}}=\mathbf{y}_{t}-\sum_{i=1}^{t}G^{[i]}\mathbf{u} _{t-i}\).
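As a quick illustration of Definitions 2.1 and 2.2, the sketch below (system matrices, horizon, and noise scales are arbitrary assumptions) forms the truncated Markov operator, reconstructs \(\mathbf{y}_{t}^{\mathbf{nat}}\) from observations and past controls, and checks it against a control-free rollout driven by the same noise.

```python
import numpy as np

rng = np.random.default_rng(1)
T, dx, du, dy = 50, 3, 2, 2
A = rng.standard_normal((dx, dx)); A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))
B, C = rng.standard_normal((dx, du)), rng.standard_normal((dy, dx))

# Markov operator: G[0] = 0, G[i] = C A^{i-1} B for i >= 1.
G, Apow = [np.zeros((dy, du))], np.eye(dx)
for _ in range(T):
    G.append(C @ Apow @ B)
    Apow = A @ Apow

x, x_nat, us, errs = np.zeros(dx), np.zeros(dx), [], []
for t in range(T):
    w, e = 0.1 * rng.standard_normal(dx), 0.1 * rng.standard_normal(dy)
    y = C @ x + e                                        # observed signal
    y_free = C @ x_nat + e                               # signal the system would emit with u = 0
    y_nat = y - sum(G[i] @ us[t - i] for i in range(1, t + 1))
    errs.append(np.linalg.norm(y_nat - y_free))
    u = rng.standard_normal(du)
    us.append(u)
    x = A @ x + B @ u + w                                # controlled rollout
    x_nat = A @ x_nat + w                                # control-free rollout, same noise

print("max |y_t^nat - control-free signal| =", max(errs))
```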
### Assumptions
We impose four core assumptions on the problem:
**Assumption 2.3** (Stable system).: _We assume the system is stable: the spectral radius \(\rho(A)<1\)._
Note that this assumption is trivially generalized to the standard assumption that the system has a known stabilizing controller \(K\), as we may reformulate our system as stable via \(A^{\prime}=A+BK,B^{\prime}=B\). This generalized assumption is standard in the literature, and has the following important consequence:
**Remark 2.4** (Decay of stable systems).: _That the system is stable implies that \(\exists P\succ\mathbf{0}_{d_{\mathbf{x}}\times d_{\mathbf{x}}}\), \(P\in\mathrm{Sym}(d_{\mathbf{x}})\) such that \(rP\succeq A^{\top}PA\) for some \(0\leq r<1\), and therefore that \(\exists\kappa\) depending on \(\|B\|_{\mathrm{op}},\|C\|_{\mathrm{op}},\sigma_{\min}(P)\) such that \(\|G^{[i]}\|_{\mathrm{op}}\leq\kappa r^{i-1}\). Then with \(H=O(\log T)\), we can assume that \(\|G\|_{\ell_{1},\mathrm{op}}=\sum_{i=0}^{\infty}\|G^{[i]}\|_{\mathrm{op}} \leq R_{G}\) and \(\psi_{G}(H)\stackrel{{\text{\tiny{def}}}}{{=}}\sum_{i=H}^{ \infty}\|G^{[i]}\|_{\mathrm{op}}\leq\frac{R_{G}}{T}\)._
**Assumption 2.5** (Noise model).: _The perturbations \(\{\mathbf{w}_{t},\mathbf{e}_{t}\}_{t=1}^{T}\) are assumed to be semi-adversarial: \(\mathbf{w}_{t},\mathbf{e}_{t}\) decompose as sums of adversarial and stochastic components \(\mathbf{w}_{t}=\mathbf{w}_{t}^{\mathrm{adv}}+\mathbf{w}_{t}^{\mathrm{stoch}}\) and \(\mathbf{e}_{t}=\mathbf{e}_{t}^{\mathrm{adv}}+\mathbf{e}_{t}^{\mathrm{stoch}}\). The stochastic components of the perturbations are assumed to come from distributions satisfying \(\mathbb{E}[\mathbf{w}_{t}^{\mathrm{stoch}}]=\mathbb{E}[\mathbf{e}_{t}^{ \mathrm{stoch}}]=0\), \(\mathbb{E}\left[\mathbf{w}_{t}^{\mathrm{stoch}}\mathbf{w}_{t}^{\mathrm{stoch} \top}\right]\succeq\sigma_{\mathbf{w}}^{2}I\), \(\mathbb{E}\left[\mathbf{e}_{t}^{\mathrm{stoch}}\mathbf{e}_{t}^{\mathrm{stoch} \top}\right]\succeq\sigma_{\mathbf{e}}^{2}I\), \(\sigma_{\mathbf{e}}>0\). \(\{\mathbf{w}_{t},\mathbf{e}_{t}\}_{t=1}^{T}\) are bounded such that \(\|\mathbf{y}_{t}^{\mathbf{nat}}\|_{2}\leq R_{\mathrm{nat}}\), \(\forall t\), for some parameter \(R_{\mathrm{nat}}\)._
The bound on \(\mathbf{y}^{\mathbf{nat}}\) is implied by bounded noise, which is a standard assumption in literature, and the stability of the system. The semi-adversarial assumption is also seen in prior work (Simchowitz et al., 2020), and is a necessary condition for our analysis: we depend on the regret guarantee of a bandit online convex optimization with memory algorithm which requires the strong convexity of the expected loss functions conditioned on all but the \(\Theta(\mathrm{poly}(\log T))\) most recent steps of history. This assumption is essentially equivalent to the adversarial assumption in applications: in almost all systems, noise is either endemic or may be injected. We also emphasize that this assumption is much weaker than that of previous optimal-rate work: even in the known-state, known-dynamic case, all previous optimal guarantees depended on _no_ adversarial perturbation (see Table 1).
**Assumption 2.6** (Cost model).: _The cost functions \(c_{t}(\cdot,\cdot)\) are assumed to be quadratic, \(\sigma_{c}\)-strongly convex, \(\beta_{c}\)-smooth, i.e. \(c_{t}(\mathbf{y},\mathbf{u})=\mathbf{y}^{\top}Q_{t}\mathbf{y}+\mathbf{u}^{\top}R_{t}\mathbf{u}\) with \(\beta_{c}I\succeq Q_{t},R_{t}\succeq\sigma_{c}I\), \(\forall t\). They are also assumed to obey the following Lipschitz condition: \(\forall(\mathbf{y},\mathbf{u}),(\mathbf{y}^{\prime},\mathbf{u}^{\prime})\in\mathbb{R}^{d_{\mathbf{y}}+d_{\mathbf{u}}}\),_
\[|c_{t}(\mathbf{y},\mathbf{u})-c_{t}(\mathbf{y}^{\prime},\mathbf{u}^{\prime})| \leq L_{c}(\|(\mathbf{y},\mathbf{u})\|_{2}\vee\|(\mathbf{y}^{\prime},\mathbf{ u}^{\prime})\|_{2})\|(\mathbf{y}-\mathbf{y}^{\prime},\mathbf{u}-\mathbf{u}^{ \prime})\|_{2}. \tag{2.1}\]
These conditions are relatively standard for bandit convex optimization algorithms, and are needed for the novel BCO-with-memory algorithm which underpins our control algorithm.
**Assumption 2.7** (Adversary).: \(\{c_{t}(\cdot,\cdot),\mathbf{w}_{t}^{\mathrm{adv}},\mathbf{e}_{t}^{\mathrm{adv }}\}_{t=1}^{T}\) _is chosen by the adversary ahead of time._
The oblivious adversary assumption is standard in literature (see Simchowitz et al. (2020), Gradu et al. (2020)).
### Disturbance Response Controllers
Regret compares the excess cost from executing our proposed control algorithm with respect to the cost of the best algorithm _in hindsight_ from a given policy class. In particular, low regret against a rich policy class is a very strong near-optimality guarantee. We take the comparator policy class \(\Pi\) to be the set of disturbance response controllers (DRC), formally given by the following definition.
**Definition 2.8** (Disturbance Response Controllers).: _A disturbance response controller \(\pi_{M}\) of length \(H\in\mathbb{Z}_{++}\) for stable systems is parameterized by \(M=(M^{[j]})_{j=0}^{\bar{H}}\), a sequence of \(H\) matrices in \(\mathbb{R}^{d_{\mathbf{u}}\times d_{\mathbf{y}}}\) s.t. the control at time \(t\) given by \(\pi_{M}\) is \(\mathbf{u}_{t}^{\pi_{M}}=\sum_{j=0}^{\bar{H}}M^{[j]}\mathbf{y}_{t-j}^{\mathbf{ nat}}\). We shorthand \(\mathbf{u}_{t}^{M}\stackrel{{\text{\tiny{def}}}}{{=}}\mathbf{u}_{t}^{\pi_{M}}\)._
_The DRC policy class is the set of all disturbance response controller with bounded length \(H\in\mathbb{Z}_{++}\) and norm \(R\in\mathbb{R}_{+}\): \(\mathcal{M}(H,R)=\{M=(M^{[j]})_{j=0}^{\bar{H}}\mid\|M\|_{\ell_{1},\mathrm{op}} =\sum_{j=0}^{\bar{H}}\|M^{[j]}\|_{\mathrm{op}}\leq R\}\)._
Previous works have demonstrated the richness of the DRC policy class. In particular, Theorem 1 from Simchowitz et al. (2020) has established that the DRC policy class generalizes the state-of-the-art benchmark class of stabilizing linear dynamic controllers (LDC) with error \(e^{-\Theta(H)}\).
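The following short sketch shows how a fixed DRC policy is executed; the parameter matrices \(M^{[j]}\) and the \(\mathbf{y}^{\mathbf{nat}}\) sequence are placeholder assumptions, so this is only an illustration of Definition 2.8 and not part of the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
H, du, dy, T = 5, 2, 3, 20
M = [0.1 * rng.standard_normal((du, dy)) for _ in range(H)]    # DRC parameters M^{[0..H-1]}
ynat = [rng.standard_normal(dy) for _ in range(T)]             # placeholder nature's y sequence

def drc_control(t):
    """u_t = sum_{j=0}^{H-1} M^{[j]} y^{nat}_{t-j}, with y^{nat}_s = 0 for s < 0."""
    return sum(M[j] @ ynat[t - j] for j in range(H) if t - j >= 0)

# Membership in M(H, R) is measured by the l1-operator norm of the parameter sequence.
print("l1-op norm of M:", sum(np.linalg.norm(Mj, 2) for Mj in M))
print("u_10 =", drc_control(10))
```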
### Approach and Technical Challenges
The classical approach in online nonstochastic control of stable/stabilizable systems is to reduce to a problem of online convex optimization with memory. This insight relies on the exponentially decaying effect of past states and controls on the present, which allows approximating the cost functions as functions of the most recent controls.
A core technical challenge lies in the bandit convex optimization problem obtained from the bandit control problem. In the bandit setting, no gradient information is given to the learner, and thus the learner needs to construct a low-bias gradient estimator. Previous work uses the classical spherical gradient estimator proposed by Flaxman et al. (2004), but the regret guarantee is suboptimal. We would like to leverage the ellipsoidal gradient estimator proposed by Hazan and Levy (2014). However, when extending to loss functions with memory, there is no clear mechanism for obtaining a low-bias bound for general convex functions. We exploit the quadratic structure of the LQR/LQG cost functions to build EBCO-M (Algorithm 1), which uses ellipsoidal gradient estimators. We note that even outside of the control applications, EBCO-M may be of independent interest in bandit online learning with memory.
## 3 BCO with Memory: Quadratic and Strongly Convex Functions
As with previous works, our control algorithm will depend crucially on a generic algorithm for bandit convex optimization with memory (BCO-M). We present a new algorithm for online bandit convex optimization with memory that exploits the structure of quadratic costs to achieve near-optimal regret.
### Setting and working assumptions
In the BCO-M setting with memory length \(H\), we consider an algorithm playing against an adversary. At time \(t\), the algorithm is asked to play its choice of \(y_{t}\) in the convex constraint set \(\mathcal{K}\). The adversary chooses a cost function \(F_{t}:\mathcal{K}^{H}\to\mathbb{R}_{+}\) which takes as input the algorithm's current play as well as its previous \(\bar{H}\) plays. The algorithm then observes a cost \(F_{t}(y_{t-\bar{H}},\ldots,y_{t})\) (and no other information about \(F_{t}(\cdot)\)) before it chooses and plays the next action \(y_{t+1}\). The goal is to minimize regret, the excessive loss incurred by the algorithm compared to the best fixed decision in \(\mathcal{K}\):
\[\text{Regret}_{T}\stackrel{{\text{def}}}{{=}}\sum_{t=H}^{T}F_{t} (y_{t-\bar{H}},\ldots,y_{t})-\min_{x\in\mathcal{K}}\sum_{t=H}^{T}F_{t}(x,\ldots,x).\]
For notation convenience, we will at times shorthand \(y_{t-\bar{H}:t}\stackrel{{\text{def}}}{{=}}(y_{t-\bar{H}},\ldots,y_{t})\in\mathcal{K}^{H}\).
#### 3.1.1 BCO-M assumptions
We make the following assumptions on the loss functions \(\{F_{t}\}_{t=H}^{T}\) and the constraint set \(\mathcal{K}\).
**Assumption 3.1** (Constraint set).: \(\mathcal{K}\) _is convex, closed, and bounded with non-empty interior. \(\text{diam}(\mathcal{K})=\sup_{z,z^{\prime}\in\mathcal{K}}\lVert z-z^{\prime} \rVert_{2}\leq D\)._
**Assumption 3.2** (Loss functions).: _The loss functions chosen by the adversary obeys the following regularity and curvature assumptions:_
* \(F_{t}:\mathcal{K}^{H}\to\mathbb{R}_{+}\) _is quadratic and_ \(\beta\)_-smooth:_
* _Quadratic:_ \(\exists W_{t}\in\mathbb{R}^{nH\times nH},b_{t}\in\mathbb{R}^{nH},c_{t}\in \mathbb{R}\) _such that_ \(F_{t}(w)=w^{\top}W_{t}w+b_{t}^{\top}w+c_{t},\,\forall w\in\mathcal{K}^{H}\)_._
* _Smooth:_ \(W_{t}\preceq\beta I_{nH\times nH}\)_._
* \(F_{t}:\mathcal{K}^{H}\to\mathbb{R}_{+}\) _is_ \(\sigma\)_-strongly convex in its induced unary form:_ \(f_{t}:\mathcal{K}\to\mathbb{R}_{+}\) _with_ \(f_{t}(z)=F_{t}(z,\ldots,z)\) _is_ \(\sigma\)_-strongly convex, i.e._ \(f_{t}(z)\geq f_{t}(z^{\prime})+\nabla f_{t}(z^{\prime})^{\top}(z-z^{\prime})+\frac{\sigma}{2}\lVert z-z^{\prime}\rVert_{2}^{2}\), \(\forall z,z^{\prime}\in\mathcal{K}\)_._
* \(F_{t}\) _satisfies the following diameter and gradient bound on_ \(\mathcal{K}\)_:_ \(\exists B,L>0\) _such that_ \[B=\sup_{w,w^{\prime}\in\mathcal{K}^{H}}|F_{t}(w)-F_{t}(w^{\prime})|,\,\,\,L= \sup_{w\in\mathcal{K}^{H}}\lVert\nabla F_{t}(w)\rVert_{2}.\]
In the online control problems, when formulating the cost function \(c_{t}\) as a function \(F_{t}\) of the most recent \(H\) controls played, the function \(F_{t}\) itself may depend on the entire history of the algorithm through step \(t-H\). Therefore, it is essential to analyze the regret guarantee of our BCO-M algorithm when playing against an adversary that can be \(t-H\)-adaptive, giving rise to the following assumption.
**Assumption 3.3** (Adversarial adaptivity).: _The adversary chooses \(F_{t}\) independently of the noise \(u_{t-\bar{H}:t}\) which is drawn by the algorithm in the \(H\) most recent steps, but possibly not independently of earlier noise._
Note that Assumption 3.3 is minimal for BCO: if this fails, then in the subcase of a delayed loss, the adversary may fully control the agent's observations, resulting in no possibility of learning.
Self-concordant barrier. The algorithm makes use of a _self-concordant barrier_ \(R(\cdot)\) of \(\mathcal{K}\) as the regularization function in the updates.
**Definition 3.4** (Self-concordant barrier).: _A \(C^{3}\) function \(R(\cdot)\) over a closed convex set \(\mathcal{K}\subset\mathbb{R}^{n}\) with non-empty interior is a \(\nu\)-self-concordant barrier of \(\mathcal{K}\) if it satisfies the following two properties:_
1. _(Boundary property) For any sequence_ \(\{x_{n}\}_{n\in\mathbb{N}}\subset\mathrm{int}(\mathcal{K})\) _such that_ \(\lim_{n\to\infty}x_{n}=x\in\partial\mathcal{K}\)_,_ \(\lim_{n\to\infty}R(x_{n})=\infty\)_._
2. _(Self-concordant)_ \(\forall x\in\mathrm{int}(\mathcal{K})\)_,_ \(h\in\mathbb{R}^{n}\)_,_ 1. \(|\nabla^{3}R(x)[h,h,h]|\leq 2|\nabla^{2}R(x)[h,h]|^{3/2}\)_._ 2. \(|\langle\nabla R(x),h\rangle|\leq\sqrt{\nu}|\nabla^{2}R(x)[h,h]|^{1/2}\)_._
### Algorithm specification and regret guarantee
We present EBCO-M (Algorithm 1) for online bandit convex optimization with memory. The key novelty is the use of an ellipsoidal gradient estimator. It is difficult to establish a low-bias guarantee for the ellipsoidal gradient estimator for general convex loss functions. However, thanks to the quadratic structure of the loss functions in LQR/LQG problems, we can show provably low bias for the ellipsoidal gradient estimator, and therefore achieve optimal regret.
```
1:Input: Convex, closed set \(\mathcal{K}\subseteq\mathbb{R}^{n}\) with non-empty interior, time horizon \(T\), memory length \(H\), step size \(\eta\), \(\nu\)-self-concordant barrier \(R(\cdot)\) over \(\mathcal{K}\), convexity strength parameter \(\sigma\).
2:Initialize \(x_{t}=\arg\min_{x\in\mathcal{K}}R(x)\), \(\forall t=1,\ldots,H\).
3:Compute \(A_{t}=(\nabla^{2}R(x_{t})+\eta\sigma tI)^{-1/2}\), \(\forall t=1,\ldots,H\).
4:Sample \(u_{1},\ldots,u_{H}\sim S^{n-1}\) i.i.d. uniformly at random.
5:Set \(y_{t}=x_{t}+A_{t}u_{t}\), \(\forall t=1,\ldots,H\).
6:Set \(g_{t}=0\), \(\forall t=1,\ldots,\bar{H}\).
7:Play \(y_{1},\ldots,y_{\bar{H}}\).
8:for\(t=H,\ldots,T\)do
9:Play \(y_{t}\), suffer loss \(F_{t}(y_{t-\bar{H}:t})\).
10:Store \(g_{t}=nHF_{t}(y_{t-\bar{H}:t})\sum_{i=0}^{\bar{H}}A_{t-i}^{-1}u_{t-i}\).
11:Set \(x_{t+1}=\arg\min_{x\in\mathcal{K}}\sum_{s=H}^{t}\left(g_{s-\bar{H}}^{\top}x+ \frac{\sigma}{2}\|x-x_{s-\bar{H}}\|^{2}\right)+\frac{1}{\eta}R(x)\).
12:Compute \(A_{t+1}=(\nabla^{2}R(x_{t+1})+\eta\sigma(t+1)I)^{-1/2}\).
13:Sample \(u_{t+1}\sim S^{n-1}\) uniformly at random.
14:Set \(y_{t+1}=x_{t+1}+A_{t+1}u_{t+1}\).
15:endfor
```
**Algorithm 1** Ellipsoidal BCO with memory (EBCO-M)
Before analyzing the regret, we first make note of two properties of Algorithm 1.
**Remark 3.5** (Delayed dependence).: _In Algorithm 1, \(x_{t}\) is independent of \(u_{t-\bar{H}:t}\), \(\forall t\), and therefore \(A_{t}\) is independent of \(u_{t-\bar{H}:t}\) as \(A_{t}\) is determined by \(x_{t}\)._
**Remark 3.6** (Correctness).: \(y_{t}\) played by Algorithm 1 lies in \(\mathcal{K}\): \(\|y_{t}-x_{t}\|_{\nabla^{2}R(x_{t})}^{2}=\|A_{t}u_{t}\|_{\nabla^{2}R(x_{t})}^{2}\leq\|u_{t}\|_{2}^{2}=1\), and by Proposition C.1, the Dikin ellipsoid centered at \(x_{t}\) is contained in \(\mathcal{K}\)._
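The following minimal Python sketch illustrates the mechanics of Algorithm 1 on the unit Euclidean ball with the barrier \(R(x)=-\log(1-\|x\|^{2})\). The fixed quadratic loss, the closed-form stand-in for the barrier-regularized argmin on line 11, and all numerical parameters are our own simplifying assumptions, so this is an illustration of the sampling and ellipsoidal gradient-estimation steps rather than a faithful implementation of EBCO-M.

```python
import numpy as np

rng = np.random.default_rng(3)
n, H, T_steps, eta, sigma = 2, 3, 60, 0.05, 1.0
Hbar = H - 1

def barrier_hess(x):
    """Hessian of R(x) = -log(1 - ||x||^2), a self-concordant barrier for the unit ball."""
    s = x @ x
    return 2.0 * np.eye(n) / (1 - s) + 4.0 * np.outer(x, x) / (1 - s) ** 2

def inv_sqrt(M):
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def crude_argmin(pairs):
    """Closed-form minimizer of sum_s <g_s, x> + (sigma/2)||x - x_s||^2, pulled back into
    the ball; a crude stand-in for the exact barrier-regularized argmin of Algorithm 1."""
    if not pairs:
        return np.zeros(n)
    m = len(pairs)
    x = sum(xv for _, xv in pairs) / m - sum(gv for gv, _ in pairs) / (sigma * m)
    r = np.linalg.norm(x)
    return x if r < 0.95 else 0.95 * x / r

def sphere(dim):
    u = rng.standard_normal(dim)
    return u / np.linalg.norm(u)

# Fixed quadratic loss with memory (assumption): F(window) = ||window - w*||^2.
w_star = 0.3 * rng.standard_normal(n * H)
F = lambda window: float(np.sum((np.concatenate(window) - w_star) ** 2))

xs = [np.zeros(n) for _ in range(H)]
As = [inv_sqrt(barrier_hess(xs[t]) + eta * sigma * (t + 1) * np.eye(n)) for t in range(H)]
us = [sphere(n) for _ in range(H)]
ys = [xs[t] + As[t] @ us[t] for t in range(H)]
g_hist = {}

for t in range(Hbar, T_steps):
    loss = F(ys[t - Hbar:t + 1])
    # Ellipsoidal gradient estimator: g_t = nH * F_t(window) * sum_i A_{t-i}^{-1} u_{t-i}.
    g_hist[t] = n * H * loss * sum(np.linalg.inv(As[t - i]) @ us[t - i] for i in range(H))
    # Delayed FTRL-style update: only gradients older than Hbar steps enter the argmin.
    pairs = [(g_hist[j], xs[j]) for j in g_hist if j <= t - Hbar]
    x_next = crude_argmin(pairs)
    xs.append(x_next)
    As.append(inv_sqrt(barrier_hess(x_next) + eta * sigma * (t + 2) * np.eye(n)))
    us.append(sphere(n))
    ys.append(x_next + As[-1] @ us[-1])

print("final iterate x_T =", xs[-1])
```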
**Theorem 3.7** (EBCO-M regret with strong convexity).: _For any sequence of cost functions \(\{F_{t}\}_{t=H}^{T}\) satisfying Assumption 3.2, constraint set \(\mathcal{K}\) satisfying Assumption 3.1, adversary satisfying Assumption 3.3, and \(H=\mathrm{poly}\left(\log T\right)\), Algorithm 1 satisfies the expected regret bound_
\[\mathbb{E}\left[\text{Regret}_{T}(\texttt{EBCO-M})\right]\leq\tilde{\mathcal{ O}}\left(\frac{\beta n}{\sigma}\sqrt{T}\right),\]
_where expectation is taken over the randomness of the exploration noises, with \(\tilde{\mathcal{O}}\) hiding all natural parameters (\(B,D,L\)) and logarithmic dependence on \(T\)._
**Corollary 3.8** (EBCO-M regret with conditional strong convexity).: _Suppose Algorithm 1 is run on \(\mathcal{K}\) satisfying Assumption 3.1 against an adversary satisfying Assumption 3.3 with a sequence of cost functions \(\{F_{t}\}_{t=H}^{T}\) such that_
1. \(F_{t}\) _is quadratic, convex,_ \(\beta\)_-smooth, has diameter bound_ \(B\) _and gradient bound_ \(L\)_._
2. \(F_{t}\) _is conditionally_ \(\sigma\)_-strongly convex in its induced unary form:_ \(\bar{f}_{t}(z)\stackrel{{\text{def}}}{{=}}\mathbb{E}[f_{t}(z)\mid u _{1:t-H},f_{H:t-H}]\) _is_ \(\sigma\)_-strongly convex._
_Then, Algorithm 1 satisfies the same effective expected regret bound attained in Theorem 3.7, i.e._
\[\mathbb{E}\left[\text{Regret}_{T}(\texttt{EBCO-M})\right]\leq\tilde{\mathcal{ O}}\left(\frac{\beta n}{\sigma}\sqrt{T}\right).\]
## 4 Bandit Controller: Known and Unknown Systems
We will now use our BCO-with-memory algorithm to find an optimal controller (as in Gradu et al. (2020)), arguing that regret in choice of controller transfers into the setting discussed in the previous section. We first consider the case where the system is known, and then reduce the unknown system case to the known system case.
### Known systems
Applying Algorithm 1 to predict controllers2 with losses given by control losses, we obtain Algorithm 2.
Footnote 2: Notation: while our controller \(M\) is typically a tensor, it should be thought of as the output vector of Algorithm 1. As such, the relevant vector and matrix operations in that algorithm will correspond to tensor operations here, and the notation reflects that correspondence. In particular, the inner product on line 11 is an all-dimension tensor dot product and \(A\) is a square “matrix” which acts on tensors of shape \((H,d_{\mathbf{u}},d_{\mathbf{y}})\).
**Theorem 4.1** (Known system control regret).: _Consider a linear dynamical system governed by known dynamics \((A,B,C)\) and the interaction model with adversarially chosen cost functions and perturbations satisfying Assumption 2.3, 2.5, 2.6, 2.7. Then running Algorithm 2 with \(H=\Theta(\mathrm{poly}(\log T))\), \(\sigma=\sigma_{c}(\sigma_{\mathbf{e}}^{2}+\sigma_{\mathbf{w}}\frac{\sigma_{\min}(C)}{1+\|A\|_{\mathrm{op}}^{2}})\), and \(\eta=\Theta\left(\frac{1}{d_{\mathbf{u}}d_{\mathbf{y}}L_{c}H^{3}\sqrt{T}}\right)\) guarantees_
\[\mathbb{E}[\text{Regret}_{T}(\texttt{EBPC})]\leq\tilde{\mathcal{O}}\left(\frac {\beta_{c}d_{\mathbf{u}}d_{\mathbf{y}}}{\sigma_{c}}\sqrt{T}\right),\]
_where expectation is taken over the exploration noises of the algorithm as well as the stochastic components of the perturbations, and \(\tilde{\mathcal{O}}(\cdot)\) hides all universal constants, natural parameters, and logarithmic dependence on \(T\)._
### Unknown systems: control after estimation
Note that EBPC (Algorithm 2) relies on access to the system's Markov operator \(G\), which is available if and only if the system dynamics \((A,B,C)\) are known. When the system dynamics are unknown, we can identify the system using a system estimation algorithm, obtain an estimated Markov operator \(\hat{G}\), and run EBPC with \(G\leftarrow\hat{G}\). Algorithm 3 outlines the estimation of the system dynamics via least squares; a schematic sketch of such a least-squares step is given after Algorithm 2 below.
```
1:Input: Time horizon \(T\), memory length \(H\), Markov operator \(G\). BCO-M parameters \(\sigma,\eta\). Self-concordant barrier \(R(\cdot)\) over \(\mathcal{M}(H,R)\subset\mathbb{R}^{H\times d_{\mathbf{u}}\times d_{\mathbf{y}}}\).
2:Initialize \(M_{1}=\cdots=M_{H}=\underset{M\in\mathcal{M}(H,R)}{\arg\min}\ R(M)\).
3:Compute \(A_{i}=(\nabla^{\!2}R(M_{i})+\eta\sigma I)^{-1/2},\forall i=1,\ldots,H\).
4:Sample \(\varepsilon_{1},\ldots,\varepsilon_{H}\sim S^{H\times d_{\mathbf{u}}\times d_{ \mathbf{y}}-1}\) i.i.d. uniformly at random.
5:Set \(\widetilde{M}_{i}=M_{i}+A_{i}\varepsilon_{i}\), \(\forall i=1,\ldots,H\). Set \(g_{i}=0\), \(\forall i=1,\ldots,\bar{H}\).
6:Play control \(\mathbf{u}_{i}=0\), incur cost \(c_{i}(\mathbf{y}_{i},\mathbf{u}_{i})\), \(\forall i=1,\ldots,\bar{H}\).
7:for\(t=H,\ldots,T\)do
8: Play control \(\mathbf{u}_{t}=\mathbf{u}_{t}(\widetilde{M}_{t})=\sum_{i=0}^{\bar{H}} \widetilde{M}_{t}^{[i]}\mathbf{y}_{t-i}^{\mathbf{nat}}\), incur cost \(c_{t}(\mathbf{y}_{t},\mathbf{u}_{t})\).
9: Observe \(\mathbf{y}_{t+1}\) and compute signal \(\mathbf{y}_{t+1}^{\mathbf{nat}}=\mathbf{y}_{t+1}-\sum_{i=1}^{t}G^{[i]}\mathbf{ u}_{t-i}\).
10: Store \(g_{t}=d_{\mathbf{u}}d_{\mathbf{y}}H^{2}c_{t}(\mathbf{y}_{t},\mathbf{u}_{t})\sum_{i=0}^{\bar{H}}A_{t-i}^{-1}\varepsilon_{t-i}\).
11: Update \(M_{t+1}=\underset{M\in\mathcal{M}(H,R)}{\arg\min}\sum_{s=H}^{t}\left(\langle g_{s-\bar{H}},M\rangle+\frac{\sigma}{2}\|M-M_{s-\bar{H}}\|^{2}\right)+\frac{1}{\eta}R(M)\).
12: Compute \(A_{t+1}=(\nabla^{\!2}R(M_{t+1})+\eta\sigma(t+1)I)^{-1/2}\).
13: Sample \(\varepsilon_{t+1}\sim S^{H\times d_{\mathbf{u}}\times d_{\mathbf{y}}-1}\) uniformly at random. Set \(\widetilde{M}_{t+1}=M_{t+1}+A_{t+1}\varepsilon_{t+1}\).
14:endfor
```
**Algorithm 2** Ellipsoidal Bandit Perturbation Controller (EBPC)
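Algorithm 3 itself is deferred to the paper's appendix; the sketch below is only a generic least-squares identification of the Markov operator from i.i.d. Gaussian exploration controls (the system, horizon, and estimator form are our own assumptions) and is not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
N, H, dx, du, dy = 2000, 10, 3, 2, 2
A = rng.standard_normal((dx, dx)); A *= 0.7 / np.max(np.abs(np.linalg.eigvals(A)))
B, C = rng.standard_normal((dx, du)), rng.standard_normal((dy, dx))

# Roll out the true system with i.i.d. Gaussian exploration controls.
x, us, ys = np.zeros(dx), [], []
for _ in range(N):
    u = rng.standard_normal(du)
    w, e = 0.05 * rng.standard_normal(dx), 0.05 * rng.standard_normal(dy)
    ys.append(C @ x + e); us.append(u)
    x = A @ x + B @ u + w

# Regress y_t on the stacked recent controls z_t = [u_{t-1}; ...; u_{t-H}].
Z = np.array([np.concatenate(us[t - H:t][::-1]) for t in range(H, N)])
Y = np.array(ys[H:])
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)                    # shape (H*du, dy)
G_hat = [coef[i * du:(i + 1) * du].T for i in range(H)]         # estimates of G^[1..H]

G_true = [C @ np.linalg.matrix_power(A, i) @ B for i in range(H)]   # G^[i+1] = C A^i B
err = max(np.linalg.norm(G_hat[i] - G_true[i], 2) for i in range(H))
print("max operator-norm error over G^[1],...,G^[H]:", err)
```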
**Theorem 4.2** (Unknown system control regret).: _Consider a linear dynamical system governed by unknown dynamics \((A,B,C)\) and the interaction model with adversarially chosen cost functions and perturbations satisfying Assumption 2.3, 2.5, 2.6, 2.7. Suppose we obtain an estimated Markov operator \(\hat{G}\) from Algorithm 3 with \(N=\lceil\sqrt{T}\rceil\) and \(H=\Theta(\operatorname{poly}\log T)\). Then Algorithm 2 with \(G\leftarrow\hat{G}\), \(H\gets 3H\), \(\sigma=\frac{1}{8}\sigma_{c}\sigma_{\mathbf{e}}^{2}\), and \(\eta=\Theta\left(\frac{1}{d_{\mathbf{u}}d_{\mathbf{y}}L_{c}H^{3}\sqrt{T}}\right)\) guarantees_
\[\mathbb{E}[\text{Regret}_{T}(\texttt{EBPC})]\leq\tilde{\mathcal{O}}\left( \frac{\beta_{c}d_{\mathbf{u}}d_{\mathbf{y}}}{\sigma_{c}}\sqrt{T}\right),\]
_where expectation is taken over the exploration noises in Algorithm 2, the sampled Gaussian controls in Algorithm 3, and the stochastic components of the perturbations, and \(\tilde{\mathcal{O}}(\cdot)\) hides all universal constants, natural parameters, and logarithmic dependence on \(T\)._
## 5 Discussion and conclusion
We solve the open problem put forth by Gradu et al. (2020) on the optimal rate for online bandit control for the case of LQR/LQG control, improving the regret to \(\tilde{O}(\sqrt{T})\) from \(\tilde{O}(T^{\frac{3}{4}})\) in the semi-adversarial noise model and for strongly convex LQR/LQG cost functions. Our method builds upon recent advancements in bandit convex optimization for quadratic functions, providing the first near-optimal regret algorithm for bandit convex optimization with memory in a nonstochastic setting.
It would be interesting to investigate (1) whether the results can be extended to fully adversarial noise, (2) whether a similar stable controller recovery as seen in (Chen and Hazan, 2021) for fully observable systems can be established for partially observable systems, and whether that can be incorporated to extend our result to stabilizable systems even without access to a stabilizing controller. |
2307.14635 | Two-dimensional lattice with an imaginary magnetic field | We introduce a two-dimensional non-Hermitian lattice model with an imaginary
magnetic field and elucidate various unique features which are absent in
Hermitian lattice models with real magnetic fields. To describe the imaginary
magnetic field, we consider both the Landau gauge and the symmetric gauge,
which are related by a generalized gauge transformation, changing not only the
phase but also the amplitude of the wave function. We discuss the complex
energy spectrum and the non-Hermitian Aharonov-Bohm effect as examples of
properties which are due to the imaginary magnetic field independent of the
generalized gauge transformation. We show that the energy spectrum does not
converge as the lattice size is made larger, which comes from the intrinsic
nonperiodicity of the model. However, we have found that the energy spectrum
does converge if one fixes the length of one side and makes the other side
longer; this asymptotic behavior can be understood in the framework of the
non-Bloch band theory. We also find an analog of the Aharonov-Bohm effect; the
net change of the norm of the wave function upon adiabatically forming a closed
path is determined by the imaginary magnetic flux enclosed by the path, which
provides an experimentally observable feature of the imaginary magnetic field. | Tomoki Ozawa, Tomoya Hayata | 2023-07-27T05:50:24Z | http://arxiv.org/abs/2307.14635v2 | # Two dimensional lattice with an imaginary magnetic field
###### Abstract
We explore gauge-independent properties of two-dimensional non-Hermitian lattice systems with an imaginary magnetic field. We find that the energy spectrum under the open boundary conditions is an example of such gauge-independent properties. We discuss how to obtain the asymptotic continuum energy spectrum upon increasing length of one side using the framework of the non-Bloch band theory. We also find an analog of the Aharonov-Bohm effect; the net change of the norm of the wavefunction upon adiabatically forming a closed path is determined by the imaginary magnetic flux enclosed by the path.
+
Footnote †: preprint: RIKEN-iTHEMS-Report-23
Physics of a charged particle in an external magnetic field has been of fundamental importance in condensed matter physics. In two dimensions, a charged particle in a magnetic field forms the equally spaced energy spectrum called the Landau level, which is directly responsible for phenomena such as the Landau diamagnetism, de Haas-van Alphen effect [1], and the integer and fractional quantum Hall effects [2; 3]. A charged particle on a two-dimensional lattice under a magnetic field is described by the Harper-Hofstadter model [4; 5], which is the paradigmatic model of the Chern insulator [6].
Recently, there is an increasing interest in non-Hermitian physics [7; 8]. Also in non-Hermitian quantum mechanics, the effect of vector potentials has played significant roles. For example, the Hatano-Nelson model is the one dimensional lattice model under an imaginary vector potential [9; 10], and has been of fundamental importance showing the non-Hermitian skin effect and non-trivial point gap topology [11; 12]. The imaginary vector potential has also been crucial in understanding the Landau-Zener transition [13; 14].
With the recent experimental development of non-Hermitian quantum mechanics, one can now realize a variety of non-Hermitian models under control, and there is increasing interest in experimentally realizing two- or higher-dimensional non-Hermitian models [15; 16; 17; 18]. Despite this rapid progress in non-Hermitian physics and the important role magnetic fields play in condensed matter physics, there has been little study of the properties of imaginary magnetic fields in two dimensions, namely, properties of non-Hermitian lattice systems where magnetic fields are imaginary, analogous to the imaginary vector potential in the Hatano-Nelson model.
In this paper, we explore basic properties of two-dimensional lattices with an imaginary magnetic field. We first elucidate the meaning of gauge invariance in non-Hermitian settings, in order to distinguish between properties which are intrinsically due to the imaginary magnetic field and gauge-dependent properties depending on specific realizations and setups. We find that certain spectral properties are gauge invariant, and discuss how the asymptotic energy spectrum, obtained as one side of the system is made longer with the other side fixed, can be nicely understood within the framework of the non-Bloch band theory. In particular, even though the system is non-Hermitian, the asymptotic spectrum under the open boundary condition can be related to the spectrum under the periodic boundary condition satisfying certain conditions. We also find an analog of the Aharonov-Bohm effect for imaginary magnetic fields. Upon making a wavepacket move to form a closed trajectory in real space, the overall change of the norm of the wavefunction is related to the imaginary magnetic flux enclosed by the trajectory. Our work lays the foundation for understanding gauge-invariant properties of imaginary magnetic fields, generalizing the concept of magnetic fields to two-dimensional non-Hermitian settings.
_Model.--_ We consider a two-dimensional square lattice with an imaginary magnetic field. We label the lattice sites by coordinates \((x,y)\), where \(x\) and \(y\) are both integers. We let \(\psi_{x,y}\) denote the amplitude of the wavefunction at site \((x,y)\). The Schrodinger equation governing the dynamics of the system is
\[i\frac{d\psi_{x,y}}{dt}= J\left(e^{i\theta_{X}(x-1,y)}\psi_{x-1,y}+e^{-i\theta_{X}(x,y)} \psi_{x+1,y}\right.\] \[\left.+e^{i\theta_{Y}(x,y-1)}\psi_{x,y-1}+e^{-i\theta_{Y}(x,y)} \psi_{x,y+1}\right), \tag{1}\]
where \(t\) and \(J\) are the time and the hopping parameter, respectively. In this paper, we consider two gauge choices: the Landau gauge and the symmetric gauge. The Landau gauge is defined by \((\theta_{X},\theta_{Y})=(0,Bx)\), and the symmetric gauge is defined by \((\theta_{X},\theta_{Y})=(-By/2,Bx/2)\). When \(B\) is real, these gauges correspond to the ordinary Landau and symmetric gauges with a real magnetic field. In this paper, however, we take \(B\) to represent a purely imaginary magnetic field, \(B=i\mathcal{B}\), with \(\mathcal{B}\) being a real number. We note that, when \(B\) is imaginary, the factors \(e^{i\theta_{X}(x,y)}\) and \(e^{i\theta_{Y}(x,y)}\) can have modulus different from one, implying non-Hermiticity.
_Gauge transformation.--_ Landau and symmetric gauges are equivalent for a real magnetic field because
they are related by a gauge transformation. We first review the gauge transformation in Hermitian setups, and then extend the concept to non-Hermitian settings. The gauge transformation is to consider a state which is related to the original state by a position-dependent phase factor \(\psi^{\prime}_{x,y}=e^{i\chi(x,y)}\psi_{x,y}\), where \(\chi(x,y)\) is a real function. This transformation amounts to applying a local unitary transformation to wavefunction. If we take the Hamiltonian \(H\) in the Hermitian Landau gauge, by choosing the gauge transformation \(e^{i\chi(x,y)}=e^{iBxy/2}\), the Hamiltonian \(H^{\prime}\) transformed under the gauge transformation is in the Hermitian symmetric gauge.
Now we extend the concept of gauge transformations to non-Hermitian setups of imaginary magnetic fields. An important feature of imaginary magnetic fields is that the Landau gauge and the symmetric gauge cannot be connected via an ordinary gauge transformation \(e^{i\chi(x,y)}\) determined by a real function \(\chi(x,y)\). Instead, it is more appropriate to consider a generalized gauge transformation, \(\psi^{\prime}_{x,y}=f(x,y)\psi_{x,y}\) with \(f(x,y)\) being a nonzero complex function, which does not just multiply a phase factor but also allows a change of the scale of the wavefunction. The Hamiltonian changes under this generalized gauge transformation, not by a unitary transformation but by a local (diagonal) similarity transformation. The Landau and symmetric gauges are related via the generalized gauge transformation \(f(x,y)=e^{iBxy/2}=e^{-\mathcal{B}xy/2}\). Since this generalized gauge transformation is a similarity transformation, the energy spectrum is invariant. Furthermore, upon the generalized gauge transformation, the product of hopping amplitudes as one goes around a plaquette of the square lattice does not change, implying that the imaginary magnetic field is also invariant.
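To illustrate these statements concretely, the following minimal sketch (lattice size, field strength, and hopping value are arbitrary assumptions) builds the Hamiltonian of Eq. (1) in both gauges under open boundary conditions, verifies that the two spectra coincide, and exhibits the generalized gauge transformation as a diagonal similarity; since the sign of the exponent of \(f\) depends on the direction of the transformation and on how the phases in Eq. (1) attach to the hops, the sketch simply tests both signs.

```python
import numpy as np

N, J, Bim = 8, 1.0, 0.05            # lattice size, hopping, field strength (B = i*Bim)
idx = lambda x, y: x * N + y        # flatten (x, y) with x, y = 0..N-1

def hamiltonian(theta_x, theta_y):
    """Build H of Eq. (1) under open boundary conditions for given phase functions."""
    H = np.zeros((N * N, N * N), dtype=complex)
    for x in range(N):
        for y in range(N):
            if x + 1 < N:           # hop along x
                H[idx(x, y), idx(x + 1, y)] = J * np.exp(-1j * theta_x(x, y))
                H[idx(x + 1, y), idx(x, y)] = J * np.exp(1j * theta_x(x, y))
            if y + 1 < N:           # hop along y
                H[idx(x, y), idx(x, y + 1)] = J * np.exp(-1j * theta_y(x, y))
                H[idx(x, y + 1), idx(x, y)] = J * np.exp(1j * theta_y(x, y))
    return H

B = 1j * Bim
H_landau = hamiltonian(lambda x, y: 0.0, lambda x, y: B * x)
H_sym = hamiltonian(lambda x, y: -B * y / 2, lambda x, y: B * x / 2)

# Gauge-independent statement: identical open-boundary spectra.
eL, eS = np.linalg.eigvals(H_landau), np.linalg.eigvals(H_sym)
print("max spectral mismatch:", max(np.min(np.abs(eS - lam)) for lam in eL))

# Generalized gauge transformation as a diagonal similarity; test both exponent signs.
for s in (+1, -1):
    f = np.array([np.exp(s * Bim * x * y / 2) for x in range(N) for y in range(N)])
    F = np.diag(f)
    match = np.allclose(F @ H_landau @ np.linalg.inv(F), H_sym)
    print(f"f = exp({s:+d} * B_im * x*y/2) maps Landau to symmetric:", match)
```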
There are various realizations of non-Hermitian Hamiltonians, and what is observable depends on individual system that one works on. Upon studying properties of imaginary magnetic fields, one should thus make clear distinction between what are universal properties of imaginary magnetic fields and what are gauge- and system-specific features which depend on particular realizations. We consider properties intrinsic to imaginary magnetic fields to be those invariant under the generalized gauge transformation.
_Energy spectrum.--_ As well known in the study of non-Hermitian Hamiltonians, energy spectrum under periodic and open boundary conditions can take drastically different values [9; 10; 12; 19]. We should therefore analyze the energy spectrum together with the boundary conditions. We first note that, unlike the case of real magnetic fields, lattice models with imaginary magnetic fields cannot be made periodic in both \(x\) and \(y\) directions. For the Landau gauge, we can make the lattice periodic in the \(y\)-direction, but not in the \(x\) direction, and for the symmetric gauge we cannot make the Hamiltonian periodic in either direction. In this paper, we call the Landau gauge with the periodic boundary condition in the \(y\) direction a _cylindrical configuration_.
We first consider the open boundary conditions. As we have seen, the energy spectra under the Landau and symmetric gauges are the same because they are related by the generalized gauge transformation. We also note that the energy spectrum is invariant upon the change of the origin of the coordinate: the spectrum is invariant under changing \(x\) to \(x+x_{0}\) and \(y\) to \(y+y_{0}\) in the hopping factors \(\theta_{X}\) and \(\theta_{Y}\). This invariance can be shown, for example for the symmetric gauge, by noting that the shift \(x\to x+x_{0}\) can be realized by \(f(x,y)=e^{-\mathcal{B}x_{0}y/2}\) and the shift \(y\to y+y_{0}\) by \(f(x,y)=e^{\mathcal{B}y_{0}x/2}\). Since we do not want the imaginary magnetic fields and their properties to depend on the origin of the coordinates, these transformation properties are desirable.
In Fig. 1(a,b), we plot the energy spectrum in the complex plane for a lattice of size \(N_{x}\times N_{y}\) with \(N_{x}=N_{y}=40\) and the imaginary magnetic field of \(B=0.001i\), and \(0.01i\). The spread of the energy spectrum along the real axis is from \(-4J\) to \(4J\) similar to the case of real magnetic fields. On the other hand, the spread of the energy along imaginary axis varies with the strength of the imaginary magnetic fields as well as the size of the system.
We next consider the case of the periodic boundary condition along the \(y\)-direction under the Landau gauge, namely the cylindrical configuration. The energy spectrum can be obtained by performing the Fourier transformation in the \(y\) direction and diagonalizing the Hamiltonian for each momentum separately. Writing \(\psi_{x,y}=\psi_{x}e^{iky}\), the equation to solve is
\[E\psi_{x}=J\left\{\psi_{x-1}+\psi_{x+1}+\left(e^{-x\mathcal{B}-ik}+e^{x \mathcal{B}+ik}\right)\psi_{x}\right\}. \tag{2}\]
Figure 1: (a,b) Energy spectrum under the open boundary conditions with \(N_{x}=N_{y}=40\), under \(B=0.001i\) for (a) and \(B=0.01i\) for (b). (c,d) Energy spectrum under the periodic boundary condition along the \(y\) direction (cylindrical configuration) with \(N_{x}=40\) and \(B=0.01i\). The \(x\) coordinates are labeled as \(x=0,1,2,\cdots,40\) for (c) and \(x=-20,-19,\cdots,19\) for (d). All the axes are in units of \(J\).
We note that this is an analog of the Harper equation for the imaginary magnetic field [4]. In Fig. 1(c,d), we plot the energy spectrum in the cylindrical configuration with \(B=0.01i\) for two different choices of the origin of the \(x\) coordinate. We find the unexpected feature that the energy spectrum depends on the origin of the coordinates. Under the open boundary conditions, we saw that the shift \(x\to x+x_{0}\) is achieved by the generalized gauge transformation \(f(x,y)=e^{-\mathcal{B}x_{0}y}\). However, this gauge transformation is not periodic in the \(y\) direction and thus it is not compatible with the cylindrical configuration.
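As a concrete illustration of Eq. (2), the short sketch below (lattice length, field strength, and the \(k\)-grid are arbitrary choices) sweeps the momentum \(k\) for two labelings of the \(x\) coordinate and shows that the resulting spectra differ, as in Fig. 1(c,d).

```python
import numpy as np

J, Bim, Nx = 1.0, 0.01, 40          # B = i*Bim as in the text

def cylinder_spectrum(x0, nk=101):
    """Eigenvalues of Eq. (2) for k on a grid, with x labeled x0, x0+1, ..., x0+Nx-1."""
    xs = np.arange(Nx) + x0
    evals = []
    for k in np.linspace(-np.pi, np.pi, nk, endpoint=False):
        H = np.diag(J * (np.exp(-xs * Bim - 1j * k) + np.exp(xs * Bim + 1j * k)))
        H += np.diag(J * np.ones(Nx - 1), 1) + np.diag(J * np.ones(Nx - 1), -1)
        evals.append(np.linalg.eigvals(H))
    return np.concatenate(evals)

# The spectrum depends on the choice of origin, as discussed in the text.
for x0 in (0, -Nx // 2):
    E = cylinder_spectrum(x0)
    print(f"x0 = {x0:4d}: max |Im E| = {np.max(np.abs(E.imag)):.3f}")
```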
_Asymptotic spectrum and non-Bloch band theory.--_ We now examine properties of the asymptotic energy spectrum under the open boundary conditions. We consider fixing the magnetic field and increasing the system size. We first take \(N_{x}=N_{y}=N\), namely keeping the same length in both directions, and making \(N\) large. We find that the energy spectrum does not converge as \(N\) becomes large. (Details are given in the Appendix.) This is in stark contrast to any two-dimensional Hermitian system with a periodic structure, in which increasing the system size makes the energy spectrum converge to a continuous band structure. The origin of the non-convergence of our energy spectrum is that, even though the imaginary magnetic field is fixed and constant over the entire lattice, hopping strengths such as \(e^{-\mathcal{B}x}\) keep increasing along the \(x\) direction, and the energy spectrum is not bounded in the imaginary direction.
Even though the spectrum does not converge keeping \(N_{x}=N_{y}\), we find that the spectrum does converge as one fixes the size of one side and makes the other side become longer. In Fig. 2, we plot the energy spectrum under the open boundary conditions when \(B=0.01i\) fixing \(N_{x}=40\) and choosing \(N_{y}=50\), \(100\), \(200\). Together with the spectrum when \(N_{y}=40\) plotted in Fig. 1(a), one sees that the overall shape tends to stabilize as \(N_{y}\) becomes large. We can understand this asymptotic energy spectrum in the limit of large \(N_{y}\) by means of the non-Bloch band theory [20; 21]. The non-Bloch band theory is a formalism to obtain the continuous energy spectrum of non-Hermitian systems under the open boundary condition. To understand the asymptotic behavior of fixing \(N_{x}\) and making \(N_{y}\to\infty\), we now regard the index \(x\) to be an internal index of a one dimensional system elongated along the \(y\) direction.
In order to apply the non-Bloch band theory, we perform the Fourier transformation along the \(y\) direction, as done in Eq. (2) above. In the non-Bloch band theory, we replace \(e^{ik}\) by a general complex number \(\beta\), and solve the above eigenvalue equation for a given value of \(E\). Writing the above eigenvalue equation as \(E\vec{\psi}_{X}=H_{X}(\beta)\vec{\psi}_{X}\), where \(\vec{\psi}_{X}\) is a vector whose elements are \(\psi_{x}\), solutions to the eigenvalue equation for a given value of \(E\) are given by the solutions of \(\det[H_{X}(\beta)-E]=0\). This equation is an algebraic equation for \(\beta\) with degree \(2N_{x}\), and thus we generally have \(2N_{x}\) solutions of \(\beta\). Writing the \(2N_{x}\) solutions of \(\beta\) in the ascending order of their magnitudes and labeling them as \(\beta_{1}\), \(\beta_{2}\), \(\cdots\), the eigenvalue \(E\) belongs to a continuum energy band if and only if \(|\beta_{N_{x}}|=|\beta_{N_{x}+1}|\)[20]. The corresponding values of \(\beta_{N_{x}}\) and \(\beta_{N_{x}+1}\) form the generalized Brillouin zone in the complex plane. We find that the generalized Brillouin zone coincides with the ordinary Brillouin zone, namely \(\beta=e^{ik}\) for real \(k\), when the \(x\) coordinate is labeled so that \(x=0\) is in the center of the system. (See the Appendix for a detailed derivation.) This implies that the solutions of Eq. (2) for real \(k\), which are nothing but the energy spectrum of the cylindrical configuration, are the asymptotic spectrum when fixing \(N_{x}\) and making \(N_{y}\) large under the open boundary conditions. The fact that the generalized Brillouin zone coincides with the ordinary Brillouin zone implies that there is no non-Hermitian skin effect. This absence of the non-Hermitian skin effect is related to the \(\mathcal{PT}\)-symmetry present in the system [22].
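As a numerical companion to this criterion, the following sketch (system size, field strength, and the reference momentum are illustrative choices, and the linearization is our own reformulation rather than a construction from the paper) writes \(H_{X}(\beta)=T+\beta D_{+}+\beta^{-1}D_{-}\) with \(T\) the hopping part of Eq. (2) and \(D_{\pm}=\mathrm{diag}(e^{\pm x\mathcal{B}})\), so that \(\det[H_{X}(\beta)-E]=0\) becomes the quadratic eigenvalue problem \((D_{-}+\beta(T-E)+\beta^{2}D_{+})\vec{\psi}_{X}=0\), and then checks \(|\beta_{N_{x}}|=|\beta_{N_{x}+1}|\) for an energy taken from the cylindrical-configuration band.

```python
import numpy as np

J, Bim, Nx = 1.0, 0.01, 20
xs = np.arange(Nx) - Nx // 2        # centered coordinate, x = 0 near the middle
T = J * (np.diag(np.ones(Nx - 1), 1) + np.diag(np.ones(Nx - 1), -1))
Dp, Dm = np.diag(np.exp(xs * Bim)), np.diag(np.exp(-xs * Bim))

def beta_moduli(E):
    """Moduli of the 2*Nx roots beta of det[H_X(beta) - E] = 0, via linearization of
    the quadratic eigenvalue problem (D_- + beta*(T - E) + beta^2*D_+) v = 0."""
    I, Z = np.eye(Nx), np.zeros((Nx, Nx))
    A = np.block([[Z, I], [-Dm, -(T - E * I)]]).astype(complex)
    B = np.block([[I, Z], [Z, Dp]]).astype(complex)
    return np.sort(np.abs(np.linalg.eigvals(np.linalg.solve(B, A))))

# Reference energy on the cylindrical-configuration band (real k, centered x).
k = 0.7
Hk = T + np.diag(J * (np.exp(-xs * Bim - 1j * k) + np.exp(xs * Bim + 1j * k)))
E_band = np.linalg.eigvals(Hk)[0]
mods = beta_moduli(E_band)
print("ratio |beta_{Nx+1}| / |beta_{Nx}| =", mods[Nx] / mods[Nx - 1])  # ~1 on the band
```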
In Fig. 2(d), we show the continuum bands obtained from the energy spectrum of a cylindrical configuration, taking \(x=0\) to be at the center. We see that the spectra in Fig. 2(a-c) indeed approaches that of Fig. 2(d). With different values of \(\mathcal{B}\) and \(N_{x}\), we find that there is a general structure of a continuous spectrum along the real axis and several oval structures spread along the imaginary direction, but the exact number of ovals and the spread along the imaginary direction depend on specific values of the parameters. We stress that the energy spectrum under the open boundary conditions does not depend on how the coordinates are chosen. Nevertheless, the asymptotic spectrum coincides with the energy spectrum in the cylindrical configuration where the coordinates are chosen in a symmetric manner.
Figure 2: (a,b,c) Asymptotic energy spectrum fixing \(N_{x}=40\) under the open boundary conditions for \(B=0.01i\). (a) \(N_{y}=50\). (b) \(N_{y}=100\). (c) \(N_{y}=150\). (d) Asymptotic energy spectrum predicted from the non-Bloch band theory. All the axes are in units of \(J\).

_Aharonov-Bohm effect for imaginary magnetic fields.--_ We now discuss an effect analogous to the Aharonov-Bohm effect [23] for imaginary magnetic fields. The non-Hermitian Aharonov-Bohm effect in a parameter space has been experimentally observed for synthetic mechanical metamaterials [24; 25], but, to our knowledge, it has never been observed in real space. We consider the setup where we start from a wavepacket around the center of the lattice, and then add external forces to make the wavepacket move. As the trajectory of the wavepacket forms a closed path, the change of the magnitude of the wavefunction is precisely related to the imaginary magnetic flux enclosed by the path. We now numerically demonstrate the effect.
As an initial state, we choose a normalized Gaussian wavepacket \(\psi_{x,y}\propto e^{-((x-x_{0})^{2}+(y-y_{0})^{2})/(2\sigma^{2})}\) centered around the point \((x_{0},y_{0})\) with the spread \(\sigma=5\). We apply a force changing sinusoidally in time, created by a potential \(V_{x,y}=E_{x}\sin(2\pi t/T)x+E_{y}\sin(2\pi t/T)y\), so that the wavepacket makes a rectangular trajectory either in the counter-clockwise or the clockwise direction. For the counter-clockwise trajectory, we apply \((E_{x},E_{y})=(2,0)\), \((0,1)\), \((-2,0)\), and \((0,-1)\) for \(0\leq t\leq T\), \(T\leq t\leq 2T\), \(2T\leq t\leq 3T\), and \(3T\leq t\leq 4T\), respectively. For the clockwise trajectory, we instead apply \((E_{x},E_{y})=(0,1)\), \((2,0)\), \((0,-1)\), and \((-2,0)\) for \(0\leq t\leq T\), \(T\leq t\leq 2T\), \(2T\leq t\leq 3T\), and \(3T\leq t\leq 4T\), respectively. We use \(T=5/J\) in the numerical simulation.
In the following numerical simulation, we choose a lattice of size \(N_{x}=N_{y}=50\) with an imaginary magnetic field \(B=0.001i\) under the open boundary conditions. We use coordinates \(x=0,1,2,\cdots,49\) and \(y=0,1,2,\cdots,49\) for the numerical calculation. Starting from a wavepacket centered around \((x_{0},y_{0})=(25,25)\) and evolving in time until \(t=4T\), the center of the wavepacket forms rectangles as plotted in Fig. 3(a) for the counter-clockwise trajectory. We performed the simulation in both the Landau and symmetric gauges, and the trajectory of the center of the wavepacket slightly differs between the two gauges. In Fig. 3(b), we plot the modulus of the wavefunction, \(|\vec{\psi}|\equiv\sqrt{\sum_{x,y}|\psi_{x,y}|^{2}}\), as a function of time for both gauges for the counter-clockwise trajectory. The same quantity for the clockwise trajectory is also plotted in Fig. 3(c). We see that during the time evolution \(|\vec{\psi}|\) is generally different between the two gauges, but after the closed trajectory is formed, they coincide. The decay of the norm of the wavefunction is related to the Aharonov-Bohm factor times the so-called dynamical phase. The Aharonov-Bohm factor is determined by \(e^{iBA_{\text{Area}}}=e^{-\mathcal{B}A_{\text{Area}}}\approx 0.973\) for the counter-clockwise trajectory, where we used that the area enclosed by the trajectory is \(A_{\text{Area}}\approx 27.4\) for both gauges. There is another contribution to the change of \(|\vec{\psi}|\): the dynamical phase factor, which depends on the growth/decay of the wavefunction due to the complex instantaneous eigenvalues. It turns out that, for the parameters used here, the dynamical phase factor is almost negligible, and the final values in Fig. 3(b,c) agree well with the Aharonov-Bohm factor. To be more precise, in order to extract solely the effect of the Aharonov-Bohm factor, we calculate the ratio of \(|\vec{\psi}|\) for the counter-clockwise and clockwise trajectories, in which the dynamical phase contribution should cancel. We plot the result in Fig. 3(d), where we see a perfect agreement between the final value and the Aharonov-Bohm factor \(e^{i2BA_{\text{Area}}}=e^{-2\mathcal{B}A_{\text{Area}}}\approx 0.947\).
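The quoted Aharonov-Bohm factors follow directly from \(\mathcal{B}=0.001\) and \(A_{\text{Area}}\approx 27.4\); a quick numerical check:

```python
import numpy as np

B_im = 0.001   # imaginary part of B = 0.001i
area = 27.4    # area enclosed by the closed trajectory (both gauges)

print(np.exp(-B_im * area))        # ~0.973: counter-clockwise Aharonov-Bohm factor
print(np.exp(+B_im * area))        # ~1.028: clockwise factor (Fig. 3(c))
print(np.exp(-2 * B_im * area))    # ~0.947: ratio of counter-clockwise to clockwise
```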
_Conclusion.--_ We have studied spectral and geometrical properties of two-dimensional lattices under a uniform imaginary magnetic field. Our results unveil features of imaginary magnetic fields which are intrinsically different from real magnetic fields, such as the impossibility of imposing periodic boundary conditions in both directions and the non-convergence of the energy spectrum in the limit where both linear dimensions are taken large. On the other hand, there are also similarities to the real magnetic field, such as a description in terms of the Harper equation and an analog of the Aharonov-Bohm effect. Although we focused on the case of purely imaginary magnetic fields, the general results presented in the paper, such as the non-Bloch band theory when increasing the length of one direction and the non-Hermitian Aharonov-Bohm effect, should be valid
Figure 3: Simulation of the Aharonov-Bohm effect under imaginary magnetic fields for the lattice size of \(50\times 50\) with \(B=0.001i\) under the open boundary conditions. (a) The trajectory of the mean position of the wavefunction, starting from \((x_{0},y_{0})=(25,25)\). (b) \(|\vec{\psi}|\) as a function of time (in units of \(1/J\)) for the counter-clockwise trajectory. The horizontal dotted line is the theoretical value of the Aharonov-Bohm factor \(e^{iBA_{\text{Area}}}=e^{-\mathcal{B}A_{\text{Area}}}\approx 0.973\). (c) \(|\vec{\psi}|\) as a function of time for the clockwise trajectory. The horizontal dotted line is at \(e^{\mathcal{B}A_{\text{Area}}}\approx 1.028\). (d) The ratio of \(|\vec{\psi}|\) of the counter-clockwise trajectory to that of the clockwise trajectory for each time. The horizontal dotted line is at \(e^{-2\mathcal{B}A_{\text{Area}}}\approx 0.947\). In (b,c,d), the solid lines are for the Landau gauge whereas the dashed lines are for the symmetric gauge.
also for more general complex magnetic fields including both real and imaginary components.
Our results provide a starting point toward the research field of non-Hermitian magnetic fields. We have focused on physics on lattices; properties under an imaginary magnetic field in continuous two-dimensional systems are also an open field of study. Understanding properties under more general gauge fields such as complex electromagnetic fields and non-Abelian gauge fields (e.g. spin-orbit coupling) is also left for future study.
_Acknowledgements.--_ The authors would like to thank Shuichi Murakami for helpful discussion on the non-Bloch band theory. This work is supported by JSPS KAKENHI Grant No. JP20H01845, Grant No. JP21H01007, Grant No. JP21H01084, and JST CREST Grant No. JPMJCR19T1.
|
2301.01808 | MessageNet: Message Classification using Natural Language Processing and
Meta-data | In this paper we propose a new Deep Learning (DL) approach for message
classification. Our method is based on the state-of-the-art Natural Language
Processing (NLP) building blocks, combined with a novel technique for infusing
the meta-data input that is typically available in messages such as the sender
information, timestamps, attached image, audio, affiliations, and more. As we
demonstrate throughout the paper, going beyond the mere text by leveraging all
available channels in the message could yield an improved representation and
higher classification accuracy. To achieve message representation, each type of
input is processed in a dedicated block in the neural network architecture that
is suitable for the data type. Such an implementation enables training all
blocks together simultaneously, and forming cross-channel features in the
network. We show in the Experiments Section that, in some cases, a message's
meta-data holds additional information that cannot be extracted just from
the text, and when using this information we achieve better performance.
Furthermore, we demonstrate that our multi-modality block approach outperforms
other approaches for injecting the meta-data into the text classifier. | Adar Kahana, Oren Elisha | 2023-01-04T20:11:00Z | http://arxiv.org/abs/2301.01808v1 | # MessageNet: Message Classification using Natural Language Processing and Meta-data
###### Abstract
In this paper we propose a new Deep Learning (DL) approach for message classification. Our method is based on state-of-the-art Natural Language Processing (NLP) building blocks, combined with a novel technique for infusing the meta-data input that is typically available in messages, such as the sender information, timestamps, attached images, audio, affiliations, and more. As we demonstrate throughout the paper, going beyond the mere text by leveraging all available channels in the message could yield an improved representation and higher classification accuracy. To achieve message representation, each type of input is processed in a dedicated block of the neural network architecture that is suitable for the data type. Such an implementation enables training all blocks together simultaneously and forming cross-channel features in the network. We show in the Experiments Section that, in some cases, a message's meta-data holds additional information that cannot be extracted from the text alone, and that using this information leads to better performance. Furthermore, we demonstrate that our multi-modality block approach outperforms other approaches for injecting the meta-data into the text classifier.
Message classification \(\cdot\) Meta-data injection \(\cdot\) Deep learning \(\cdot\) Natural language processing
## 1 Introduction
Many real-world applications require message classification and regression, such as handling spam emails [1], ticket routing [2], article sentiment review [3], and more. Accurate message classification could improve critical scenarios such as call centers (routing tickets based on topic) [2], alert systems (flagging highly important alert messages) [4], and categorizing incoming messages (automatically uncluttering emails) [1, 5]. The main distinction between text and message classification is the availability of additional attributes, such as the sender information, timestamps, attached images, audio, affiliations, and more. New message classification contests often appear on the prominent platforms (e.g., Kaggle [6]), showing how sought after this topic is. There are already many data-sets to explore in this field, but no clear winning algorithm that fits all scenarios with high accuracy, efficiency, and simplicity (in terms of implementation and interpretation).
A notable advancement in the field of NLP is the attention-based transformer architecture [7]. This family of methods excels in finding local connections between words and better understanding the meaning of a sentence. A leading example is the Bidirectional Encoder Representations from Transformers (BERT) [8] as well as its variations [9, 10, 11], which win certain benchmarks [12, 13]. Several packages, such as Huggingface Transformers [14], make such models accessible and easy to use, as well as provide pre-trained versions. In addition, one can use transfer learning [15] to further train BERT on their own data, creating a tailored model for the specific task at hand.
BERT, and often other transformer-based models, are designed to handle text. They operate on the words of a given text by encoding them into tokens, and they learn the context of sentences from the connections between the tokens. This approach is limited, since additional, not necessarily textual, information can sometimes be extracted and used. Throughout this paper we refer to this information as meta-data to distinguish it from the main stream of textual content (though one may recognize it as the core data, depending on the application). For example, meta-data could be the time stamp of when the text was written, sent, published, etc. Another example is the writer of the text, when dealing with a small
list of writers of a corpus. There have been some attempts to incorporate these into BERT models, for example by assigning artificial tokens for writers or for temporal segments (a token per month, for example) [16]. This approach is limited since not all meta-data entries are suitable for encoding by tokenization. In the example of temporal segments, more segments introduce more tokens, leading to large computational resource consumption, while fewer segments cause loss of information. Another approach is to concatenate the embeddings, created by the transformer module, with the outputs of an embedding module for the meta-data. In this approach, a transformer for the text is trained (using direct or transfer learning) on the text, and other separate modules (time series embeddings, sender embeddings, etc.) are used to embed the meta-data. All the embeddings are then concatenated and used as inputs to a classification network. A drawback of this approach is that the internal network features are not trained from a combination of different input streams, so cross-dependent features are missed (e.g., the importance of an email is determined not only by its content, but also by who sent it, when, to whom else, its attachments, etc.).
#### Novelty
To bridge these gaps, we implement a transformer-based model that is able to train on both the text (transformer architecture) and the meta-data. We create a new block-based network architecture, in which each block handles a different kind of input. Splitting into blocks provides the flexibility to handle different kinds of inputs. In addition, compared to the standard practices that suggest separate training and a "voting between classifiers" method, the proposed approach trains on the text and meta-data simultaneously, such that the language model (transformer) block weights are adjusted based on the information passing through the meta-data classification block weights and vice versa. We present results of the method with a main block based on a transformer that handles the text, and an additional block that handles the pre-processed meta-data inputs individually. This method can be extended to support more complex blocks, such as an advanced DL model for images [17], a temporal analysis block to extract information from temporal meta-data [18], additional transformer blocks for multiple text inputs (for example, the subject and body of an email), categorical data, and more. To demonstrate the performance of the method we run multiple experiments on publicly available data-sets (Amazon [19], Yelp [20], Reddit [21] and Enron [5]) to show the advantages of using the block architecture, and compare it to the benchmarks in the literature (reviewed in the related work in Section 2), which are based on the transformer benchmark (BERT), Random Forest (RF) classifiers, and Multi-Layer Perceptron (MLP) networks. We achieve competitive results, and in most cases outperform those benchmarks, showcasing that there is much to extract from the meta-data compared to just using text for classification tasks.
## 2 Related work
**Natural language processing tasks.** The publication of BERT [8] has been a turning point in the text classification domain. The authors demonstrated high accuracy on complicated tasks such as question answering, named entity recognition, and textual entailment [13]. Since then, many authors have investigated improved architectures and variations such as RoBERTa [9], ALBERT [10], DistilBERT [11], and more. Some focus on better performance on the benchmark tasks, and some create lighter versions of the model that reduce the computational demands while preserving competitive accuracy. Other propositions, like XLNet [22] and GPT-3 [23], introduce architectures competing with BERT (also using transformers). The benchmarks for these models are commonly GLUE, SuperGLUE [13], SQuAD 2.0 [12], and more. Text classification is a less common benchmark, but the models can be used for this task, as shown in this paper.
**Accessibility of transformers.** Another contributing factor to the growing popularity of transformers is the variety of open-source code bases that make it easy for data scientists to experiment with different architectures and then use them in their applications. The Huggingface transformers package [24] is a Python library that can be used to train and fine-tune models, with a large variety of base models to choose from and a straightforward implementation. GPT-3 [23] has been made available through a convenient application programming interface (API), similar to several other implementations. We mention that many libraries that do not use machine learning for text classification exist, such as NLTK [25], spaCy [26], and more. These are also easily accessible and offer advanced NLP feature extraction and other text analysis tools.
**Text classification.** There are many tasks in text classification, and each may be considered a field of study on its own. A popular one is sentiment analysis, aiming to classify texts as positive or negative. The survey in [3] presents the challenges in this domain and the latest innovations. Another example is the Spam or Ham task, where one tries to differentiate a relevant email from irrelevant ones (like advertisements, phishing attempts, etc.) [1]. In this work we investigate multi-label classification of messages. For example, classifying the category of a product based on a purchase review, the category of a thread based on its posts, the culinary specialty of a restaurant from customer reviews, the type of product from purchase feedback, the category of an incoming email, and so on. For each of these tasks, publicly
available data-sets exist and are used in this work to quantify the success of the proposed method. In addition, there are many competitions on Kaggle [6] and other machine-learning research benchmark websites that use these data-sets.
**Message classification with meta-data.** There are two commonly used methods to incorporate meta-data with textual information for message categorization when using transformers. The first is concatenating the embeddings, computed by the transformer, with the outputs of other embedding systems that are built specifically for the meta-data. In [27], the authors use this approach for visual and audio meta-data. In [28] the authors use properties of the text as meta-data and show that this approach can also work for the German language. In [29] the meta-data is the layout information of scanned documents, and the authors propose an innovative architecture to extract information from both the text and the layout information. There are many other studies exploring this approach. While it is simple to implement, this strategy has several drawbacks. The training is usually done independently for the text and the meta-data, and the decisions are made in a "voting between classifiers" manner. This may lead to conflicts, since the text and the meta-data may respond differently to the label, resulting in very low confidence predictions. We compare the performance of the proposed method to this one in the results section. The second popular approach is to assign tokens to the meta-data and add them to the tokenized input of the transformer. In [16], in addition to a hierarchy of labels, the authors introduce a method to inject multiple meta-data inputs of varying types (web, references, etc.) as tokens in the embedding vector. Due to the simplicity of embedding the information using tokens, in terms of algorithm and implementation, developers use this approach in their codes and it appears in many online notebooks and blogs (open-source codes) as well. The main drawback of these methods is their limited robustness to the input data. For example, representing an image as a series of tokens is done either by encoding the image, which usually incurs a loss of information, or by utilizing a large number of tokens, which sharply raises the computational cost. In the numerical experiments presented here, we do not compare to this method since the feature extraction we are using has a varying and potentially high number of features, which leads to computational resource exhaustion when using this method. In this work, we propose a method that can address the issues of the two popular methods, as described in the next section.
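For reference, a minimal sketch of the embedding-concatenation baseline described above (which also serves as reference methods 3, 4, 7, and 8 in the Results section) could look as follows. The arrays here are random placeholders standing in for BERT sentence embeddings and pre-processed meta-data features, and the random-forest settings follow Section 4.3.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, d_text, d_meta, n_classes = 1000, 768, 16, 12        # illustrative sizes
text_emb = rng.normal(size=(n, d_text))   # stand-in for (frozen) BERT embeddings
meta = rng.normal(size=(n, d_meta))       # stand-in for meta-data feature vectors
y = rng.integers(0, n_classes, size=n)    # stand-in for category labels

X = np.concatenate([text_emb, meta], axis=1)            # embedding concatenation
clf = RandomForestClassifier(n_estimators=250, max_depth=None,
                             min_samples_leaf=1, criterion="gini")
clf.fit(X, y)
```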
## 3 Approach
We propose a block-based method to train a language model together with meta-data for a specific text classification task. By routing each type of meta-data input to a different block, one can use state-of-the-art deep-learning architectures to handle each meta-data type individually and more efficiently. In addition, the training is done using all blocks in a unified training loop, adjusting the weights of all blocks in every optimizer step, so that information from both the text and the meta-data sources contributes to the learning process.
### Blocks architecture
Transformer models, including BERT, can be used for text classification with the input text and corresponding output labels. However, we claim that a lot of information can also be found in the meta-data of the text. As can be seen in Figure 1, we use the transformer model as a single block of a neural network. Then, we can add additional blocks for dealing with the meta-data inputs.
With the recent developments in deep-learning, there are many advanced methods of extracting information from input signals for classification. For example, in [30] the authors discuss ways to use deep-learning for analysing time series data. Messages typically have a temporal element, such as the time of arrival of an email, the time when a review was posted, when a paper was published, etc. We propose to utilize these advancements together for better model training.
In Figure 1, an overall schematic view of the proposed approach is presented. In the first row, a standard transformer architecture is illustrated [24]. The inputs are the tokenized messages, followed by the transformer layers that are initially pre-trained. These layers are further trained (using transfer learning) to produce the embeddings, and a classification layer is used to predict the categories of the messages. The transformer layers in this illustration are presented using dashed lines to express the transfer learning process. The second row presents a meta-data extractor using a deep neural network. The green layers express layers trained for classification (for example, a fully-connected network architecture). The third row presents another transfer learning architecture. Each row expresses a different block of a neural network that handles a different meta-data input.
#### 3.1.1 Representation
Using the proposed approach, the expected representation of the data is very different from the one achieved by the prominent approaches mentioned in Section 2. Since the blocks are trained simultaneously, information from the meta-data may impact the training of the core textual block (the transformer block), and vice-versa. Therefore, a new
and different representation is achieved. As a toy illustrative example, if a single sample has text suggesting a specific class but meta-data suggesting another, the textual block would train differently (and the textual representation would be different, taking this into account), and the weights of the textual block would be able to capture this difference. We illustrate the difference in Figure 2. In Figure 2(a) we illustrate using tokens to inject the meta-data into the transformer layers, producing an embedding (e1-eM1). In Figure 2(b) we demonstrate concatenation of the embedding produced by the transformer layers (e1-eM2) with a vector representation (v) of the meta-data. In Figure 2(c) the proposed blocks approach is illustrated. The trainable blocks are wrapped in green to emphasize that training is done together, creating a unified representation of the message, compared to 2(b), where the transformer layers and the dense layers are trained independently and the output representations are concatenated. We emphasize that the blocks may have different architectures and may produce different vector representations, leading to a better overall representation.
#### 3.1.2 Combine
After the blocks, we propose to take a combination of their outputs (e.g., averaging the outputs, summing the outputs, concatenating the outputs, etc.). This combination may also be trained as part of the training loop. From the experimental tests we find that the best performance is achieved when the combine step is a weighted concatenation of the outputs, producing a vector whose size is the number of classes times the number of blocks. We then use a few dense layers to produce an output whose size is the number of classes, and apply a softmax activation for the classification.
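A minimal sketch of this combine step is given below (written in PyTorch purely for illustration; the hidden-layer width is an arbitrary assumption, and this is not the authors' implementation, which is based on ktrain/Keras as described in Section 4.3).

```python
import torch
import torch.nn as nn

class Combine(nn.Module):
    """Weighted concatenation of per-block outputs followed by dense layers."""
    def __init__(self, num_blocks, num_classes, hidden=64):
        super().__init__()
        # each block is assumed to emit a vector of length num_classes
        self.net = nn.Sequential(
            nn.Linear(num_blocks * num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, block_outputs):           # list of (batch, num_classes) tensors
        z = torch.cat(block_outputs, dim=-1)    # (batch, num_blocks * num_classes)
        return torch.softmax(self.net(z), dim=-1)
```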
## 4 Experiments
For the numerical experiments we train the network of the proposed approach with two blocks: a) the transformer block operating on the main text, and b) the meta-data block, which is a fully-connected block operating on a one-dimensional meta-data vector. We emphasize that more advanced blocks can be used, but even this simple architecture provided results that demonstrate the value of the meta-data and the blocks approach. In addition, it is simple to distinguish
Figure 1: A sketch diagram of the block method. The first row illustrates a transformer architecture to handle textual input, while the second and third rows illustrate neural networks for handling a specific meta-data input. All the modules are then combined to create a unified prediction. Blue tiles are static and green tiles are trainable layers. The dashed green tiles illustrate the transfer learning layers.
between the transformer-only architecture and the transformer-plus-meta-data architecture in this way, giving us better explainability of the contribution of the meta-data.
### Data-sets
**Amazon reviews.** The Amazon product reviews data-set [19] contains a large number of product reviews, varying by product category (label). The main text is the description of the review. The meta-data we extract from the Amazon data-set comes from the reviewer field (sender), the review creation date and time (timestamp), and the overall satisfaction (enumerated). We expect correlations such as a user often reviewing a specific category of products, or products being bought more frequently at a specific time of the day. These hold information about the product that is not available from the text, and enhance the classification capabilities. This data-set has in total 82.83 million product reviews. We sub-sample a subset of the reviews by choosing the first 100,000 reviews of each category and then selecting the ones with the longest texts. We eventually save 4,687 reviews for training, 521 for validation, and 2,064 for testing (eliminating text-less messages). The final data-set is close to balanced (roughly the same number of reviews per category).
**Yelp Open Data-set.** The Yelp restaurant reviews data-set [20] contains restaurant reviews, varying by culinary class (label). The main text is the restaurant review. We extract meta-data from the reviewer field (sender), the review creation date and time (timestamp), whether the review is useful/funny/cool (enumerated), and the number of stars (enumerated). The expected correlations here are that reviewers focus on specific restaurants, that breakfast and dinner have different timestamps, and that properties of the review itself can suggest a rating for the experience. We clean the data-set similarly to the Amazon one, and save 4,612 reviews for training, 513 for validation, and 2,050 for testing, roughly balanced as well.
**Reddit.** The Reddit data-set [21] contains a large volume of Reddit posts, each belonging to a specific sub-Reddit (label). Specifically, we use the data-set of version 2 from 2010. The main text is the post content. We extract meta-data from the post uploader field (sender), the post upload date and time (timestamp), whether the post has comments (enumerated), and whether it has attachments (enumerated). The expected correlations are that people usually post in the same sub-Reddits, that different posting times can be due to certain events in time, and that the comment and attachment properties often suggest the importance of the post. We clean the data-set as well and save 4,950 posts for training, 550 for validation, and 2,200 for testing. We focus on 12 sub-Reddits (12 unique labels) and the data-set is roughly balanced.
Figure 2: Overview of the two common methods (a, b) and the proposed approach (c), emphasizing that the blocks are trained together, so information from one block may affect the training of the other blocks (as opposed to (b) for example, where the blocks are trained separately and concatenated)
**Enron Email Data-set.** The Enron email data-set [5] contains a corpus of emails. Although this data-set is publicly available, we had access to a limited internal version that has been tagged based on email category (label). The main text is the email body. We extract meta-data from the email sender field (sender) and the email reception date and time (timestamp). The expected correlations are that certain senders send emails on certain topics (colleagues send emails about work, meetings, etc., while friends send emails about personal matters), and that the time of arrival of the email is usually correlated with the subject. We save 4,500 emails for training, 500 for validation, and 2,000 for testing. This data-set is not balanced.
### Feature extractor
To conduct the numerical experiments we first describe the generic feature extractor we use to extract the meta-data information from the available fields of the data-sets. For each type of meta-data we extract corresponding features. Since different data-sets have different fields, we use generic modules to extract information from different fields that serve the same purpose, such as the sender field (in the emails data-set) or the reviewer field (in the restaurant reviews data-set). An overview of the meta-data features we extract from the data-sets is given in Table 1. Note that the different data-sets have different fields: some of them do not have an enumerated field, and some have more than one.
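Since Table 1 is not reproduced here, the snippet below is only a plausible illustration of such generic extraction modules for the sender, timestamp, and enumerated fields; the column names and the exact features are assumptions, not the authors' feature set.

```python
import pandas as pd

def extract_meta_features(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative meta-data features from sender, timestamp, and enumerated fields."""
    feats = pd.DataFrame(index=df.index)
    # sender / reviewer field: frequency encoding of the writer (hypothetical column name)
    feats["sender_freq"] = df["sender"].map(df["sender"].value_counts(normalize=True))
    # timestamp field: hour of day and day of week
    ts = pd.to_datetime(df["timestamp"])
    feats["hour"] = ts.dt.hour
    feats["weekday"] = ts.dt.dayofweek
    # enumerated fields (e.g. stars, useful/funny/cool), passed through when present
    for col in ("stars", "useful", "funny", "cool"):
        if col in df.columns:
            feats[col] = df[col]
    return feats
```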
### Training details
We use a simple network architecture so that we can clearly show the advantages of the method. We emphasize that with more advanced handling of the meta-data and the introduction of more blocks, higher accuracy is expected. The model used for the text embedding is a pre-trained BERT model from the HuggingFace [14] library (bert-base-uncased). We use transfer learning to further train the language model with the meta-data, given the new task (new data). In addition, we used the ktrain package, specifically the text classification modules, and re-wrote them to have a message classification class. For the meta-data classification block we use two fully-connected layers, the first of the size of the input meta-data and the second of the size of the number of classes; both use the Rectified Linear Unit (ReLU) activation. After the blocks we either use a Random Forest classifier (with 250 trees, unconstrained depth, a minimum of one sample per leaf, and a splitting criterion based on the Gini index) or combine the outputs into one (with averaging or using trainable weights), depending on the test scenario. In the case of output weighting, we use one fully-connected layer at the end, whose size is the number of classes and which is followed by a softmax activation (for the classification).
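Putting the pieces together, a hedged PyTorch-style sketch of the two-block model described in this section is shown below; the paper's implementation builds on HuggingFace and ktrain, so this is a reconstruction rather than the authors' exact code. Because a single optimizer is created over `model.parameters()`, the BERT block and the meta-data block are updated in the same optimization step, which is the joint-training property emphasized in Section 3.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MessageClassifier(nn.Module):
    """Two blocks: a BERT text block and a fully-connected meta-data block."""
    def __init__(self, meta_dim, num_classes):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")   # text block
        self.text_head = nn.Linear(self.bert.config.hidden_size, num_classes)
        # meta-data block: two fully-connected layers with ReLU, sized as in the text
        self.meta_block = nn.Sequential(
            nn.Linear(meta_dim, meta_dim), nn.ReLU(),
            nn.Linear(meta_dim, num_classes), nn.ReLU(),
        )
        # output weighting: one fully-connected layer over the concatenated outputs
        self.combine = nn.Linear(2 * num_classes, num_classes)

    def forward(self, input_ids, attention_mask, meta):
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        text_out = self.text_head(cls)            # (batch, num_classes)
        meta_out = self.meta_block(meta)          # (batch, num_classes)
        logits = self.combine(torch.cat([text_out, meta_out], dim=-1))
        return torch.softmax(logits, dim=-1)      # class probabilities

# one optimizer over all parameters -> both blocks are trained simultaneously
# model = MessageClassifier(meta_dim=16, num_classes=12)
# optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
```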
### Results
We use the data-sets mentioned in Section 4.1. We compare multiple methods in terms of accuracy:
1. Pre-trained BERT with an additional fully-connected layer.
2. Pre-trained BERT with a random forest classifier.
3. Pre-trained BERT with concatenated meta-data and an additional fully-connected layer.
4. Pre-trained BERT with concatenated meta-data and a random forest classifier.
5. Transfer learning BERT with an additional fully-connected layer.
6. Transfer learning BERT with a random forest classifier.
7. Transfer learning BERT with concatenated meta-data and an additional fully-connected layer.
8. Transfer learning BERT with concatenated meta-data and a random forest classifier.
9. The proposed method (BERT and meta-data) with output averaging.
10. The proposed method (BERT and meta-data) with output weighting.
Methods 1-4 use embeddings from a pre-trained BERT model (without transfer learning), and train either an extra fully-connected layer or a random forest classifier [31] for the classification. We do this either without (1-2) or with (3-4) concatenation of the meta-data to the embeddings, following the common methods in the literature. Methods 5-8 are similar, but the BERT transformer layers are further trained on the specific task data. Methods 3, 4, 7, and 8 are the reference methods from the literature [16] that concatenate the BERT model embedding with a meta-data embedding. Methods 9-10 are the proposed ones, exploring the effect of combining the transformer output with the meta-data block output, once by averaging the block outputs and once with a fully-connected layer that learns the weights after concatenating the output representations of each block. The latter involves training to learn these weights, which is implemented so that it happens in the same training process as all other weights in the network. The results are given in Table 2. By observing the results, we see that the proposed method is competitive with all other methods and even outperforms most of them.
An interesting result is observed for the Enron emails data-set. We notice that the pre-trained methods performed better than the transfer-learning based methods. This can be explained by the mismatches between the textual information and the meta-data information. While the text suggests one class, the meta-data may suggest a different one. Methods 1-8, in this case, act as voting between classifiers. However, as mentioned in Section 3.1.1, the proposed method is influenced by both text and meta-data and learns a better representation that can take these differences into account, performing better than all other methods.
## 5 Conclusion
In this paper we proposed a new framework for training classification models. The proposition relies on the availability of additional, not necessarily textual, data channels such as attached images, audio, sender information, timestamps, etc. We proposed an architecture with which one can utilize the aforementioned additional information, using different blocks along with the text, and train a neural network to perform message classification more accurately. We demonstrated the strength of this method using a set of examples, varying by data-set and classification algorithm, and showed that the proposed method outperforms the reference related works.
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline Method\# & Amazon reviews & Yelp Open Data-set & Reddit & Enron Emails \\ \hline
1 & 0.66 & 0.29 & 0.56 & 0.49 \\
2 & 0.61 & 0.22 & 0.52 & 0.49 \\
3 & 0.65 & 0.24 & 0.46 & 0.48 \\
4 & 0.61 & 0.22 & 0.5 & 0.5 \\
5 & 0.74 & 0.39 & 0.61 & 0.47 \\
6 & 0.73 & 0.38 & 0.6 & 0.47 \\
7 & 0.71 & 0.38 & **0.62** & 0.47 \\
8 & 0.73 & 0.39 & 0.6 & 0.47 \\
9 & 0.7 & 0.3 & **0.62** & 0.47 \\
10 & **0.77** & **0.4** & **0.62** & **0.53** \\ \hline \end{tabular}
\end{table}
Table 2: Classification accuracy of methods 1-10 on the four data-sets; bold marks the best result per data-set. |